
A Closer Look at Few-shot Classification

This repo contains the reference source code for the paper A Closer Look at Few-shot Classification, published at the International Conference on Learning Representations (ICLR 2019). In this project, we provide an integrated testbed for a detailed empirical study of few-shot classification.

Citation

If you find our code useful, please consider citing our work using the following BibTeX entry:

@inproceedings{chen2019closerfewshot,
  title={A Closer Look at Few-shot Classification},
  author={Chen, Wei-Yu and Liu, Yen-Cheng and Kira, Zsolt and Wang, Yu-Chiang and Huang, Jia-Bin},
  booktitle={International Conference on Learning Representations},
  year={2019}
}

Environment

  • Python3
  • PyTorch before 0.4 (for newer versions, please see issue #3)
  • json

Getting started

CUB

  • Change directory to ./filelists/CUB
  • Run source ./download_CUB.sh

mini-ImageNet

  • Change directory to ./filelists/miniImagenet
  • Run source ./download_miniImagenet.sh

(WARNING: This will download the full 155 GB ImageNet dataset. If you already have a copy, comment out the corresponding lines 5-6 in download_miniImagenet.sh.)

mini-ImageNet->CUB (cross)

  • Finish preparation for CUB and mini-ImageNet and you are done!

Omniglot

  • Change directory to ./filelists/omniglot
  • Run source ./download_omniglot.sh

Omniglot->EMNIST (cross_char)

  • Finish preparation for Omniglot first
  • Change directory to ./filelists/emnist
  • Run source ./download_emnist.sh

Self-defined setting

  • Requires three data-split JSON files, 'base.json', 'val.json', and 'novel.json', for each dataset
  • The format should follow
    {"label_names": ["class0","class1",...], "image_names": ["filepath1","filepath2",...], "image_labels":[l1,l2,l3,...]}
    See test.json for reference; a sketch for generating such a file follows this list
  • Put these files in the same folder and change data_dir['DATASETNAME'] in configs.py to the folder path
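
As a rough illustration of the format above, the sketch below builds a split file by scanning a directory of class subfolders. The directory layout and the write_split helper are assumptions for illustration; only the three JSON keys come from the format this repo expects.

import json
import os

def write_split(data_dir, out_file):
    # Hypothetical helper: build a split JSON from data_dir/<class_name>/<images>.
    # Only the three keys written below are mandated by this repo's format.
    label_names = sorted(
        d for d in os.listdir(data_dir)
        if os.path.isdir(os.path.join(data_dir, d))
    )
    image_names, image_labels = [], []
    for label, cls in enumerate(label_names):
        cls_dir = os.path.join(data_dir, cls)
        for fname in sorted(os.listdir(cls_dir)):
            image_names.append(os.path.join(cls_dir, fname))
            image_labels.append(label)
    with open(out_file, 'w') as f:
        json.dump({"label_names": label_names,
                   "image_names": image_names,
                   "image_labels": image_labels}, f)

# e.g., write_split('/path/to/my_base_classes', 'base.json')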

Train

Run python ./train.py --dataset [DATASETNAME] --model [BACKBONENAME] --method [METHODNAME] [--OPTIONARG]

For example, run python ./train.py --dataset miniImagenet --model Conv4 --method baseline --train_aug
The commands below follow this example; please refer to io_utils.py for additional options.
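
As an illustrative variation (flag names such as --n_shot and --train_n_way come from io_utils.py, but please verify them against your checkout), a 5-way 5-shot ProtoNet with a deeper backbone could be trained with python ./train.py --dataset miniImagenet --model ResNet10 --method protonet --n_shot 5 --train_n_way 5 --train_aug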

Save features

Save the extracted features before the classification layer to increase test speed. This step is not applicable to MAML, but is required for the other methods. Run python ./save_features.py --dataset miniImagenet --model Conv4 --method baseline --train_aug
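
To illustrate the idea behind this step (a minimal sketch, not the actual save_features.py), the snippet below runs a trained backbone once over the evaluation images and caches the penultimate-layer outputs, so each few-shot episode only needs the cached vectors. The backbone, loader, and output path are assumptions.

import torch

def cache_features(backbone, loader, out_path, device='cuda'):
    # Sketch: cache penultimate-layer features for fast episodic evaluation.
    # backbone: trained feature extractor with its classification head removed.
    # loader:   DataLoader yielding (images, labels) for the novel classes.
    backbone.eval().to(device)
    feats, labels = [], []
    with torch.no_grad():
        for x, y in loader:
            feats.append(backbone(x.to(device)).cpu())
            labels.append(y)
    torch.save({'feats': torch.cat(feats), 'labels': torch.cat(labels)}, out_path)

# Episodes can then be sampled from the cached tensors instead of
# re-running the backbone for every episode.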

Test

Run python ./test.py --dataset miniImagenet --model Conv4 --method baseline --train_aug

Results

  • The test results will be recorded in ./record/results.txt
  • For all the pre-computed results, please see ./record/few_shot_exp_figures.xlsx. This should be helpful when including your own results for a fair comparison.

References

Our testbed builds upon several existing publicly available codebases. Specifically, we have modified and integrated the following code into this project:

FAQ

  • Q1 Why are some of my reproduced results for the CUB dataset around 4~5% different from your reported results? (#31, #34, #42)

  • A1 Apologies: the results reported in the paper may have been run with different numbers of epochs or episodes; please see each issue for details.

  • Q2 Why are some of my reproduced results for the mini-ImageNet dataset around 1~2% different from your reported results? (#17, #40, #41, #43)

  • A2 Due to random initialization, each training run can reach a different accuracy, and each test run can as well. Averaging over many test episodes, as in the sketch below, quantifies this variance.
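
As a sketch of how run-to-run variance is commonly quantified (the 600-episode count and run_episode are placeholders, not necessarily this repo's exact protocol), accuracy can be averaged over many randomly sampled episodes and reported with a 95% confidence interval:

import numpy as np

def summarize_accuracy(episode_accs):
    # Mean and 95% confidence interval over per-episode accuracies (in %).
    accs = np.asarray(episode_accs)
    mean = accs.mean()
    ci95 = 1.96 * accs.std() / np.sqrt(len(accs))
    return mean, ci95

# accs = [run_episode() for _ in range(600)]  # run_episode is a placeholder
# mean, ci95 = summarize_accuracy(accs)
# print('%.2f%% +- %.2f%%' % (mean, ci95))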

  • Q3 How did you decide the mean and the standard deviation for dataset normalization? (#18, #39)

  • A3 I use the mean and standard deviation from ImageNet, but you can use the ones calculated from your own dataset; both options are sketched below.
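
For reference, a minimal sketch of both options, assuming torchvision and a DataLoader named loader that yields image tensors in [0, 1]; the ImageNet statistics below are the widely published values, not something specific to this repo:

from torchvision import transforms

# Option 1: the widely used ImageNet channel statistics.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

def dataset_stats(loader):
    # Option 2: compute per-channel mean/std from your own dataset.
    # loader yields (images, labels), images of shape (B, C, H, W) in [0, 1].
    n, mean, sq_mean = 0, 0.0, 0.0
    for x, _ in loader:
        b = x.size(0)
        mean = (mean * n + x.mean(dim=(0, 2, 3)) * b) / (n + b)
        sq_mean = (sq_mean * n + (x ** 2).mean(dim=(0, 2, 3)) * b) / (n + b)
        n += b
    std = (sq_mean - mean ** 2).sqrt()
    return mean, std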

  • Q4 Do you have the mini-ImageNet dataset available without downloading the whole ImageNet? (#45, #29)

  • A4 You can use the dataset from oscarknagg/few-shot, but you will need to modify filelists/miniImagenet/write_miniImagenet_filelist.py accordingly.
