Project Name | Stars | Downloads | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description
---|---|---|---|---|---|---|---|---|---
Label Studio | 13,243 | 3 | 14 hours ago | 159 | June 16, 2022 | 562 | apache-2.0 | Python | Multi-type data labeling and annotation tool with standardized output format
Awesome Project Ideas | 6,856 | | 3 months ago | | | 1 | mit | | Curated list of Machine Learning, NLP, Vision, and Recommender Systems project ideas
Face_classification | 5,312 | | 7 months ago | | | 54 | mit | Python | Real-time face detection and emotion/gender classification using fer2013/imdb datasets with a Keras CNN model and OpenCV
Data Competition Topsolution | 2,847 | | 3 years ago | | | 5 | | | Open-source collection of top solutions from data competitions
Cnn_sentence | 1,873 | | 5 years ago | | | 42 | | Python | CNNs for sentence classification
Fma | 1,773 | | 5 months ago | | | 10 | mit | Jupyter Notebook | FMA: a dataset for music analysis
Universal Data Tool | 1,612 | | a year ago | | | 173 | mit | JavaScript | Collaborate on and label any type of data (images, text, or documents) in an easy web interface or desktop app
3d Pointcloud | 1,552 | | 11 days ago | | | 2 | | Python | Papers and datasets about point clouds
Closerlookfewshot | 901 | | a year ago | | | 52 | other | Python | Source code for the ICLR 2019 paper "A Closer Look at Few-shot Classification"
K Bert | 793 | | 10 months ago | | | 49 | | Python | Source code of K-BERT (AAAI 2020)
This repo contains the reference source code for the paper A Closer Look at Few-shot Classification, published at the International Conference on Learning Representations (ICLR 2019). In this project, we provide an integrated testbed for a detailed empirical study of few-shot classification.
If you find our code useful, please consider citing our work using the following BibTeX entry:
@inproceedings{
chen2019closerfewshot,
title={A Closer Look at Few-shot Classification},
author={Chen, Wei-Yu and Liu, Yen-Cheng and Kira, Zsolt and Wang, Yu-Chiang and Huang, Jia-Bin},
booktitle={International Conference on Learning Representations},
year={2019}
}
CUB:
* Change directory to ./filelists/CUB
* Run: source ./download_CUB.sh

mini-ImageNet:
* Change directory to ./filelists/miniImagenet
* Run: source ./download_miniImagenet.sh
* (WARNING: this downloads the 155 GB ImageNet dataset. If you already have a copy, comment out the corresponding lines 5-6 in download_miniImagenet.sh.)

Omniglot:
* Change directory to ./filelists/omniglot
* Run: source ./download_omniglot.sh

EMNIST:
* Change directory to ./filelists/emnist
* Run: source ./download_emnist.sh
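Each download script ends by writing JSON filelists (base/val/novel splits) that the data loaders consume. As a quick sanity check after downloading, a sketch like the following can validate a filelist (the key names `label_names`, `image_names`, and `image_labels` are an assumption about the filelist format, not a documented API):

```python
import json

def check_filelist(path):
    """Sanity-check a few-shot filelist JSON.

    Assumed keys: label_names (list of class names), image_names
    (list of image paths), image_labels (class index per image).
    """
    with open(path) as f:
        meta = json.load(f)
    for key in ("label_names", "image_names", "image_labels"):
        assert key in meta, "missing key: " + key
    # one label per image, and every label must index into label_names
    assert len(meta["image_names"]) == len(meta["image_labels"])
    assert all(0 <= lab < len(meta["label_names"]) for lab in meta["image_labels"])
    return len(meta["image_names"]), len(meta["label_names"])
```

If a download was interrupted, a check like this fails fast instead of producing confusing errors at training time.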
Run
python ./train.py --dataset [DATASETNAME] --model [BACKBONENAME] --method [METHODNAME] [--OPTIONARG]
For example, run python ./train.py --dataset miniImagenet --model Conv4 --method baseline --train_aug
The commands below follow this example; please refer to io_utils.py for additional options.
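During meta-training and meta-testing, data is drawn as N-way K-shot episodes: N classes are sampled, and each contributes K support images plus some query images. A minimal sketch of that sampling logic (the function and variable names here are illustrative, not the repo's actual classes):

```python
import random

def sample_episode(images_by_class, n_way=5, k_shot=5, n_query=16):
    """Sample one N-way episode from a dict mapping class id -> image paths.

    Returns (support, query): per-class lists of k_shot and n_query
    images, drawn without overlap within the episode.
    """
    classes = random.sample(list(images_by_class), n_way)
    support, query = {}, {}
    for c in classes:
        picks = random.sample(images_by_class[c], k_shot + n_query)
        support[c] = picks[:k_shot]
        query[c] = picks[k_shot:]
    return support, query
```

The `--n_shot` and `--test_n_way` options in io_utils.py control the K and N of these episodes.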
Save the features extracted before the classification layer to increase test speed. This step is not applicable to MAML, but is required for the other methods.
Run
python ./save_features.py --dataset miniImagenet --model Conv4 --method baseline --train_aug
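With features cached on disk, an episode at test time reduces to classifying query feature vectors given a handful of labeled support vectors. As one illustrative simplification (the repo's actual test routine depends on the method, e.g. the baseline trains a new classifier per episode), a nearest-centroid rule over saved features looks like:

```python
import numpy as np

def nearest_centroid_predict(support_feats, support_labels, query_feats):
    """Classify query features by squared Euclidean distance to the
    per-class mean of the support features.

    support_feats: (n_support, d), support_labels: (n_support,),
    query_feats: (n_query, d). Returns predicted labels (n_query,).
    """
    classes = np.unique(support_labels)
    protos = np.stack([support_feats[support_labels == c].mean(axis=0)
                       for c in classes])
    # pairwise distances: (n_query, n_way)
    dists = ((query_feats[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return classes[dists.argmin(axis=1)]
```

Because the backbone never runs at this stage, thousands of test episodes complete in seconds.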
Run
python ./test.py --dataset miniImagenet --model Conv4 --method baseline --train_aug
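The reported number is an average accuracy over many test episodes together with a confidence interval. A sketch of that aggregation (assuming the common z * std / sqrt(n) form with z = 1.96 for an approximate 95% interval; check the repo for the exact constant used):

```python
import math

def mean_confidence_interval(accs, z=1.96):
    """Mean episode accuracy and a z * std / sqrt(n) half-interval.

    accs: per-episode accuracies; z = 1.96 approximates a 95% CI.
    """
    n = len(accs)
    mean = sum(accs) / n
    std = math.sqrt(sum((a - mean) ** 2 for a in accs) / n)
    return mean, z * std / math.sqrt(n)
```

Reporting the interval alongside the mean is what makes comparisons across runs meaningful, given the episode-to-episode variance discussed in the FAQ below.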
The test results will be recorded in ./record/results.txt. For the pre-computed results, please see ./record/few_shot_exp_figures.xlsx; this will be helpful for including your own results for a fair comparison.

Our testbed builds upon several existing publicly available codebases. Specifically, we have modified and integrated the following code into this project:
Q1 Why are some of my reproduced results for the CUB dataset around 4-5% different from your reported results? (#31, #34, #42)
A1 The results reported in the paper may have been run with different numbers of epochs or episodes; please see each issue for details.
Q2 Why are some of my reproduced results for the mini-ImageNet dataset around 1-2% different from your reported results? (#17, #40, #41, #43)
A2 Due to random initialization, each training run can lead to a different accuracy, and each test run can as well.
Q3 How did you decide on the mean and standard deviation for dataset normalization? (#18, #39)
A3 I use the mean and standard deviation from ImageNet, but you can use values calculated from your own dataset.
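The ImageNet statistics mentioned in A3 are the standard per-channel values. Applied to an image array, normalization looks like the following numpy sketch (the repo itself would typically do this inside its image transforms):

```python
import numpy as np

# Standard ImageNet per-channel RGB statistics, for images scaled to [0, 1]
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def normalize(img):
    """Normalize an (H, W, 3) float image in [0, 1] channel-wise."""
    return (img - IMAGENET_MEAN) / IMAGENET_STD
```

To use dataset-specific values instead, compute the per-channel mean and standard deviation over your training images and substitute them for the constants above.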
Q4 Is the mini-ImageNet dataset available without downloading the whole ImageNet? (#45, #29)
A4 You can use the dataset from oscarknagg/few-shot, but you will need to modify filelists/miniImagenet/write_miniImagenet_filelist.py accordingly.