Project Name | Description | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language
---|---|---|---|---|---|---|---|---|---|---|---
Optimize Js | Optimize a JS file for faster parsing (UNMAINTAINED) | 3,760 | 523 | 30 | | 2 years ago | 4 | January 05, 2017 | 23 | apache-2.0 | JavaScript
Vroom | Vehicle Routing Open-source Optimization Machine | 976 | | | | 2 months ago | | | 51 | bsd-2-clause | C++
Cpp_optimizations_diary | Tips and tricks to optimize your C++ code | 783 | | | | 7 months ago | | | 1 | mit | C++
Chillout | Reduce CPU usage with a non-blocking async loop and psychologically speed up JavaScript | 577 | 3 | 4 | | a year ago | 26 | January 23, 2020 | 8 | mit | JavaScript
Memo_wise | The wise choice for Ruby memoization | 510 | 2 | | | 6 days ago | 10 | April 04, 2022 | 7 | mit | Ruby
Metaoptnet | Meta-Learning with Differentiable Convex Optimization (CVPR 2019 Oral) | 480 | | | | 5 months ago | | | 11 | apache-2.0 | Python
Onelog | Dead simple, super fast, zero-allocation and modular logger for Golang | 365 | 5 | 3 | | 4 years ago | | May 09, 2018 | 1 | mit | Go
Alive | Alive: automatic verifier for LLVM's InstCombine | 203 | | | | 3 years ago | | | 13 | apache-2.0 | Python
Babel Plugin Tailcall Optimization | Tail call optimization for JavaScript! | 162 | 135 | 39 | | 4 years ago | 9 | December 15, 2018 | 2 | mit | JavaScript
Functionfqnreplacer | | 156 | | | | 6 years ago | | November 28, 2020 | 4 | mit | PHP
This repository contains the code for the paper:
Meta-Learning with Differentiable Convex Optimization
Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, Stefano Soatto
CVPR 2019 (Oral)
Many meta-learning approaches for few-shot learning rely on simple base learners such as nearest-neighbor classifiers. However, even in the few-shot regime, discriminatively trained linear predictors can offer better generalization. We propose to use these predictors as base learners to learn representations for few-shot learning and show they offer better tradeoffs between feature size and performance across a range of few-shot recognition benchmarks. Our objective is to learn feature embeddings that generalize well under a linear classification rule for novel categories. To efficiently solve the objective, we exploit two properties of linear classifiers: implicit differentiation of the optimality conditions of the convex problem and the dual formulation of the optimization problem. This allows us to use high-dimensional embeddings with improved generalization at a modest increase in computational overhead. Our approach, named MetaOptNet, achieves state-of-the-art performance on miniImageNet, tieredImageNet, CIFAR-FS and FC100 few-shot learning benchmarks.
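To make the approach concrete, here is a minimal sketch of a differentiable convex base learner, using the ridge-regression head because it admits a closed-form dual solution (the SVM head requires a differentiable QP solver instead). Everything below (function names, shapes, the regularization value) is illustrative, not the repository's API.

```python
# Sketch of a differentiable convex base learner: a ridge-regression head.
# The solver is a closed-form linear solve, so gradients flow through the
# base learner back into the embedding network. Illustrative only; not the
# repository's actual head implementation.
import torch
import torch.nn.functional as F

def ridge_head(support, support_labels, query, n_way, lam=50.0):
    """support: (n_support, d) embeddings; query: (n_query, d) embeddings."""
    y = F.one_hot(support_labels, n_way).float()          # (n_support, n_way)
    # Dual formulation: solving an n_support x n_support system keeps the
    # cost independent of the embedding dimension d.
    gram = support @ support.t()                          # (n_support, n_support)
    eye = torch.eye(support.size(0))
    alpha = torch.linalg.solve(gram + lam * eye, y)       # dual variables
    w = support.t() @ alpha                               # (d, n_way) weights
    return query @ w                                      # query logits

# Toy 5-way 1-shot episode with random 16-dim embeddings.
emb = torch.randn(5, 16, requires_grad=True)
logits = ridge_head(emb, torch.arange(5), torch.randn(10, 16), n_way=5)
logits.sum().backward()   # gradients reach the embeddings
print(emb.grad.shape)     # torch.Size([5, 16])
```

For the SVM head there is no closed form; the same gradient flow is obtained by implicitly differentiating the optimality (KKT) conditions of the QP, which is what the abstract's "implicit differentiation of the optimality conditions" refers to.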
If you use this code for your research, please cite our paper:
```bibtex
@inproceedings{lee2019meta,
  title={Meta-Learning with Differentiable Convex Optimization},
  author={Kwonjoon Lee and Subhransu Maji and Avinash Ravichandran and Stefano Soatto},
  booktitle={CVPR},
  year={2019}
}
```
Clone this repository:
```bash
git clone https://github.com/kjunelee/MetaOptNet.git
cd MetaOptNet
```
Download and decompress dataset files: miniImageNet (courtesy of Spyros Gidaris), tieredImageNet, FC100, CIFAR-FS
For each dataset loader, specify the path to the directory. For example, in MetaOptNet/data/mini_imagenet.py line 30:
```python
_MINI_IMAGENET_DATASET_DIR = 'path/to/miniImageNet'
```
Meta-train MetaOptNet-SVM on miniImageNet:

```bash
python train.py --gpu 0,1,2,3 --save-path "./experiments/miniImageNet_MetaOptNet_SVM" --train-shot 15 \
--head SVM --network ResNet --dataset miniImageNet --eps 0.1
```
As shown in Figure 2 of our paper, we can meta-train the embedding once with a high shot and reuse it for all meta-testing shots; unlike Prototypical Networks, we do not need to meta-train separately for every possible meta-test shot (the episode-sampling sketch after the training commands illustrates why). To meta-train on tieredImageNet:

```bash
python train.py --gpu 0,1,2,3 --save-path "./experiments/tieredImageNet_MetaOptNet_SVM" --train-shot 10 \
--head SVM --network ResNet --dataset tieredImageNet
```
python train.py --gpu 0 --save-path "./experiments/CIFAR_FS_MetaOptNet_RR" --train-shot 5 \
--head Ridge --network ResNet --dataset CIFAR_FS
python train.py --gpu 0 --save-path "./experiments/FC100_MetaOptNet_RR" --train-shot 15 \
--head Ridge --network ResNet --dataset FC100
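For intuition on the Figure 2 point above, here is a hedged sketch of N-way K-shot episode sampling; it is not the repository's data loader, and the names are made up. The shot count is just a sampling parameter, so an embedding meta-trained at a high shot can be meta-tested at any other shot without retraining.

```python
# Hedged sketch of N-way K-shot episode sampling (not the repository's
# loader). The shot count only controls how many labeled support examples
# are drawn per class.
import random

def sample_episode(dataset, n_way=5, k_shot=1, n_query=15):
    """dataset: dict mapping class label -> list of examples."""
    classes = random.sample(sorted(dataset), n_way)
    support, query = [], []
    for episode_label, cls in enumerate(classes):
        examples = random.sample(dataset[cls], k_shot + n_query)
        support += [(x, episode_label) for x in examples[:k_shot]]
        query += [(x, episode_label) for x in examples[k_shot:]]
    return support, query

# The same data yields 1-shot or 15-shot episodes by changing one argument.
toy = {c: list(range(100)) for c in range(20)}
s1, _ = sample_episode(toy, k_shot=1)    # 5 support examples
s15, _ = sample_episode(toy, k_shot=15)  # 75 support examples
print(len(s1), len(s15))                 # 5 75
```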
To meta-test on 5-way 1-shot miniImageNet:

```bash
python test.py --gpu 0,1,2,3 --load ./experiments/miniImageNet_MetaOptNet_SVM/best_model.pth --episode 1000 \
--way 5 --shot 1 --query 15 --head SVM --network ResNet --dataset miniImageNet
```
To meta-test on 5-way 5-shot miniImageNet:

```bash
python test.py --gpu 0,1,2,3 --load ./experiments/miniImageNet_MetaOptNet_SVM/best_model.pth --episode 1000 \
--way 5 --shot 5 --query 15 --head SVM --network ResNet --dataset miniImageNet
```
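test.py evaluates over the given number of episodes; few-shot results are conventionally reported as mean accuracy with a 95% confidence interval across episodes. If you need to aggregate per-episode accuracies yourself, a minimal sketch (the accuracies here are simulated placeholders, not real results):

```python
# Aggregate per-episode accuracies into mean +/- 95% confidence interval.
# The accuracies below are simulated placeholders, not model outputs.
import numpy as np

rng = np.random.default_rng(0)
acc = rng.uniform(0.4, 0.9, size=1000)             # stand-in for 1000 episode accuracies

mean = acc.mean()
ci95 = 1.96 * acc.std(ddof=1) / np.sqrt(len(acc))  # normal approximation
print(f"accuracy: {100 * mean:.2f} +/- {100 * ci95:.2f} %")
```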
This code is based on the implementations of Prototypical Networks, Dynamic Few-Shot Visual Learning without Forgetting, and DropBlock.