Meta-Learning with Differentiable Convex Optimization (CVPR 2019 Oral)
Meta-Learning with Differentiable Convex Optimization

This repository contains the code for the paper:
Meta-Learning with Differentiable Convex Optimization
Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, Stefano Soatto
CVPR 2019 (Oral)


Many meta-learning approaches for few-shot learning rely on simple base learners such as nearest-neighbor classifiers. However, even in the few-shot regime, discriminatively trained linear predictors can offer better generalization. We propose to use these predictors as base learners to learn representations for few-shot learning and show they offer better tradeoffs between feature size and performance across a range of few-shot recognition benchmarks. Our objective is to learn feature embeddings that generalize well under a linear classification rule for novel categories. To efficiently solve the objective, we exploit two properties of linear classifiers: implicit differentiation of the optimality conditions of the convex problem and the dual formulation of the optimization problem. This allows us to use high-dimensional embeddings with improved generalization at a modest increase in computational overhead. Our approach, named MetaOptNet, achieves state-of-the-art performance on miniImageNet, tieredImageNet, CIFAR-FS and FC100 few-shot learning benchmarks.
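To make the dual formulation concrete, here is a framework-agnostic NumPy sketch of a ridge-regression base learner (the idea behind the MetaOptNet-RR variant). The repository itself implements differentiable solvers in PyTorch; the function and variable names below are illustrative only:

```python
import numpy as np

def ridge_head(X_support, Y_onehot, X_query, lam=1.0):
    """Closed-form ridge-regression base learner, solved in the dual.

    Minimizing ||X W - Y||^2 + lam ||W||^2 gives
    W = X^T (X X^T + lam I)^{-1} Y, so the matrix being inverted is
    (n_support x n_support) -- independent of the embedding dimension,
    which is why high-dimensional features add only modest overhead.
    """
    n = X_support.shape[0]
    K = X_support @ X_support.T                      # (n, n) Gram matrix
    alpha = np.linalg.solve(K + lam * np.eye(n), Y_onehot)
    W = X_support.T @ alpha                          # (d, n_way) classifier
    return X_query @ W                               # query logits

# Toy 2-way 1-shot episode with 5-dim features (made-up numbers)
Xs = np.array([[1., 0., 0., 0., 0.],
               [0., 1., 0., 0., 0.]])
Ys = np.eye(2)
Xq = np.array([[0.9, 0.1, 0., 0., 0.]])
logits = ridge_head(Xs, Ys, Xq)   # the query lands closer to class 0
```

Because the closed-form solution is built from differentiable linear algebra, gradients flow through the base learner back into the embedding network during meta-training.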


If you use this code for your research, please cite our paper:

  @inproceedings{lee2019meta,
    title={Meta-Learning with Differentiable Convex Optimization},
    author={Kwonjoon Lee and Subhransu Maji and Avinash Ravichandran and Stefano Soatto},
    booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2019}
  }




  1. Clone this repository:

    git clone
    cd MetaOptNet
  2. Download and decompress dataset files: miniImageNet (courtesy of Spyros Gidaris), tieredImageNet, FC100, CIFAR-FS

  3. For each dataset loader, specify the path to the directory. For example, in MetaOptNet/data/ line 30:

    _MINI_IMAGENET_DATASET_DIR = 'path/to/miniImageNet'


  1. To train MetaOptNet-SVM on 5-way miniImageNet benchmark:
    python --gpu 0,1,2,3 --save-path "./experiments/miniImageNet_MetaOptNet_SVM" --train-shot 15 \
    --head SVM --network ResNet --dataset miniImageNet --eps 0.1
    As shown in Figure 2 of our paper, we can meta-train the embedding once with a high shot count and reuse it for all meta-testing shots. Unlike Prototypical Networks, we don't need to meta-train separately for every possible meta-test shot.
  2. You can experiment with varying base learners by changing the '--head' argument to ProtoNet or Ridge. You can also change the backbone architecture to a vanilla 4-layer conv net by setting the '--network' argument to ProtoNet. For other arguments, please see MetaOptNet/ from lines 85 to 114.
  3. To train MetaOptNet-SVM on 5-way tieredImageNet benchmark:
    python --gpu 0,1,2,3 --save-path "./experiments/tieredImageNet_MetaOptNet_SVM" --train-shot 10 \
    --head SVM --network ResNet --dataset tieredImageNet
  4. To train MetaOptNet-RR on 5-way CIFAR-FS benchmark:
    python --gpu 0 --save-path "./experiments/CIFAR_FS_MetaOptNet_RR" --train-shot 5 \
    --head Ridge --network ResNet --dataset CIFAR_FS
  5. To train MetaOptNet-RR on 5-way FC100 benchmark:
    python --gpu 0 --save-path "./experiments/FC100_MetaOptNet_RR" --train-shot 15 \
    --head Ridge --network ResNet --dataset FC100
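The '--way' and '--train-shot' arguments above control how episodes are drawn during meta-training. A minimal sketch of N-way K-shot episode sampling (a hypothetical helper, not the repository's actual data loader):

```python
import random

def sample_episode(labels_to_indices, n_way=5, k_shot=15, n_query=6, rng=None):
    """Draw one N-way K-shot episode.

    labels_to_indices maps each class label to the indices of its images.
    Returns (support, query) lists of (image_index, episode_label) pairs,
    where episode labels are remapped to 0..n_way-1.
    """
    rng = rng or random.Random()
    classes = rng.sample(sorted(labels_to_indices), n_way)
    support, query = [], []
    for episode_label, c in enumerate(classes):
        idx = rng.sample(labels_to_indices[c], k_shot + n_query)
        support += [(i, episode_label) for i in idx[:k_shot]]
        query += [(i, episode_label) for i in idx[k_shot:]]
    return support, query
```

Each episode thus simulates a small classification task: the base learner is fit on the support set, and the meta-loss is computed on the query set.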


  1. To test MetaOptNet-SVM on 5-way miniImageNet 1-shot benchmark:
python --gpu 0,1,2,3 --load ./experiments/miniImageNet_MetaOptNet_SVM/best_model.pth --episode 1000 \
--way 5 --shot 1 --query 15 --head SVM --network ResNet --dataset miniImageNet
  2. Similarly, to test MetaOptNet-SVM on 5-way miniImageNet 5-shot benchmark:
python --gpu 0,1,2,3 --load ./experiments/miniImageNet_MetaOptNet_SVM/best_model.pth --episode 1000 \
--way 5 --shot 5 --query 15 --head SVM --network ResNet --dataset miniImageNet
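Results over the 1000 test episodes are conventionally reported as mean accuracy with a 95% confidence interval. A minimal sketch of that computation (illustrative, not the repository's exact code):

```python
import math

def mean_confidence_interval(accuracies, z=1.96):
    """Mean episode accuracy with a normal-approximation 95% CI half-width."""
    n = len(accuracies)
    mean = sum(accuracies) / n
    var = sum((a - mean) ** 2 for a in accuracies) / (n - 1)  # sample variance
    half_width = z * math.sqrt(var / n)
    return mean, half_width

# e.g. mean, hw = mean_confidence_interval(per_episode_accuracies)
# would be reported as "mean +/- hw".
```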


This code is based on the implementations of Prototypical Networks, Dynamic Few-Shot Visual Learning without Forgetting, and DropBlock.
