learn2learn

A PyTorch Library for Meta-learning Research

learn2learn is a software library for meta-learning research.

learn2learn builds on top of PyTorch to accelerate two aspects of the meta-learning research cycle:

  • fast prototyping, essential in letting researchers quickly try new ideas, and
  • correct reproducibility, ensuring that these ideas are evaluated fairly.

learn2learn provides low-level utilities and a unified interface to create new algorithms and domains, together with high-quality implementations of existing algorithms and standardized benchmarks. It retains compatibility with torchvision, torchaudio, torchtext, cherry, and any other PyTorch-based library you might be using.

To learn more, see our whitepaper: arXiv:2008.12284

Overview

  • learn2learn.data: TaskDataset and transforms to create few-shot tasks from any PyTorch dataset.
  • learn2learn.vision: Models, datasets, and benchmarks for computer vision and few-shot learning.
  • learn2learn.gym: Environments and utilities for meta-reinforcement learning.
  • learn2learn.algorithms: High-level wrappers for existing meta-learning algorithms.
  • learn2learn.optim: Utilities and algorithms for differentiable optimization and meta-descent.
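
For instance, the data and vision modules above compose to produce standard few-shot tasksets. The snippet below is an illustrative sketch: it assumes the get_tasksets helper from learn2learn.vision.benchmarks and the 'omniglot' benchmark name as described in the documentation, so exact argument names may need adjusting.

import learn2learn as l2l

# Download Omniglot and build 5-way tasksets (2 samples per class: 1 for adaptation, 1 for evaluation).
tasksets = l2l.vision.benchmarks.get_tasksets(
    'omniglot',
    train_ways=5,
    train_samples=2,
    test_ways=5,
    test_samples=2,
    root='~/data',
)
X, y = tasksets.train.sample()  # one few-shot task, returned as tensors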

Resources

Installation

pip install learn2learn

Snippets & Examples

The following snippets provide a sneak peek at the functionalities of learn2learn.

High-level Wrappers

Few-Shot Learning with MAML

For more algorithms (ProtoNets, ANIL, Meta-SGD, Reptile, Meta-Curvature, KFO), refer to the examples folder. Most of them can be implemented with the GBML wrapper (documentation).

import torch
import learn2learn as l2l

maml = l2l.algorithms.MAML(model, lr=0.1)  # model is any torch.nn.Module
opt = torch.optim.SGD(maml.parameters(), lr=0.001)
for iteration in range(10):
    opt.zero_grad()
    task_model = maml.clone()  # torch.clone() for nn.Modules
    adaptation_loss = compute_loss(task_model)
    task_model.adapt(adaptation_loss)  # computes gradients and updates task_model in-place
    evaluation_loss = compute_loss(task_model)
    evaluation_loss.backward()  # gradients w.r.t. maml.parameters()
    opt.step()
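
If the second-order gradients of full MAML become too expensive, the MAML wrapper also exposes a first_order flag for the first-order approximation (FOMAML). A hedged one-line variant, leaving the rest of the loop above unchanged:

fomaml = l2l.algorithms.MAML(model, lr=0.1, first_order=True)  # skip second-order terms (FOMAML)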
Meta-Descent with Hypergradient

Learn any kind of optimization algorithm with the LearnableOptimizer. (example and documentation)

linear = nn.Linear(784, 10)
transform = l2l.optim.ModuleTransform(l2l.nn.Scale)
metaopt = l2l.optim.LearnableOptimizer(linear, transform, lr=0.01)  # metaopt has .step()
opt = torch.optim.SGD(metaopt.parameters(), lr=0.001)  # metaopt also has .parameters()

metaopt.zero_grad()
opt.zero_grad()
error = loss(linear(X), y)
error.backward()
opt.step()  # update metaopt
metaopt.step()  # update linear

Learning Domains

Custom Few-Shot Dataset

Many standardized datasets (Omniglot, mini-/tiered-ImageNet, FC100, CIFAR-FS) are readily available in learn2learn.vision.datasets. (documentation)

dataset = l2l.data.MetaDataset(MyDataset())  # any PyTorch dataset
transforms = [  # Easy to define your own transform
    l2l.data.transforms.NWays(dataset, n=5),
    l2l.data.transforms.KShots(dataset, k=1),
    l2l.data.transforms.LoadData(dataset),
]
taskset = l2l.data.TaskDataset(dataset, transforms, num_tasks=20000)
for task in taskset:
    X, y = task
    # Meta-train on the task
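
In practice, a task's labels are the dataset's original class indices, so few-shot pipelines typically also remap them to 0..n-1 before feeding an n-way classification head. A hedged extension of the transform list above, assuming the RemapLabels and ConsecutiveLabels transforms available in learn2learn.data.transforms:

transforms = [
    l2l.data.transforms.NWays(dataset, n=5),
    l2l.data.transforms.KShots(dataset, k=1),
    l2l.data.transforms.LoadData(dataset),
    l2l.data.transforms.RemapLabels(dataset),        # map the sampled classes to 0..n-1
    l2l.data.transforms.ConsecutiveLabels(dataset),  # sort samples so equal labels are adjacent
]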
Environments and Utilities for Meta-RL

Parallelize your own meta-environments with AsyncVectorEnv, or use the standardized ones. (documentation)

def make_env():
    env = l2l.gym.HalfCheetahForwardBackwardEnv()
    env = cherry.envs.ActionSpaceScaler(env)
    return env

env = l2l.gym.AsyncVectorEnv([make_env for _ in range(16)])  # uses 16 threads
for task_config in env.sample_tasks(20):
    env.set_task(task_config)  # all threads receive the same task
    state = env.reset()  # use standard Gym API
    action = my_policy(state)
    env.step(action)

Low-Level Utilities

Differentiable Optimization

Learn and differentiate through updates of PyTorch Modules. (documentation)


model = MyModel()
transform = l2l.optim.KroneckerTransform(l2l.nn.KroneckerLinear)
learned_update = l2l.optim.ParameterUpdate(  # learnable update function
        model.parameters(), transform)
clone = l2l.clone_module(model)  # torch.clone() for nn.Modules
error = loss(clone(X), y)
updates = learned_update(  # similar API as torch.autograd.grad
    error,
    clone.parameters(),
    create_graph=True,
)
l2l.update_module(clone, updates=updates)
loss(clone(X), y).backward()  # gradients w.r.t. model.parameters() and learned_update.parameters()

Changelog

A human-readable changelog is available in the CHANGELOG.md file.

Citation

To cite the learn2learn repository in your academic publications, please use the following reference.

Arnold, Sebastien M. R., Praateek Mahajan, Debajyoti Datta, Ian Bunner, and Konstantinos Saitas Zarkias. 2020. “learn2learn: A Library for Meta-Learning Research.” arXiv [cs.LG]. http://arxiv.org/abs/2008.12284.

You can also use the following Bibtex entry.

@article{Arnold2020-ss,
  title         = "learn2learn: A Library for {Meta-Learning} Research",
  author        = "Arnold, S{\'e}bastien M R and Mahajan, Praateek and Datta,
                   Debajyoti and Bunner, Ian and Zarkias, Konstantinos Saitas",
  month         =  aug,
  year          =  2020,
  url           = "http://arxiv.org/abs/2008.12284",
  archivePrefix = "arXiv",
  primaryClass  = "cs.LG",
  eprint        = "2008.12284"
}

Acknowledgements & Friends

  1. TorchMeta is a similar library, with a focus on datasets for supervised meta-learning.
  2. higher is a PyTorch library that enables differentiating through optimization inner-loops. While they monkey-patch nn.Module to be stateless, learn2learn retains the stateful PyTorch look-and-feel. For more information, refer to their ArXiv paper.
  3. We are also thankful to the many open-source implementations which helped guide the design of learn2learn.