PyTorch implementation of the supervised learning experiments from the paper: Model-Agnostic Meta-Learning (MAML).
Version 1.0: Both MiniImagenet and Omniglot datasets are supported! Have fun~
Version 2.0: Rewrote the meta-learner and base learner, and fixed several serious bugs present in version 1.0.
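At its core, MAML nests an inner adaptation loop (the base learner takes a gradient step on each task's support loss) inside an outer loop that updates the shared initialization through the post-adaptation loss. A self-contained toy sketch of that structure on a scalar regression problem, where the second-order meta-gradient can be written in closed form, looks like the following (illustrative code only, not this repo's implementation):

```python
# Toy MAML: each "task" asks the model to regress a scalar target a_t with
# loss_t(w) = 0.5 * (w - a_t)**2, so all gradients are exact and simple.

def maml_toy(targets, inner_lr=0.4, meta_lr=0.1, steps=300):
    w = 0.0  # meta-parameters (a single scalar here)
    for _ in range(steps):
        meta_grad = 0.0
        for a in targets:  # one inner-loop adaptation per task
            # base learner: one gradient step on the support loss
            w_adapted = w - inner_lr * (w - a)
            # meta-learner: gradient of the post-adaptation loss w.r.t. w;
            # for this quadratic, d(w_adapted)/dw = 1 - inner_lr
            meta_grad += (1 - inner_lr) * (w_adapted - a)
        w -= meta_lr * meta_grad / len(targets)
    return w

w_star = maml_toy([1.0, 2.0, 3.0])  # converges toward the task mean
```

The `(1 - inner_lr)` factor is the second-order term; dropping it gives the first-order approximation discussed below.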
For a TensorFlow implementation, please visit the official one HERE and a simpler version HERE.
For a first-order approximation implementation, namely Reptile, please visit HERE.
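For contrast, Reptile avoids back-propagating through the inner loop entirely: after a few plain SGD steps on a task, it simply moves the initialization toward the adapted weights. A hedged sketch on the same kind of toy scalar task (illustrative code, not the linked implementation):

```python
# Toy Reptile: loss_t(w) = 0.5 * (w - a_t)**2 per task, as before.

def reptile_toy(targets, inner_lr=0.3, meta_lr=0.5, inner_steps=5, epochs=100):
    w = 0.0  # meta-parameters
    for _ in range(epochs):
        delta = 0.0
        for a in targets:
            w_fast = w
            for _ in range(inner_steps):       # plain SGD on the task loss
                w_fast -= inner_lr * (w_fast - a)
            delta += w_fast - w                # how far adaptation moved us
        # Reptile update: step toward the average adapted weights;
        # no second-order terms are ever computed.
        w += meta_lr * delta / len(targets)
    return w

w_star = reptile_toy([1.0, 2.0, 3.0])  # also converges toward the task mean
```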
For the 5-way 1-shot experiment, it allocates nearly 6 GB of GPU memory.
```
miniimagenet/
├── images/
│   ├── n0210891500001298.jpg
│   ├── n0287152500001298.jpg
│   └── ...
├── test.csv
├── val.csv
└── train.csv
```
Change the data path in miniimagenet_train.py:

```python
mini = MiniImagenet('miniimagenet/', mode='train', n_way=args.n_way, k_shot=args.k_spt,
                    k_query=args.k_qry,
                    batchsz=10000, resize=args.imgsz)
...
mini_test = MiniImagenet('miniimagenet/', mode='test', n_way=args.n_way, k_shot=args.k_spt,
                         k_query=args.k_qry,
                         batchsz=100, resize=args.imgsz)
```

to your actual data path.
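The `n_way`, `k_shot`, and `k_query` arguments define how each episode is built: `n_way` classes are drawn, then `k_shot` support and `k_query` query examples per class. A hypothetical sketch of that sampling logic, from a class-to-items mapping (names here are illustrative, not the repo's actual `MiniImagenet` API):

```python
import random

def sample_episode(class_to_items, n_way, k_shot, k_query, seed=0):
    """Build one N-way K-shot episode: (support set, query set)."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(class_to_items), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):  # relabel classes 0..n_way-1
        items = rng.sample(class_to_items[cls], k_shot + k_query)
        support += [(item, label) for item in items[:k_shot]]
        query += [(item, label) for item in items[k_shot:]]
    return support, query

# Fake dataset: 10 classes with 20 images each
data = {f"class_{i}": [f"img_{i}_{j}" for j in range(20)] for i in range(10)}
support, query = sample_episode(data, n_way=5, k_shot=1, k_query=15)
# support has n_way * k_shot = 5 items; query has n_way * k_query = 75
```

Because support and query items are drawn from the same sample without replacement, the two sets never overlap within an episode.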
Then run python miniimagenet_train.py; a screenshot of a training run is shown below:
If your reproduced performance is not as good, you can increase the number of training epochs to train longer. MAML is notorious for being hard to train, so this implementation only provides a basic starting point for your research. The performance reported below is real and was achieved on my machine.
Model | Fine Tune | 5-way 1-shot | 5-way 5-shot | 20-way 1-shot | 20-way 5-shot
---|---|---|---|---|---
Matching Nets | N | 43.56% | 55.31% | 17.31% | 22.69%
Meta-LSTM | | 43.44% | 60.60% | 16.70% | 26.06%
MAML | Y | 48.7% | 63.11% | 16.49% | 19.29%
Ours | Y | 46.2% | 60.3% | - | -
Run python omniglot_train.py; the program will download the Omniglot dataset automatically. Decrease the value of args.task_num to fit your GPU memory capacity. For the 5-way 1-shot experiment, it allocates nearly 3 GB of GPU memory.
@misc{MAML_Pytorch,
author = {Liangqu Long},
title = {MAML-Pytorch Implementation},
year = {2018},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/dragen1860/MAML-Pytorch}},
commit = {master}
}