| Project Name | Stars | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description |
|---|---|---|---|---|---|---|---|---|
| Text | 3,245 | a day ago | 22 | June 28, 2022 | 294 | bsd-3-clause | Python | Models, data loaders and abstractions for language processing, powered by PyTorch |
| Pytorch Nlp | 1,929 | 2 years ago | 19 | November 04, 2019 | 16 | bsd-3-clause | Python | Basic Utilities for PyTorch Natural Language Processing (NLP) |
| Pytorch Meta | 1,724 | 4 months ago | 28 | September 20, 2021 | 53 | mit | Python | A collection of extensions and data-loaders for few-shot learning & meta-learning in PyTorch |
| Pytorchtricks | 907 | 9 months ago | | | | | | Some tricks of pytorch... :star: |
| Medicaltorch | 724 | 2 years ago | 2 | November 24, 2018 | 14 | apache-2.0 | Python | A medical imaging framework for Pytorch |
| Mobilepose | 588 | 4 months ago | | | 12 | | Jupyter Notebook | Light-weight Single Person Pose Estimator |
| Deepsvg | 556 | a year ago | | | 17 | mit | Jupyter Notebook | [NeurIPS 2020] Official code for the paper "DeepSVG: A Hierarchical Generative Network for Vector Graphics Animation". Includes a PyTorch library for deep learning with SVG data. |
| Monodepth Pytorch | 411 | 4 years ago | | | 11 | | Python | Unofficial implementation of the Unsupervised Monocular Depth Estimation neural network MonoDepth in PyTorch |
| Pytorch Unet | 357 | 3 years ago | | | 6 | mit | Jupyter Notebook | Simple PyTorch implementations of U-Net/FullyConvNet (FCN) for image segmentation |
| Ssd Pytorch | 335 | a year ago | | | 2 | apache-2.0 | Python | SSD object detection (Single Shot MultiBox Detector): simple, clear, easy to use, fully commented in Chinese, with single-machine multi-GPU training and video detection |
A collection of extensions and data-loaders for few-shot learning & meta-learning in PyTorch. Torchmeta contains popular meta-learning benchmarks, fully compatible with both `torchvision` and PyTorch's `DataLoader`. It also provides a thin extension of PyTorch's `Module`, called `MetaModule`, that simplifies the creation of certain meta-learning models (e.g. gradient-based meta-learning methods). See the MAML example for an example using `MetaModule`.
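As a rough illustration of what `MetaModule` enables, here is a minimal sketch, assuming the meta-module API (`MetaSequential`, `MetaLinear`, `get_subdict`, `gradient_update_parameters`) behaves as in the Torchmeta documentation; the layer sizes and random data are arbitrary, and this is not a substitute for the official MAML example.

```python
# Minimal sketch of a MetaModule-based model (assumed API: torchmeta.modules
# and torchmeta.utils.gradient_based; layer sizes and data are arbitrary).
import torch
import torch.nn.functional as F
from torchmeta.modules import MetaModule, MetaSequential, MetaLinear
from torchmeta.utils.gradient_based import gradient_update_parameters

class TinyClassifier(MetaModule):
    def __init__(self, in_features=784, num_classes=5):
        super().__init__()
        # Meta-variants of standard layers accept an explicit `params` dict in forward()
        self.net = MetaSequential(
            MetaLinear(in_features, 64),
            torch.nn.ReLU(),
            MetaLinear(64, num_classes),
        )

    def forward(self, inputs, params=None):
        # get_subdict routes the relevant subset of parameters to the submodule
        return self.net(inputs, params=self.get_subdict(params, 'net'))

model = TinyClassifier()
support_x = torch.randn(25, 784)          # e.g. 5 ways * 5 shots of flattened images
support_y = torch.randint(0, 5, (25,))

# One MAML-style inner-loop update: the adapted parameters are returned as a
# dictionary and passed back into forward(), rather than written in place.
inner_loss = F.cross_entropy(model(support_x), support_y)
adapted_params = gradient_update_parameters(model, inner_loss, step_size=0.4)
query_logits = model(torch.randn(75, 784), params=adapted_params)
```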
You can install Torchmeta either with Python's package manager pip, or from source. To avoid any conflict with your existing Python setup, it is suggested to work in a virtual environment with `virtualenv`. To install `virtualenv`:
```bash
pip install --upgrade virtualenv
virtualenv venv
source venv/bin/activate
```
This is the recommended way to install Torchmeta:
```bash
pip install torchmeta
```
You can also install Torchmeta from source. This is recommended if you want to contribute to Torchmeta.
```bash
git clone https://github.com/tristandeleu/pytorch-meta.git
cd pytorch-meta
python setup.py install
```
The minimal example below shows how to create a dataloader for the 5-shot 5-way Omniglot dataset with Torchmeta. The dataloader loads a batch of randomly generated tasks, with all the samples concatenated into a single tensor. For more examples, check the examples folder.
```python
from torchmeta.datasets.helpers import omniglot
from torchmeta.utils.data import BatchMetaDataLoader

dataset = omniglot("data", ways=5, shots=5, test_shots=15, meta_train=True, download=True)
dataloader = BatchMetaDataLoader(dataset, batch_size=16, num_workers=4)

for batch in dataloader:
    train_inputs, train_targets = batch["train"]
    print('Train inputs shape: {0}'.format(train_inputs.shape))    # (16, 25, 1, 28, 28)
    print('Train targets shape: {0}'.format(train_targets.shape))  # (16, 25)

    test_inputs, test_targets = batch["test"]
    print('Test inputs shape: {0}'.format(test_inputs.shape))      # (16, 75, 1, 28, 28)
    print('Test targets shape: {0}'.format(test_targets.shape))    # (16, 75)
```
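The leading dimension of these tensors is the meta-batch: with `batch_size=16`, `ways=5`, `shots=5` and `test_shots=15`, each batch holds 16 tasks, each contributing 5 × 5 = 25 support samples and 5 × 15 = 75 query samples. The hedged sketch below (not part of the original example) simply slices a batch back into individual tasks, which is where a per-task adaptation step would typically go.

```python
from torchmeta.datasets.helpers import omniglot
from torchmeta.utils.data import BatchMetaDataLoader

# Illustration only: unpack the individual tasks contained in one meta-batch.
dataset = omniglot("data", ways=5, shots=5, test_shots=15, meta_train=True, download=True)
dataloader = BatchMetaDataLoader(dataset, batch_size=16, num_workers=4)

batch = next(iter(dataloader))
train_inputs, train_targets = batch["train"]   # (16, 25, 1, 28, 28), (16, 25)
test_inputs, test_targets = batch["test"]      # (16, 75, 1, 28, 28), (16, 75)

for support_x, support_y, query_x, query_y in zip(train_inputs, train_targets,
                                                  test_inputs, test_targets):
    # A meta-learning algorithm would adapt a model on the support set
    # (support_x, support_y) and evaluate the adapted model on the query set.
    print(support_x.shape, support_y.shape, query_x.shape, query_y.shape)
```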
Helper functions are only available for some of the datasets; however, all of them are accessible through the unified interface provided by Torchmeta. The variable `dataset` defined above is equivalent to the following:
```python
from torchmeta.datasets import Omniglot
from torchmeta.transforms import Categorical, ClassSplitter, Rotation
from torchvision.transforms import Compose, Resize, ToTensor
from torchmeta.utils.data import BatchMetaDataLoader

dataset = Omniglot("data",
                   # Number of ways
                   num_classes_per_task=5,
                   # Resize the images to 28x28 and convert them to PyTorch tensors (from Torchvision)
                   transform=Compose([Resize(28), ToTensor()]),
                   # Transform the labels to integers (e.g. ("Glagolitic/character01", "Sanskrit/character14", ...) to (0, 1, ...))
                   target_transform=Categorical(num_classes=5),
                   # Creates new virtual classes with rotated versions of the images (from Santoro et al., 2016)
                   class_augmentations=[Rotation([90, 180, 270])],
                   meta_train=True,
                   download=True)
dataset = ClassSplitter(dataset, shuffle=True, num_train_per_class=5, num_test_per_class=15)
dataloader = BatchMetaDataLoader(dataset, batch_size=16, num_workers=4)
```
Note that the dataloader, receiving the dataset, remains the same.
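The same two patterns carry over to the other benchmarks. As a hedged illustration, the sketch below assumes the `miniimagenet` helper in `torchmeta.datasets.helpers` accepts the same keyword arguments as `omniglot` above; the shape in the comment is an assumption based on Mini-ImageNet's 84x84 RGB images.

```python
from torchmeta.datasets.helpers import miniimagenet
from torchmeta.utils.data import BatchMetaDataLoader

# Assumed to mirror the omniglot helper: 5-way 1-shot Mini-ImageNet tasks.
dataset = miniimagenet("data", ways=5, shots=1, test_shots=15, meta_train=True, download=True)
dataloader = BatchMetaDataLoader(dataset, batch_size=16, num_workers=4)

batch = next(iter(dataloader))
train_inputs, train_targets = batch["train"]
print(train_inputs.shape)  # expected: (16, 5, 3, 84, 84) for 5 ways * 1 shot
```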
Tristan Deleu, Tobias Würfl, Mandana Samiei, Joseph Paul Cohen, and Yoshua Bengio. Torchmeta: A Meta-Learning library for PyTorch, 2019 [ArXiv]
If you want to cite Torchmeta, use the following Bibtex entry:
```bibtex
@misc{deleu2019torchmeta,
  title={{Torchmeta: A Meta-Learning library for PyTorch}},
  author={Deleu, Tristan and W\"urfl, Tobias and Samiei, Mandana and Cohen, Joseph Paul and Bengio, Yoshua},
  year={2019},
  url={https://arxiv.org/abs/1909.06576},
  note={Available at: https://github.com/tristandeleu/pytorch-meta}
}
```