Project | Description | Stars | Last Commit | License | Language |
---|---|---|---|---|---|
Pretrained Models.pytorch | Pretrained ConvNets for PyTorch: NASNet, ResNeXt, ResNet, InceptionV4, InceptionResNetV2, Xception, DPN, etc. | 8,094 | 2 years ago | BSD-3-Clause | Python |
Segmentation_models.pytorch | Segmentation models with pretrained backbones (PyTorch). | 6,982 | 6 days ago | MIT | Python |
Models | Officially maintained models supported by PaddlePaddle, covering CV, NLP, speech, recommendation, time series, large models, and more. | 6,736 | a month ago | Apache-2.0 | Python |
Deep Residual Networks | Deep Residual Learning for Image Recognition | 5,695 | 5 years ago | MIT | |
Paddleclas | A treasure chest for visual classification and recognition powered by PaddlePaddle | 4,735 | 10 hours ago | Apache-2.0 | Python |
Simclr | SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners | 3,433 | 6 days ago | Apache-2.0 | Jupyter Notebook |
Pytorch Studiogan | A PyTorch library providing implementations of representative Generative Adversarial Networks (GANs) for conditional/unconditional image generation. | 3,049 | 3 months ago | Other | Python |
Imgclsmob | Sandbox for training deep learning networks | 2,399 | a year ago | MIT | Python |
Flops Counter.pytorch | FLOPs counter for convolutional networks in the PyTorch framework | 2,249 | 2 months ago | MIT | Python |
Mmclassification | OpenMMLab Image Classification Toolbox and Benchmark | 2,015 | 10 hours ago | Apache-2.0 | Python |
This project includes the semi-supervised and semi-weakly supervised ImageNet models introduced in "Billion-scale Semi-Supervised Learning for Image Classification" https://arxiv.org/abs/1905.00546.
"Semi-supervised" (SSL) ImageNet models are pre-trained on a subset of the unlabeled, public YFCC100M image dataset and fine-tuned on the ImageNet1K training set, following the semi-supervised training framework in the paper mentioned above. In this case, the high-capacity teacher model was trained only with labeled examples.
"Semi-weakly" supervised (SWSL) ImageNet models are pre-trained on 940 million public images whose 1.5K hashtags match the 1,000 ImageNet1K synsets, followed by fine-tuning on the ImageNet1K dataset. In this case, the associated hashtags are used only to build a better teacher model. When training the student model, the hashtags are ignored and the student is pre-trained on a subset of 64M images selected by the teacher model from the same 940 million public images.
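The teacher-driven selection step described above can be sketched in plain Python: the teacher scores each unlabeled image per class, and the highest-confidence images for each class form the student's pretraining set. This is a toy illustration; `select_student_pretraining_set` and the score triples are hypothetical, not code from this repository.

```python
from collections import defaultdict

def select_student_pretraining_set(teacher_scores, top_k):
    """Rank unlabeled images by teacher confidence per class and keep the
    top_k images for each class, as in the teacher-student framework."""
    per_class = defaultdict(list)
    for image_id, class_id, score in teacher_scores:
        per_class[class_id].append((score, image_id))
    selected = {}
    for class_id, scored in per_class.items():
        scored.sort(reverse=True)  # highest teacher confidence first
        selected[class_id] = [image_id for _, image_id in scored[:top_k]]
    return selected

# Toy example: (image_id, class_id, teacher_confidence) triples
scores = [
    ("img0", 0, 0.9), ("img1", 0, 0.4), ("img2", 0, 0.7),
    ("img3", 1, 0.8), ("img4", 1, 0.6),
]
print(select_student_pretraining_set(scores, top_k=2))
# → {0: ['img0', 'img2'], 1: ['img3', 'img4']}
```

In the paper's setting the same idea is applied at scale: the teacher ranks the 940M-image pool and the top-ranked 64M images become the student's pretraining data.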
We provide the following semi-supervised and semi-weakly supervised ImageNet models. The teacher models used to train them have the ResNeXt-101 32x48d architecture.
The semi-weakly supervised ResNet and ResNeXt models in the table below significantly improve top-1 accuracy on the ImageNet validation set compared to training from scratch or to other training methods in the literature as of September 2019. For example, we achieve a state-of-the-art top-1 accuracy of 81.2% on ImageNet for the widely adopted ResNet-50 architecture.
Architecture | Supervision | #Parameters | FLOPs | Top-1 Acc. (%) | Top-5 Acc. (%) |
---|---|---|---|---|---|
ResNet-18 | semi-supervised | 14M | 2B | 72.8 | 91.5 |
ResNet-50 | semi-supervised | 25M | 4B | 79.3 | 94.9 |
ResNeXt-50 32x4d | semi-supervised | 25M | 4B | 80.3 | 95.4 |
ResNeXt-101 32x4d | semi-supervised | 42M | 8B | 81.0 | 95.7 |
ResNeXt-101 32x8d | semi-supervised | 88M | 16B | 81.7 | 96.1 |
ResNeXt-101 32x16d | semi-supervised | 193M | 36B | 81.9 | 96.2 |
ResNet-18 | semi-weakly supervised | 14M | 2B | 73.4 | 91.9 |
ResNet-50 | semi-weakly supervised | 25M | 4B | 81.2 | 96.0 |
ResNeXt-50 32x4d | semi-weakly supervised | 25M | 4B | 82.2 | 96.3 |
ResNeXt-101 32x4d | semi-weakly supervised | 42M | 8B | 83.4 | 96.8 |
ResNeXt-101 32x8d | semi-weakly supervised | 88M | 16B | 84.3 | 97.2 |
ResNeXt-101 32x16d | semi-weakly supervised | 193M | 36B | 84.8 | 97.4 |
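The Top-1/Top-5 columns report standard top-k accuracy: a prediction counts as correct if the true label is among the model's k highest-scoring classes. A minimal sketch of the metric on toy logits (not tied to the released models' evaluation code):

```python
def topk_accuracy(logits, labels, k):
    """Fraction of examples whose true label is among the k highest scores."""
    hits = 0
    for scores, label in zip(logits, labels):
        # indices of the k largest scores, highest first
        topk = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
        hits += label in topk
    return hits / len(labels)

# Toy batch: 3 examples, 4 classes
logits = [[0.1, 0.7, 0.2, 0.0],
          [0.5, 0.1, 0.3, 0.1],
          [0.2, 0.2, 0.1, 0.5]]
labels = [1, 2, 0]
print(topk_accuracy(logits, labels, k=1))  # → 0.3333333333333333
print(topk_accuracy(logits, labels, k=2))  # → 1.0
```

Top-5 accuracy is always at least as high as top-1, which is why the right-hand column dominates the left in the table above.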
The models are available via `torch.hub`. As an example, to load the semi-weakly supervised ResNet-50 model, simply run:

```python
import torch

model = torch.hub.load('facebookresearch/semi-supervised-ImageNet1K-models', 'resnet50_swsl')
```
Please refer to the torch.hub documentation for a full example of using the model to classify an image.
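Before inference, inputs to ImageNet classifiers are conventionally resized, center-cropped to 224×224, and normalized with the standard ImageNet channel statistics. Below is a NumPy stand-in for the normalization step only (in practice you would use `torchvision.transforms`; the mean/std values are the usual ImageNet statistics, an assumption not stated in this README):

```python
import numpy as np

# Usual ImageNet channel statistics (an assumption; confirm against the repo)
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def preprocess(image):
    """Normalize an HxWx3 uint8 image into the CHW float layout PyTorch
    models expect. Resizing/cropping to 224x224 is assumed already done."""
    x = image.astype(np.float32) / 255.0    # scale pixel values to [0, 1]
    x = (x - IMAGENET_MEAN) / IMAGENET_STD  # per-channel normalization
    return x.transpose(2, 0, 1)             # HWC -> CHW

dummy = np.zeros((224, 224, 3), dtype=np.uint8)  # stand-in for a real image
out = preprocess(dummy)
print(out.shape)  # → (3, 224, 224)
```

The resulting array would then be batched (shape `(1, 3, 224, 224)`) and converted to a tensor before being passed to the loaded model.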
If you use the models released in this repository, please cite the following publication.
@misc{yalniz2019billionscale,
title={Billion-scale semi-supervised learning for image classification},
author={I. Zeki Yalniz and Hervé Jégou and Kan Chen and Manohar Paluri and Dhruv Mahajan},
year={2019},
eprint={1905.00546},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
The models in this repository are released under the CC-BY-NC 4.0 license. See LICENSE for additional details.