Status: Archive (code is provided as-is, no updates expected)
Code and models from the paper "Generative Pretraining from Pixels".
Supported Platforms:

- Ubuntu 16.04
You can get miniconda from https://docs.conda.io/en/latest/miniconda.html, or install the dependencies shown below manually.
conda create --name image-gpt python=3.7.3
conda activate image-gpt
conda install numpy=1.16.3
conda install tensorflow-gpu=1.13.1
conda install imageio=2.8.0
conda install requests=2.21.0
conda install tqdm=4.46.0
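If you want to confirm the environment resolved to the pinned versions, a quick check like the following can help (a minimal sketch; the expected versions are simply the ones pinned above):

```python
# Sanity-check the image-gpt environment (run after `conda activate image-gpt`).
import numpy
import tensorflow as tf

print("numpy", numpy.__version__)        # expect 1.16.3
print("tensorflow", tf.__version__)      # expect 1.13.1
print("GPU available:", tf.test.is_gpu_available())  # True if tensorflow-gpu can see a GPU
```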
This repository is meant to be a starting point for researchers and engineers to experiment with image GPT (iGPT). Our code forks GPT-2 to highlight that it can be easily applied across domains. The diff from `gpt-2/src/model.py` to `image-gpt/src/model.py` includes a new activation function, the renaming of several variables, and the introduction of a start-of-sequence token, none of which change the model architecture.
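The README does not spell out the activation here. As an illustration only, a sigmoid-based GELU approximation is a common drop-in replacement in GPT-style models; treat the exact form below as an assumption, not a quote of `image-gpt/src/model.py`:

```python
import tensorflow as tf

def gelu_sigmoid_approx(x):
    # Sigmoid-weighted GELU approximation: x * sigmoid(1.702 * x).
    # Hypothetical stand-in for the "new activation function" the diff mentions.
    return x * tf.sigmoid(1.702 * x)
```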
To download a model checkpoint, run `download.py`. The `--model` argument should be one of "s", "m", or "l", and the `--ckpt` argument should be one of "131000", "262000", "524000", or "1000000".
python download.py --model s --ckpt 1000000
This command downloads the iGPT-S checkpoint at 1M training iterations. The default download directory is set to `/root/downloads/` and can be changed with the `--download_dir` argument.
To download datasets, run `download.py` with the `--dataset` argument set to "imagenet" or "cifar10".
python download.py --model s --ckpt 1000000 --dataset imagenet
This command additionally downloads 32x32 ImageNet encoded with the 9-bit color palette described in the paper. The datasets we provide are center-cropped images intended for evaluation; randomly cropped images are required to faithfully replicate training.
To download the color cluster file defining our 9-bit color palette, run `download.py` with the `--clusters` flag set.
python download.py --model s --ckpt 1000000 --dataset imagenet --clusters
This command additionally downloads the color cluster file. `src/run.py:sample` shows how to decode from 9-bit color to RGB and `src/utils.py:color_quantize` shows how to go the other way around.
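For intuition, quantizing to the 9-bit palette is nearest-neighbor assignment against 512 color cluster centers, and decoding is a table lookup. A minimal NumPy sketch, assuming `clusters` is a float array of shape [512, 3] loaded from the downloaded cluster file and pixels are scaled to the same range (this is a sketch of the idea, not the repo's exact implementation):

```python
import numpy as np

def color_quantize(pixels, clusters):
    # pixels: [N, 3] RGB values; clusters: [512, 3] palette centers.
    # Assign each pixel to its nearest cluster by squared Euclidean distance.
    d = np.sum((pixels[:, None, :] - clusters[None, :, :]) ** 2, axis=-1)  # [N, 512]
    return np.argmin(d, axis=-1)  # [N] indices in [0, 512)

def color_decode(indices, clusters):
    # Map 9-bit palette indices back to RGB by table lookup.
    return clusters[indices]  # [N, 3]
```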
Once the desired checkpoint and color cluster file are downloaded, we can run the script in sampling mode. The following commands sample from iGPT-S, iGPT-M, and iGPT-L respectively:
python src/run.py --sample --n_embd 512 --n_head 8 --n_layer 24
python src/run.py --sample --n_embd 1024 --n_head 8 --n_layer 36
python src/run.py --sample --n_embd 1536 --n_head 16 --n_layer 48
If your data is not in `/root/downloads/`, set `--ckpt_path` and `--color_cluster_path` manually. To run on fewer than 8 GPUs, use a command of the following form:
CUDA_VISIBLE_DEVICES=0,1 python src/run.py --sample --n_embd 512 --n_head 8 --n_layer 24 --n_gpu 2
Once the desired checkpoint and evaluation dataset are downloaded, we can run the script in evaluation mode. The following commands evaluate iGPT-S, iGPT-M, and iGPT-L on ImageNet respectively:
python src/run.py --eval --n_embd 512 --n_head 8 --n_layer 24
python src/run.py --eval --n_embd 1024 --n_head 8 --n_layer 36
python src/run.py --eval --n_embd 1536 --n_head 16 --n_layer 48
If your data is not in `/root/downloads/`, set `--ckpt_path` and `--data_path` manually. You should see that the test generative losses are 2.0895, 2.0614, and 2.0466, matching Figure 3 in the paper.
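If you prefer these numbers in bits, and assuming the reported generative loss is a natural-log cross-entropy per token (an assumption; the README does not state the unit), the conversion is a division by ln 2:

```python
import math

# Convert the reported losses from nats to bits per token (unit is assumed).
for loss_nats in (2.0895, 2.0614, 2.0466):
    print(round(loss_nats / math.log(2), 2))  # ≈ 3.01, 2.97, 2.95
```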
Please use the following BibTeX entry:
@article{chen2020generative,
  title={Generative Pretraining from Pixels},
  author={Chen, Mark and Radford, Alec and Child, Rewon and Wu, Jeff and Jun, Heewoo and Dhariwal, Prafulla and Luan, David and Sutskever, Ilya},
  year={2020}
}