| Project Name | Stars | Most Recent Commit | Open Issues | License | Language | Description |
|---|---|---|---|---|---|---|
| Pytorch Studiogan | 3,049 | 2 months ago | 23 | other | Python | StudioGAN is a PyTorch library providing implementations of representative Generative Adversarial Networks (GANs) for conditional/unconditional image generation. |
| Tensorflow2.0 Examples | 1,692 | 4 months ago | 99 | mit | Jupyter Notebook | 🙄 Difficult algorithms, simple code. |
| Compare Gan | 1,555 | 4 years ago | 10 | apache-2.0 | Python | Compare GAN code. |
| Tf Tutorials | 526 | 6 years ago | 3 | | Jupyter Notebook | A collection of deep learning tutorials using TensorFlow and Python. |
| Pytorch Spectral Normalization Gan | 421 | 5 years ago | 9 | mit | Python | Implementation of the paper by Miyato et al.: https://openreview.net/forum?id=B1QRgziT- |
| Machine Learning Is All You Need | 253 | a year ago | | | Python | 🔥🌟 *Machine Learning 格物志*: ML + DL + RL basic code and notes using sklearn, PyTorch, TensorFlow, and Keras, and, most importantly, from scratch! 💪 This repository is all you need! |
| Iseebetter | 223 | 2 years ago | 10 | mit | C++ | iSeeBetter: Spatio-Temporal Video Super-Resolution using Recurrent-Generative Back-Projection Networks (Python 3, PyTorch, GANs, CNNs, ResNets, RNNs). Published in the Springer Journal of Computational Visual Media, September 2020, Tsinghua University Press. |
| Deep Learning With Python | 197 | 2 years ago | 2 | mit | Jupyter Notebook | Deep learning code and projects using Python. |
| Pytorch Gan Collections | 153 | a year ago | | | Python | PyTorch implementations of DCGAN, WGAN-GP, and SNGAN. |
| 3 Min Pytorch | 135 | 2 years ago | 9 | mit | Jupyter Notebook | Example code for the book "3-Minute Deep Learning, PyTorch Flavor" (펭귄브로의 3분 딥러닝, 파이토치맛). |
This repository offers TensorFlow implementations of many components related to Generative Adversarial Networks. The code is configurable via Gin and runs on GPUs, TPUs, and CPUs. Several research papers make use of this repository, including:
- Are GANs Created Equal? A Large-Scale Study [Code]
  Mario Lucic*, Karol Kurach*, Marcin Michalski, Sylvain Gelly, Olivier Bousquet [NeurIPS 2018]
- The GAN Landscape: Losses, Architectures, Regularization, and Normalization [Code] [Colab]
  Karol Kurach*, Mario Lucic*, Xiaohua Zhai, Marcin Michalski, Sylvain Gelly [ICML 2019]
- Assessing Generative Models via Precision and Recall [Code]
  Mehdi S. M. Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, Sylvain Gelly [NeurIPS 2018]
- GILBO: One Metric to Measure Them All [Code]
  Alexander A. Alemi, Ian Fischer [NeurIPS 2018]
- A Case for Object Compositionality in Deep Generative Models of Images [Code]
  Sjoerd van Steenkiste, Karol Kurach, Sylvain Gelly [2018]
- On Self Modulation for Generative Adversarial Networks [Code]
  Ting Chen, Mario Lucic, Neil Houlsby, Sylvain Gelly [ICLR 2019]
- Self-Supervised GANs via Auxiliary Rotation Loss [Code] [Colab]
  Ting Chen, Xiaohua Zhai, Marvin Ritter, Mario Lucic, Neil Houlsby [CVPR 2019]
- High-Fidelity Image Generation With Fewer Labels [Code] [Blog Post] [Colab]
  Mario Lucic*, Michael Tschannen*, Marvin Ritter*, Xiaohua Zhai, Olivier Bachem, Sylvain Gelly [ICML 2019]
You can easily install the library and all necessary dependencies by running `pip install -e .` from the `compare_gan/` folder.
Simply run `main.py`, passing a `--model_dir` (where checkpoints are stored) and a `--gin_config` (which defines which model is trained on which dataset, along with other training options). We provide several example configurations in the `example_configs/` folder.
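Put together, a training run under the flags above might look like the following sketch (the paths and the config filename are placeholders, and the command is echoed rather than executed so the snippet is safe to run anywhere):

```shell
# Hypothetical paths: checkpoints go to MODEL_DIR; the Gin config selects
# the model, dataset, and training options. The config name is illustrative.
MODEL_DIR=/tmp/gan_experiment
GIN_CONFIG=example_configs/dcgan_celeba64.gin

# Echo the command instead of running it, since it needs the repo and data.
echo python main.py --model_dir "$MODEL_DIR" --gin_config "$GIN_CONFIG"
```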
To see all available options, run `python main.py --help`. Main options:

- `--schedule=train` (default): train the model. Training is resumed from the last saved checkpoint.
- `--schedule=continuous_eval --eval_every_steps=0`: evaluate every checkpoint. To evaluate only checkpoints where the step count is divisible by 5000, use `--schedule=continuous_eval --eval_every_steps=5000`. By default, 3 averaging runs are used to estimate the Inception Score and the FID score. Keep in mind that when running locally on a single GPU it may not be possible to run training and evaluation simultaneously due to memory constraints.
- `--schedule=eval_after_train --eval_every_steps=0`: train the model and then evaluate it.
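As a sketch, evaluating every 5000th checkpoint of a run could look like this (the `--model_dir` value is a placeholder; the command is echoed rather than executed since it needs a trained model):

```shell
# Evaluate only checkpoints whose step count is divisible by 5000.
MODEL_DIR=/tmp/gan_experiment

# Echoed rather than executed, since it needs checkpoints on disk.
echo python main.py --model_dir "$MODEL_DIR" \
  --schedule=continuous_eval --eval_every_steps=5000
```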
We recommend using the ctpu tool to create a Cloud TPU and a corresponding Compute Engine VM. We use a v3-128 Cloud TPU v3 Pod for training models on ImageNet at 128x128 resolution. You can use smaller slices if you reduce the batch size (`options.batch_size` in the Gin config) or the model parameters; keep in mind that model quality may change. Before training, make sure the environment variable `TPU_NAME` is set. Running evaluation on TPUs is currently not supported; use a VM with a single GPU instead.
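The batch size lives in the Gin config rather than on the command line. A hypothetical fragment (only the `options.batch_size` binding is named above; the value is illustrative):

```
# Gin binding (illustrative value): shrink the batch to fit a smaller TPU slice.
options.batch_size = 512
```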
Compare GAN uses TensorFlow Datasets, which will automatically download and prepare the data. For ImageNet you will need to download the archive yourself. For CelebA-HQ you need to download and prepare the images on your own. If you are using TPUs, make sure to point the training script to your Google Cloud Storage bucket (`--tfds_data_dir`).
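On TPUs, that might look like the following sketch (the bucket name, TPU name, and config filename are placeholders; the command is echoed rather than executed since it needs a real TPU and bucket):

```shell
# Hypothetical bucket where TFDS reads/writes prepared datasets.
TFDS_DATA_DIR=gs://my-gan-bucket/tensorflow_datasets
export TPU_NAME=my-tpu   # must be set before training on TPUs (placeholder name)

# Echoed rather than executed, since it needs a real TPU and bucket.
echo python main.py --model_dir gs://my-gan-bucket/model \
  --gin_config example_configs/dcgan_celeba64.gin \
  --tfds_data_dir "$TFDS_DATA_DIR"
```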