Project Name | Description | Stars | Most Recent Commit | Open Issues | Latest Release | License | Language
---|---|---|---|---|---|---|---
Generative_inpainting | DeepFill v1/v2 with Contextual Attention and Gated Convolution (CVPR 2018, ICCV 2019 Oral) | 2,676 | a year ago | 43 | | other | Python
Octaveconv_pytorch | PyTorch implementation of newly added convolutions | 520 | 3 years ago | 10 | | mit | Python
Dynamic Convolution Pytorch | PyTorch implementation of Dynamic Convolution: Attention over Convolution Kernels (CVPR 2020) | 352 | 10 months ago | 13 | | | Python
Iter Reason | Code for the Iterative Reasoning paper (CVPR 2018) | 258 | 5 years ago | 7 | | mit | Python
Eco | C++ (Visual Studio) implementation of ECO: Efficient Convolution Operators for Tracking | 124 | 5 years ago | 16 | | | C++
Camconvs | Code for the CVPR paper "CAM-Convs: Camera-Aware Multi-Scale Convolutions for Single-View Depth" | 87 | a year ago | 6 | | gpl-3.0 | Jupyter Notebook
Bsconv | Reference implementation for Blueprint Separable Convolutions (CVPR 2020) | 66 | 3 years ago | 4 | September 22, 2020 | bsd-3-clause-clear | Python
Sknet Pytorch | Nearly perfect & easily understandable PyTorch implementation of SKNet | 39 | a year ago | 1 | | | Python
Cs Stereo | Deep Material-aware Cross-spectral Stereo Matching (CVPR 2018) | 24 | 7 months ago | 2 | | mit | Python
Idn Tensorflow | TensorFlow implementation of IDN (CVPR 2018) | 21 | 4 years ago | | | | Python
An open-source framework for the generative image inpainting task, with support for Contextual Attention (CVPR 2018) and Gated Convolution (ICCV 2019 Oral).
For the code of the previous version (DeepFill v1), please check out branch `v1.0.0`.
CVPR 2018 Paper | ICCV 2019 Oral Paper | Project | Demo | YouTube v1 | YouTube v2 | BibTex
Free-form image inpainting results by our system built on gated convolution. Each triad shows, from left to right, the original image, the free-form input, and our result.
1. Requirements: install the TensorFlow toolkit neuralgym (run `pip install git+https://github.com/JiahuiYu/neuralgym`).
2. Training: run `python train.py`.
3. Resume training: run `python train.py`.
4. Testing: run `python test.py --image examples/input.png --mask examples/mask.png --output examples/output.png --checkpoint model_logs/your_model_dir` (a batch-testing sketch follows this list).
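If you want to test many image/mask pairs with the released `test.py`, a small driver script can loop over them. The sketch below only wraps the command-line interface shown above; the `examples/batch/` folder and the `*_input.png` / `*_mask.png` naming convention are assumptions for illustration, not part of the repository.

```python
# Batch-testing sketch: call test.py once per image/mask pair.
# Folder layout and file naming below are assumptions, not repository defaults.
import glob
import os
import subprocess

CHECKPOINT = "model_logs/your_model_dir"   # trained model directory, as above
BATCH_DIR = "examples/batch"               # hypothetical folder of test cases

for image_path in sorted(glob.glob(os.path.join(BATCH_DIR, "*_input.png"))):
    mask_path = image_path.replace("_input.png", "_mask.png")
    output_path = image_path.replace("_input.png", "_output.png")
    if not os.path.exists(mask_path):
        continue  # skip inputs that have no matching mask
    subprocess.run(
        ["python", "test.py",
         "--image", image_path,
         "--mask", mask_path,
         "--output", output_path,
         "--checkpoint", CHECKPOINT],
        check=True,
    )
```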
Download the model directories and put them under `model_logs/` (rename `checkpoint.txt` to `checkpoint`, because Google Drive automatically adds an extension after download; a renaming sketch is given after the example commands below). Run testing or resume training as described above. All models are trained with images of resolution 256x256 and a largest hole size of 128x128; beyond these sizes the results may deteriorate. We provide several example test cases. Please run:
# Places2 512x680 input
python test.py --image examples/places2/case1_input.png --mask examples/places2/case1_mask.png --output examples/places2/case1_output.png --checkpoint_dir model_logs/release_places2_256
# CelebA-HQ 256x256 input
# Please visit CelebA-HQ demo at: jhyu.me/deepfill
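Because Google Drive appends the extra extension on download, the renaming step above is easy to forget when several pretrained model directories are downloaded at once. The following is a minimal sketch, assuming the `model_logs/` layout described above; the script itself is not part of the repository.

```python
# Rename checkpoint.txt back to checkpoint in every downloaded model directory
# under model_logs/. This only automates the manual step described above.
import os

MODEL_ROOT = "model_logs"

for dirpath, _dirnames, filenames in os.walk(MODEL_ROOT):
    if "checkpoint.txt" in filenames and "checkpoint" not in filenames:
        src = os.path.join(dirpath, "checkpoint.txt")
        dst = os.path.join(dirpath, "checkpoint")
        os.rename(src, dst)
        print(f"renamed {src} -> {dst}")
```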
Note: please make sure the mask file completely covers the masked regions in the input file. You can check this by saving a difference image for visual inspection: `cv2.imwrite('new.png', img - mask)`.
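To make that check concrete, here is a minimal sketch of the suggested visualization, assuming the input image has its to-be-filled region painted white and the mask is white over the same region; the file names are placeholders for your own test case.

```python
# Visual check that the mask covers the painted region of the input image,
# using the img - mask trick suggested above. File names are placeholders.
import cv2

img = cv2.imread("examples/input.png")   # input with the region to fill painted white
mask = cv2.imread("examples/mask.png")   # white-on-black mask of that region

assert img is not None and mask is not None, "could not read image or mask"
assert img.shape == mask.shape, "image and mask must have the same size"

# Fully covered areas subtract to black, so any white patches left in new.png
# mark parts of the painted region that the mask does not cover.
cv2.imwrite("new.png", img - mask)
```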
Visualization on TensorBoard for training and validation is supported. Run `tensorboard --logdir model_logs --port 6006` to view training progress.
CC 4.0 Attribution-NonCommercial International
The software is for educational and academic research purposes only.
@article{yu2018generative,
title={Generative Image Inpainting with Contextual Attention},
author={Yu, Jiahui and Lin, Zhe and Yang, Jimei and Shen, Xiaohui and Lu, Xin and Huang, Thomas S},
journal={arXiv preprint arXiv:1801.07892},
year={2018}
}
@article{yu2018free,
title={Free-Form Image Inpainting with Gated Convolution},
author={Yu, Jiahui and Lin, Zhe and Yang, Jimei and Shen, Xiaohui and Lu, Xin and Huang, Thomas S},
journal={arXiv preprint arXiv:1806.03589},
year={2018}
}