
PyTorch C++ Samples

This repository provides deep-learning sample programs for PyTorch, written in C++.

Description

PyTorch is one of the most popular deep learning frameworks.
Python source code for it is abundant on the Web, so it is easy to write deep learning programs in Python.
However, there is very little equivalent source code written in C++, a compiled language.
I therefore hope this repository will help many programmers by providing PyTorch sample programs written in C++.
In addition, I intend to keep the programs updated for the latest version of PyTorch.

Implementation

Multiclass Classification

| Model | Paper | Conference/Journal | Code | Release Version |
|---|---|---|---|---|
| AlexNet | A. Krizhevsky et al. | NeurIPS 2012 | AlexNet | v1.7.0 |
| VGGNet | K. Simonyan et al. | ICLR 2015 | VGGNet | v1.7.0 |
| ResNet | K. He et al. | CVPR 2016 | ResNet | v1.7.0 |
| Discriminator | A. Radford et al. | ICLR 2016 | Discriminator | v1.8.1 (Latest) |

Dimensionality Reduction

| Model | Paper | Conference/Journal | Code | Release Version |
|---|---|---|---|---|
| Autoencoder | G. E. Hinton et al. | Science 2006 | AE1d | v1.8.1 (Latest) |
| | | | AE2d | v1.5.0 |
| Denoising Autoencoder | P. Vincent et al. | ICML 2008 | DAE2d | v1.7.0 |

Generative Modeling

| Model | Paper | Conference/Journal | Code | Release Version |
|---|---|---|---|---|
| Variational Autoencoder | D. P. Kingma et al. | ICLR 2014 | VAE2d | v1.5.1 |
| DCGAN | A. Radford et al. | ICLR 2016 | DCGAN | v1.5.1 |
| Wasserstein Autoencoder | I. Tolstikhin et al. | ICLR 2018 | WAE2d GAN | v1.7.0 |
| | | | WAE2d MMD | |

Image-to-Image Translation

| Model | Paper | Conference/Journal | Code | Release Version |
|---|---|---|---|---|
| U-Net | O. Ronneberger et al. | MICCAI 2015 | U-Net Regression | v1.5.1 |
| pix2pix | P. Isola et al. | CVPR 2017 | pix2pix | v1.5.1 |

Semantic Segmentation

| Model | Paper | Conference/Journal | Code | Release Version |
|---|---|---|---|---|
| SegNet | V. Badrinarayanan et al. | CVPR 2015 | SegNet | v1.7.0 |
| U-Net | O. Ronneberger et al. | MICCAI 2015 | U-Net Classification | v1.5.1 |

Object Detection

| Model | Paper | Conference/Journal | Code | Release Version |
|---|---|---|---|---|
| YOLOv1 | J. Redmon et al. | CVPR 2016 | YOLOv1 | v1.8.0 |
| YOLOv2 | J. Redmon et al. | CVPR 2017 | YOLOv2 | v1.8.0 |

Anomaly Detection

| Model | Paper | Conference/Journal | Code | Release Version |
|---|---|---|---|---|
| AnoGAN | T. Schlegl et al. | IPMI 2017 | AnoGAN2d | v1.7.0 |
| DAGMM | B. Zong et al. | ICLR 2018 | DAGMM2d | v1.6.0 |
| EGBAD | H. Zenati et al. | ICLR Workshop 2018 | EGBAD2d | v1.7.0 |
| GANomaly | S. Akçay et al. | ACCV 2018 | GANomaly2d | v1.7.0 |
| Skip-GANomaly | S. Akçay et al. | IJCNN 2019 | Skip-GANomaly2d | v1.7.0 |

Requirement

1. PyTorch C++

Please select the environment on the PyTorch official site as follows.
PyTorch official: https://pytorch.org/

  • PyTorch Build: Stable (1.8.1)
  • Your OS: Linux
  • Package: LibTorch
  • Language: C++ / Java
  • CUDA: 10.2
  • Run this Command: Download here (cxx11 ABI)
      • GPU: https://download.pytorch.org/libtorch/cu102/libtorch-cxx11-abi-shared-with-deps-1.8.1.zip
      • CPU: https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-1.8.1%2Bcpu.zip


2. OpenCV

Version 3.0.0 or later is required.
OpenCV is used for pre-processing and post-processing.
Please refer to other sites for detailed installation instructions.

3. OpenMP

This is used to load data in parallel.
(It is usually preinstalled on standard Linux distributions.)

4. Boost

This is used for parsing command-line arguments, etc.

$ sudo apt install libboost-dev libboost-all-dev

5. Gnuplot

This is used to display the loss graph.

$ sudo apt install gnuplot

6. libpng/png++/zlib

This is used to load and save index-color images in semantic segmentation.

$ sudo apt install libpng-dev libpng++-dev zlib1g-dev

Preparation

1. Git Clone

$ git clone https://github.com/koba-jon/pytorch_cpp.git
$ cd pytorch_cpp

2. Path Setting

$ vi utils/CMakeLists.txt

Please change the 4th line of "CMakeLists.txt" to match the path of your "libtorch" directory.
The following example assumes the "libtorch" directory is located directly under your home directory.

3: # LibTorch
4: set(LIBTORCH_DIR $ENV{HOME}/libtorch)
5: list(APPEND CMAKE_PREFIX_PATH ${LIBTORCH_DIR})

3. Compiler Install

If you don't have g++ version 8 or above, install it.

$ sudo apt install g++-8

4. Execution

Please move to the directory of each model and refer to its "README.md".

Utility

1. Making Original Dataset

Please create a symbolic link to the original dataset.
The following is an example for "AE2d" using the "celebA" dataset.

$ cd Dimensionality_Reduction/AE2d/datasets
$ ln -s <dataset_path> ./celebA_org

Substitute the path of your dataset for "<dataset_path>".
Please make sure the training or test data is located directly under "<dataset_path>".

$ vi ../../../scripts/hold_out.sh

Please edit the file to match your original dataset.

#!/bin/bash

SCRIPT_DIR=$(cd $(dirname $0); pwd)

python3 ${SCRIPT_DIR}/hold_out.py \
    --input_dir "celebA_org" \
    --output_dir "celebA" \
    --train_rate 9 \
    --valid_rate 1

Running this script splits the dataset into training and validation data.

$ sudo apt install python3 python3-pip
$ pip3 install natsort
$ sh ../../../scripts/hold_out.sh
$ cd ../../..

2. Data Input System

This repository provides transform, dataset and dataloader components for data input.
They correspond to the following source files, to which new functionality can be added:

  • transforms.cpp
  • transforms.hpp
  • datasets.cpp
  • datasets.hpp
  • dataloader.cpp
  • dataloader.hpp

3. Check Progress

This repository has a feature to check training progress.
It shows the epoch number, loss, elapsed time and speed during training.
It corresponds to the following source files:

  • progress.cpp
  • progress.hpp

4. Monitoring System

This repository has a monitoring system for training.
You can watch output images and the loss graph.
Output images are saved under "samples" in the "checkpoints" directory created during training,
and the loss graph under "graph" in the same directory.
It corresponds to the following source files:

  • visualizer.cpp
  • visualizer.hpp

Conclusion

I hope this repository will help many programmers by providing PyTorch sample programs written in C++.
If you have any problems with the source code of this repository, please feel free to open an issue.
Let's have a good development and research life!

License

You are free to use all source code in this repository.
(See the license file for details.)

However, if you make use of external libraries (e.g. for redistribution), you should be careful.
At a minimum, the license notices at the following URL are required.
In addition, third-party copyrights belong to their respective owners.

