TensorFlow Implementation of the Semantic Segmentation DeepLab_V3 CNN
DeepLab_V3 Image Semantic Segmentation Network

Implementation of the Semantic Segmentation DeepLab_V3 CNN as described in Rethinking Atrous Convolution for Semantic Image Segmentation.
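The defining operation of DeepLab_v3 is atrous (dilated) convolution, which enlarges the receptive field without adding parameters by spacing the kernel taps `rate` pixels apart. A minimal 1-D sketch in plain Python (the function below is illustrative, not code from this repository, which uses TensorFlow ops):

```python
def atrous_conv1d(signal, kernel, rate):
    """1-D atrous (dilated) convolution with 'valid' padding.

    Kernel taps are spaced `rate` samples apart, so the effective
    receptive field is rate * (len(kernel) - 1) + 1 samples wide.
    """
    span = rate * (len(kernel) - 1) + 1  # effective kernel width
    return [
        sum(signal[i + k * rate] * w for k, w in enumerate(kernel))
        for i in range(len(signal) - span + 1)
    ]

# rate=1 is ordinary convolution; rate=2 doubles the receptive field.
print(atrous_conv1d([1, 2, 3, 4, 5], [1, 1, 1], rate=1))  # [6, 9, 12]
print(atrous_conv1d([1, 2, 3, 4, 5], [1, 1, 1], rate=2))  # [9]
```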

For a complete documentation of this implementation, check out the blog post.


Dependencies

  • Python 3.x
  • NumPy
  • TensorFlow 1.10.1



Downloads

Pre-trained model

Place the checkpoints folder inside ./tboard_logs. If the folder does not exist, create it.


Original datasets used for training.

Place the tfrecords files inside ./dataset/tfrecords. Create the folder if it does not exist.
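The folder layout expected by the two steps above can be created up front, assuming the repository root as working directory:

```shell
# Create the folders the code expects before copying the downloads in.
mkdir -p ./tboard_logs           # pre-trained checkpoints folder goes here
mkdir -p ./dataset/tfrecords     # *.tfrecords files go here
```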

Training and Eval

Once you have the training and validation TFRecord files, just run the command below. Before training starts, the code looks for the proper ResNet checkpoints inside ./resnet/checkpoints; if the folder does not exist, the checkpoints are downloaded first.

python --starting_learning_rate=0.00001 --batch_norm_decay=0.997 --crop_size=513 --gpu_id=0 --resnet_model=resnet_v2_50

Check the file for the full list of input argument options. Each run produces a folder inside the tboard_logs directory (create it if it does not exist).

To evaluate the model, run the evaluation script, passing it the model_id parameter (the name of the folder created inside tboard_logs during training).

Note: Make sure the test.tfrecords is downloaded and placed inside ./dataset/tfrecords.

python --model_id=16645


To use a different dataset, you only need to modify the CreateTfRecord.ipynb notebook inside the dataset/ folder to suit your needs.

Also, be aware that DeepLab_v3 originally performs random crops of size 513x513 on the input images. This can be configured through the crop_size hyper-parameter.
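The random cropping step can be pictured with a minimal pure-Python sketch (illustrative only; the repository performs the crop with TensorFlow ops on 3-channel images):

```python
import random

def random_crop(image, crop_h, crop_w, seed=None):
    """Randomly crop a 2-D image (a list of rows) to crop_h x crop_w,
    picking a uniformly random top-left corner that fits in the image."""
    rng = random.Random(seed)
    height, width = len(image), len(image[0])
    top = rng.randint(0, height - crop_h)
    left = rng.randint(0, width - crop_w)
    return [row[left:left + crop_w] for row in image[top:top + crop_h]]

# An 8x8 toy "image"; DeepLab_v3 would use crop_h = crop_w = 513.
image = [[r * 10 + c for c in range(8)] for r in range(8)]
crop = random_crop(image, 4, 4, seed=0)
```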


Datasets

To create the dataset, first make sure you have the Pascal VOC 2012 and/or the Semantic Boundaries Dataset (SBD) downloaded.

Note: You do not need both datasets.

  • If you just want to test the code with one of the datasets (say the SBD), run the notebook normally, and it should work.

Afterwards, go to dataset/ and run the CreateTfRecord.ipynb notebook.

The custom_train.txt file contains the name of the images selected for training. This file is designed to use the Pascal VOC 2012 set as a TESTING set. Therefore, it doesn't contain any images from the VOC 2012 val dataset. For more info, see the Training section of Deeplab Image Semantic Segmentation Network.
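Keeping the VOC 2012 val images out of training, as custom_train.txt does, amounts to a set difference over image-name lists. A hypothetical sketch (function and variable names are illustrative, not from this repository):

```python
def build_custom_train(train_names, voc_val_names):
    """Keep only training image names that are NOT in the VOC 2012 val
    split, so VOC val can later serve as an untouched testing set."""
    val = set(voc_val_names)
    return sorted(name for name in train_names if name not in val)

# The result would be written to a file like custom_train.txt,
# one image name per line.
names = build_custom_train(["2007_000032", "2007_000039", "2007_000063"],
                           ["2007_000039"])
```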

Note: You can skip this part and directly download the datasets used in this experiment; see the Downloads section.


Serving

For full documentation on serving this semantic segmentation CNN, refer to How to deploy TensorFlow models to production using TF Serving.

All the serving scripts are placed inside: ./serving/.

To export the model and to perform client requests do the following:

  1. Create a python3 virtual environment and install the dependencies from the serving_requirements.txt file;

  2. Using the python3 env, run the export script. The exported model should reside in ./serving/model/;

  3. Create a python2 virtual environment and install the dependencies from the client_requirements.txt file;

  4. From the python2 env, run the deeplab_client.ipynb notebook;


Results

  • Pixel accuracy: ~91%
  • Mean accuracy: ~82%
  • Mean Intersection over Union (mIoU): ~74%
  • Frequency-weighted Intersection over Union: ~86%
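All of these metrics derive from a per-class confusion matrix; as one example, mean IoU can be sketched in plain Python (illustrative, not the repository's evaluation code):

```python
def mean_iou(confusion):
    """Mean Intersection over Union from a square confusion matrix,
    where confusion[t][p] counts pixels of true class t predicted as p."""
    n = len(confusion)
    ious = []
    for c in range(n):
        tp = confusion[c][c]                              # true positives
        fp = sum(confusion[t][c] for t in range(n)) - tp  # false positives
        fn = sum(confusion[c][p] for p in range(n)) - tp  # false negatives
        denom = tp + fp + fn
        if denom:  # skip classes absent from both prediction and truth
            ious.append(tp / denom)
    return sum(ious) / len(ious)

# Perfect prediction on two classes -> mIoU of 1.0
print(mean_iou([[5, 0], [0, 3]]))  # 1.0
```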

