|Project Name|Stars|Repos Using This|Packages Using This|Most Recent Commit|Total Releases|Latest Release|Open Issues|License|Language|Description|
|---|---|---|---|---|---|---|---|---|---|---|
|Mit Deep Learning|9,328|||5 months ago|||15|mit|Jupyter Notebook|Tutorials, assignments, and competitions for MIT Deep Learning related courses.|
|Awesome Semantic Segmentation|8,065|||2 years ago|||13||||
|Background Matting|4,622|||4 months ago|||43||Python|Background Matting: The World is Your Green Screen|
|Segmentation_models|4,095|12|12|5 months ago|8|January 10, 2020|237|mit|Python|Segmentation models with pretrained backbones. Keras and TensorFlow Keras.|
|Pointnet|3,907|||6 months ago|||174|other|Python|PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation|
|Fastmaskrcnn|3,041|||2 years ago|||143|apache-2.0|Python|Mask RCNN in TensorFlow|
|Imgclsmob|2,399||4|a year ago|67|September 21, 2021|6|mit|Python|Sandbox for training deep learning networks|
|Semantic Segmentation Suite|2,311|||2 years ago|||83||Python|Semantic Segmentation Suite in TensorFlow. Implement, train, and test new Semantic Segmentation models easily!|
|Lanenet Lane Detection|1,873|||7 months ago||||apache-2.0|Python|Unofficial implementation of the lanenet model for real-time lane detection using a deep neural network model. https://maybeshewill-cv.github.io/lanenet-lane-detection/|
|Tf_unet|1,582|||3 years ago|||85|gpl-3.0|Python|Generic U-Net Tensorflow implementation for image segmentation|
Implementation of the DeepLab_V3 semantic segmentation CNN, as described in Rethinking Atrous Convolution for Semantic Image Segmentation.
For complete documentation of this implementation, check out the blog post.
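The core building block of DeepLab_v3 is atrous (dilated) convolution, which enlarges the receptive field without downsampling or adding parameters. As a rough illustration of the idea (not code from this repository), TensorFlow exposes it directly:

```python
# Minimal sketch of atrous (dilated) convolution. Illustrative only;
# shapes and the dilation rate here are arbitrary examples.
import tensorflow as tf

feature_map = tf.random.normal([1, 65, 65, 256])   # NHWC feature map
kernel = tf.random.normal([3, 3, 256, 256])        # HWIO filter

# rate=2 inserts one "hole" between filter taps, so a 3x3 kernel covers
# a 5x5 region while the spatial resolution stays unchanged.
out = tf.nn.atrous_conv2d(feature_map, kernel, rate=2, padding="SAME")
print(out.shape)  # (1, 65, 65, 256)
```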
Place the checkpoints folder inside ./tboard_logs. If the folder does not exist, create it.
Original datasets used for training. Place the tfrecords files inside ./dataset/tfrecords. Create the folder if it does not exist.
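Assuming the layout above, a small helper can create both folders before you copy the files in (a convenience sketch, not part of the repository):

```python
# Convenience sketch: create the expected folder layout before copying in
# the pre-trained checkpoints and the tfrecords files. Not repository code.
import os

for folder in ("./tboard_logs", "./dataset/tfrecords"):
    os.makedirs(folder, exist_ok=True)  # no-op if the folder already exists
```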
Once you have the training and validation TfRecords files, just run the command below. Before running, Deeplab_v3 looks for the proper ResNet checkpoints inside ./resnet/checkpoints; if the folder does not exist, the checkpoints will first be downloaded.
python train.py --starting_learning_rate=0.00001 --batch_norm_decay=0.997 --crop_size=513 --gpu_id=0 --resnet_model=resnet_v2_50
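The download-if-missing behavior described above amounts to something like the sketch below. The URL is a placeholder for illustration, not the one train.py actually uses:

```python
# Sketch of the "download ResNet checkpoints if missing" step described
# above. The URL below is hypothetical, not taken from train.py.
import os
import tarfile
import urllib.request

CHECKPOINT_DIR = "./resnet/checkpoints"
CHECKPOINT_URL = "https://example.com/resnet_v2_50.tar.gz"  # hypothetical

if not os.path.isdir(CHECKPOINT_DIR):
    os.makedirs(CHECKPOINT_DIR)
    archive, _ = urllib.request.urlretrieve(CHECKPOINT_URL)
    with tarfile.open(archive) as tar:
        tar.extractall(CHECKPOINT_DIR)  # unpack the pre-trained weights
```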
Check out the train.py file for more input argument options. Each run produces a folder inside the tboard_logs directory (create it if it does not exist).
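Since each run writes its logs there, training can be monitored by pointing TensorBoard at that directory:

tensorboard --logdir=./tboard_logs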
To evaluate the model, run the test.py file, passing it the model_id parameter (the name of the folder created inside tboard_logs during training).
Note: Make sure the test.tfrecords file is downloaded and placed inside ./dataset/tfrecords.
python test.py --model_id=16645
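If you are unsure which model_id to pass, the run folders can be listed directly (a convenience sketch, not repository code):

```python
# Convenience sketch: list the run folders created under ./tboard_logs so
# you can pick a model_id for test.py.
import os

for run_id in sorted(os.listdir("./tboard_logs")):
    print(run_id)  # e.g. 16645
```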
To use a different dataset, you just need to modify the CreateTfRecord.ipynb notebook inside the dataset/ folder to suit your needs.
Also, be aware that Deeplab_v3 originally performs random crops of size 513x513 on the input images. This can be configured by changing the crop_size hyper-parameter in train.py.
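The random-crop preprocessing amounts to something like the following sketch. The image and label are cropped jointly so the segmentation masks stay aligned; the input shapes are assumptions for illustration:

```python
# Illustrative sketch of the 513x513 random crop applied during training.
# Image and label are concatenated so both receive the same crop window.
import tensorflow as tf

CROP_SIZE = 513

def random_crop(image, label):
    # image: [H, W, 3] float32, label: [H, W, 1] int32 -- assumed shapes;
    # inputs must be at least CROP_SIZE in each spatial dimension.
    combined = tf.concat([image, tf.cast(label, tf.float32)], axis=-1)
    combined = tf.image.random_crop(combined, [CROP_SIZE, CROP_SIZE, 4])
    return combined[..., :3], tf.cast(combined[..., 3:], tf.int32)
```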
To create the dataset, first make sure you have the Pascal VOC 2012 and/or the Semantic Boundaries Dataset and Benchmark datasets downloaded.
Note: You do not need both datasets.
Afterwards, head to dataset/ and run the CreateTfRecord.ipynb notebook. The custom_train.txt file contains the names of the images selected for training. This file is designed to use the Pascal VOC 2012 set as a TESTING set; therefore, it doesn't contain any images from the VOC 2012 val dataset. For more info, see the Training section of Deeplab Image Semantic Segmentation Network.
Note: You can skip this step and directly download the datasets used in this experiment; see the Downloads section.
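For orientation, serializing an image/annotation pair into a tfrecords file generally looks like the sketch below. The feature keys and file paths are illustrative assumptions, not necessarily the ones CreateTfRecord.ipynb uses:

```python
# Sketch of writing one image/annotation pair to a TFRecord file. The
# feature keys ("image_raw", "annotation_raw") and paths are assumptions
# for illustration, not necessarily those used by CreateTfRecord.ipynb.
import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

with tf.io.TFRecordWriter("./dataset/tfrecords/train.tfrecords") as writer:
    with open("image.jpg", "rb") as f_img, open("mask.png", "rb") as f_ann:
        example = tf.train.Example(features=tf.train.Features(feature={
            "image_raw": _bytes_feature(f_img.read()),
            "annotation_raw": _bytes_feature(f_ann.read()),
        }))
        writer.write(example.SerializeToString())
```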
For full documentation on serving this Semantic Segmentation CNN, refer to How to deploy TensorFlow models to production using TF Serving.
All the serving scripts are placed inside one folder.
To export the model and to perform client requests, do the following:
1. Create a Python 3 virtual environment and install the dependencies from the corresponding requirements file.
2. Using the Python 3 env, run deeplab_saved_model.py; this exports the model.
3. Create a Python 2 virtual environment and install the dependencies from the corresponding requirements file.
4. From the Python 2 env, run the client.
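deeplab_saved_model.py presumably exports a TensorFlow SavedModel for TF Serving; in TF 1.x that step generally looks like the sketch below. The tensor names, the stand-in network, and the export path are illustrative only, not taken from the repository:

```python
# Sketch of exporting a SavedModel for TF Serving, as deeplab_saved_model.py
# presumably does. Tensor names and the export path are illustrative only.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

images = tf.placeholder(tf.float32, [None, None, None, 3], name="images")
logits = tf.identity(images, name="logits")  # stand-in for the real network

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    tf.saved_model.simple_save(
        sess,
        "./versions/1",                      # illustrative export directory
        inputs={"images": images},
        outputs={"logits": logits},
    )
```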