| Project Name | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Cvat | 9,048 | | | | a day ago | 2 | September 08, 2022 | 487 | mit | TypeScript | Annotate better with CVAT, the industry-leading data engine for machine learning. Used and trusted by teams at any scale, for data of any scale. |
| Awesome Semantic Segmentation | 8,065 | | | | 2 years ago | | | 13 | | | :metal: awesome-semantic-segmentation |
| Segmentation_models.pytorch | 6,974 | | 2 | 34 | a day ago | 10 | November 18, 2021 | 26 | mit | Python | Segmentation models with pretrained backbones. PyTorch. |
| Pytorch Unet | 6,465 | | | | 18 days ago | | | 49 | gpl-3.0 | Python | PyTorch implementation of the U-Net for image semantic segmentation with high quality images |
| Mmsegmentation | 5,450 | | | 2 | a day ago | 30 | July 01, 2022 | 291 | apache-2.0 | Python | OpenMMLab Semantic Segmentation Toolbox and Benchmark. |
| Gluon Cv | 5,422 | | 15 | 44 | 2 months ago | 1,514 | July 07, 2022 | 61 | apache-2.0 | Python | Gluon CV Toolkit |
| Semantic Segmentation Pytorch | 4,559 | | | | 2 years ago | 1 | September 09, 2021 | 56 | bsd-3-clause | Python | Pytorch implementation for Semantic Segmentation/Scene Parsing on MIT ADE20K dataset |
| Pytorch Semseg | 3,297 | | | | 2 months ago | 3 | February 09, 2018 | 131 | mit | Python | Semantic Segmentation Architectures Implemented in PyTorch |
| Imgclsmob | 2,399 | | | 4 | a year ago | 67 | September 21, 2021 | 6 | mit | Python | Sandbox for training deep learning networks |
| Awesome Semantic Segmentation Pytorch | 2,399 | | | | 3 months ago | | | 114 | apache-2.0 | Python | Semantic Segmentation on PyTorch (include FCN, PSPNet, Deeplabv3, Deeplabv3+, DANet, DenseASPP, BiSeNet, EncNet, DUNet, ICNet, ENet, OCNet, CCNet, PSANet, CGNet, ESPNet, LEDNet, DFANet) |
This repository provides the ResNet-101-based model trained on PASCAL VOC from the paper RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation (the provided weights achieve 80.5% mean IoU on the validation set in the single-scale setting).
RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation
Guosheng Lin, Anton Milan, Chunhua Shen, Ian Reid
In CVPR 2017
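For reference, mean IoU averages the per-class intersection-over-union over all classes. The toy sketch below illustrates the calculation from a confusion matrix; it is not the repository's evaluation code, and the class count and numbers are made up:

```python
import numpy as np

def mean_iou(conf_mat):
    """Mean intersection-over-union from a (num_classes x num_classes)
    confusion matrix; rows are ground truth, columns are predictions."""
    intersection = np.diag(conf_mat)                    # true positives per class
    union = conf_mat.sum(0) + conf_mat.sum(1) - intersection
    iou = intersection / np.maximum(union, 1)           # avoid division by zero
    return iou.mean()

# toy example with 3 classes
conf = np.array([[50, 2, 1],
                 [3, 40, 5],
                 [0, 4, 45]], dtype=np.float64)
print(mean_iou(conf))
```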
For flawless reproduction of our results, Ubuntu is recommended. The model has been tested with Python 3.6.
Dependencies: `pip3`, `torch>=0.4.0`.
To install the required Python packages, run `pip3 install -r requirements3.txt` (Python 3); use the `--user` flag for a local, per-user installation.
The given examples can be run with or without a GPU.
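As a quick sanity check that the environment meets the `torch>=0.4.0` requirement and to pick a device automatically, something along these lines can be run (a minimal sketch, not part of the repository):

```python
import torch

# check that the installed PyTorch satisfies the torch>=0.4.0 requirement
print("PyTorch version:", torch.__version__)

# fall back to the CPU automatically when no GPU is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Running on:", device)
```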
For ease of reproduction, we have embedded all our examples inside Jupyter notebooks. You can either download them from this repository and work with them on your local machine or server, or use the online versions hosted on the Google Colab service.
If all the installation steps have been executed smoothly, you can proceed with running any of the notebooks provided in the `examples/notebooks` folder.
To start the Jupyter Notebook server, run `jupyter notebook` on your local machine. This will open a web page inside your browser. If it does not open automatically, copy the URL (including the port and token) from the command's output and paste it into your browser manually.
After that, navigate to the repository folder and choose any of the examples given.
Inside a notebook, you can try out your own images or write loops to iterate over videos, whole datasets, or streams (e.g., from a webcam). Feel free to contribute your own cool use cases of the notebooks!
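As an illustration of such a loop, the hedged sketch below iterates over a folder of images with a generic `model` object. The `load_model` helper, the input folder, and the ImageNet normalization constants are placeholders chosen for illustration, not the repository's actual API; use whatever the chosen notebook provides instead:

```python
import glob

import numpy as np
import torch
from PIL import Image

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = load_model().to(device).eval()  # hypothetical helper; use the loader from the notebook

# ImageNet statistics, a common choice for ResNet-based backbones (assumption)
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

for path in sorted(glob.glob("my_images/*.jpg")):       # hypothetical input folder
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    img = (img - MEAN) / STD
    tensor = torch.from_numpy(img.transpose(2, 0, 1)).unsqueeze(0).to(device)
    with torch.no_grad():
        logits = model(tensor)                           # (1, num_classes, H', W')
    pred = logits.argmax(dim=1).squeeze(0).cpu().numpy()
    print(path, "predicted classes:", np.unique(pred))
```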
Training code: coming soon. In the meantime, please refer to the training scripts for Light-Weight-RefineNet.
Light-Weight-RefineNet - a compact version of RefineNet that runs in real time with a minimal decrease in accuracy (3x fewer parameters, 5x fewer FLOPs)
For academic usage, this project is licensed under the 2-clause BSD License - see the LICENSE file for details. For commercial usage, please contact the authors.