| Project Name | Stars | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description |
|---|---|---|---|---|---|---|---|---|
| Medicaldetectiontoolkit | 1,209 | 5 months ago | | | 44 | apache-2.0 | Python | The Medical Detection Toolkit contains 2D + 3D implementations of prevalent object detectors such as Mask R-CNN, Retina Net, Retina U-Net, as well as a training and inference framework focused on dealing with medical images. |
| Tusimple Duc | 575 | 2 years ago | | | 6 | apache-2.0 | Python | Understanding Convolution for Semantic Segmentation |
| Refinenet | 500 | 5 years ago | | | | other | MATLAB | RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation |
| Cascaded Fcn | 283 | 6 years ago | | | 7 | other | Jupyter Notebook | Source code for the MICCAI 2016 paper "Automatic Liver and Lesion Segmentation in CT Using Cascaded Fully Convolutional Neural Networks and 3D Conditional Random Fields" |
| Kiu Net Pytorch | 245 | 2 years ago | | | 8 | mit | Python | Official PyTorch code of KiU-Net for image/3D segmentation - MICCAI 2020 (Oral), IEEE TMI |
| Esanet | 206 | a month ago | | | 32 | other | Python | ESANet: Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis |
| Keras Unet | 194 | 3 years ago | 7 | July 27, 2020 | 8 | mit | Python | Helper package with multiple U-Net implementations in Keras, plus utility tools useful for image semantic segmentation tasks. The library and underlying tools come from the author's work on multiple semantic segmentation projects. |
| Fully Convolutional Point Network | 73 | 5 years ago | | | 2 | mit | Python | Fully-Convolutional Point Networks for Large-Scale Point Clouds |
| 3d Semantic Segmentation | 69 | 5 years ago | | | 5 | mit | Python | This work is based on the paper "Exploring Spatial Context for 3D Semantic Segmentation of Point Clouds", which appeared at the IEEE International Conference on Computer Vision (ICCV) 2017, 3DRMS Workshop. |
| Texturenet | 68 | 4 years ago | | | | mit | C++ | TextureNet: Consistent Local Parametrizations for Learning from High-Resolution Signals on Meshes |
A MATLAB based framework for semantic image segmentation and general dense prediction tasks on images.
This is the source code for the following paper and its extension:

- RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation (CVPR 2017)
- RefineNet: Multi-Path Refinement Networks for Dense Prediction (TPAMI 2019)

This codebase only provides a MATLAB and MatConvNet based implementation.
Vladimir Nekrasov kindly provides a PyTorch implementation and a light-weight version of RefineNet at:
DrSleep/refinenet-pytorch
(new!) 13 Feb 2018: please refer to the Testing section below for more details.

Important notes are included in each section below.

## Trained models
- (new!) Trained models for the following datasets are available for download: PASCAL VOC 2012, Cityscapes, NYUDv2, Person_Parts, PASCAL_Context, SUNRGBD, ADE20k. Place the downloaded models in `./model_trained/`.
- (new!) RefineNet models using ResNet-101: Google Drive or Baidu Pan
- (new!) RefineNet models using ResNet-152: Google Drive or Baidu Pan (NYUDv2, Person_Parts, PASCAL_Context, SUNRGBD, ADE20k). These models will give better performance than the results reported in our CVPR paper; please also refer to the Network architecture section below for more details about improved pooling.
- The trained model for VOC2012 is updated. We previously uploaded a wrong model.

## Network architecture
The network architecture graphs are provided in `net_graphs`. Please refer to our paper for more details. See also the custom layer implementation in `My_sum_layer.m`.
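For orientation, below is a conceptual MATLAB/MatConvNet sketch of the chained residual pooling block described in the RefineNet paper. It is an illustration only, not the code in `My_sum_layer.m` or the repository's improved pooling variant; the number of blocks, filter shapes, and padding choices are assumptions.

```matlab
% Conceptual sketch of chained residual pooling: each block applies a 5x5
% max pooling (stride 1) followed by a 3x3 convolution, and every block
% output is summed back onto the running feature map.
% x          : H x W x C feature map
% filters{i} : 3 x 3 x C x C convolution filters (assumed shapes)
% biases{i}  : 1 x C biases
function y = chained_residual_pooling(x, filters, biases)
    y = x;
    cur = x;
    for i = 1:numel(filters)
        cur = vl_nnpool(cur, [5 5], 'stride', 1, 'pad', 2, 'method', 'max');
        cur = vl_nnconv(cur, filters{i}, biases{i}, 'pad', 1);
        y = y + cur;   % residual summation of each pooled branch
    end
end
```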
## Installation
- Install MatConvNet and CuDNN. We have modified MatConvNet for our task; a modified copy is provided in `./lib/`. You need to compile the provided MatConvNet before running (a compile-and-load sketch is shown after this list). Details of this modification and of the compiling process can be found in `main/my_matconvnet_resnet/README.md`.
- An example script for exporting the library paths is `main/my_matlab.sh`.
- Download the following ImageNet pre-trained models and place them in `./model_trained/`: `imagenet-resnet-50-dag`, `imagenet-resnet-101-dag`, `imagenet-resnet-152-dag`. They can be downloaded from the MatConvNet website; we also keep a copy in Google Drive and Baidu Pan.
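A minimal compile-and-load sketch is shown below. It assumes CUDA and cuDNN are installed under the paths shown and that the modified MatConvNet copy sits in a `lib/matconvnet/` subfolder; adjust both to your system and to the actual folder layout described in `main/my_matconvnet_resnet/README.md`.

```matlab
% Sketch only: compile the bundled (modified) MatConvNet with GPU + cuDNN
% support, set up its paths, and load an ImageNet pre-trained DAG model.
% The CUDA/cuDNN paths and the 'lib/matconvnet' folder name are assumptions.
cd lib/matconvnet
addpath matlab
vl_compilenn('enableGpu', true, ...
             'cudaRoot', '/usr/local/cuda', ...   % adjust to your CUDA install
             'enableCudnn', true, ...
             'cudnnRoot', '/usr/local/cudnn');    % adjust to your cuDNN install
vl_setupnn

% Back at the repository root: load one of the ImageNet pre-trained models
% placed in ./model_trained/ in the step above.
cd ../..
net = dagnn.DagNN.loadobj(load('model_trained/imagenet-resnet-101-dag.mat'));
```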
## Testing (new!)
First download the trained models and put them in `./model_trained/`; please refer to the section Trained models above.

Then refer to the following example scripts for prediction on your own images: `demo_predict_mscale_[dataset name].m`, e.g., `demo_predict_mscale_voc.m`, `demo_predict_mscale_nyud.m`, `demo_predict_mscale_person_parts.m`.

You may need to read carefully through the comments in these demo scripts before using them.
Important notes:
- The prediction results are saved as `uint8` with values in [0, 255]. You need to cast them into `double` and normalize them into [0, 1] if you want to use them (see the sketch below).
- Please refer to the section Trained models above for more details.
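A minimal sketch of the cast-and-normalize step, assuming a prediction has been saved as a `uint8` image by one of the demo scripts (the file name below is hypothetical):

```matlab
% Read a saved prediction back from disk and rescale it to [0, 1].
pred_u8 = imread('predictions/example_prediction.png');  % uint8, values in [0, 255]
pred    = double(pred_u8) / 255;                         % double, values in [0, 1]
```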
Single-scale prediction and evaluation can be done by changing the scale setting in the multi-scale prediction demo files; please refer to the section above on multi-scale prediction.

We also provide simplified demo files for prediction with far fewer configuration options. They are for single-scale prediction only. Examples can be found in `demo_test_simple_voc.m` and `demo_test_simple_city.m`; a minimal usage sketch follows.
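A minimal usage sketch for the simplified demos. It assumes the demo scripts live under `main/` (as the other files referenced above do), that MatConvNet has been compiled, and that the trained models have been downloaded:

```matlab
% Run the single-scale VOC demo from the MATLAB prompt.
cd main
demo_test_simple_voc
```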
(new!) Fusing and evaluating saved predictions:
- `demo_fuse_saved_prediction_voc.m`: fuse multiple cached predictions to generate the final prediction.
- `demo_evaluate_saved_prediction_voc.m`: evaluate the segmentation performance, e.g., in terms of IoU scores (a generic IoU sketch is shown below).
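As a reference for what the evaluation measures, here is a generic per-class IoU computation in MATLAB. This is a sketch, not the code in `demo_evaluate_saved_prediction_voc.m`; it assumes the prediction and ground truth are integer label maps with classes 1..num_classes.

```matlab
% Generic per-class intersection-over-union between two label maps.
function iou = compute_iou(pred, gt, num_classes)
    iou = zeros(num_classes, 1);
    for c = 1:num_classes
        inter = sum(pred(:) == c & gt(:) == c);   % pixels labelled c in both
        uni   = sum(pred(:) == c | gt(:) == c);   % pixels labelled c in either
        if uni > 0
            iou(c) = inter / uni;
        else
            iou(c) = NaN;  % class absent from both maps; ignore when averaging
        end
    end
end
```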
## Training
The following demo scripts are provided for training:
- `demo_refinenet_train.m`
- `demo_refinenet_train_reduce_learning_rate.m`
If you find the code useful, please cite our work as
@inproceedings{Lin:2017:RefineNet,
title = {Refine{N}et: {M}ulti-Path Refinement Networks for High-Resolution Semantic Segmentation},
shorttitle = {RefineNet: Multi-Path Refinement Networks},
booktitle = {CVPR},
author = {Lin, G. and Milan, A. and Shen, C. and Reid, I.},
month = jul,
year = {2017}
}
and
@article{lin2019refinenet,
title={RefineNet: Multi-Path Refinement Networks for Dense Prediction},
author={Lin, Guosheng and Liu, Fayao and Milan, Anton and Shen, Chunhua and Reid, Ian},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
year={2019},
publisher={IEEE},
doi={10.1109/TPAMI.2019.2893630},
}
For academic usage, the code is released under the permissive BSD license. For any commercial purpose, please contact the authors.