UPSNet was first described in a CVPR 2019 oral paper.
This repository is tested under Python 3.6 and PyTorch 0.4.1, and model training is done with 16 GPUs using Horovod. It should also work under Python 2.7 / PyTorch 1.0 and with 4 GPUs.
© Uber, 2018-2019. Licensed under the Uber Non-Commercial License.
If you find UPSNet useful in your research, please consider citing:
@inproceedings{xiong19upsnet,
Author = {Yuwen Xiong, Renjie Liao, Hengshuang Zhao, Rui Hu, Min Bai, Ersin Yumer, Raquel Urtasun},
Title = {UPSNet: A Unified Panoptic Segmentation Network},
Conference = {CVPR},
Year = {2019}
}
COCO 2017 (trained on the train2017 set)
Model | test split | PQ | SQ | RQ | PQTh | PQSt
---|---|---|---|---|---|---
UPSNet-50 | val | 42.5 | 78.0 | 52.4 | 48.5 | 33.4 |
UPSNet-101-DCN | test-dev | 46.6 | 80.5 | 56.9 | 53.2 | 36.7 |
Cityscapes
Model | PQ | SQ | RQ | PQTh | PQSt
---|---|---|---|---|---
UPSNet-50 | 59.3 | 79.7 | 73.0 | 54.6 | 62.7 |
UPSNet-101-COCO (ms test) | 61.8 | 81.3 | 74.8 | 57.6 | 64.8 |
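The PQ, SQ, and RQ columns above are related: panoptic quality is the product of segmentation quality and recognition quality, computed per category and then averaged (which is why the table rows do not multiply out exactly). A minimal sketch of the standard per-category computation, assuming the matched-segment IoUs (matches require IoU > 0.5) are already given:

```python
def panoptic_quality(matched_ious, num_fp, num_fn):
    """Compute (PQ, SQ, RQ) for one category.

    matched_ious: IoU of each matched (predicted, ground-truth) segment pair.
    num_fp / num_fn: counts of unmatched predicted / ground-truth segments.
    """
    tp = len(matched_ious)
    if tp == 0:
        return 0.0, 0.0, 0.0
    sq = sum(matched_ious) / tp                   # mean IoU over matches
    rq = tp / (tp + 0.5 * num_fp + 0.5 * num_fn)  # an F1-style detection score
    return sq * rq, sq, rq                        # PQ = SQ * RQ

pq, sq, rq = panoptic_quality([0.9, 0.8, 0.7], num_fp=1, num_fn=1)
# → PQ 0.6, SQ 0.8, RQ 0.75
```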
We recommend Anaconda3, as it already includes many common packages, and 4–16 GPUs with at least 11 GB of memory each to train our model.
Clone this repo to `$UPSNet_ROOT` and run `init.sh` to build the essential C++/CUDA modules and download the pretrained model.
For Cityscapes: assuming you have already downloaded the Cityscapes dataset to `$CITYSCAPES_ROOT` and generated the TrainIds label images, create a soft link with `ln -s $CITYSCAPES_ROOT data/cityscapes` under `$UPSNet_ROOT`, then run `init_cityscapes.sh` to prepare the Cityscapes dataset for UPSNet.
For COCO: assuming you have already downloaded the COCO dataset to `$COCO_ROOT` and it has `annotations` and `images` folders under it, create a soft link with `ln -s $COCO_ROOT data/coco` under `$UPSNet_ROOT`, then run `init_coco.sh` to prepare the COCO dataset for UPSNet.
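Once the soft links are in place, the layout described above can be sanity-checked with a short script. This is our own helper, not part of the repo; the paths come from the setup steps above:

```python
import os

def check_dataset_links(upsnet_root):
    """Return a dict of missing directories per dataset (empty means ready)."""
    expected = {
        "cityscapes": ["data/cityscapes"],
        "coco": ["data/coco/annotations", "data/coco/images"],
    }
    missing = {}
    for name, paths in expected.items():
        absent = [p for p in paths
                  if not os.path.isdir(os.path.join(upsnet_root, p))]
        if absent:
            missing[name] = absent
    return missing
```

Note that `os.path.isdir` follows soft links, so a correctly created `ln -s` target passes the check.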
Training:
python upsnet/upsnet_end2end_train.py --cfg upsnet/experiments/$EXP.yaml
Test:
python upsnet/upsnet_end2end_test.py --cfg upsnet/experiments/$EXP.yaml
We provide several config files (16/4 GPUs for the Cityscapes/COCO datasets) under the `upsnet/experiments` folder.
The model weights that reproduce the numbers in our paper are now available. To use them, run `download_weights.sh` to get the trained model weights for Cityscapes and COCO, then evaluate as follows.
For Cityscapes:
python upsnet/upsnet_end2end_test.py --cfg upsnet/experiments/upsnet_resnet50_cityscapes_16gpu.yaml --weight_path ./model/upsnet_resnet_50_cityscapes_12000.pth
python upsnet/upsnet_end2end_test.py --cfg upsnet/experiments/upsnet_resnet101_cityscapes_w_coco_16gpu.yaml --weight_path ./model/upsnet_resnet_101_cityscapes_w_coco_3000.pth
For COCO:
python upsnet/upsnet_end2end_test.py --cfg upsnet/experiments/upsnet_resnet50_coco_16gpu.yaml --weight_path model/upsnet_resnet_50_coco_90000.pth
python upsnet/upsnet_end2end_test.py --cfg upsnet/experiments/upsnet_resnet101_dcn_coco_3x_16gpu.yaml --weight_path model/upsnet_resnet_101_dcn_coco_270000.pth
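The commands above are driven by the `--cfg` and `--weight_path` flags. A minimal argparse sketch of that interface (our reconstruction for illustration, not the repo's actual parser):

```python
import argparse

def parse_args(argv=None):
    """Mirror the command-line interface of the train/test commands above."""
    parser = argparse.ArgumentParser(description="UPSNet end-to-end train/test")
    parser.add_argument("--cfg", required=True,
                        help="experiment YAML under upsnet/experiments")
    parser.add_argument("--weight_path", default=None,
                        help="optional checkpoint to evaluate (test only)")
    return parser.parse_args(argv)

args = parse_args(["--cfg", "upsnet/experiments/upsnet_resnet50_coco_16gpu.yaml"])
# args.cfg holds the experiment file; args.weight_path defaults to None
```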