Project Name | Stars | Latest Release | Most Recent Commit | License | Language | Description |
---|---|---|---|---|---|---|
Darknet | 20,416 | | 3 months ago | other | C | YOLOv4 / Scaled-YOLOv4 / YOLO - Neural Networks for Object Detection (Windows and Linux version of Darknet) |
Cvpr2023 Papers With Code | 12,014 | | 3 days ago | | | Collection of CVPR 2023 papers and open-source projects |
Cvpr2023 Paper Code Interpretation | 11,480 | | 2 months ago | | | Collection of CVPR 2017-2022 papers/code/interpretations/live streams, compiled by the Jishi (极市) team |
Cvat | 9,446 | September 08, 2022 | 7 hours ago | mit | TypeScript | Annotate better with CVAT, the industry-leading data engine for machine learning. Used and trusted by teams at any scale, for data of any scale. |
Computervision Recipes | 8,950 | | 4 months ago | mit | Jupyter Notebook | Best Practices, code samples, and documentation for Computer Vision. |
Pytorch Grad Cam | 7,414 | May 20, 2022 | 13 days ago | mit | Python | Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more. |
Ailab | 7,403 | | a day ago | mit | C# | Experience, Learn and Code the latest breakthrough innovations with Microsoft AI |
Awesome Object Detection | 6,707 | | a year ago | | | Awesome Object Detection based on handong1587 github: https://handong1587.github.io/deep_learning/2015/10/09/object-detection.html |
Jetson Inference | 6,428 | | 2 days ago | mit | C++ | Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson. |
Autogluon | 5,788 | | 9 hours ago | apache-2.0 | Python | AutoGluon: AutoML for Image, Text, Time Series, and Tabular Data |
One-stage object detection is commonly implemented by optimizing two sub-tasks, object classification and localization, using heads with two parallel branches, which can lead to a certain level of spatial misalignment between the predictions of the two tasks. In this work, we propose Task-aligned One-stage Object Detection (TOOD), which explicitly aligns the two tasks in a learning-based manner. First, we design a novel Task-aligned Head (T-Head) which offers a better balance between learning task-interactive and task-specific features, as well as greater flexibility to learn the alignment via a task-aligned predictor. Second, we propose Task Alignment Learning (TAL) to explicitly pull closer (or even unify) the optimal anchors for the two tasks during training via a designed sample assignment scheme and a task-aligned loss. Extensive experiments are conducted on MS-COCO, where TOOD achieves 51.1 AP at single-model single-scale testing. This surpasses recent one-stage detectors such as ATSS (47.7 AP), GFL (48.2 AP), and PAA (49.0 AP) by a large margin, with fewer parameters and FLOPs. Qualitative results also demonstrate the effectiveness of TOOD in better aligning the tasks of object classification and localization.
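For intuition, TAL ranks candidate anchors by a task alignment metric that combines the classification score s and the IoU u of the predicted box, of the form t = s^α · u^β. The snippet below is a minimal NumPy sketch of that ranking, not the official implementation; the α/β defaults used here are assumptions and should be checked against the configs.

```python
# Minimal sketch (not the official implementation) of TAL's task alignment
# metric t = s^alpha * u^beta, which ranks candidate anchors by how well
# classification and localization agree.
import numpy as np

def task_alignment_metric(cls_scores, ious, alpha=1.0, beta=6.0):
    # alpha/beta here are assumed defaults; check the TOOD configs for the
    # values actually used in training.
    return (cls_scores ** alpha) * (ious ** beta)

# Toy example: anchors with both a high score and a high IoU rank first.
scores = np.array([0.9, 0.6, 0.8])   # classification scores for the GT class
ious = np.array([0.5, 0.9, 0.8])     # IoU of each predicted box with the GT box
order = np.argsort(-task_alignment_metric(scores, ious))
print(order)  # indices from most to least task-aligned
```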
This implementation is based on MMDetection v2.14.0.
Please see get_started.md for installation and the basic usage of MMDetection.
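As a quick sanity check after following get_started.md, a small snippet like the one below (illustrative, not part of this repo) can confirm that the stack imports correctly and report the installed versions for comparison with the tested MMDetection 2.14.0.

```python
# Illustrative environment check: confirms PyTorch / MMCV / MMDetection import
# and prints their versions (this repo was developed against MMDetection 2.14.0).
import torch
import mmcv
import mmdet

print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("MMCV:", mmcv.__version__)
print("MMDetection:", mmdet.__version__)
```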
```bash
# Assume you are under the root directory of this project, have activated
# your virtual environment if needed, and have the COCO dataset in 'data/coco/'.
# Train TOOD (ResNet-50 backbone, 1x schedule) with 4 GPUs:
./tools/dist_train.sh configs/tood/tood_r50_fpn_1x_coco.py 4
# Evaluate the trained checkpoint on COCO bbox mAP with 4 GPUs:
./tools/dist_test.sh configs/tood/tood_r50_fpn_1x_coco.py work_dirs/tood_r50_fpn_1x_coco/epoch_12.pth 4 --eval bbox
```
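For quick qualitative checks on a single image, MMDetection's high-level Python API can load a TOOD config and checkpoint directly. The sketch below is illustrative only; the checkpoint, input image, and output file names are placeholders rather than files shipped with this repo.

```python
# Minimal single-image inference sketch using MMDetection's high-level API.
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/tood/tood_r50_fpn_1x_coco.py'
# Placeholder: a checkpoint produced by the training command above,
# or one of the released models from the table below.
checkpoint_file = 'work_dirs/tood_r50_fpn_1x_coco/epoch_12.pth'

model = init_detector(config_file, checkpoint_file, device='cuda:0')
result = inference_detector(model, 'demo/demo.jpg')  # path to any test image
model.show_result('demo/demo.jpg', result, out_file='demo_tood_result.jpg')
```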
For your convenience, we provide the following trained models (TOOD). All models are trained with 16 images in a mini-batch.
Model | Anchor | MS train | DCN | Lr schd | AP (minival) | AP (test-dev) | Config | Download |
---|---|---|---|---|---|---|---|---|
TOOD_R_50_FPN_1x | Anchor-free | No | No | 1x | 42.5 | 42.7 | config | google / baidu |
TOOD_R_50_FPN_anchor_based_1x | Anchor-based | No | No | 1x | 42.4 | 42.8 | config | google / baidu |
TOOD_R_101_FPN_2x | Anchor-free | Yes | No | 2x | 46.2 | 46.7 | config | google / baidu |
TOOD_X_101_FPN_2x | Anchor-free | Yes | No | 2x | 47.6 | 48.5 | config | google / baidu |
TOOD_R_101_dcnv2_FPN_2x | Anchor-free | Yes | Yes | 2x | 49.2 | 49.6 | config | google / baidu |
TOOD_X_101_dcnv2_FPN_2x | Anchor-free | Yes | Yes | 2x | 50.5 | 51.1 | config | google / baidu |
[0] All results are obtained with a single model and without any test-time data augmentation such as multi-scale testing or flipping.
[1] `dcnv2` denotes Deformable Convolutional Networks v2.
[2] Refer to the config files in `configs/tood/` for more details (see the sketch after these notes).
[3] The extraction code for Baidu Netdisk is `tood`.
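For programmatic inspection or light local overrides of these configs, mmcv's Config utility can be used as sketched below. The fields touched here follow standard MMDetection 2.x conventions and are assumptions to verify against the actual files under `configs/tood/`.

```python
# Illustrative sketch of inspecting and overriding a TOOD config with mmcv.
from mmcv import Config

cfg = Config.fromfile('configs/tood/tood_r50_fpn_1x_coco.py')

print(cfg.model.bbox_head.type)   # expected to be the TOOD head
cfg.data.samples_per_gpu = 4      # 4 GPUs x 4 images = 16-image mini-batch
cfg.evaluation = dict(interval=1, metric='bbox')

# Save the modified copy under a new (hypothetical) name.
cfg.dump('configs/tood/tood_r50_fpn_1x_coco_local.py')
```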
Thanks to the MMDetection team for the wonderful open-source project!
If you find TOOD useful in your research, please consider citing:
```bibtex
@inproceedings{feng2021tood,
  title={TOOD: Task-aligned One-stage Object Detection},
  author={Feng, Chengjian and Zhong, Yujie and Gao, Yu and Scott, Matthew R and Huang, Weilin},
  booktitle={ICCV},
  year={2021}
}
```