Project Name | Stars | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description
---|---|---|---|---|---|---|---|---
Alfred | 817 | a month ago | 104 | July 03, 2022 | 10 | gpl-3.0 | Python | alfred-py: a deep learning utility library for **humans**; more details on usage: https://zhuanlan.zhihu.com/p/341446046
Siamrpn_plus_plus_pytorch | 390 | 4 years ago | | | 11 | | Python | SiamRPN and SiamRPN++: unofficial implementation of "SiamRPN++" (CVPR 2019), with multi-GPU and LMDB support.
Multiview Human Pose Estimation Pytorch | 388 | 2 years ago | | | 6 | mit | Python | Official PyTorch implementation of "Cross View Fusion for 3D Human Pose Estimation" (ICCV 2019).
Ffa Net | 286 | a year ago | | | 16 | | Python | FFA-Net: Feature Fusion Attention Network for Single Image Dehazing.
Vit Explain | 260 | a year ago | | | 8 | mit | Python | Explainability for Vision Transformers.
Dfnet | 203 | 5 months ago | | | | other | Jupyter Notebook | Deep Fusion Network for Image Completion (ACM MM 2019).
Dss Pytorch | 141 | 4 years ago | | | 25 | mit | Jupyter Notebook | :star: PyTorch implementation of Deeply Supervised Salient Object Detection with Short Connections.
Rtfnet | 102 | 9 months ago | | | 2 | mit | Python | RGB-Thermal Fusion Network for Semantic Segmentation of Urban Scenes.
Df Net | 85 | a year ago | | | | | Python | Open-source code for the ACL 2020 paper "Dynamic Fusion Network for Multi-Domain End-to-end Task-Oriented Dialog".
Imagefusion Rfn Nest | 63 | 16 days ago | | | 8 | | Python | RFN-Nest (Information Fusion, 2021); PyTorch = 1.5, Python = 3.7.
PyTorch Implementation of Google's TFT

Original GitHub link: https://github.com/google-research/google-research/tree/master/tft
Paper link: https://arxiv.org/pdf/1912.09363.pdf
**Abstract.** Multi-horizon forecasting problems often contain a complex mix of inputs -- including static (i.e. time-invariant) covariates, known future inputs, and other exogenous time series that are only observed historically -- without any prior information on how they interact with the target. While several deep learning models have been proposed for multi-step prediction, they typically comprise black-box models which do not account for the full range of inputs present in common scenarios. In this paper, we introduce the Temporal Fusion Transformer (TFT) -- a novel attention-based architecture which combines high-performance multi-horizon forecasting with interpretable insights into temporal dynamics. To learn temporal relationships at different scales, the TFT utilizes recurrent layers for local processing and interpretable self-attention layers for learning long-term dependencies. The TFT also uses specialized components for the judicious selection of relevant features and a series of gating layers to suppress unnecessary components, enabling high performance in a wide range of regimes. On a variety of real-world datasets, we demonstrate significant performance improvements over existing benchmarks, and showcase three practical interpretability use-cases of TFT.
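The "gating layers to suppress unnecessary components" mentioned in the abstract are built, in the published TFT design, around Gated Linear Units (GLUs): a sigmoid gate multiplied element-wise with a linear transform, so the network can learn to push a component's contribution toward zero. The sketch below is a minimal, dependency-free illustration of that gating idea only — the weights, dimensions, and function names are illustrative, not taken from the paper or this repository:

```python
import math


def glu(x, w_gate, b_gate, w_val, b_val):
    """Gated Linear Unit: sigmoid(W_g x + b_g) * (W_v x + b_v), element-wise.

    A gate near 0 suppresses the corresponding component; a gate near 1
    passes it through unchanged. All arguments are plain Python lists
    (hypothetical toy weights, for illustration only).
    """
    def affine(w, v, b):
        # Matrix-vector product plus bias: one output per weight row.
        return [sum(wi * vi for wi, vi in zip(row, v)) + bi
                for row, bi in zip(w, b)]

    gate = [1.0 / (1.0 + math.exp(-g)) for g in affine(w_gate, x, b_gate)]
    value = affine(w_val, x, b_val)
    return [g * v for g, v in zip(gate, value)]


x = [1.0, 2.0]
identity = [[1.0, 0.0], [0.0, 1.0]]

# A strongly negative gate bias drives the sigmoid toward 0, suppressing
# the component; a strongly positive bias passes the value through.
suppressed = glu(x, identity, [-20.0, -20.0], identity, [0.0, 0.0])
passed = glu(x, identity, [20.0, 20.0], identity, [0.0, 0.0])
```

Here `suppressed` is driven toward `[0, 0]` while `passed` stays close to the input `[1, 2]`; in the TFT, learned gates of this kind decide per-component how much of each sub-network's output to keep.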