Pytorch Implementation of Google's TFT

Original Github link: https://github.com/google-research/google-research/tree/master/tft

Paper link: https://arxiv.org/pdf/1912.09363.pdf

Abstract

Multi-horizon forecasting problems often contain a complex mix of inputs -- including static (i.e. time-invariant) covariates, known future inputs, and other exogenous time series that are only observed historically -- without any prior information on how they interact with the target. While several deep learning models have been proposed for multi-step prediction, they typically comprise black-box models which do not account for the full range of inputs present in common scenarios. In this paper, we introduce the Temporal Fusion Transformer (TFT) -- a novel attention-based architecture which combines high-performance multi-horizon forecasting with interpretable insights into temporal dynamics. To learn temporal relationships at different scales, the TFT utilizes recurrent layers for local processing and interpretable self-attention layers for learning long-term dependencies. The TFT also uses specialized components for the judicious selection of relevant features and a series of gating layers to suppress unnecessary components, enabling high performance in a wide range of regimes. On a variety of real-world datasets, we demonstrate significant performance improvements over existing benchmarks, and showcase three practical interpretability use-cases of TFT.
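The "gating layers to suppress unnecessary components" mentioned in the abstract are realized in the paper as Gated Residual Networks (GRNs), which wrap a small feed-forward block in a GLU gate plus a residual connection and layer normalization. A minimal PyTorch sketch of that pattern is below; the class name, layer sizes, and exact wiring are illustrative assumptions, not this repository's actual API (see the linked paper, Section 4.1, for the precise formulation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedResidualNetwork(nn.Module):
    """Illustrative GRN-style block: dense -> ELU -> dense, then a GLU gate
    that can suppress the block's contribution, plus residual + LayerNorm.
    Hypothetical sketch, not the repo's implementation."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_model)
        # GLU halves the last dimension, so the gate projects to 2 * d_model.
        self.gate = nn.Linear(d_model, 2 * d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.fc2(F.elu(self.fc1(x)))          # nonlinear transform
        gated = F.glu(self.gate(h), dim=-1)       # gate can zero out the block
        return self.norm(x + gated)               # residual skip + normalization

# Shape-preserving by construction: (batch, time, features) in and out.
grn = GatedResidualNetwork(d_model=16, d_hidden=32)
out = grn(torch.randn(4, 10, 16))
print(tuple(out.shape))
```

Because the gate can drive its output toward zero, the residual path lets the network fall back to (a normalized copy of) the input when the extra nonlinear processing is not needed -- the mechanism the abstract credits for robustness across regimes.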
