Project Name | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language |
---|---|---|---|---|---|---|---|---|---|---|
Awesome Transformer Attention | 3,895 | | | | 4 months ago | | | 15 | | |
A comprehensive paper list on Vision Transformers and attention, including papers, code, and related websites | | | | | | | | | | |
X Transformers | 3,840 | | 10 | | 4 months ago | 317 | December 02, 2023 | 55 | mit | Python |
A simple but complete full-attention transformer with a set of promising experimental features from various papers | | | | | | | | | | |
Awesome Speech Recognition Speech Synthesis Papers | 2,869 | | | | 7 months ago | | | 2 | mit | |
Automatic Speech Recognition (ASR), Speaker Verification, Speech Synthesis, Text-to-Speech (TTS), Language Modelling, Singing Voice Synthesis (SVS), Voice Conversion (VC) | | | | | | | | | | |
Eeg Dl | 563 | | | | a year ago | 10 | May 13, 2020 | 4 | mit | Python |
A deep learning library for EEG signal classification tasks, based on TensorFlow | | | | | | | | | | |
Triplet Attention | 383 | | | | 3 years ago | | | | mit | Jupyter Notebook |
Official PyTorch implementation of "Rotate to Attend: Convolutional Triplet Attention Module" [WACV 2021] | | | | | | | | | | |
Action Recognition Visual Attention | 338 | | | | 8 years ago | | | 8 | | Jupyter Notebook |
Action recognition using soft-attention-based deep recurrent neural networks | | | | | | | | | | |
Linformer Pytorch | 323 | | | | 2 years ago | 41 | October 10, 2020 | 2 | mit | Python |
A practical implementation of Linformer for PyTorch | | | | | | | | | | |
Attention_is_all_you_need | 293 | | | | 7 years ago | | | 1 | bsd-3-clause | Jupyter Notebook |
A Chainer implementation of the Transformer from "Attention Is All You Need" (Vaswani et al., 2017) | | | | | | | | | | |
Multi Scale Attention | 166 | | | | 4 years ago | | | | | Python |
Code for the paper "Multi-scale Guided Attention for Medical Image Segmentation" | | | | | | | | | | |
Atpapers | 105 | | | | 3 years ago | | | | | |
Worth-reading papers and related resources on attention mechanisms, Transformers, and pretrained language models (PLMs) such as BERT | | | | | | | | | | |
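Most of the projects above build on the scaled dot-product attention of "Attention Is All You Need" (Vaswani et al., 2017). A minimal NumPy sketch of that core operation, softmax(QKᵀ/√d_k)V, is shown below; the shapes and variable names are illustrative only and are not taken from any of the listed repositories:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.

    q: (num_queries, d_k), k: (num_keys, d_k), v: (num_keys, d_v).
    Returns the attended outputs and the attention weight matrix.
    """
    d_k = q.shape[-1]
    # Similarity of every query against every key, scaled by sqrt(d_k)
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d_k)
    # Softmax over the key axis, with max-subtraction for numerical stability
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a convex combination of the value rows
    return weights @ v, weights

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))   # 4 query positions, d_k = 8
k = rng.normal(size=(6, 8))   # 6 key positions
v = rng.normal(size=(6, 16))  # 6 value rows, d_v = 16
out, w = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (4, 16): one d_v-dimensional output per query
```

Libraries such as X Transformers and Linformer Pytorch wrap variants of this primitive (full attention with experimental extras, or low-rank approximations of the `weights` matrix, respectively) behind higher-level modules.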