Awesome Open Source
Search results for self attention
112 search results found
Leedl Tutorial
⭐
8,682
Li Hongyi's Deep Learning Tutorial (《李宏毅深度学习教程》); PDF download address: https://github.com/datawhalech
Informer2020
⭐
4,553
The GitHub repository for the paper "Informer" accepted by AAAI 2021.
Awesome Transformer Attention
⭐
3,895
An ultimately comprehensive paper list of Vision Transformer/Attention, including papers, codes, and related websites
Gat
⭐
2,078
Graph Attention Networks (https://arxiv.org/abs/1710.10903)
Codesearchnet
⭐
2,054
Datasets, tools, and benchmarks for representation learning of code.
Pytorch Gat
⭐
1,815
My implementation of the original GAT paper (Veličković et al.). I've additionally included the playground.py file for visualizing the Cora dataset, GAT embeddings, an attention mechanism, and entropy histograms. I've supported both Cora (transductive) and PPI (inductive) examples!
Pygat
⭐
1,684
PyTorch implementation of the Graph Attention Network model by Veličković et al. (2017, https://arxiv.org/abs/1710.10903)
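The GAT repositories above all implement the same core layer: each node attends over its neighbours with learned attention coefficients. A minimal single-head sketch in NumPy, not taken from any listed repository (the weight shapes and the LeakyReLU slope of 0.2 follow the Veličković et al. paper; the toy graph below is an illustrative assumption):

```python
import numpy as np

def gat_layer(h, adj, w, a):
    """One single-head GAT layer, minimal NumPy sketch.
    h: (n, f) node features; adj: (n, n) adjacency (nonzero = edge);
    w: (f, f2) projection; a: (2*f2,) attention vector."""
    z = h @ w                                         # project node features: (n, f2)
    f2 = z.shape[1]
    # attention logits e_ij = LeakyReLU(a^T [z_i || z_j]) for every node pair
    e = (a[:f2] @ z.T)[:, None] + (a[f2:] @ z.T)[None, :]
    e = np.where(e > 0, e, 0.2 * e)                   # LeakyReLU, negative slope 0.2
    e = np.where(adj > 0, e, -1e9)                    # mask out non-neighbours
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)         # softmax over each node's neighbours
    return alpha @ z                                  # aggregate neighbour features: (n, f2)

rng = np.random.default_rng(1)
h = rng.normal(size=(3, 4))                           # 3 nodes, 4 features each
adj = np.ones((3, 3))                                 # fully connected toy graph
out = gat_layer(h, adj, rng.normal(size=(4, 5)), rng.normal(size=10))
print(out.shape)                                      # (3, 5)
```

Note that the adjacency matrix should include self-loops (as here), otherwise a node never attends to its own features.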
Deberta
⭐
1,673
The implementation of DeBERTa
Transformer In Vision
⭐
1,220
Recent Transformer-based CV and related works.
Ccnet
⭐
1,061
CCNet: Criss-Cross Attention for Semantic Segmentation (TPAMI 2020 & ICCV 2019).
Bert_language_understanding
⭐
886
Pre-training of Deep Bidirectional Transformers for Language Understanding: pre-train TextCNN
Awesome Fast Attention
⭐
717
list of efficient attention modules
Speech Transformer
⭐
714
A PyTorch implementation of Speech Transformer, an End-to-End ASR with Transformer network on Mandarin Chinese.
How Do Vits Work
⭐
571
(ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?"
Self Attention Cv
⭐
550
Implementation of various self-attention mechanisms focused on computer vision. Ongoing repository.
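Most of the repositories in this list build on the same scaled dot-product operation. A minimal NumPy sketch, not taken from any listed repository (the projection matrices and token count are illustrative assumptions):

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence x of shape (n, d)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])           # (n, n) similarity matrix
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ v                                # weighted sum of value vectors

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                           # 4 tokens, 8 features each
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)                                      # (4, 8)
```

The variants in these repositories (criss-cross, linear, sparse, stand-alone attention, and so on) mostly change how the (n, n) score matrix is computed or approximated, since that step is the quadratic bottleneck.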
Fastervit
⭐
539
[ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with Hierarchical Attention
Text Classification Pytorch
⭐
538
Text classification using deep learning models in Pytorch
Transformer
⭐
505
A Pytorch Implementation of "Attention is All You Need" and "Weighted Transformer Network for Machine Translation"
Gcvit
⭐
414
[ICML 2023] Official PyTorch implementation of Global Context Vision Transformers
Structured Self Attention
⭐
412
A Structured Self-attentive Sentence Embedding
Graph Transformer
⭐
408
Universal Graph Transformer Self-Attention Networks (TheWebConf WWW 2022) (Pytorch and Tensorflow)
Fan
⭐
389
Official PyTorch implementation of Fully Attentional Networks
Deep_learning_nlp
⭐
359
Keras, PyTorch, and NumPy Implementations of Deep Learning Architectures for NLP
Soft
⭐
286
[NeurIPS 2021 Spotlight] SOFT: Softmax-free Transformer with Linear Complexity
Relational Rnn Pytorch
⭐
228
An implementation of DeepMind's Relational Recurrent Neural Networks in PyTorch.
Dsmil Wsi
⭐
224
DSMIL: Dual-stream multiple instance learning networks for tumor detection in Whole Slide Image
Master Pytorch
⭐
210
Code for the paper "MASTER: Multi-Aspect Non-local Network for Scene Text Recognition" (Pattern Recognition 2021)
Awesome Visual Representation Learning With Transformers
⭐
209
Awesome Transformers (self-attention) in Computer Vision
Tokengt
⭐
209
[NeurIPS'22] Tokenized Graph Transformer (TokenGT), in PyTorch
Multimodal_bigmodels_survey
⭐
207
[MIR-2023] A continuously updated paper list for multi-modal pre-trained big models
Saits
⭐
201
The official PyTorch implementation of the paper "SAITS: Self-Attention-based Imputation for Time Series". A fast and state-of-the-art (SOTA) model with efficiency for time series imputation (imputing multivariate incomplete time series containing missing data/values). https://arxiv.org/abs/2202.08516
Self Attentive Tensorflow
⭐
190
Tensorflow implementation of "A Structured Self-Attentive Sentence Embedding"
Neat Vision
⭐
175
Neat (Neural Attention) Vision is a visualization tool for the attention mechanisms of deep-learning models for Natural Language Processing (NLP) tasks. (framework-agnostic)
Stand Alone Self Attention
⭐
171
Implementing Stand-Alone Self-Attention in Vision Models using Pytorch
Mrc2018
⭐
138
2018 Baidu Machine Reading Comprehension Competition
Attnsleep
⭐
113
[TNSRE 2021] "An Attention-based Deep Learning Approach for Sleep Stage Classification with Single-Channel EEG"
Perceiver Io
⭐
110
Unofficial implementation of Perceiver IO
Pytorch Question Answering
⭐
102
Important paper implementations for Question Answering using PyTorch
Fagan
⭐
100
A variant of the Self Attention GAN named: FAGAN (Full Attention GAN)
Pdformer
⭐
98
[AAAI2023] A PyTorch implementation of PDFormer: Propagation Delay-aware Dynamic Long-range Transformer for Traffic Flow Prediction.
Pytorch Psetae
⭐
98
PyTorch implementation of the model presented in "Satellite Image Time Series Classification with Pixel-Set Encoders and Temporal Self-Attention"
Robustness Vit
⭐
97
Contains code for the paper "Vision Transformers are Robust Learners" (AAAI 2022).
Self Attentive Emb Tf
⭐
91
Simple Tensorflow Implementation of "A Structured Self-attentive Sentence Embedding" (ICLR 2017)
Awesome Vision Transformer Collection
⭐
81
Variants of Vision Transformer and its downstream tasks
Parallel Tacotron2
⭐
80
PyTorch Implementation of Google's Parallel Tacotron 2: A Non-Autoregressive Neural TTS Model with Differentiable Duration Modeling
Self Attention Keras
⭐
77
Self-attention and text classification
Query Selector
⭐
73
Long-Term Series Forecasting with Query Selector – Efficient Model of Sparse Attention
Utae Paps
⭐
71
PyTorch implementation of U-TAE and PaPs for satellite image time series panoptic segmentation.
Transformercpi
⭐
71
TransformerCPI: Improving compound–protein interaction prediction by sequence-based deep learning with self-attention mechanism and label reversal experiments (Bioinformatics 2020) https://doi.org/10.1093/bioinformatics/btaa524
R Men
⭐
70
Transformer-based Memory Networks for Knowledge Graph Embeddings (ACL 2020) (Pytorch and Tensorflow)
Crabnet
⭐
65
Predict materials properties using only the composition information!
Egt_pytorch
⭐
58
Edge-Augmented Graph Transformer
Lambdanetworks
⭐
50
Fs Eend
⭐
50
The official Pytorch implementation of "Frame-wise streaming end-to-end speaker diarization with non-autoregressive self-attention-based attractors". [ICASSP 2024]
Global Self Attention Network
⭐
49
A Pytorch implementation of Global Self-Attention Network, a fully-attention backbone for vision tasks
Transformerx
⭐
49
Flexible Python library providing building blocks (layers) for reproducible Transformers research (Tensorflow ✅, Pytorch 🔜, and Jax 🔜)
Dysat
⭐
42
Representation learning on dynamic graphs using self-attention networks
Scite
⭐
42
Causality Extraction based on Self-Attentive BiLSTM-CRF with Transferred Embeddings
Seq2seq Pytorch
⭐
42
Sequence to Sequence Models in PyTorch
Describing_a_knowledge_base
⭐
42
Code for Describing a Knowledge Base
Biam
⭐
41
[ICCV 2021] Official Pytorch implementation for Discriminative Region-based Multi-Label Zero-Shot Learning SOTA results on NUS-WIDE and OpenImages
Relational_deep_reinforcement_learning
⭐
41
Hot
⭐
40
[NeurIPS'21] Higher-order Transformers for sets, graphs, and hypergraphs, in PyTorch
Transformer Physx
⭐
40
Transformers for modeling physical systems
Generative_mlzsl
⭐
40
[TPAMI 2023] Generative Multi-Label Zero-Shot Learning
Attention Augmented Conv
⭐
39
Implementation from the paper Attention Augmented Convolutional Networks in Tensorflow (https://arxiv.org/pdf/1904.09925v1.pdf)
Iperceive
⭐
36
Applying Common-Sense Reasoning to Multi-Modal Dense Video Captioning and Video Question Answering | Python3 | PyTorch | CNNs | Causality | Reasoning | LSTMs | Transformers | Multi-Head Self Attention | Published in IEEE Winter Conference on Applications of Computer Vision (WACV) 2021
Ubisoft Laforge Daft Exprt
⭐
34
PyTorch Implementation of Daft-Exprt: Robust Prosody Transfer Across Speakers for Expressive Speech Synthesis
Multiturndialogzoo
⭐
32
Multi-turn dialogue baselines written in PyTorch
Gatraj
⭐
31
[ISPRS 2023] Official PyTorch Implementation of "GATraj: A Graph- and Attention-based Multi-Agent Trajectory Prediction Model"
Vaenar Tts
⭐
25
PyTorch Implementation of VAENAR-TTS: Variational Auto-Encoder based Non-AutoRegressive Text-to-Speech Synthesis.
Walk Transformer
⭐
24
From Random Walks to Transformer for Learning Node Embeddings (ECML-PKDD 2020) (In Pytorch and Tensorflow)
Adast
⭐
23
[IEEE TETCI] "ADAST: Attentive Cross-domain EEG-based Sleep Staging Framework with Iterative Self-Training"
Structured Self Attentive Sentence Embedding
⭐
23
Re-Implementation of "A Structured Self-Attentive Sentence Embedding" by Lin et al., 2017
Lightweight Temporal Attention Pytorch
⭐
23
A PyTorch implementation of the Light Temporal Attention Encoder (L-TAE) for satellite image time series classification.
Tf Transformer
⭐
22
TensorFlow 2 implementation of Transformer (Attention is all you need).
Cmsa Mtpt 4 Medicalvqa
⭐
21
[ICMR'21, Best Poster Paper Award] Medical Visual Question Answering with Multi-task Pre-training and Cross-modal Self-attention
Darecnet Bs
⭐
21
DARecNet-BS: Unsupervised Dual Attention Reconstruction Network for Hyperspectral Band Selection
Gcvit Tf
⭐
20
Tensorflow 2.0 Implementation of GCViT: Global Context Vision Transformer
Transformer
⭐
20
Chatbot using TensorFlow (the model is a transformer) ko
Object And Semantic Part Detection Pytorch
⭐
18
Joint detection of Object and its Semantic parts using Attention-based Feature Fusion on PASCAL Parts 2010 dataset
Psa Gan
⭐
16
PSA-GAN implementation in pytorch
Fusion
⭐
15
PyTorch code for NeurIPSW 2020 paper (4th Workshop on Meta-Learning) "Few-Shot Unsupervised Continual Learning through Meta-Examples"
Text Classification Pytorch
⭐
14
Text Classification in PyTorch
Attnpinn For Rul Estimation
⭐
14
A Framework for Remaining Useful Life Prediction Based on Self-Attention and Physics-Informed Neural Networks
Cvdd Pytorch
⭐
14
A PyTorch implementation of Context Vector Data Description (CVDD), a method for Anomaly Detection on text.
Mairl
⭐
13
[RA-L & ICRA 2021] Adversarial Inverse Reinforcement Learning with Self-attention Dynamics Model
Transformer In Pytorch
⭐
13
Transformer/Transformer-XL/R-Transformer examples and explanations
Multi Hop Knowledge Paths Human Needs
⭐
12
Ranking and Selecting Multi-Hop Knowledge Paths to Better Predict Human Needs
Egt
⭐
11
Edge-Augmented Graph Transformer
Zam
⭐
11
ZAM: Zero parameter Attention Module
Histoseg
⭐
11
HistoSeg is an Encoder-Decoder DCNN which utilizes the novel Quick Attention Modules and Multi Loss function to generate segmentation masks from histopathological images with greater accuracy. This repo contains the code to test and train HistoSeg.
Hybrid Self Attention Neat
⭐
10
This repository is the official implementation of the Hybrid Self-Attention NEAT algorithm. It contains the code to reproduce the results presented in the original paper: https://link.springer.com/article/10.1007/s12530-0
Attentionvisualizer
⭐
10
A simple library to showcase highest scored words using RoBERTa model
Ss Sfda Self Supervised Source Free Domain Adaptation For Road Segmentation In Hazardous Environme
⭐
10
Codebase for the paper 'SS SFDA: Self-Supervised Source Free Domain Adaptation for Road Segmentation in Hazardous Environments'
Gappy Mwes
⭐
10
Code for NAACL 2019 paper: "Bridging the Gap: Attending to Discontinuity in Identification of Multiword Expressions"
Self Attention
⭐
9
A complete implementation of the Transformer, with detailed construction of the Encoder, Decoder, and Self-attention
Pytorch Attentive Lm
⭐
9
Pytorch Implementation of an A-RNN-LM
Cait Tf
⭐
9
Implementation of CaiT models in TensorFlow and ImageNet-1k checkpoints. Includes code for inference and fine-tuning.
Textclassification
⭐
9
Repository of state of the art text/documentation classification algorithms in Pytorch.
1-100 of 112 search results
Copyright 2018-2024 Awesome Open Source. All rights reserved.