Awesome Open Source
Search results for python self attention
88 search results found
Informer2020
⭐
4,553
The GitHub repository for the paper "Informer" accepted by AAAI 2021.
Gat
⭐
2,078
Graph Attention Networks (https://arxiv.org/abs/1710.10903)
Codesearchnet
⭐
2,054
Datasets, tools, and benchmarks for representation learning of code.
Pytorch Gat
⭐
1,815
My implementation of the original GAT paper (Veličković et al.). I've additionally included the playground.py file for visualizing the Cora dataset, GAT embeddings, an attention mechanism, and entropy histograms. I've supported both Cora (transductive) and PPI (inductive) examples!
Pygat
⭐
1,684
PyTorch implementation of the Graph Attention Network model by Veličković et al. (2017, https://arxiv.org/abs/1710.10903)
Deberta
⭐
1,673
The implementation of DeBERTa
Ccnet
⭐
1,061
CCNet: Criss-Cross Attention for Semantic Segmentation (TPAMI 2020 & ICCV 2019).
Bert_language_understanding
⭐
886
Pre-training of Deep Bidirectional Transformers for Language Understanding: pre-train TextCNN
Awesome Fast Attention
⭐
717
list of efficient attention modules
Speech Transformer
⭐
714
A PyTorch implementation of Speech Transformer, an end-to-end ASR system with a Transformer network for Mandarin Chinese.
How Do Vits Work
⭐
571
(ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?"
Self Attention Cv
⭐
550
Implementation of various self-attention mechanisms focused on computer vision. Ongoing repository.
Fastervit
⭐
539
[ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with Hierarchical Attention
Text Classification Pytorch
⭐
538
Text classification using deep learning models in Pytorch
Structured Self Attention
⭐
412
A Structured Self-attentive Sentence Embedding
Graph Transformer
⭐
408
Universal Graph Transformer Self-Attention Networks (TheWebConf WWW 2022) (Pytorch and Tensorflow)
Gcvit
⭐
399
[ICML 2023] Official PyTorch implementation of Global Context Vision Transformers
Fan
⭐
389
Official PyTorch implementation of Fully Attentional Networks
Deep_learning_nlp
⭐
359
Keras, PyTorch, and NumPy Implementations of Deep Learning Architectures for NLP
Soft
⭐
286
[NeurIPS 2021 Spotlight] SOFT: Softmax-free Transformer with Linear Complexity
Relational Rnn Pytorch
⭐
228
An implementation of DeepMind's Relational Recurrent Neural Networks in PyTorch.
Dsmil Wsi
⭐
224
DSMIL: Dual-stream multiple instance learning networks for tumor detection in Whole Slide Image
Master Pytorch
⭐
210
Code for the paper "MASTER: Multi-Aspect Non-local Network for Scene Text Recognition" (Pattern Recognition 2021)
Tokengt
⭐
209
[NeurIPS'22] Tokenized Graph Transformer (TokenGT), in PyTorch
Saits
⭐
201
The official PyTorch implementation of the paper "SAITS: Self-Attention-based Imputation for Time Series". A fast and state-of-the-art (SOTA) model with efficiency for time series imputation (imputing multivariate incomplete time series containing missing data/values). https://arxiv.org/abs/2202.08516
Self Attentive Tensorflow
⭐
190
Tensorflow implementation of "A Structured Self-Attentive Sentence Embedding"
Stand Alone Self Attention
⭐
171
Implementing Stand-Alone Self-Attention in Vision Models using Pytorch
Mrc2018
⭐
138
2018 Baidu Machine Reading Comprehension Competition
Attnsleep
⭐
113
[TNSRE 2021] "An Attention-based Deep Learning Approach for Sleep Stage Classification with Single-Channel EEG"
Perceiver Io
⭐
110
Unofficial implementation of Perceiver IO
Fagan
⭐
100
A variant of the Self Attention GAN named: FAGAN (Full Attention GAN)
Pytorch Psetae
⭐
98
PyTorch implementation of the model presented in "Satellite Image Time Series Classification with Pixel-Set Encoders and Temporal Self-Attention"
Pdformer
⭐
98
[AAAI2023] A PyTorch implementation of PDFormer: Propagation Delay-aware Dynamic Long-range Transformer for Traffic Flow Prediction.
Self Attentive Emb Tf
⭐
91
Simple Tensorflow Implementation of "A Structured Self-attentive Sentence Embedding" (ICLR 2017)
Parallel Tacotron2
⭐
80
PyTorch Implementation of Google's Parallel Tacotron 2: A Non-Autoregressive Neural TTS Model with Differentiable Duration Modeling
Self Attention Keras
⭐
77
Self-attention and text classification
Query Selector
⭐
73
Long-Term Series Forecasting with Query Selector: an efficient model of sparse attention
Transformercpi
⭐
71
TransformerCPI: Improving compound–protein interaction prediction by sequence-based deep learning with self-attention mechanism and label reversal experiments (Bioinformatics 2020) https://doi.org/10.1093/bioinformatics/btaa524
R Men
⭐
70
Transformer-based Memory Networks for Knowledge Graph Embeddings (ACL 2020) (Pytorch and Tensorflow)
Crabnet
⭐
65
Predict materials properties using only the composition information!
Egt_pytorch
⭐
58
Edge-Augmented Graph Transformer
Lambdanetworks
⭐
50
Fs Eend
⭐
50
The official Pytorch implementation of "Frame-wise streaming end-to-end speaker diarization with non-autoregressive self-attention-based attractors". [ICASSP 2024]
Transformerx
⭐
49
Flexible Python library providing building blocks (layers) for reproducible Transformers research (Tensorflow ✅, Pytorch 🔜, and Jax 🔜)
Global Self Attention Network
⭐
49
A Pytorch implementation of Global Self-Attention Network, a fully-attention backbone for vision tasks
Describing_a_knowledge_base
⭐
42
Code for Describing a Knowledge Base
Dysat
⭐
42
Representation learning on dynamic graphs using self-attention networks
Seq2seq Pytorch
⭐
42
Sequence to Sequence Models in PyTorch
Biam
⭐
41
[ICCV 2021] Official Pytorch implementation for Discriminative Region-based Multi-Label Zero-Shot Learning SOTA results on NUS-WIDE and OpenImages
Relational_deep_reinforcement_learning
⭐
41
Transformer Physx
⭐
40
Transformers for modeling physical systems
Hot
⭐
40
[NeurIPS'21] Higher-order Transformers for sets, graphs, and hypergraphs, in PyTorch
Generative_mlzsl
⭐
40
[TPAMI 2023] Generative Multi-Label Zero-Shot Learning
Attention Augmented Conv
⭐
39
Implementation from the paper Attention Augmented Convolutional Networks in Tensorflow (https://arxiv.org/pdf/1904.09925v1.pdf)
Iperceive
⭐
36
Applying Common-Sense Reasoning to Multi-Modal Dense Video Captioning and Video Question Answering | Python3 | PyTorch | CNNs | Causality | Reasoning | LSTMs | Transformers | Multi-Head Self Attention | Published in IEEE Winter Conference on Applications of Computer Vision (WACV) 2021
Ubisoft Laforge Daft Exprt
⭐
34
PyTorch Implementation of Daft-Exprt: Robust Prosody Transfer Across Speakers for Expressive Speech Synthesis
Multiturndialogzoo
⭐
32
Multi-turn dialogue baselines written in PyTorch
Gatraj
⭐
31
[ISPRS 2023] Official PyTorch Implementation of "GATraj: A Graph- and Attention-based Multi-Agent Trajectory Prediction Model"
Vaenar Tts
⭐
25
PyTorch Implementation of VAENAR-TTS: Variational Auto-Encoder based Non-AutoRegressive Text-to-Speech Synthesis.
Walk Transformer
⭐
24
From Random Walks to Transformer for Learning Node Embeddings (ECML-PKDD 2020) (In Pytorch and Tensorflow)
Structured Self Attentive Sentence Embedding
⭐
23
Re-Implementation of "A Structured Self-Attentive Sentence Embedding" by Lin et al., 2017
Adast
⭐
23
[IEEE TETCI] "ADAST: Attentive Cross-domain EEG-based Sleep Staging Framework with Iterative Self-Training"
Tf Transformer
⭐
22
TensorFlow 2 implementation of Transformer (Attention is all you need).
Cmsa Mtpt 4 Medicalvqa
⭐
21
[ICMR'21, Best Poster Paper Award] Medical Visual Question Answering with Multi-task Pre-training and Cross-modal Self-attention
Transformer
⭐
20
Chatbot using TensorFlow (the model is a Transformer)
Object And Semantic Part Detection Pytorch
⭐
18
Joint detection of Object and its Semantic parts using Attention-based Feature Fusion on PASCAL Parts 2010 dataset
Cvdd Pytorch
⭐
14
A PyTorch implementation of Context Vector Data Description (CVDD), a method for Anomaly Detection on text.
Text Classification Pytorch
⭐
14
Text Classification in PyTorch
Attnpinn For Rul Estimation
⭐
14
A Framework for Remaining Useful Life Prediction Based on Self-Attention and Physics-Informed Neural Networks
Transformer In Pytorch
⭐
13
Transformer/Transformer-XL/R-Transformer examples and explanations
Mairl
⭐
13
[RA-L & ICRA 2021] Adversarial Inverse Reinforcement Learning with Self-attention Dynamics Model
Multi Hop Knowledge Paths Human Needs
⭐
12
Ranking and Selecting Multi-Hop Knowledge Paths to Better Predict Human Needs
Histoseg
⭐
11
HistoSeg is an encoder-decoder DCNN that uses novel Quick Attention modules and a multi-loss function to generate segmentation masks from histopathological images with greater accuracy. This repo contains the code to test and train HistoSeg.
Egt
⭐
11
Edge-Augmented Graph Transformer
Zam
⭐
11
ZAM: Zero parameter Attention Module
Hybrid Self Attention Neat
⭐
10
This repository is the official implementation of the Hybrid Self-Attention NEAT algorithm. It contains the code to reproduce the results presented in the original paper: https://link.springer.com/article/10.1007/s12530-0
Gappy Mwes
⭐
10
Code for NAACL 2019 paper: "Bridging the Gap: Attending to Discontinuity in Identification of Multiword Expressions"
Attentionvisualizer
⭐
10
A simple library to showcase highest scored words using RoBERTa model
Pytorch Attentive Lm
⭐
9
Pytorch Implementation of an A-RNN-LM
Self Attention
⭐
9
A complete implementation of the Transformer, with detailed construction of the Encoder, Decoder, and Self-attention.
Illume
⭐
8
To miss-attend is to misalign! Residual Self-Attentive Feature Alignment for Adapting Object Detectors, WACV 2022
Sem
⭐
8
SEM automatically selects and integrates attention operators to compute attention maps.
Toy Model For Nli
⭐
8
My toy model for natural language inference task.
Arenets
⭐
7
A TensorFlow-based framework that lists attentive implementations of conventional neural network models (CNN- and RNN-based), applicable to Relation Extraction classification tasks, with an API for custom model implementation
Drophead Pytorch
⭐
7
An implementation of drophead regularization for pytorch transformers
Mustang
⭐
7
Multi-stain graph self attention multiple instance learning for histopathology Whole Slide Images - BMVC 2023
Self Supervised Monocular Trained Depth Estimation Using Self Attention And Discrete Disparity Volum
⭐
7
Reproduction of the CVPR 2020 paper - Self-supervised monocular trained depth estimation using self-attention and discrete disparity volume
Mispronunciation Detection
⭐
6
Mispronunciation detection code for jingju singing voice
Uniwin
⭐
5
PyTorch implementation of Uniwin ("Image Super-resolution with Unified Window Attention").
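The repositories above all build on the same core operation. As a quick reference for readers browsing this list, here is a minimal NumPy sketch of single-head scaled dot-product self-attention (illustrative only; the function and variable names are my own, not from any repository listed here):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X:          (seq_len, d_model) input embeddings
    Wq, Wk, Wv: (d_model, d_k) projection matrices
    Returns a (seq_len, d_k) array of attended representations.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (seq_len, seq_len) similarity
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V

# Tiny usage example with random weights.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))              # 4 tokens, d_model = 8
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # → (4, 8)
```

Most of the projects above are variations on this kernel: GAT-style repos restrict the attention weights to graph edges, the efficient-attention lists replace the full (seq_len, seq_len) score matrix with sparse or linear approximations, and the vision repos apply it over image patches.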
Copyright 2018-2024 Awesome Open Source. All rights reserved.