Awesome Open Source
Search results for attention mechanism
649 search results found
Local Attention
⭐
270
An implementation of local windowed attention for language modeling
Memory Efficient Attention Pytorch
⭐
267
Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory"
Eqtransformer
⭐
260
EQTransformer, a Python package for earthquake signal detection and phase picking using AI.
Seq2seq Summarizer
⭐
259
Pointer-generator reinforced seq2seq summarization in PyTorch
Q Transformer
⭐
253
Implementation of Q-Transformer ("Scalable Offline Reinforcement Learning via Autoregressive Q-Functions"), from Google DeepMind
Metal Flash Attention
⭐
252
Faster alternative to Metal Performance Shaders
Magvit2 Pytorch
⭐
244
Implementation of the MagViT2 tokenizer in PyTorch
Hierarchical Multi Label Text Classification
⭐
239
Code for the CIKM'19 paper "Hierarchical Multi-label Text Classification: An Attention-based Recurrent Network Approach"
Lightnetplusplus
⭐
238
LightNet++: Boosted Light-weighted Networks for Real-time Semantic Segmentation
Da Rnn
⭐
234
📃 Unofficial PyTorch implementation of DA-RNN (arXiv:1704.02971)
Scribe
⭐
234
Realistic Handwriting with TensorFlow
Attentionalpoolingaction
⭐
228
Code/Model release for NIPS 2017 paper "Attentional Pooling for Action Recognition"
Pan
⭐
227
(ECCV 2020 Workshops) Efficient Image Super-Resolution Using Pixel Attention.
Yolo Multi Backbones Attention
⭐
223
Model compression: YOLOv3 with multiple lightweight backbones (ShuffleNetV2, Huawei GhostNet), attention, pruning, and quantization
Attentive Gan Derainnet
⭐
222
Unofficial TensorFlow implementation of the "Attentive Generative Adversarial Network for Raindrop Removal from a Single Image" (CVPR 2018) model. https://maybeshewill-cv.github.io/attentive-gan-de
Pytorch Attention
⭐
222
🦖 PyTorch implementation of popular attention mechanisms, Vision Transformers, MLP-like models, and CNNs. 🔥🔥🔥
Rt 2
⭐
215
Democratization of RT-2 ("RT-2: New model translates vision and language into action")
Colt5 Attention
⭐
207
Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch
Equiformer Pytorch
⭐
207
Implementation of the Equiformer, an SE(3)/E(3)-equivariant attention network that reaches a new SOTA and has been adopted by EquiFold for protein folding
Multimodal Sentiment Analysis
⭐
205
Attention-based multimodal fusion for sentiment analysis
Se3 Transformer Pytorch
⭐
205
Implementation of SE3-Transformers for Equivariant Self-Attention, in PyTorch. This specific repository is geared towards integration with the eventual Alphafold2 replication.
Routing Transformer
⭐
202
Fully featured implementation of Routing Transformer
Saits
⭐
201
The official PyTorch implementation of the paper "SAITS: Self-Attention-based Imputation for Time Series". A fast, state-of-the-art (SOTA) model for imputing multivariate incomplete time series containing missing values. https://arxiv.org/abs/2202.08516
Sca Cnn.cvpr17
⭐
201
Image Caption Generation with Spatial and Channel-wise Attention
Ttslearn
⭐
197
ttslearn: Library for the Japanese textbook "Pythonで学ぶ音声合成" (Text-to-Speech with Python)
Mega Pytorch
⭐
195
Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena
Linformer
⭐
194
Implementation of Linformer for PyTorch
En Transformer
⭐
192
Implementation of E(n)-Transformer, which incorporates attention mechanisms into Welling's E(n)-Equivariant Graph Neural Network
Sparse Structured Attention
⭐
190
Sparse and structured neural attention mechanisms
Deformable Attention
⭐
190
Implementation of Deformable Attention in PyTorch, from the paper "Vision Transformer with Deformable Attention"
Attention
⭐
187
This repository will house a visualization, inspired by 3Blue1Brown, that attempts to give someone outside artificial intelligence an instant understanding of how attention works.
Guided Attention Inference Network
⭐
187
Contains an implementation of the Guided Attention Inference Network (GAIN) presented in "Tell Me Where to Look" (CVPR 2018). This repository aims to apply GAIN to the FCN-8 architecture used for segmentation.
Pycontinual
⭐
185
PyContinual (An Easy and Extendible Framework for Continual Learning)
Hnatt
⭐
182
Train and visualize Hierarchical Attention Networks
Palm Jax
⭐
181
Implementation of the specific Transformer architecture from PaLM ("Scaling Language Modeling with Pathways") in JAX (Equinox framework)
Sinkhorn Transformer
⭐
178
Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention
Neat Vision
⭐
175
Neat (Neural Attention) Vision is a framework-agnostic visualization tool for the attention mechanisms of deep-learning models for Natural Language Processing (NLP) tasks.
Flash Cosine Sim Attention
⭐
173
Implementation of fused cosine similarity attention in the same style as Flash Attention
Multihead Siamese Nets
⭐
173
Implementation of Siamese neural networks built upon a multi-head attention mechanism for the text semantic similarity task.
Simple Hierarchical Transformer
⭐
172
Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT
Datastories Semeval2017 Task4
⭐
171
Deep-learning model presented in "DataStories at SemEval-2017 Task 4: Deep LSTM with Attention for Message-level and Topic-based Sentiment Analysis".
Recurrent Interface Network Pytorch
⭐
170
Implementation of Recurrent Interface Network (RIN), for highly efficient generation of images and video without cascading networks, in PyTorch
Amfmn
⭐
170
The source code of AMFMN and the dataset RSITMD
Tracer
⭐
169
TRACER: Extreme Attention Guided Salient Object Tracing Network (AAAI 2022) implementation in PyTorch
Document Classifier Lstm
⭐
167
A bidirectional LSTM with attention for multiclass/multilabel text classification.
Yolov3 Point
⭐
167
Learning the YOLOv3 codebase from scratch
Multi Scale Attention
⭐
166
Code for our paper "Multi-scale Guided Attention for Medical Image Segmentation"
Block Recurrent Transformer Pytorch
⭐
164
Implementation of Block Recurrent Transformer, in PyTorch
Slot_filling_intent_joint_model
⭐
161
Attention-based joint model for intent detection and slot filling
Graph_attention_pool
⭐
161
Attention over nodes in Graph Neural Networks using PyTorch (NeurIPS 2019)
At4chinesener
⭐
160
Adversarial Transfer Learning for Chinese Named Entity Recognition with Self-Attention Mechanism
Chinese Text Classification Pytorch
⭐
156
Chinese text classification implemented in PyTorch (TextCNN, TextRNN, FastText, Text DPCNN, Transformer, BERT, ERNIE), ready to use out of the box!
A Pytorch Tutorial To Text Classification
⭐
154
Hierarchical Attention Networks | a PyTorch Tutorial to Text Classification
Poetry Seq2seq
⭐
154
Chinese Poetry Generation
Style Transformer
⭐
154
Official implementation for "Style Transformer for Image Inversion and Editing" (CVPR 2022)
Sa Tensorflow
⭐
153
Soft attention mechanism for video caption generation
Galerkin Transformer
⭐
152
[NeurIPS 2021] Galerkin Transformer: a linear attention without softmax
Aoanet
⭐
149
Code for the paper "Attention on Attention for Image Captioning" (ICCV 2019)
Csa Inpainting
⭐
149
Coherent Semantic Attention for image inpainting (ICCV 2019)
Speech2text
⭐
148
A Deep-Learning-Based Persian Speech Recognition System
Picanet Implementation
⭐
147
PyTorch implementation of "PiCANet: Learning Pixel-wise Contextual Attention for Saliency Detection"
Bs Roformer
⭐
144
Implementation of Band-Split RoFormer, a SOTA attention network for music source separation, from ByteDance AI Labs
Hart
⭐
144
Hierarchical Attentive Recurrent Tracking
Palm E
⭐
143
Implementation of "PaLM-E: An Embodied Multimodal Language Model"
Visualization
⭐
142
A collection of visualization functions
Text_recognition_toolbox
⭐
141
text_recognition_toolbox: A uniform PyTorch reimplementation of a series of classical scene text recognition papers.
Axial Attention
⭐
140
Implementation of Axial attention - attending to multi-dimensional data efficiently
Matnet
⭐
137
Motion-Attentive Transition for Zero-Shot Video Object Segmentation (AAAI 2020)
Prediction Flow
⭐
136
Deep-learning-based CTR models implemented in PyTorch
Nlp De Cero A Cien
⭐
135
Practical course: NLP from zero to one hundred 🤗
Stf
⭐
132
PyTorch implementation of the paper "The Devil Is in the Details: Window-based Attention for Image Compression".
Fast Transformer
⭐
132
An implementation of Fastformer ("Additive Attention Can Be All You Need"), a Transformer variant, in TensorFlow
Lstm_attention
⭐
131
Attention-based LSTM/Dense models implemented in Keras
Dl4mt Multi
⭐
129
Deep Summarization
⭐
126
Uses recurrent neural networks (LSTM/GRU/basic_RNN units) for summarization of Amazon reviews
Triton Transformer
⭐
125
Implementation of a Transformer, but completely in Triton
Dhf1k
⭐
125
Revisiting Video Saliency: A Large-scale Benchmark and a New Model (CVPR18, PAMI19)
Attribute Aware Attention
⭐
124
[ACM MM 2018] Attribute-Aware Attention Model for Fine-grained Representation Learning
Flash Attention Jax
⭐
123
Implementation of Flash Attention in JAX
Fusilli
⭐
120
A Python package housing a collection of deep-learning multi-modal data fusion method pipelines! From data loading to training to evaluation, fusilli's got you covered 🌸
Transformer In Generating Dialogue
⭐
117
An implementation of "Attention Is All You Need" with a Chinese corpus
Ylg
⭐
115
[CVPR 2020] Official Implementation: "Your Local GAN: Designing Two Dimensional Local Attention Mechanisms for Generative Models".
Abstractive Summarization
⭐
113
Implementation of abstractive summarization using LSTM in the encoder-decoder architecture with local attention.
Segformer Pytorch
⭐
111
Implementation of Segformer, Attention + MLP neural network for segmentation, in PyTorch
Nystrom Attention
⭐
111
Implementation of Nyström Self-attention, from the paper Nyströmformer
Deep_sort_yolov3_pytorch
⭐
109
Adds attention blocks such as CBAM and SE, plus DeepSORT, SORT, and other tracking algorithms using OpenCV
Selfattentive
⭐
108
Implementation of "A Structured Self-Attentive Sentence Embedding"
Linear Attention Recurrent Neural Network
⭐
107
A recurrent attention module consisting of an LSTM cell that can query its own past cell states by means of windowed multi-head attention. The formulas are derived from the BN-LSTM and the Transformer network. The LARNN cell with attention can easily be used inside a loop on the cell state, just like any other RNN. (LARNN)
Atpapers
⭐
105
Worth-reading papers and related resources on attention mechanisms, Transformers, and pretrained language models (PLMs) such as BERT.
Deep Learning Image Classification Models Based Cnn Or Attention
⭐
104
This project organizes classic image classification neural networks based on convolution or attention, and provides training and inference Python scripts
Macarico
⭐
104
Learning to search, in PyTorch
Awesome Transformer In Medical Imaging
⭐
103
[MedIA Journal] A comprehensive paper list on Vision Transformers/attention, including papers, code, and related websites
Absa_keras
⭐
103
Keras implementation of aspect-based sentiment analysis
Pytorch Question Answering
⭐
102
Important paper implementations for Question Answering using PyTorch
Hypertransformer
⭐
102
[CVPR'22] HyperTransformer: A Textural and Spectral Feature Fusion Transformer for Pansharpening
Drln
⭐
101
Densely Residual Laplacian Super-resolution, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020
H Transformer 1d
⭐
100
Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning
Adnet
⭐
99
Attention-guided CNN for image denoising (Neural Networks, 2020)
Transformers Rl
⭐
96
An easy PyTorch implementation of "Stabilizing Transformers for Reinforcement Learning"
Bidirectional Cross Attention
⭐
95
A simple cross attention that updates both the source and target in one step
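The common denominator of the repositories above is scaled dot-product attention from "Attention Is All You Need" (Vaswani et al., 2017); the local, memory-efficient, linear, axial, and routed variants listed here are largely strategies for restructuring or approximating its O(n²) score matrix. As a minimal orientation sketch, assuming PyTorch (the function below is illustrative and does not reproduce any listed repository):

import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, heads, seq_len, head_dim)
    # Similarity of each query to every key, scaled by sqrt(head_dim)
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))  # hide masked positions
    weights = F.softmax(scores, dim=-1)  # attention distribution over the keys
    return weights @ v  # weighted sum of value vectors

# Example: batch of 2, 4 heads, sequence length 8, head dimension 16
q = k = v = torch.randn(2, 4, 8, 16)
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([2, 4, 8, 16])

The scores tensor is quadratic in sequence length, which is the cost that windowed (Local Attention), chunked (Memory Efficient Attention Pytorch), low-rank (Linformer), and routed (Routing Transformer, CoLT5) variants above try to avoid.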