Awesome Open Source
Search results for attention
2,121 search results found
Eeg Dl ⭐ 563: A Deep Learning library for EEG Tasks (Signals) Classification, based on TensorFlow.
Adaptive Span ⭐ 557: Transformer training code for sequential tasks
Stereo Transformer ⭐ 555: Revisiting Stereo Depth Estimation From a Sequence-to-Sequence Perspective with Transformers (ICCV 2021 Oral)
Self Attention Cv ⭐ 550: Implementation of various self-attention mechanisms focused on computer vision. Ongoing repository.
Attention Cnn ⭐ 550: Source code for "On the Relationship between Self-Attention and Convolutional Layers"
Classifier Multi Label ⭐ 548: Multi-label text classification (multi-label, classifier, text classification, BERT, seq2seq, attention, multi-label-classification)
Residual Attention Network ⭐ 547: Residual Attention Network for Image Classification
Exbert ⭐ 541: A Visual Analysis Tool to Explore Learned Representations in Transformer Models
Text Classification Pytorch ⭐ 538: Text classification using deep learning models in PyTorch
Punctuator2 ⭐ 532: A bidirectional recurrent neural network model with an attention mechanism for restoring missing punctuation in unsegmented text
Ban Vqa ⭐ 527: Bilinear attention networks for visual question answering
Chinesenre ⭐ 523: Chinese entity relation extraction (PyTorch, BiLSTM + attention)
Keras_cv_attention_models ⭐ 523: Keras beit, caformer, CMT, CoAtNet, convnext, davit, dino, effi alias kecam
Emanet ⭐ 507: The code for Expectation-Maximization Attention Networks for Semantic Segmentation (ICCV 2019 Oral)
Seq2seq ⭐ 502: Minimal Seq2Seq model with attention for neural machine translation in PyTorch
Draw ⭐ 501: TensorFlow implementation of "DRAW: A Recurrent Neural Network For Image Generation"
Attentionocr ⭐ 488: Scene text recognition
Medical Transformer ⭐ 484: Official PyTorch code for "Medical Transformer: Gated Axial-Attention for Medical Image Segmentation" (MICCAI 2021)
Transformers.jl ⭐ 479: Julia implementation of Transformer models
Attention Networks For Classification ⭐ 477: Hierarchical Attention Networks for document classification in PyTorch
Attn2d ⭐ 475: Pervasive Attention: 2D Convolutional Networks for Sequence-to-Sequence Prediction
Self Attention Gan Tensorflow ⭐ 468: Simple TensorFlow implementation of "Self-Attention Generative Adversarial Networks" (SAGAN)
Neural_sp ⭐ 466: End-to-end ASR/LM implementation with PyTorch
Rnn Nlu ⭐ 454: A TensorFlow implementation of recurrent neural networks for sequence classification and sequence labeling
Mac Network ⭐ 445: Implementation for the paper "Compositional Attention Networks for Machine Reasoning" (Hudson and Manning, ICLR 2018)
Scan ⭐ 442: PyTorch source code for "Stacked Cross Attention for Image-Text Matching" (ECCV 2018)
Knowledge_graph_attention_network ⭐ 434: KGAT: Knowledge Graph Attention Network for Recommendation (KDD 2019)
Bigbird ⭐ 433: Transformers for Longer Sequences
Attention_keras ⭐ 429: Keras layer implementation of attention for sequential models
Dl4j Tutorials ⭐ 429: DL4J basic tutorials; companion videos: https://space.bilibili.com/327018681/#/
Source Code Notebook ⭐ 428: Line-by-line Chinese notes on the source code of several classic papers
Keras Unet Collection ⭐ 428: The TensorFlow/Keras implementation of U-net, V-net, U-net++, UNET 3+, Attention U-net, R2U-net, ResUnet-a, U^2-Net, TransUNET, and Swin-UNET with optional ImageNet-trained backbones.
Residualattentionnetwork Pytorch ⭐ 423: A PyTorch implementation of the Residual Attention Network, based on two other projects.
Dla ⭐ 421: Deep learning for audio processing
Multimodal Transformer ⭐ 418: [ACL '19] [PyTorch] Multimodal Transformer
Keras Text ⭐ 417: Text classification library in Keras
Aspect Based Sentiment Analysis ⭐ 413: 💭 Aspect-based sentiment analysis: Transformer & explainable ML (TensorFlow)
Fs ⭐ 413: File system utilities for Clojure.
Structured Self Attention ⭐ 412: A Structured Self-attentive Sentence Embedding
Keras Transformer ⭐ 410: Keras library for building (Universal) Transformers, facilitating BERT and GPT models
Sqlnet ⭐ 405: Neural network for generating structured queries from natural language.
Ner Bert ⭐ 404: BERT-NER (nert-bert) with Google BERT: https://github.com/google-research.
Point Transformer Pytorch ⭐ 402: Implementation of the Point Transformer layer in PyTorch
Hierarchical Attention Networks ⭐ 396: Document classification with Hierarchical Attention Networks in TensorFlow. WARNING: project is currently unmaintained; issues will probably not be addressed.
Attention Ocr Chinese Version ⭐ 394: Attention OCR based on TensorFlow
Time Series Autoencoder ⭐ 386: 📈 PyTorch dual-attention LSTM autoencoder for multivariate time series 📈
Awesome Gcn ⭐ 377: Resources for graph convolutional networks
Siggraphasia2019_remastering ⭐ 376: Code for the paper "DeepRemaster: Temporal Source-Reference Attention Networks for Comprehensive Video Enhancement". http://iizuka.cs.tsukuba.ac.jp/projects/remasterin
Pytorch Original Transformer ⭐ 376: My implementation of the original Transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing otherwise seemingly hard concepts. Pretrained IWSLT models are currently included.
Deepxi ⭐ 367: Deep Xi: a deep learning approach to a priori SNR estimation, implemented in TensorFlow 2/Keras, for speech enhancement and robust ASR.
Recurrent Visual Attention ⭐ 367: A PyTorch implementation of "Recurrent Models of Visual Attention"
Gran ⭐ 363: Efficient Graph Generation with Graph Recurrent Attention Networks (NeurIPS 2019); a deep generative model of graphs using graph neural networks
Attention Augmented Conv2d ⭐ 363: Implementation of Attention Augmented Convolutional Networks in PyTorch
Ai News ⭐ 361: Summaries of papers and code in AI (semantic segmentation, medical segmentation, ReID, super-resolution, registration, CV)
Sknet ⭐ 360: Code for our CVPR 2019 paper "Selective Kernel Networks"; see Zhihu: https://zhuanlan.zhihu.com/p/59690223
Deep_learning_nlp ⭐ 359: Keras, PyTorch, and NumPy implementations of deep learning architectures for NLP
Dynamic Convolution Pytorch ⭐ 352: PyTorch implementation of "Dynamic Convolution: Attention over Convolution Kernels" (CVPR 2020)
Relationprediction ⭐ 347: ACL 2019: Learning Attention-based Embeddings for Relation Prediction in Knowledge Graphs
Draw ⭐ 346: Reimplementation of DRAW
Nlp Projects ⭐ 345: word2vec, sentence2vec, machine reading comprehension, dialog systems, text classification, pretrained language models (i.e., XLNet, BERT, ELMo, GPT), sequence labeling, information retrieval, information extraction (i.e., entity, relation, and event extraction), knowledge graphs, text generation, network embedding
Pranet ⭐ 344: PraNet: Parallel Reverse Attention Network for Polyp Segmentation, MICCAI 2020 (Oral). Code using the Jittor framework is available.
Tacotronv2_wavernn_chinese ⭐ 344: Tacotron 2 + WaveRNN for Chinese speech synthesis (TensorFlow + PyTorch)
Sagpool ⭐ 338: Official PyTorch implementation of SAGPool (ICML 2019)
Action Recognition Visual Attention ⭐ 338: Action recognition using soft-attention-based deep recurrent neural networks
Transformer_time_series ⭐ 331: Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting (NeurIPS 2019)
Cs224n 2019 Solutions ⭐ 329: Complete solutions for Stanford CS224n, Winter 2019
Uaa Behind Zuul Sample ⭐ 326: Spring Authorization Server load-balanced behind Zuul
Mprgdeeplearninglecturenotebook ⭐ 324
Linformer Pytorch ⭐ 323: My take on a practical implementation of Linformer for PyTorch.
Appnp ⭐ 322: A PyTorch implementation of "Predict then Propagate: Graph Neural Networks meet Personalized PageRank" (ICLR 2019).
Marktool ⭐ 321: DoTAT is a web-based, domain-oriented, general-purpose text annotation tool supporting large-scale entity annotation, relation annotation, event annotation, text classification, and …
Transformer Tensorflow ⭐ 318: Implementation of the Transformer model in TensorFlow
Chinese Chatbot ⭐ 318: A Chinese chatbot trained on 100,000 dialogue pairs; it uses an attention mechanism and generates a meaningful reply to most general questions. Uploaded …
San ⭐ 318: Second-order Attention Network for Single Image Super-resolution (CVPR 2019)
Keras Transformer ⭐ 312: Transformer implemented in Keras
Text Classification Models Pytorch ⭐ 309: Implementation of state-of-the-art text classification models in PyTorch
Attention Analysis ⭐ 309
Relation Aware Global Attention Networks ⭐ 305: We design an effective Relation-Aware Global Attention (RGA) module for CNNs to globally infer the attention.
Transformer Tensorflow ⭐ 304: TensorFlow implementation of "Attention Is All You Need" (2017.6)
Keras Gat ⭐ 301: Keras implementation of the graph attention networks (GAT) by Veličković et al. (2017; https://arxiv.org/abs/1710.10903)
Youmaynotneedattention ⭐ 301: Code for the Eager Translation Model from the paper "You May Not Need Attention"
Attentionwalk ⭐ 299: A PyTorch implementation of "Watch Your Step: Learning Node Embeddings via Graph Attention" (NeurIPS 2018).
Abd Net ⭐ 297: [ICCV 2019] "ABD-Net: Attentive but Diverse Person Re-Identification" https://arxiv.org/abs/1908.01114
Atrank ⭐ 297: An attention-based user behavior modeling framework for recommendation
Var Attn ⭐ 296: Latent Alignment and Variational Attention
Stanet ⭐ 296: Official implementation of the spatial-temporal attention neural network (STANet) for remote sensing image change detection
Disan ⭐ 295: Code for the Directional Self-Attention Network (DiSAN)
Attention_is_all_you_need ⭐ 293: Chainer implementation of the Transformer from "Attention Is All You Need" (Vaswani et al., 2017)
Cm3leon ⭐ 288: An open-source implementation of "Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning", a multimodal model that uses a decoder-only architecture to generate both text and images
Adaptiveattention ⭐ 288: Implementation of "Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning"
Biformer ⭐ 288: [CVPR 2023] Official code release of our paper "BiFormer: Vision Transformer with Bi-Level Routing Attention"
Deep Time Series Prediction ⭐ 287: Seq2Seq, BERT, Transformer, and WaveNet for time series prediction.
Tf Keras Vis ⭐ 286: Neural network visualization toolkit for tf.keras
Slot Attention ⭐ 286: Implementation of Slot Attention from Google AI
Ffa Net ⭐ 286: FFA-Net: Feature Fusion Attention Network for Single Image Dehazing
Graphtransformer ⭐ 285: Graph Transformer architecture; source code for "A Generalization of Transformer Networks to Graphs" (DLG-AAAI '21).
Hiecoattenvqa ⭐ 284
Open Musiclm ⭐ 281: Implementation of MusicLM, a text-to-music model published by Google Research, with a few modifications.
Attention Based Bilstm Relation Extraction ⭐ 279: TensorFlow implementation of "Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification" (ACL 2016)
Linear Attention Transformer ⭐ 278: Transformer based on a variant of attention that is linear in complexity with respect to sequence length
Related Searches
Python Attention (2,327)
Pytorch Attention (645)
Jupyter Notebook Attention (557)
101-200 of 2,121 search results
Copyright 2018-2024 Awesome Open Source. All rights reserved.