Project Name | Stars | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description
---|---|---|---|---|---|---|---|---
Bi Att Flow | 1,510 | a year ago | | | 73 | apache-2.0 | Python | The Bi-directional Attention Flow (BiDAF) network is a multi-stage hierarchical model that represents the context at different levels of granularity and uses a bi-directional attention flow mechanism to achieve a query-aware context representation without early summarization.
Sparse_attention | 1,002 | 4 years ago | | | 10 | | Python | Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers"
Keras Attention | 656 | 5 years ago | | | 22 | agpl-3.0 | Python | Visualizing RNNs using the attention mechanism
Dl4j Tutorials | 429 | 3 years ago | | | | mit | Java | dl4j basic tutorials; companion videos: https://space.bilibili.com/327018681/#/
Attention_keras | 429 | a year ago | | | 11 | mit | Python | Keras layer implementation of attention for sequential models
Neurst | 232 | 2 years ago | 3 | April 14, 2022 | 9 | other | Python | Neural end-to-end speech translation toolkit
Struct Attn | 221 | 7 years ago | | | 1 | mit | Lua | Code for Structured Attention Networks, https://arxiv.org/abs/1702.00887
Mcan Vqa | 181 | 4 years ago | | | 2 | apache-2.0 | Python | Deep Modular Co-Attention Networks for Visual Question Answering
Multihead Siamese Nets | 173 | a year ago | | | 12 | mit | Jupyter Notebook | Implementation of Siamese neural networks built upon a multi-head attention mechanism for the text semantic similarity task
Image Local Attention | 93 | 2 years ago | | | 2 | | Python | A PyTorch implementation of image local attention that reduces GPU memory usage by an order of magnitude
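The projects above all build on the same core operation: attention weights computed from query/key similarity and used to mix value vectors. As a minimal, framework-free sketch (NumPy only; function and variable names are illustrative, not taken from any of the listed repositories), scaled dot-product attention looks like this:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(q, k, v):
    """Compute softmax(Q K^T / sqrt(d)) V.

    q: (num_queries, d), k: (num_keys, d), v: (num_keys, d_v).
    Returns the attended output and the attention weight matrix.
    """
    d = q.shape[-1]
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d)  # (num_queries, num_keys)
    weights = softmax(scores, axis=-1)            # rows sum to 1
    return weights @ v, weights

# Toy example: 3 query positions attending over 4 key/value positions.
rng = np.random.default_rng(0)
q = rng.standard_normal((3, 8))
k = rng.standard_normal((4, 8))
v = rng.standard_normal((4, 8))
out, w = scaled_dot_product_attention(q, k, v)
```

Variants in the list differ mainly in how `scores` is formed or masked: BiDAF computes similarities in both query-to-context and context-to-query directions, sparse attention zeroes out most of the score matrix, and local attention restricts each query to a spatial neighborhood of keys.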