Awesome Open Source
Search results for linear attention
7 search results found
- Rwkv Lm (⭐ 10,705): RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
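The RWKV blurb hinges on a property of causal linear attention: the same computation can be run in parallel over the sequence like a transformer (for training) or step by step with constant-size state like an RNN (for fast inference). A minimal sketch of that equivalence, assuming the common ELU+1 feature map; this is an illustration of the general idea, not RWKV's actual formulation, which uses its own time-mixing:

```python
import numpy as np

def phi(x):
    # Positive feature map (ELU + 1); one common choice, assumed here for illustration.
    return np.where(x > 0, x + 1.0, np.exp(x))

def causal_linear_attention_parallel(Q, K, V):
    """Transformer-style: parallel over the sequence, O(n^2) reference."""
    Qp, Kp = phi(Q), phi(K)
    scores = np.tril(Qp @ Kp.T)  # causal mask: position t attends to s <= t
    return (scores @ V) / (scores.sum(axis=1, keepdims=True) + 1e-9)

def causal_linear_attention_recurrent(Q, K, V):
    """Same computation as an RNN: O(1) state per step, O(n) total."""
    d, dv = Q.shape[1], V.shape[1]
    S = np.zeros((d, dv))  # running sum of phi(k) v^T
    z = np.zeros(d)        # running sum of phi(k)
    out = np.empty_like(V)
    for t in range(Q.shape[0]):
        q, k = phi(Q[t]), phi(K[t])
        S += np.outer(k, V[t])
        z += k
        out[t] = (q @ S) / (q @ z + 1e-9)
    return out

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(6, 4)) for _ in range(3))
print(np.allclose(causal_linear_attention_parallel(Q, K, V),
                  causal_linear_attention_recurrent(Q, K, V)))  # True
```

The recurrent form only carries the (d × d_v) state S and the d-vector z between steps, which is why inference cost per token is independent of context length.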
- Taylor Series Linear Attention (⭐ 61): Explorations into the recently proposed Taylor series linear attention.
- Agent Attention Pytorch (⭐ 57): Implementation of Agent Attention in PyTorch.
- Autoregressive Linear Attention Cuda (⭐ 35): CUDA implementation of autoregressive linear attention, incorporating recent research findings.
- Multi Attention Network (⭐ 22): Semantic segmentation of remote sensing images.
- Maresu Net (⭐ 21): Semantic segmentation of remote sensing images.
- Leap (⭐ 5): LEAP (Linear Explainable Attention in Parallel) for causal language modeling, with O(1) path length and O(1) inference.
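A common thread across these results: "linear" attention replaces the softmax with a kernel feature map so the (n × n) attention matrix never has to be materialized; regrouping the matrix products by associativity turns O(n²) cost into O(n). A minimal illustration, again assuming the ELU+1 feature map (one common choice, not specific to any repo above):

```python
import numpy as np

def phi(x):
    # Illustrative positive feature map (ELU + 1).
    return np.where(x > 0, x + 1.0, np.exp(x))

rng = np.random.default_rng(1)
n, d = 128, 16
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
Qp, Kp = phi(Q), phi(K)

# Quadratic form: materializes the (n, n) attention matrix, O(n^2 d).
A = Qp @ Kp.T
out_quadratic = (A @ V) / A.sum(axis=1, keepdims=True)

# Linear form: regroup by associativity, O(n d^2); no (n, n) matrix.
KV = Kp.T @ V        # (d, d)
z = Kp.sum(axis=0)   # (d,)
out_linear = (Qp @ KV) / (Qp @ z)[:, None]

print(np.allclose(out_quadratic, out_linear))  # True
```

The same regrouping is what makes the CUDA and Taylor-series variants above tractable for long sequences: the per-token cost depends on the feature dimension, not the sequence length.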
Copyright 2018-2024 Awesome Open Source. All rights reserved.