Awesome Open Source
Search results for "deep learning linear attention"
Filters: deep-learning, linear-attention
5 search results found
Rwkv Lm (⭐ 10,705)
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.

Taylor Series Linear Attention (⭐ 61)
Explorations into the recently proposed Taylor series linear attention.

Agent Attention Pytorch (⭐ 57)
Implementation of Agent Attention in Pytorch.

Autoregressive Linear Attention Cuda (⭐ 35)
CUDA implementation of autoregressive linear attention, incorporating the latest research findings.

Leap (⭐ 5)
LEAP: Linear Explainable Attention in Parallel for causal language modeling, with O(1) path length and O(1) inference.
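The projects above share one core idea: causal linear attention can be computed either in parallel over the whole sequence (for GPT-style training) or as a recurrence with a fixed-size state (for O(1)-per-token inference), and both forms give the same output. A minimal NumPy sketch of that equivalence, with made-up toy dimensions and the feature map and normalizer omitted for brevity (so this is the bare principle, not any one repo's exact formulation):

```python
import numpy as np

# Toy dimensions (hypothetical, not tied to any repo above).
T, d = 6, 4
rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, T, d))

# Parallel (training-style) form:
# out[t] = q_t @ sum_{s<=t} k_s v_s^T, via a cumulative sum of outer products.
S_cum = np.cumsum(K[:, :, None] * V[:, None, :], axis=0)  # (T, d, d)
out_parallel = np.einsum('td,tde->te', Q, S_cum)

# Recurrent (inference-style) form: a single (d, d) state, O(1) work per step,
# independent of how many tokens came before.
S = np.zeros((d, d))
out_recurrent = np.empty_like(out_parallel)
for t in range(T):
    S = S + np.outer(K[t], V[t])          # fold token t into the state
    out_recurrent[t] = Q[t] @ S           # read out attention for token t

assert np.allclose(out_parallel, out_recurrent)
```

Because the state never grows with sequence length, the recurrent form is what makes claims like "infinite ctx_len" and O(1) inference plausible for this family of models.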
Related Searches
Python Deep Learning (19,470)
Jupyter Notebook Deep Learning (10,328)
Deep Learning Tensorflow (5,868)
Deep Learning Neural Network (5,801)
Deep Learning Pytorch (5,563)
Deep Learning Convolutional Neural Networks (3,932)
Deep Learning Neural (3,734)
Network Deep Learning (3,532)
Deep Learning Computer Vision (3,365)
Deep Learning Keras (3,258)
Copyright 2018-2024 Awesome Open Source. All rights reserved.