Project Name | Stars | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description
---|---|---|---|---|---|---|---|---
Annotated_deep_learning_paper_implementations | 22,464 | a month ago | 76 | June 27, 2022 | 17 | mit | Jupyter Notebook | 🧑‍🏫 59 implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, ...), gans (cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
Vit Pytorch | 14,120 | 19 days ago | 143 | June 30, 2022 | 106 | mit | Python | Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch
Nlp Tutorial | 12,403 | 2 months ago | | | 34 | mit | Jupyter Notebook | Natural Language Processing Tutorial for Deep Learning Researchers
External Attention Pytorch | 8,745 | 22 days ago | | | 61 | mit | Python | 🍀 Pytorch implementation of various Attention Mechanisms, MLP, Re-parameter, Convolution, which is helpful to further understand papers. ⭐⭐⭐
Attention Is All You Need Pytorch | 7,444 | a month ago | | | 68 | mit | Python | A PyTorch implementation of the Transformer model in "Attention is All You Need".
Espnet | 6,652 | a day ago | 27 | May 28, 2022 | 473 | apache-2.0 | Python | End-to-End Speech Processing Toolkit
Dalle Pytorch | 5,213 | 15 days ago | 172 | May 30, 2022 | 124 | mit | Python | Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
Bertviz | 5,184 | a month ago | 5 | April 02, 2022 | 7 | apache-2.0 | Python | BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
Pytorch Seq2seq | 4,548 | 6 days ago | | | 56 | mit | Jupyter Notebook | Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Informer2020 | 3,421 | 2 months ago | | | 39 | apache-2.0 | Python | The GitHub repository for the paper "Informer" accepted by AAAI 2021.
This is a PyTorch implementation of the Transformer model in "Attention is All You Need" (Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin, arxiv, 2017).
A novel sequence-to-sequence framework that relies on the self-attention mechanism instead of convolution operations or recurrent structures, achieving state-of-the-art performance on the WMT 2014 English-to-German translation task. (2017/06/12)
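For reference, the building block behind this is scaled dot-product attention, Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V. Below is a minimal PyTorch sketch of that operation, included only as an illustration of the mechanism, not as the modules used in this repository:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, heads, seq_len, d_k)
    d_k = q.size(-1)
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    scores = torch.matmul(q, k.transpose(-2, -1)) / d_k ** 0.5
    if mask is not None:
        # Block attention to padded (or future) positions.
        scores = scores.masked_fill(mask == 0, float('-inf'))
    attn = F.softmax(scores, dim=-1)
    return torch.matmul(attn, v), attn
```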
The official TensorFlow implementation can be found at tensorflow/tensor2tensor.
To learn more about the self-attention mechanism, you could read "A Structured Self-attentive Sentence Embedding".
The project now supports training and translation with a trained model.
Note that this project is still a work in progress.
BPE related parts are not yet fully tested.
If you have any suggestions or find any errors, feel free to file an issue to let me know. :)
An example of training for the WMT'16 Multimodal Translation task (http://www.statmt.org/wmt16/multimodal-task.html).
0) Download the spacy language models:

```bash
# conda install -c conda-forge spacy
python -m spacy download en
python -m spacy download de
```
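A quick way to confirm the models are available is simply to load them. This assumes the spaCy 2.x shortcut names used above; newer spaCy releases use full model names such as en_core_web_sm instead:

```python
import spacy

# Shortcut names 'en' and 'de' match the download commands above (spaCy 2.x style).
nlp_en = spacy.load('en')
nlp_de = spacy.load('de')
print([t.text for t in nlp_en('A quick tokenization test.')])
```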
1) Preprocess the data:

```bash
python preprocess.py -lang_src de -lang_trg en -share_vocab -save_data m30k_deen_shr.pkl
```
2) Train the model:

```bash
python train.py -data_pkl m30k_deen_shr.pkl -log m30k_deen_shr -embs_share_weight -proj_share_weight -label_smoothing -output_dir output -b 256 -warmup 128000 -epoch 400
```
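The `-warmup 128000` option corresponds to the learning-rate schedule from the paper, lr = d_model^-0.5 * min(step^-0.5, step * warmup^-1.5). A standalone sketch of that schedule follows; the optimizer wrapper inside this repository may be organized differently:

```python
def transformer_lr(step, d_model=512, n_warmup_steps=128000):
    # Grows roughly linearly for the first n_warmup_steps updates, then decays as step^-0.5.
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * n_warmup_steps ** -1.5)
```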
3) Test the model:

```bash
python translate.py -data_pkl m30k_deen_shr.pkl -model trained.chkpt -output prediction.txt
```
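Since `-output prediction.txt` writes the decoded sentences to a text file, a quick sanity check is just to print the first few lines (illustrative only):

```python
# Print the first five decoded sentences from the translation output.
with open('prediction.txt', encoding='utf-8') as f:
    for i, line in enumerate(f):
        if i == 5:
            break
        print(line.rstrip())
```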
For the BPE-based workflow (still a work in progress): since the interfaces are not unified, you need to switch the main function call in preprocess.py from `main_wo_bpe` to `main`.
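As a rough illustration of that switch (the exact entry-point code in preprocess.py may differ), the change amounts to:

```python
# Hypothetical sketch of the entry point in preprocess.py; names follow the note above.
if __name__ == '__main__':
    # main_wo_bpe()   # word-level preprocessing used in the Multi30k example above
    main()            # BPE-based preprocessing for this workflow
```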
1) Download and preprocess the data with BPE:

```bash
python preprocess.py -raw_dir /tmp/raw_deen -data_dir ./bpe_deen -save_data bpe_vocab.pkl -codes codes.txt -prefix deen
```
2) Train the model:

```bash
python train.py -data_pkl ./bpe_deen/bpe_vocab.pkl -train_path ./bpe_deen/deen-train -val_path ./bpe_deen/deen-val -log deen_bpe -embs_share_weight -proj_share_weight -label_smoothing -output_dir output -b 256 -warmup 128000 -epoch 400
```