pytorch-chatbot


This is a PyTorch seq2seq chatbot tutorial for the Formosa Speech Grand Challenge, modified from the practical-pytorch seq2seq-translation-batched example.
A tutorial introducing this repo is available on the PyTorch official website; a Chinese version of the tutorial is also available.


A new version is already implemented in the "dev" branch.


Requirements

  • python 3.5+
  • pytorch 0.4.0
  • tqdm

Get started

Clone the repository

git clone


In the corpus file, input-output sequence pairs should be on adjacent lines. For example,

I'll see you next time.
Sure. Bye.
How are you?
Better than ever.
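The adjacent-line convention above can be parsed into (input, response) pairs with a short sketch like the following (make_pairs is a hypothetical helper for illustration, not part of this repo):

```python
def make_pairs(lines):
    """Pair each input line with the response line that follows it."""
    lines = [ln.strip() for ln in lines if ln.strip()]
    # Even-indexed lines are inputs, odd-indexed lines are responses.
    return [(lines[i], lines[i + 1]) for i in range(0, len(lines) - 1, 2)]

sample = [
    "I'll see you next time.",
    "Sure. Bye.",
    "How are you?",
    "Better than ever.",
]
pairs = make_pairs(sample)
# The four example lines yield two input-response pairs.
```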

The corpus files should be placed under a path like

pytorch-chatbot/data/<corpus file name>

Otherwise, the corpus file will be tracked by git.
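This behavior suggests the data/ directory is listed in the repository's .gitignore (an assumption, not verified against this repo), e.g. an entry along the lines of:

```
# keep corpus files out of version control
data/
```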

Pretrained Model

The pretrained model, trained on the movie_subtitles corpus with a bidirectional RNN layer and hidden size 512, can be downloaded from this link. The pretrained model file should be placed in the directory as follows.

mkdir -p save/model/movie_subtitles/1-1_512
mv 50000_backup_bidir_model.tar save/model/movie_subtitles/1-1_512


Run this command to start training, changing the argument values as needed.

python -tr <CORPUS_FILE_PATH> -la 1 -hi 512 -lr 0.0001 -it 50000 -b 64 -p 500 -s 1000
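The short flags above presumably map to argparse options roughly like the sketch below; the long option names, help strings, and defaults are assumptions for illustration, not taken from this repo's source.

```python
import argparse

# Hypothetical reconstruction of the CLI flags shown above; names/defaults are guesses.
parser = argparse.ArgumentParser(description="seq2seq chatbot training/testing")
parser.add_argument("-tr", "--train", help="path to the training corpus file")
parser.add_argument("-la", "--layer", type=int, default=1, help="number of RNN layers")
parser.add_argument("-hi", "--hidden", type=int, default=512, help="hidden size")
parser.add_argument("-lr", "--learning-rate", type=float, default=0.0001)
parser.add_argument("-it", "--iteration", type=int, default=50000)
parser.add_argument("-b", "--batch-size", type=int, default=64)
parser.add_argument("-p", "--print-every", type=int, default=500)
parser.add_argument("-s", "--save-every", type=int, default=1000)

# Parse a sample command line matching the training example above.
args = parser.parse_args(["-tr", "data/movie_subtitles", "-hi", "512", "-b", "64"])
```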

To continue training from a saved model:

python -tr <CORPUS_FILE_PATH> -l <MODEL_FILE_PATH> -lr 0.0001 -it 50000 -b 64 -p 500 -s 1000

For more options:

python -h


Models are saved under pytorch-chatbot/save/model during training; this save path can be changed in
Evaluate the saved model with input sequences from the corpus.


Test the model with input sequences entered manually.


Beam search with beam size k.

python -te <MODEL_FILE_PATH> -c <CORPUS_FILE_PATH> -be k [-i] 
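Beam search keeps the k highest-scoring partial sequences at each decoding step instead of only the single best (greedy) one. A minimal, framework-free sketch of the idea; the toy next_scores function is invented for illustration and is not this repo's decoder:

```python
import math

def beam_search(start, next_scores, steps, k):
    """Keep the k best partial sequences by cumulative log-probability at each step."""
    beams = [([start], 0.0)]  # (token sequence, cumulative log-prob)
    for _ in range(steps):
        candidates = []
        for seq, score in beams:
            # Extend every surviving beam with every possible next token.
            for tok, logp in next_scores(seq).items():
                candidates.append((seq + [tok], score + logp))
        # Prune back down to the k highest-scoring candidates.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
    return beams

# Toy distribution: after any prefix, "a" is likelier than "b".
def toy_next_scores(seq):
    return {"a": math.log(0.7), "b": math.log(0.3)}

best = beam_search("<s>", toy_next_scores, steps=2, k=2)
```

With k=1 this degenerates to greedy decoding; larger k trades decoding time for a wider search of candidate responses.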
