XMUNMT


An open source Neural Machine Translation toolkit developed by the NLPLAB of Xiamen University.


Features

  • Multi-GPU support
  • Built-in validation functionality


Tutorial

This tutorial describes how to train an NMT model on WMT17's EN-DE data using this repository.


Prerequisite

You must install TensorFlow (>=1.4.0) first to use this library.
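A quick way to check that the installed version satisfies this requirement:

import tensorflow as tf

# XMUNMT requires TensorFlow >= 1.4.0; print the installed version to verify.
print(tf.__version__)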

Download Data

The preprocessed data can be found here.

Data Preprocessing

  1. Byte Pair Encoding
  • The most common approach to achieving an open vocabulary is to use Byte Pair Encoding (BPE). The code for BPE can be found in the subword-nmt repository; a short sketch of the algorithm itself follows the commands below.
  • To encode the training corpora using BPE, you need to generate the BPE operations first. The following command will create a file named "bpe32k", which contains 32k BPE operations, along with two dictionaries named "vocab.en" and "vocab.de".
python subword-nmt/ --input -s 32000 -o bpe32k --write-vocabulary vocab.en
  • You then need to encode the training corpora, validation set, and test set using the generated BPE operations and dictionaries.
python subword-nmt/apply_bpe.py -c bpe32k --vocabulary vocab.en --vocabulary-threshold 50 < corpus.tc.en > corpus.bpe32k.en
python subword-nmt/apply_bpe.py -c bpe32k --vocabulary vocab.de --vocabulary-threshold 50 < corpus.tc.de > corpus.bpe32k.de
python subword-nmt/apply_bpe.py -c bpe32k --vocabulary vocab.en --vocabulary-threshold 50 < newstest2016.tc.en > newstest2016.bpe32k.en
python subword-nmt/apply_bpe.py -c bpe32k --vocabulary vocab.de --vocabulary-threshold 50 < newstest2016.tc.de > newstest2016.bpe32k.de
python subword-nmt/apply_bpe.py -c bpe32k --vocabulary vocab.en --vocabulary-threshold 50 < newstest2017.tc.en > newstest2017.bpe32k.en
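For intuition, here is a minimal Python sketch of the BPE learning loop from Sennrich et al. (2016), which the subword-nmt scripts above implement at scale: it repeatedly merges the most frequent adjacent symbol pair. The toy vocabulary is hypothetical and for illustration only.

import collections
import re

def get_stats(vocab):
    """Count the frequency of each adjacent symbol pair in the vocabulary."""
    pairs = collections.defaultdict(int)
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[symbols[i], symbols[i + 1]] += freq
    return pairs

def merge_vocab(pair, vocab):
    """Merge the given symbol pair everywhere it occurs in the vocabulary."""
    bigram = re.escape(' '.join(pair))
    pattern = re.compile(r'(?<!\S)' + bigram + r'(?!\S)')
    return {pattern.sub(''.join(pair), word): freq for word, freq in vocab.items()}

# Toy corpus: words as space-separated symbols with an end-of-word marker.
vocab = {'l o w </w>': 5, 'l o w e r </w>': 2,
         'n e w e s t </w>': 6, 'w i d e s t </w>': 3}
for _ in range(10):  # 10 merges here; the command above learns 32k
    pairs = get_stats(vocab)
    best = max(pairs, key=pairs.get)
    vocab = merge_vocab(best, vocab)
    print(best)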
  2. Environment Variables
  • Before using XMUNMT, you need to add the path of XMUNMT to the PYTHONPATH environment variable. Typically, this can be done by adding the following line to the .bashrc file in your home directory.
export PYTHONPATH=/PATH/TO/XMUNMT:$PYTHONPATH
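To verify that the path is picked up, the following should print the package location (assuming the package directory is named "xmunmt", as in the commands below):

python -c "import xmunmt; print(xmunmt.__file__)"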
  3. Build vocabulary
  • To train an NMT model, you need to build the vocabularies first. To build a shared source and target vocabulary, you can use the following script (a sketch of what such a script does appears after the commands):
cat corpus.bpe32k.en corpus.bpe32k.de > corpus.bpe32k.all
python XMUNMT/xmunmt/scripts/build_vocab.py corpus.bpe32k.all vocab.shared32k.txt
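Conceptually, building the vocabulary just counts token frequencies in the BPE-encoded corpus and writes one token per line in frequency order. A minimal sketch of such a script (not the repository's actual implementation; the special-token names are assumptions):

import collections
import sys

def build_vocab(corpus_path, vocab_path, special=('<pad>', '<eos>', '<unk>')):
    # Count whitespace-separated (BPE) tokens in the corpus.
    counter = collections.Counter()
    with open(corpus_path, encoding='utf-8') as f:
        for line in f:
            counter.update(line.split())
    # Write special tokens first, then tokens by descending frequency.
    with open(vocab_path, 'w', encoding='utf-8') as f:
        for token in special:
            f.write(token + '\n')
        for token, _ in counter.most_common():
            f.write(token + '\n')

if __name__ == '__main__':
    build_vocab(sys.argv[1], sys.argv[2])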
  4. Shuffle corpus
  • It is beneficial to shuffle the training corpora before training (a sketch of seeded parallel shuffling follows below).
python XMUNMT/xmunmt/scripts/shuffle_corpus.py --corpus corpus.bpe32k.en corpus.bpe32k.de --seed 1234
  • The above command will create two new files named "corpus.bpe32k.en.shuf" and "corpus.bpe32k.de.shuf".
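The essential detail is that both sides of the parallel corpus are shuffled with the same permutation, so sentence pairs stay aligned, and the fixed seed makes the shuffle reproducible. A minimal sketch (a hypothetical stand-in, not the repository's script):

import random

def shuffle_parallel(src_path, tgt_path, seed=1234):
    # Read both sides; they must be sentence-aligned line by line.
    with open(src_path, encoding='utf-8') as f:
        src = f.readlines()
    with open(tgt_path, encoding='utf-8') as f:
        tgt = f.readlines()
    assert len(src) == len(tgt), 'corpora must have the same number of lines'
    # Build one index permutation and apply it to both sides.
    indices = list(range(len(src)))
    random.seed(seed)
    random.shuffle(indices)
    with open(src_path + '.shuf', 'w', encoding='utf-8') as f:
        f.writelines(src[i] for i in indices)
    with open(tgt_path + '.shuf', 'w', encoding='utf-8') as f:
        f.writelines(tgt[i] for i in indices)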


Training

  • Finally, we can start the training stage. The recommended hyper-parameters are described below.
python XMUNMT/xmunmt/bin/trainer.py \
  --model rnnsearch \
  --output train \
  --input corpus.bpe32k.en.shuf corpus.bpe32k.de.shuf \
  --vocabulary vocab.shared32k.txt vocab.shared32k.txt \
  --validation newstest2016.bpe32k.en \
  --references newstest2016.tc.de \
  --parameters=device_list=[0]
  • Change the "device_list" argument to select a different GPU, or list several devices (e.g. device_list=[0,1]) to train on multiple GPUs. The above command will create a directory named "train". The best model can be found at "train/eval".


Decoding

  • The decoding command is quite simple.
python XMUNMT/xmunmt/bin/translator.py \
  --models rnnsearch \
  --checkpoints train/eval \
  --input newstest2017.bpe32k.en \
  --output test.txt \
  --vocabulary vocab.shared32k.txt vocab.shared32k.txt
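The decoder output in "test.txt" is still BPE-segmented. Before computing BLEU you will typically want to strip the "@@ " continuation markers that subword-nmt inserts; a minimal stdin-to-stdout filter (a hypothetical helper, saved e.g. as remove_bpe.py):

import re
import sys

# Undo BPE segmentation: "fan@@ tas@@ tic" -> "fantastic".
for line in sys.stdin:
    sys.stdout.write(re.sub(r'(@@ )|(@@ ?$)', '', line))

Usage: python remove_bpe.py < test.txt > test.tok.txt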


Benchmark

The benchmark was performed on a single GTX 1080 Ti GPU with the default parameters.

Dataset BLEU BLEU (cased)
WMT17 En-De 22.81 22.30
WMT17 De-En 29.01 27.69
  • More benchmarks will be added soon.


Contact

This code is written by Zhixing Tan. If you have any problems, feel free to send an email.


