
Chinese Poetry Generation

This project aims to implement and improve upon the classical Chinese poetry generation system proposed in "Chinese Poetry Generation with Planning based Neural Network".

Generated Sample

(image: sample generated Chinese poetry)


Dependencies

  • Python 2.7
  • TensorFlow 1.2.1
  • Jieba 0.38
  • Gensim 2.0.0
  • pypinyin 0.23



Model:

  • [x] Bidirectional encoder
  • [x] Attention decoder

Training and Predicting:

  • [x] Alignment boosted word2vec
  • [x] Data loading mode: only keywords (no preceding sentences)
  • [x] Data loading mode: reversed
  • [x] Data loading mode: aligned
  • [x] Training mode: ground truth
  • [x] Training mode: scheduled sampling
  • [x] Predicting mode: greedy
  • [x] Predicting mode: sampling
  • [ ] Predicting mode: beam search
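The greedy and sampling predicting modes above differ only in how the next character is chosen from the decoder's output distribution. A minimal sketch in plain Python (not the project's actual decoder code):

```python
import random

def choose_next(probs, mode="greedy", rng=random):
    """Pick the next token index from a probability distribution.

    probs: list of probabilities over the vocabulary (sums to 1).
    mode:  "greedy" takes the argmax; "sampling" draws from the
           distribution, trading determinism for output diversity.
    """
    if mode == "greedy":
        return max(range(len(probs)), key=lambda i: probs[i])
    # sampling: inverse-CDF draw from the distribution
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# greedy always picks the most probable token
print(choose_next([0.1, 0.7, 0.2], mode="greedy"))  # → 1
```

Beam search (unchecked above) would instead keep the top-k partial sequences at each step rather than committing to a single choice.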


  • [x] Output refiner
  • [ ] Reinforcement learning tuner
  • [ ] Iterative polishing


  • [x] Evaluation: rhyming
  • [x] Evaluation: tonal structure
  • [ ] Evaluation: alignment score
  • [ ] Evaluation: BLEU score
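Rhyming in classical quatrains can be checked by comparing the pinyin finals of line-ending characters. The project lists pypinyin as a dependency for this; the sketch below uses a tiny hand-made lookup table instead (the `FINALS` entries are illustrative, not project data), so it runs with no dependencies:

```python
# Toy rhyme check: two characters rhyme if their pinyin finals match.
# FINALS is a small hand-made sample for illustration only.
FINALS = {
    "光": "ang",  # guang
    "霜": "ang",  # shuang
    "月": "ue",   # yue
    "乡": "ang",  # xiang
}

def rhymes(char_a, char_b):
    """Return True if both characters are known and share a final."""
    fa, fb = FINALS.get(char_a), FINALS.get(char_b)
    return fa is not None and fa == fb

print(rhymes("光", "霜"))  # → True
print(rhymes("光", "月"))  # → False
```

A real evaluator would obtain the finals programmatically (e.g. via pypinyin) rather than from a hard-coded table.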

Project Structure

data: directory for raw data, processed data, pre-processed starterkit data, and generated poetry samples
model: directory for saved neural network models
log: directory for training logs
notebooks: directory for exploratory/experimental IPython notebooks
training_scripts: directory for sample scripts used for training several basic models

Code:

  • graph definition
  • training logic
  • prediction logic
  • keyword planning logic
  • user interaction program

Data Processing

To prepare training data:


This script does the following, in order:

  1. Parse corpus
  2. Build vocab
  3. Filter quatrains
  4. Count words
  5. Rank words
  6. Generate training data

The TextRank algorithm may take many hours to run.
You can instead interrupt the iterations and stop it early once the progress shown in the terminal has remained stationary for a long time.
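TextRank ranks candidate keywords by running PageRank-style iterations over a word co-occurrence graph until the scores stop changing, which is why it can run for hours on a large corpus. A self-contained sketch of the iteration (simplified; the project's actual implementation may differ in weighting and graph construction):

```python
def text_rank(graph, damping=0.85, max_iter=100, tol=1e-6):
    """Iterate PageRank-style scores over a word graph.

    graph: dict mapping each word to the set of words it co-occurs with.
    Returns a dict of word -> score.
    """
    words = list(graph)
    scores = {w: 1.0 for w in words}
    for _ in range(max_iter):
        new_scores = {}
        for w in words:
            # sum contributions from neighbours that link to w
            rank = sum(scores[v] / len(graph[v]) for v in words if w in graph[v])
            new_scores[w] = (1 - damping) + damping * rank
        # stop early once scores are stationary, as described above
        converged = max(abs(new_scores[w] - scores[w]) for w in words) < tol
        scores = new_scores
        if converged:
            break
    return scores

# toy graph: "moon" co-occurs with everything, so it ranks highest
g = {
    "moon": {"night", "frost", "home"},
    "night": {"moon", "frost"},
    "frost": {"moon", "night"},
    "home": {"moon"},
}
ranks = text_rank(g)
print(max(ranks, key=ranks.get))  # → moon
```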

Then, to generate the word embedding:


Alternatively, we provide pre-processed data in the data/starterkit directory.
You may simply run cp data/starterkit/* data/processed to skip the data processing step.


Training

To train the default model:


To view the full list of configurable training parameters:

python -h

Most parameters affect the structure of the model, so you should almost always train a new model after modifying any of them.
Models are saved to model/ by default. To train a new model, you may either remove the existing model from model/
or specify a new model path during training with --model_dir <new_model_dir>.


Predicting

To start the user interaction program:


Similarly, to view the full list of configurable predicting parameters:

python -h

The program currently does not check that prediction parameters match the corresponding training parameters.
In particular, the user must ensure that the data loading modes correspond to the ones used during training
(e.g. if the training data was reversed and aligned, then the prediction input should also be reversed and aligned).
Otherwise, results may range from subtle differences in the output to a total crash.
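As a concrete illustration of why the modes must match: if "reversed" means each sentence is stored back-to-front (an assumption for illustration; the project's actual transformation may differ), then any text fed to the predictor must undergo the same transformation:

```python
def reverse_sentence(chars):
    """Reverse a sentence character-by-character.

    NOTE: this is an assumed interpretation of the "reversed" data
    loading mode, shown only to illustrate the consistency requirement.
    """
    return chars[::-1]

# training-time and prediction-time inputs must agree
assert reverse_sentence("床前明月光") == "光月明前床"
```

A model trained on reversed sentences sees "光月明前床"; handing it the unreversed "床前明月光" at prediction time produces exactly the kind of mismatch warned about above.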


Evaluation

To generate sample poems for evaluation:


By default, this script randomly samples 4,000 poems from the training data and saves them as human poems. It then uses each entire poem as input to the planner to create keywords for the predictor. The predicted poems are saved as machine poems.
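The sample-then-generate flow described above can be sketched as follows; `plan_keywords` and `generate_poem` are hypothetical stand-ins for the project's planner and predictor:

```python
import random

def build_eval_sets(training_poems, plan_keywords, generate_poem,
                    n=4000, rng=random):
    """Build paired human/machine poem sets for evaluation.

    training_poems: list of poems (each poem a list of lines).
    plan_keywords:  callable mapping a whole poem to keywords (planner).
    generate_poem:  callable mapping keywords to a poem (predictor).
    """
    # sample n poems from the training data as the "human" set
    human = rng.sample(training_poems, min(n, len(training_poems)))
    # feed each entire poem to the planner, then predict from its keywords
    machine = [generate_poem(plan_keywords(p)) for p in human]
    return human, machine
```

The pairing matters: each machine poem is generated from keywords planned out of a specific human poem, so the two sets can be compared on equal footing.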

To evaluate the generated poems:


Further Reading


  1. "Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks"
  2. "Sequence-to-Sequence Learning as Beam-Search Optimization"
  3. "Tuning Recurrent Neural Networks with Reinforcement Learning"
  4. "Deep Reinforcement Learning for Dialogue Generation"

Poetry Generation

  1. May 10, 2017: "Flexible and Creative Chinese Poetry Generation Using Neural Memory"
  2. Dec 7, 2016: "Chinese Poetry Generation with Planning based Neural Network"
  3. June 19, 2016: "Can Machine Generate Traditional Chinese Poetry? A Feigenbaum Test"


Acknowledgements

  1. The data processing source code is based on DevinZ1993's implementation.
  2. The neural network implementation is inspired by JayParks's work.
