A statistical natural language generator for spoken dialogue systems
TGen is a statistical natural language generator, with two different algorithms supported:
- A statistical sentence planner based on A*-style search, with a candidate plan generator and a perceptron ranker
- A sequence-to-sequence (seq2seq) recurrent neural network architecture based on the TensorFlow toolkit
Both algorithms can be trained from pairs of source meaning representations (dialogue acts) and target sentences.
The newer seq2seq approach is preferable: it is faster and produces higher-quality outputs.
Both algorithms support generating sentence plans (deep syntax trees), which are subsequently converted to text using the existing surface realizer from the Treex NLP toolkit.
The seq2seq algorithm also supports direct string generation.
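To illustrate what a training pair looks like, here is a minimal sketch of a dialogue act paired with a target sentence. The slot-value notation and the `parse_da` helper are illustrative assumptions, not TGen's exact input format or API:

```python
import re

# Hypothetical training pair: a dialogue act (source meaning representation)
# and its target sentence. The "act(slot=value, ...)" notation is only an
# illustration of the general idea.
da = "inform(name=Golden Palace, food=Chinese, area=riverside)"
sentence = "Golden Palace is a Chinese restaurant in the riverside area."

def parse_da(da_string):
    """Split a dialogue act string into its act type and slot-value pairs."""
    act_type, args = re.match(r"(\w+)\((.*)\)", da_string).groups()
    slots = dict(pair.split("=", 1)
                 for pair in (p.strip() for p in args.split(","))
                 if pair)
    return act_type, slots

act, slots = parse_da(da)
print(act)            # inform
print(slots["food"])  # Chinese
```

The generator learns to map such structured inputs to sentence plans or directly to strings.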
For more details on the algorithms, please refer to our papers:
- For seq2seq generation, see our ACL 2016 paper.
- For an improved version of the seq2seq generation that takes the previous user utterance into account to generate a more contextually appropriate response, see our SIGDIAL 2016 paper.
- For the old A*-search-based generation, see our ACL 2015 paper.
Installation and Usage
Please refer to USAGE.md for instructions on how to use TGen.
- TGen is highly experimental and only tested on a few datasets, so bugs are inevitable. If you find a bug, feel free to contact me or open an issue.
- If you do not require a specific version of TGen, we recommend installing the current master version, which has the latest bugfixes and all the functionality of the ACL2016/SIGDIAL2016 version.
- To get the version used in our ACL 2015 paper (A*-search only), see this release.
- To get the version used in our ACL 2016 and SIGDIAL 2016 papers (seq2seq approach for generating sentence plans or strings, optionally using previous context), see this release.
If you use or refer to the seq2seq generation in TGen, please cite this paper:
- Ondřej Dušek and Filip Jurčíček (2016): Sequence-to-Sequence Generation for Spoken Dialogue via Deep Syntax Trees and Strings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany.
If you use or refer to the context-aware improved seq2seq generation, please cite this paper:
- Ondřej Dušek and Filip Jurčíček (2016): A Context-aware Natural Language Generator for Dialogue Systems. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Los Angeles, CA, USA.
If you use or refer to the morphology-aware generation (designed for Czech), please cite this paper (link coming soon):
- Ondřej Dušek and Filip Jurčíček (2019): Neural Generation for Czech: Data and Baselines. In Proceedings of INLG, Tokyo, Japan.
If you use or refer to the A*-search generation in TGen, please cite this paper:
- Ondřej Dušek and Filip Jurčíček (2015): Training a Natural Language Generator From Unaligned Data. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 451–461, Beijing, China.
Author: Ondřej Dušek
Copyright © 2014-2019 Institute of Formal and Applied Linguistics, Charles University, Prague.
Licensed under the Apache License, Version 2.0 (see LICENSE.txt).
Work on this project was funded by the Ministry of Education, Youth and Sports of the Czech Republic under the grant agreement LK11221 and core research funding, SVV projects 260 104 and 260 333, and GAUK grant 2058214 of Charles University in Prague, as well as Charles University project PRIMUS/19/SCI/10. It used language resources stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (projects LM201001 and LM2015071).