Emnlp_17_submission

The dataset and statistical analysis code released with the submission of the EMNLP 2017 paper "Why We Need New Evaluation Metrics for NLG".
Alternatives To Emnlp_17_submission

| Project Name | Stars | Most Recent Commit | Open Issues | License | Language | Description |
|---|---|---|---|---|---|---|
| Nlg Eval | 1,265 | 4 months ago | 22 | other | Python | Evaluation code for various unsupervised automated metrics for Natural Language Generation. |
| Emnlp_17_submission | 17 | 2 years ago | | mit | R | The dataset and statistical analysis code released with the submission of the EMNLP 2017 paper "Why We Need New Evaluation Metrics for NLG". |
| Nlg Evaluation | 9 | 4 years ago | | bsd-3-clause | Python | A toolkit for evaluation of natural language generation (NLG), including BLEU, ROUGE, METEOR, and CIDEr. |
| Generationeval | 7 | 3 years ago | 3 | mit | Python | WebNLG+ Challenge 2020: scripts to evaluate the RDF-to-text task with automatic metrics (BLEU, METEOR, chrF++, TER and BERT-Score). |
| Shared Task On Nlg Evaluation | 5 | 4 years ago | 3 | | | Repository to organize a shared task on NLG evaluation. |
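Several of the toolkits above implement the same word-overlap metrics (BLEU, METEOR, chrF++, TER). As a minimal sketch of what such a metric computes, here is sentence-level BLEU via NLTK; this is only a convenient reference implementation, and the listed repositories ship their own implementations that may differ in tokenization and smoothing details.

```python
# Minimal sketch: sentence-level BLEU with NLTK.
# Assumption: NLTK is installed (pip install nltk); the example strings
# are illustrative, not taken from any of the datasets above.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# One or more tokenized reference outputs for a single system output.
references = [
    "there is a moderately priced italian restaurant near the river".split(),
]
hypothesis = "there is a cheap italian restaurant by the river".split()

# Smoothing avoids zero scores when a higher-order n-gram never matches,
# which is common for short NLG outputs.
smooth = SmoothingFunction().method1
score = sentence_bleu(
    references,
    hypothesis,
    weights=(0.25, 0.25, 0.25, 0.25),  # uniform BLEU-4 n-gram weights
    smoothing_function=smooth,
)
print(f"BLEU-4: {score:.3f}")
```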
Tags: R, Paper, Metrics, NLG