files2rouge



Given two files with the same number of lines, files2rouge computes the average ROUGE scores over their sequences (one sequence per line). A sequence may contain multiple sentences; in that case, the end-of-sentence delimiter must be passed with the --eos flag (default: "."). Running files2rouge with the wrong EOS delimiter may produce an incorrect ROUGE-L score.
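To make the line-by-line behavior concrete, here is a minimal illustrative sketch (not files2rouge's actual implementation, which wraps the official ROUGE Perl script): score each aligned pair of lines, then average. It uses plain unigram-overlap F1 as a stand-in for ROUGE-1, with no stemming and whitespace tokenization.

```python
from collections import Counter

def rouge_1_f(reference_tokens, summary_tokens):
    # Unigram-overlap F1 between two token lists (illustrative, no stemming)
    ref, summ = Counter(reference_tokens), Counter(summary_tokens)
    overlap = sum((ref & summ).values())
    if overlap == 0:
        return 0.0
    recall = overlap / sum(ref.values())
    precision = overlap / sum(summ.values())
    return 2 * precision * recall / (precision + recall)

def average_rouge_1_f(ref_lines, summ_lines):
    # files2rouge-style: one sequence per line, scored pairwise, then averaged
    assert len(ref_lines) == len(summ_lines), "files must have the same number of lines"
    scores = [rouge_1_f(r.split(), s.split()) for r, s in zip(ref_lines, summ_lines)]
    return sum(scores) / len(scores)

print(average_rouge_1_f(["the cat sat"], ["the cat ran"]))  # 2 of 3 unigrams overlap
```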

You may also be interested in a pure Python implementation of ROUGE (instead of a wrapper around the Perl script).

$ files2rouge --help
usage: files2rouge [-h] [-v] [-a ARGS] [-s SAVETO] [-e EOS] [-m] [-i]
                   reference summary

Calculating ROUGE score between two files (line-by-line)

positional arguments:
  reference             Path of references file
  summary               Path of summary file

optional arguments:
  -h, --help            show this help message and exit
  -v, --verbose         Prints ROUGE logs
  -a ARGS, --args ARGS  ROUGE Arguments
  -s SAVETO, --saveto SAVETO
                        File to save scores
  -e EOS, --eos EOS     End of sentence separator (for multisentence).
                        Default: "."
  -m, --stemming        DEPRECATED: stemming is now default behavior
  -nm, --no_stemming    Switch off stemming
  -i, --ignore_empty

Getting Started

0) Install prerequisites

pip install -U git+

(NOTE: running pip install pyrouge will not work, as the package on PyPI is out of date.)

1) Clone the repo, setup the module and ROUGE

git clone     
cd files2rouge
python setup.py install

Do not forget to run setup_rouge

2) Run

files2rouge references.txt summaries.txt 

Output: the script prints progress and estimated remaining time on stderr. Pass --verbose to also output the ROUGE execution logs.

Default output example:

Preparing documents...
Running ROUGE...
1 ROUGE-1 Average_R: 0.28242 ( 0.25721 - 0.30877)
1 ROUGE-1 Average_P: 0.30157 ( 0.27114 - 0.33506)
1 ROUGE-1 Average_F: 0.28196 ( 0.25704 - 0.30722)
1 ROUGE-2 Average_R: 0.10395 ( 0.08298 - 0.12600)
1 ROUGE-2 Average_P: 0.11458 ( 0.08873 - 0.14023)
1 ROUGE-2 Average_F: 0.10489 ( 0.08303 - 0.12741)
1 ROUGE-L Average_R: 0.25231 ( 0.22709 - 0.27771)
1 ROUGE-L Average_P: 0.26830 ( 0.23834 - 0.29818)
1 ROUGE-L Average_F: 0.25142 ( 0.22741 - 0.27533)

Elapsed time: 0.458 seconds
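The score lines above follow a fixed shape ("count metric Average_X: score ( ci_lower - ci_upper )"), so they are straightforward to post-process. A small parser, assuming exactly that format:

```python
import re

# Assumed line format: "<count> <metric> Average_<R|P|F>: <score> ( <lo> - <hi>)"
_LINE = re.compile(
    r"^\d+ (ROUGE-\S+) Average_([RPF]): ([\d.]+) \( ([\d.]+) - ([\d.]+)\)$"
)

def parse_rouge_output(text):
    """Collect {metric: {stat: (score, ci_lower, ci_upper)}} from ROUGE output."""
    scores = {}
    for line in text.splitlines():
        m = _LINE.match(line.strip())
        if m:
            metric, stat, score, lo, hi = m.groups()
            scores.setdefault(metric, {})[stat] = (float(score), float(lo), float(hi))
    return scores

print(parse_rouge_output("1 ROUGE-1 Average_F: 0.28196 ( 0.25704 - 0.30722)"))
```

This is handy when saving scores with --saveto and aggregating results across runs.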

Call files2rouge from Python

import files2rouge
files2rouge.run(summ_path, ref_path)


You can specify which ROUGE arguments to use with the --args flag (or -a).
The default behavior is equivalent to:

files2rouge reference.txt summary.txt -a "-c 95 -r 1000 -n 2 -a" # be sure to write the args between double quotes

Here, -c 95 sets the confidence interval, -r 1000 the number of bootstrap resampling iterations, -n 2 the maximum n-gram length, and the trailing -a evaluates all systems. You can find more information about these arguments here

Known issues

  • XML::Parser dependency error: see issue #9.

More information
