Ranx

⚡️A Blazing-Fast Python Library for Ranking Evaluation, Comparison, and Fusion 🐍

🔥 News

  • 📌 [April 4, 2023] ranxhub, ranx's companion repository, will be featured at SIGIR 2023!
    On ranxhub, you can download and share pre-computed runs for Information Retrieval datasets, such as MSMARCO Passage Ranking.

  • [May 1, 2023] ranx 0.3.8 is out!
    This release adds early support for plotting results. Specifically, it is now possible to plot the Interpolated Precision-Recall Curve. Click here for further details.

⚡️ Introduction

ranx ([raŋks]) is a library of fast ranking evaluation metrics implemented in Python, leveraging Numba for high-speed vector operations and automatic parallelization. It offers a user-friendly interface to evaluate and compare Information Retrieval and Recommender Systems. ranx allows you to perform statistical tests and export LaTeX tables for your scientific publications. Moreover, ranx provides several fusion algorithms and normalization strategies, and an automatic fusion optimization functionality. ranx was featured in ECIR 2022 and CIKM 2022.

If you use ranx to evaluate results or to conduct experiments involving fusion for your scientific publication, please consider citing it: evaluation bibtex, fusion bibtex.

For a quick overview, follow the Usage section.

For an in-depth overview, follow the Examples section.

✨ Features

Metrics

The metrics have been tested against TREC Eval for correctness.
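
Metric names are passed as plain strings, optionally with an @cutoff suffix (as in the Usage section below). A minimal sketch; the specific metric names beyond those shown later in this README are assumptions about the supported set:

from ranx import evaluate

# Metric strings follow the "name@cutoff" pattern; omitting the cutoff
# evaluates the metric on the full ranking.
# qrels and run are the objects built in the Usage section below.
scores = evaluate(qrels, run, ["precision@5", "recall@10", "mrr", "map@100", "ndcg@10"])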

Statistical Tests

Please, refer to Smucker et al., Carterette, and Fuhr for additional information on statistical tests for Information Retrieval.
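
The choice of test surfaces in the compare API shown below; a minimal sketch, assuming a stat_test parameter that accepts "student", "fisher", and "tukey" (parameter name and values are assumptions, not confirmed by this README):

from ranx import compare

# Assumed parameter: stat_test selects the significance test behind the
# superscripts reported in the comparison table.
report = compare(
    qrels=qrels,
    runs=[run_1, run_2, run_3],
    metrics=["ndcg@10"],
    stat_test="fisher",  # assumption: "student" and "tukey" also accepted
    max_p=0.01,
)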

Off-the-shelf Qrels

You can load qrels from ir-datasets as simply as:

from ranx import Qrels

qrels = Qrels.from_ir_datasets("msmarco-document/dev")

A full list of the available qrels is provided here.

Off-the-shelf Runs

You can load runs from ranxhub as simply as:

from ranx import Run

run = Run.from_ranxhub("run-id")

A full list of the available runs is provided here.

Fusion Algorithms

CombMIN    CombMNZ     RRF         MAPFuse     BordaFuse
CombMED    CombGMNZ    RBC         PosFuse     Weighted BordaFuse
CombANZ    ISR         WMNZ        ProbFuse    Condorcet
CombMAX    Log_ISR     Mixed       SegFuse     Weighted Condorcet
CombSUM    LogN_ISR    BayesFuse   SlideFuse   Weighted Sum

Please, refer to the documentation for further details.

Normalization Strategies

Please, refer to the documentation for further details.
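
As a sketch of how fusion algorithms and normalization strategies fit together (assuming the table entries map to lowercase method identifiers such as "rrf" for Reciprocal Rank Fusion; "min-max" is the only normalization shown elsewhere in this README):

from ranx import fuse

# method picks a fusion algorithm from the table above (assumed identifier
# "rrf"); norm picks the score normalization applied to each run first.
combined_run = fuse(runs=[run_1, run_2], norm="min-max", method="rrf")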

🔌 Requirements

python>=3.8

As of v.0.3.5, ranx requires python>=3.8.

💾 Installation

pip install ranx

💡 Usage

Create Qrels and Run

from ranx import Qrels, Run

qrels_dict = { "q_1": { "d_12": 5, "d_25": 3 },
               "q_2": { "d_11": 6, "d_22": 1 } }

run_dict = { "q_1": { "d_12": 0.9, "d_23": 0.8, "d_25": 0.7,
                      "d_36": 0.6, "d_32": 0.5, "d_35": 0.4  },
             "q_2": { "d_12": 0.9, "d_11": 0.8, "d_25": 0.7,
                      "d_36": 0.6, "d_22": 0.5, "d_35": 0.4  } }

qrels = Qrels(qrels_dict)
run = Run(run_dict)
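
Dictionaries are not the only entry point. Loading TREC-style files from disk should look roughly like this; Qrels.from_file, Run.from_file, and the kind argument are assumptions based on the library's documentation, not shown in this README:

from ranx import Qrels, Run

# Assumed constructors for TREC-style qrels/run files on disk.
qrels = Qrels.from_file("path/to/qrels.txt", kind="trec")
run = Run.from_file("path/to/run.txt", kind="trec")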

Evaluate

from ranx import evaluate

# Compute score for a single metric
evaluate(qrels, run, "[email protected]")
>>> 0.7861

# Compute scores for multiple metrics at once
evaluate(qrels, run, ["map@5", "mrr"])
>>> {"map@5": 0.6416, "mrr": 0.75}

Compare

from ranx import compare

# Compare different runs and perform Two-sided Paired Student's t-Test
report = compare(
    qrels=qrels,
    runs=[run_1, run_2, run_3, run_4, run_5],
    metrics=["map@100", "mrr@100", "ndcg@10"],
    max_p=0.01  # P-value threshold
)

Output:

print(report)
#    Model    MAP@100    MRR@100    NDCG@10
---  -------  --------   --------   ---------
a    model_1  0.320ᵇ     0.320ᵇ     0.368ᵇᶜ
b    model_2  0.233      0.234      0.239
c    model_3  0.308ᵇ     0.309ᵇ     0.330ᵇ
d    model_4  0.366ᵃᵇᶜ   0.367ᵃᵇᶜ   0.408ᵃᵇᶜ
e    model_5  0.405ᵃᵇᶜᵈ  0.406ᵃᵇᶜᵈ  0.451ᵃᵇᶜᵈ
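
The report can also be exported for publications; a minimal sketch, assuming the returned Report object exposes a to_latex method (the LaTeX-export capability is mentioned in the introduction; the method name is assumed here):

# Assumption: to_latex() renders the comparison table as a LaTeX tabular.
print(report.to_latex())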

Fusion

from ranx import fuse, optimize_fusion

best_params = optimize_fusion(
    qrels=train_qrels,
    runs=[train_run_1, train_run_2, train_run_3],
    norm="min-max",     # The norm. to apply before fusion
    method="wsum",      # The fusion algorithm to use (Weighted Sum)
    metric="[email protected]",  # The metric to maximize
)

combined_test_run = fuse(
    runs=[test_run_1, test_run_2, test_run_3],  
    norm="min-max",       
    method="wsum",        
    params=best_params,
)
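
A natural follow-up is to score the fused run with the same evaluate call used above; test_qrels is an assumed variable name for the held-out judgments:

from ranx import evaluate

# Evaluate the fused run on the held-out test queries.
evaluate(test_qrels, combined_test_run, "ndcg@100")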

📖 Examples

Name Link
Overview Open In Colab
Qrels and Run Open In Colab
Evaluation Open In Colab
Comparison and Report Open In Colab
Fusion Open In Colab
Plot Open In Colab
Share your runs with ranxhub Open In Colab

📚 Documentation

Browse the documentation for more details and examples.

🎓 Citation

If you use ranx to evaluate results for your scientific publication, please consider citing our ECIR 2022 paper:

BibTeX
@inproceedings{DBLP:conf/ecir/Bassani22,
  author    = {Elias Bassani},
  title     = {ranx: {A} Blazing-Fast Python Library for Ranking Evaluation and Comparison},
  booktitle = {{ECIR} {(2)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {13186},
  pages     = {259--264},
  publisher = {Springer},
  year      = {2022}
}

If you use the fusion functionalities provided by ranx for conducting the experiments of your scientific publication, please consider citing our CIKM 2022 paper:

BibTeX
@inproceedings{DBLP:conf/cikm/BassaniR22,
  author    = {Elias Bassani and
              Luca Romelli},
  title     = {ranx.fuse: {A} Python Library for Metasearch},
  booktitle = {{CIKM}},
  pages     = {4808--4812},
  publisher = {{ACM}},
  year      = {2022}
}

🎁 Feature Requests

Would you like to see other features implemented? Please, open a feature request.

🤘 Want to contribute?

Would you like to contribute? Please, drop me an e-mail.

📄 License

ranx is open-source software licensed under the MIT license.
