reXmeX is a recommender system evaluation metric library.
Please look at the Documentation and External Resources.
reXmeX consists of utilities for recommender system evaluation. First, it provides a comprehensive collection of metrics for the evaluation of recommender systems. Second, it includes a variety of methods for reporting and plotting the performance results. Implemented metrics cover a range of well-known metrics and newly proposed metrics from data mining conferences (ICDM, CIKM, KDD) and prominent journals.
Citing
If you find RexMex useful in your research, please consider adding the following citation:
@inproceedings{rexmex,
title = {{rexmex: A General Purpose Recommender Metrics Library for Fair Evaluation.}},
author = {Benedek Rozemberczki and Sebastian Nilsson and Piotr Grabowski and Charles Tapley Hoyt and Gavin Edwards},
year = {2021},
}
An introductory example
The following example loads a synthetic dataset which has the mandatory y_true and y_score keys. The dataset has binary labels and predicted probability scores. We read the dataset and define a default ClassificationMetricSet instance for the evaluation of the predictions. Using this metric set we create a score card and get the predictive performance metrics.
from rexmex import ClassificationMetricSet, DatasetReader, ScoreCard
reader = DatasetReader()
scores = reader.read_dataset()  # synthetic dataset with the mandatory y_true and y_score keys
metric_set = ClassificationMetricSet()  # default collection of classification metrics
score_card = ScoreCard(metric_set)
report = score_card.get_performance_metrics(scores["y_true"], scores["y_score"])
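To make this concrete, a single entry of a classification metric set is just a function that maps a label vector and a score vector to a number. The sketch below computes one such metric, ROC AUC, directly with scikit-learn on synthetic data; it only illustrates the idea and is not rexmex's own implementation (numpy and scikit-learn are assumed to be installed):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the dataset above: binary labels and raw probabilities.
rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.3, size=1000), 0, 1)

# One classification metric, evaluated the same way a metric set entry would be.
print("ROC AUC:", roc_auc_score(y_true, y_score))
```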
An advanced example
The following more advanced example loads the same synthetic dataset, which has the source_id, target_id, source_group and target_group keys besides the mandatory y_true and y_score. Using the source_group key we group the predictions and return a performance metric report.
from rexmex import ClassificationMetricSet, DatasetReader, ScoreCard
reader = DatasetReader()
scores = reader.read_dataset()  # also contains source_id, target_id, source_group and target_group
metric_set = ClassificationMetricSet()
score_card = ScoreCard(metric_set)
report = score_card.generate_report(scores, grouping=["source_group"])  # one set of metrics per source_group
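Conceptually, the grouped report corresponds to splitting the predictions by the grouping key and evaluating each slice separately. The sketch below shows that idea with plain pandas and a single metric; it assumes the scores object is a pandas DataFrame with the columns listed above and is not how rexmex computes its report:

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def grouped_auc(scores: pd.DataFrame) -> pd.DataFrame:
    """Illustrative grouped evaluation: one ROC AUC value per source_group."""
    return (
        scores.groupby("source_group")
        .apply(lambda g: roc_auc_score(g["y_true"], g["y_score"]))
        .rename("roc_auc")
        .reset_index()
    )
```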
Scorecard
A rexmex score card allows reporting, plotting, and saving recommender system performance metrics. Our framework provides 7 rating, 38 classification, 18 ranking, and 2 coverage metrics.
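At its core, a score card pairs a collection of named metric functions with the predictions to be evaluated. The snippet below is a minimal, hypothetical stand-in for that pattern (it is not the ScoreCard API; the metric functions come from scikit-learn):

```python
from typing import Callable, Dict
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

# Hypothetical, simplified stand-in for a score card: a mapping from metric
# names to functions of (y_true, y_score).
METRICS: Dict[str, Callable] = {
    "roc_auc": roc_auc_score,
    "pr_auc": average_precision_score,
}

def evaluate(y_true: np.ndarray, y_score: np.ndarray) -> Dict[str, float]:
    """Apply every metric in the card to one pair of label/score vectors."""
    return {name: float(metric(y_true, y_score)) for name, metric in METRICS.items()}
```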
Metric Sets
Metric sets allow users to calculate a range of evaluation metrics for a pair of ground-truth and predicted label vectors. We provide a general MetricSet class, and the specialized metric sets with pre-set metrics fall into the following general categories:
- Ranking Metric Set
- Rating Metric Set: These metrics assume that items are scored explicitly and ratings are predicted by a regression model.
- Classification Metric Set: These metrics assume that the items are scored with raw probabilities (these can be binarized).
- Coverage Metric Set: These metrics measure how well the recommender system covers the available items in the catalog and the possible users; in other words, they measure the diversity of the predictions. A stand-alone sketch of one such metric follows this list.
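As an illustration of the coverage idea, the hypothetical helper below computes simple item coverage: the fraction of catalog items that are recommended to at least one user. It sketches the concept only and is not rexmex's implementation:

```python
from typing import Iterable, Set

def item_coverage(catalog: Set[str], recommendations: Iterable[Iterable[str]]) -> float:
    """Fraction of catalog items recommended to at least one user (hypothetical helper)."""
    recommended = {item for user_items in recommendations for item in user_items}
    return len(recommended & catalog) / len(catalog)

# Three of the four catalog items are ever recommended, so coverage is 0.75.
print(item_coverage({"a", "b", "c", "d"}, [["a", "b"], ["b", "c"]]))
```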
Documentation and Reporting Issues
Head over to our documentation to find out more about installation and data handling, a full list of implemented methods, and datasets.
If you notice anything unexpected, please open an issue and let us know. If you are missing a specific method, feel free to open a feature request. We are motivated to constantly make RexMex even better.
Installation via the command line
RexMex can be installed with the following command after the repo is cloned.
$ pip install .
Use -e/--editable when developing.
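For example, an editable install from the cloned repository looks like this:
$ pip install -e .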
Installation via pip
RexMex can be installed with the following pip command.
$ pip install rexmex
As we create new releases frequently, upgrading the package regularly might be beneficial.
$ pip install rexmex --upgrade
Running tests
Tests can be run with tox using the following commands:
$ pip install tox
$ tox -e py
Citation
If you use RexMex in a scientific publication, we would appreciate citations. Please see GitHub's built-in citation tool.
License