Uncertainty Toolbox
A Python toolbox for predictive uncertainty quantification, calibration, metrics, and visualization.
Also: a glossary of useful terms and a collection of relevant papers and references.
Many machine learning methods return predictions along with uncertainties of some form, such as distributions or confidence intervals. This raises several questions: How do we determine which predictive uncertainties are best? What does it mean to produce a best or ideal uncertainty? Are our uncertainties accurate and well calibrated?
Uncertainty Toolbox provides standard metrics to quantify and compare predictive uncertainty estimates, gives intuition for these metrics, produces visualizations of these metrics/uncertainties, and implements simple "re-calibration" procedures to improve these uncertainties. This toolbox currently focuses on regression tasks.
Uncertainty Toolbox contains:
- standard metrics to quantify and compare predictive uncertainty estimates,
- visualizations of these metrics and uncertainties,
- recalibration procedures to improve predictive uncertainties,
- a glossary of useful terms, and
- a collection of relevant papers and references.
Uncertainty Toolbox requires Python 3.6+. For a lightweight installation of the package only, run:
pip install uncertainty-toolbox
For a full installation with examples, tests, and the latest updates, run:
git clone https://github.com/uncertainty-toolbox/uncertainty-toolbox.git
cd uncertainty-toolbox
pip install -e . -r requirements/requirements_dev.txt
Note that the previous command requires pip 21.3 or newer.
To verify correct installation, you can run the test suite via:
source shell/run_all_tests.sh
import uncertainty_toolbox as uct
# Load an example dataset of 100 predictions, uncertainties, and ground truth values
predictions, predictions_std, y, x = uct.data.synthetic_sine_heteroscedastic(100)
# Compute all uncertainty metrics
metrics = uct.metrics.get_all_metrics(predictions, predictions_std, y)
This example computes metrics for a vector of predicted values (`predictions`) and associated uncertainties (`predictions_std`, a vector of standard deviations), taken with respect to a corresponding set of ground truth values `y`.
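As a quick follow-up, the snippet below reruns the metrics on a deliberately overconfident copy of the same predictions (standard deviations shrunk by an arbitrary factor of 0.5) and prints whatever the call returns; only the `get_all_metrics` call from the quickstart above is assumed.

```python
# Continuing the quickstart above. Shrinking the predicted standard deviations
# simulates an overconfident model; the calibration metrics should get worse.
# The 0.5 scale factor is purely illustrative.
overconfident_metrics = uct.metrics.get_all_metrics(
    predictions, 0.5 * predictions_std, y
)

# Print whatever structure came back (in our runs, a dictionary of metric
# groups); exact keys may differ between toolbox versions.
for name, value in overconfident_metrics.items():
    print(name, value)
```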
Colab notebook: You can also take a look at this Colab notebook, which walks through a use case of Uncertainty Toolbox.
Uncertainty Toolbox provides a number of metrics to quantify and compare predictive uncertainty estimates. For example, the `get_all_metrics` function used in the quickstart above computes a full suite of these metrics in one call, including the calibration metrics reported below.
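For a sense of how individual metrics can be computed on their own, here is a small sketch; the function locations and the (predictions, standard deviations, ground truth) argument order are taken from the toolbox version we used and may differ in other releases.

```python
import uncertainty_toolbox as uct

# Synthetic data as in the quickstart.
predictions, predictions_std, y, x = uct.data.synthetic_sine_heteroscedastic(100)

# Individual calibration metrics (module/function names assumed from the
# toolbox version we used; check your installed version).
mace = uct.metrics_calibration.mean_absolute_calibration_error(
    predictions, predictions_std, y
)
rmsce = uct.metrics_calibration.root_mean_squared_calibration_error(
    predictions, predictions_std, y
)
ma = uct.metrics_calibration.miscalibration_area(predictions, predictions_std, y)
print(f"MACE: {mace:.5f}, RMSCE: {rmsce:.5f}, MA: {ma:.5f}")
```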
The following plots are a few of the visualizations provided by Uncertainty Toolbox. See this example for code to reproduce these plots.
The example plots cover three cases: overconfident (too little uncertainty), underconfident (too much uncertainty), and well calibrated.
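As a rough sketch of how such plots might be produced, the code below scales the predicted standard deviations to mimic the three cases and draws an average calibration curve for each; `uct.viz.plot_calibration` and its `ax` keyword are assumptions based on the toolbox version we used.

```python
import matplotlib.pyplot as plt
import uncertainty_toolbox as uct

predictions, predictions_std, y, x = uct.data.synthetic_sine_heteroscedastic(100)

# Shrinking the standard deviations mimics overconfidence, inflating them
# mimics underconfidence; the scale factors are illustrative only.
cases = {
    "Overconfident": 0.5 * predictions_std,
    "Underconfident": 2.0 * predictions_std,
    "Well calibrated": predictions_std,
}

fig, axes = plt.subplots(1, 3, figsize=(15, 5))
for ax, (title, std) in zip(axes, cases.items()):
    # plot_calibration draws the expected-vs-observed proportion curve;
    # the function name and ax keyword are assumed, not guaranteed.
    uct.viz.plot_calibration(predictions, std, y, ax=ax)
    ax.set_title(title)
plt.show()
```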
And here are a few of the calibration metrics for the above three cases:
| | Mean absolute calibration error (MACE) | Root mean squared calibration error (RMSCE) | Miscalibration area (MA) |
|---|---|---|---|
| Overconfident | 0.19429 | 0.21753 | 0.19625 |
| Underconfident | 0.20692 | 0.23003 | 0.20901 |
| Well calibrated | 0.00862 | 0.01040 | 0.00865 |
The following plots show the results of a recalibration procedure provided by Uncertainty Toolbox, which transforms a set of predictive uncertainties to improve average calibration. The algorithm is based on isotonic regression, as proposed by Kuleshov et al.
See this example for code to reproduce these plots.
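As a self-contained illustration of the underlying idea (not the toolbox's own implementation), the sketch below fits a Kuleshov-style isotonic recalibrator with scikit-learn, assuming Gaussian predictive distributions; the helper names here are our own.

```python
import numpy as np
from scipy.stats import norm
from sklearn.isotonic import IsotonicRegression

import uncertainty_toolbox as uct

# Overconfident predictions: standard deviations deliberately shrunk.
predictions, predictions_std, y, x = uct.data.synthetic_sine_heteroscedastic(100)
overconfident_std = 0.5 * predictions_std

# For each nominal probability level p, measure how often the true value
# actually falls below the predicted p-quantile.
exp_props = np.linspace(0.01, 0.99, 99)
cdf_vals = norm.cdf(y, loc=predictions, scale=overconfident_std)
obs_props = np.array([(cdf_vals <= p).mean() for p in exp_props])

# Fit a monotone map from nominal to observed proportions (Kuleshov et al.).
recalibrator = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
recalibrator.fit(exp_props, obs_props)

def recalibrated_quantile(p, mean, std):
    """Approximate p-quantile after recalibration: find the nominal level the
    recalibrator maps closest to p, then query the original Gaussian."""
    grid = np.linspace(0.001, 0.999, 999)
    nominal = grid[np.argmin(np.abs(recalibrator.predict(grid) - p))]
    return norm.ppf(nominal, loc=mean, scale=std)

# Example: recalibrated 90% quantiles for the overconfident model.
q90 = recalibrated_quantile(0.9, predictions, overconfident_std)
```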
Recalibrating overconfident predictions
| | Mean absolute calibration error (MACE) | Root mean squared calibration error (RMSCE) | Miscalibration area (MA) |
|---|---|---|---|
| Before Recalibration | 0.19429 | 0.21753 | 0.19625 |
| After Recalibration | 0.01124 | 0.02591 | 0.01117 |
Recalibrating underconfident predictions
| | Mean absolute calibration error (MACE) | Root mean squared calibration error (RMSCE) | Miscalibration area (MA) |
|---|---|---|---|
| Before Recalibration | 0.20692 | 0.23003 | 0.20901 |
| After Recalibration | 0.00157 | 0.00205 | 0.00132 |
We welcome and greatly appreciate contributions from the community! Please see our contributing guidelines for details on how to help out.
If you found this toolbox helpful, please cite the following paper:
@article{chung2021uncertainty,
title={Uncertainty Toolbox: an Open-Source Library for Assessing, Visualizing, and Improving Uncertainty Quantification},
author={Chung, Youngseog and Char, Ian and Guo, Han and Schneider, Jeff and Neiswanger, Willie},
journal={arXiv preprint arXiv:2109.10254},
year={2021}
}
Additionally, here are papers that led to the development of the toolbox:
@article{chung2020beyond,
title={Beyond Pinball Loss: Quantile Methods for Calibrated Uncertainty Quantification},
author={Chung, Youngseog and Neiswanger, Willie and Char, Ian and Schneider, Jeff},
journal={arXiv preprint arXiv:2011.09588},
year={2020}
}
@article{tran2020methods,
title={Methods for comparing uncertainty quantifications for material property predictions},
author={Tran, Kevin and Neiswanger, Willie and Yoon, Junwoong and Zhang, Qingyang and Xing, Eric and Ulissi, Zachary W},
journal={Machine Learning: Science and Technology},
volume={1},
number={2},
pages={025006},
year={2020},
publisher={IOP Publishing}
}
Development of Uncertainty Toolbox is supported by several sponsoring organizations.