
MIT license PyPI Discord StackOverflow

Fairlearn

Fairlearn is a Python package that empowers developers of artificial intelligence (AI) systems to assess their system's fairness and mitigate any observed unfairness issues. Fairlearn contains mitigation algorithms as well as metrics for model assessment. Besides the source code, this repository also contains Jupyter notebooks with examples of Fairlearn usage.

Website: https://fairlearn.org/

Current release

  • The current stable release is available on PyPI.
  • Our current version may differ substantially from earlier versions. Users of earlier versions should visit our version guide to navigate significant changes and find information on how to migrate.

What we mean by fairness

An AI system can behave unfairly for a variety of reasons. In Fairlearn, we define whether an AI system is behaving unfairly in terms of its impact on people, i.e., in terms of harms. We focus on two kinds of harms:

  • Allocation harms. These harms can occur when AI systems extend or withhold opportunities, resources, or information. Some of the key applications are in hiring, school admissions, and lending.
  • Quality-of-service harms. Quality of service refers to whether a system works as well for one person as it does for another, even if no opportunities, resources, or information are extended or withheld.

We follow the approach known as group fairness, which asks: Which groups of individuals are at risk for experiencing harms? The relevant groups need to be specified by the data scientist and are application specific.

Group fairness is formalized by a set of constraints, which require that some aspect (or aspects) of the AI system's behavior be comparable across the groups. The Fairlearn package enables assessment and mitigation of unfairness under several common definitions. To learn more about our definitions of fairness, please visit our user guide on Fairness of AI Systems.
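
To make the constraint-based view concrete, here is a minimal sketch using Fairlearn's `demographic_parity_difference` metric; the labels, predictions, and sensitive feature below are made-up placeholder data, not taken from any Fairlearn example.

```python
# A minimal sketch of one group-fairness definition, demographic parity:
# the gap in selection rate between the most- and least-selected groups.
# y_true, y_pred, and sex are made-up placeholder data.
from fairlearn.metrics import demographic_parity_difference

y_true = [0, 1, 1, 0, 1, 1, 0, 1]
y_pred = [0, 1, 1, 0, 1, 0, 0, 0]
sex = ["F", "F", "F", "F", "M", "M", "M", "M"]

# Returns the absolute difference in selection rates across the groups
# defined by `sensitive_features` (0.0 means the rates are identical).
print(demographic_parity_difference(y_true, y_pred, sensitive_features=sex))
```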

Note: Fairness is fundamentally a sociotechnical challenge. Many aspects of fairness, such as justice and due process, are not captured by quantitative fairness metrics. Furthermore, there are many quantitative fairness metrics which cannot all be satisfied simultaneously. Our goal is to enable humans to assess different mitigation strategies and then make trade-offs appropriate to their scenario.

Overview of Fairlearn

The Fairlearn Python package has two components:

  • Metrics for assessing which groups are negatively impacted by a model, and for comparing multiple models in terms of various fairness and accuracy metrics.
  • Algorithms for mitigating unfairness in a variety of AI tasks and along a variety of fairness definitions.

Fairlearn metrics

Check out our in-depth guide on the Fairlearn metrics.
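
As a brief sketch of the assessment workflow (the toy data is invented for illustration), `MetricFrame` disaggregates any scikit-learn-style metric by a sensitive feature:

```python
# A minimal sketch of disaggregated assessment with MetricFrame.
# The labels, predictions, and sensitive feature are invented toy data.
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

y_true = [0, 1, 1, 0, 1, 1, 0, 1]
y_pred = [0, 1, 1, 0, 1, 0, 0, 0]
sex = ["F", "F", "F", "F", "M", "M", "M", "M"]

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sex,
)

print(mf.overall)       # metric values on the whole dataset
print(mf.by_group)      # metric values per group of the sensitive feature
print(mf.difference())  # largest between-group difference, per metric
```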

Fairlearn algorithms

For an overview of our algorithms, please refer to our website.
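
For a flavor of the mitigation API, the sketch below uses the reductions-based `ExponentiatedGradient` algorithm with a `DemographicParity` constraint; the choice of `LogisticRegression` and the tiny dataset are illustrative assumptions, not a recommended setup.

```python
# A minimal sketch of mitigation via the reductions approach:
# wrap a standard estimator and enforce a fairness constraint during training.
# The classifier and data below are illustrative assumptions only.
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

X = [[0.1], [0.4], [0.5], [0.9], [0.2], [0.6], [0.7], [0.8]]
y = [0, 0, 1, 1, 0, 1, 1, 1]
sex = ["F", "F", "F", "F", "M", "M", "M", "M"]

mitigator = ExponentiatedGradient(
    estimator=LogisticRegression(),
    constraints=DemographicParity(),
)
mitigator.fit(X, y, sensitive_features=sex)

# Predictions from the mitigated (randomized) ensemble of classifiers.
print(mitigator.predict(X))
```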

Install Fairlearn

For instructions on how to install Fairlearn, check out our Quickstart guide.
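
The package is published on PyPI, so in most environments a standard `pip install fairlearn` is sufficient; the Quickstart guide covers the details.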

Usage

For common usage refer to the Jupyter notebooks and our user guide. Please note that our APIs are subject to change, so notebooks downloaded from main may not be compatible with Fairlearn installed with pip. In this case, please navigate the tags in the repository (e.g. v0.7.0) to locate the appropriate version of the notebook.

Contributing

To contribute please check our contributor guide.

Maintainers

A list of current maintainers is on our website.

Issues

Usage Questions

Pose questions and help answer them on Stack Overflow with the tag fairlearn or on Discord.

Regular (non-security) issues

Issues are meant for bugs, feature requests, and documentation improvements. Please submit a report through GitHub issues. A maintainer will respond promptly, as appropriate.

Maintainers will try to link duplicate issues when possible.

Reporting security issues

To report security issues please send an email to [email protected].
