| Project Name | Stars | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description |
|---|---|---|---|---|---|---|---|---|
| Cs Video Courses | 53,706 | 2 days ago | | | 14 | | | List of Computer Science courses with video lectures. |
| C Plus Plus | 23,668 | 2 days ago | | | 59 | mit | C++ | Collection of various algorithms in mathematics, machine learning, computer science and physics, implemented in C++ for educational purposes. |
| Wavefunctioncollapse | 20,815 | 13 days ago | | | 2 | other | C# | Bitmap & tilemap generation from a single example with the help of ideas from quantum mechanics. |
| Homemade Machine Learning | 20,319 | 5 months ago | | | 21 | mit | Jupyter Notebook | 🤖 Python examples of popular machine learning algorithms with interactive Jupyter demos and the math explained. |
| C | 15,814 | 2 days ago | | | 23 | gpl-3.0 | C | Collection of various algorithms in mathematics, machine learning, computer science, physics, etc., implemented in C for educational purposes. |
| Machine Learning Tutorials | 12,876 | 3 months ago | | | 33 | cc0-1.0 | | Machine learning and deep learning tutorials, articles and other resources. |
| Nni | 12,628 | a day ago | 51 | June 22, 2022 | 278 | mit | Python | An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyper-parameter tuning. |
| Halfrost Field | 11,208 | 9 months ago | | | 5 | cc-by-sa-4.0 | Go | ✍🏻 Where the blog posts live: Halfrost-Field (冰霜之地). |
| Mlalgorithms | 8,809 | a year ago | | | 8 | mit | Python | Minimal and clean examples of machine learning algorithm implementations. |
| Mlcourse.ai | 8,670 | 5 days ago | | | 3 | other | Python | Open Machine Learning Course. |
Fairlearn is a Python package that empowers developers of artificial intelligence (AI) systems to assess their system's fairness and mitigate any observed unfairness issues. Fairlearn contains mitigation algorithms as well as metrics for model assessment. Besides the source code, this repository also contains Jupyter notebooks with examples of Fairlearn usage.
An AI system can behave unfairly for a variety of reasons. In Fairlearn, we define whether an AI system is behaving unfairly in terms of its impact on people, i.e., in terms of harms. We focus on two kinds of harms:

- *Allocation harms.* These harms can occur when AI systems extend or withhold opportunities, resources, or information, for example in hiring, school admissions, and lending.
- *Quality-of-service harms.* These harms can occur when a system does not work as well for one person as it does for another, even if no opportunities, resources, or information are extended or withheld.
We follow the approach known as group fairness, which asks: Which groups of individuals are at risk for experiencing harms? The relevant groups need to be specified by the data scientist and are application specific.
Group fairness is formalized by a set of constraints, which require that some aspect (or aspects) of the AI system's behavior be comparable across the groups. The Fairlearn package enables assessment and mitigation of unfairness under several common definitions. To learn more about our definitions of fairness, please visit our user guide on Fairness of AI Systems.
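As an illustration of such a constraint, demographic parity requires that the selection rate (the fraction of positive predictions) be comparable across groups. The sketch below uses plain Python and hypothetical data, rather than Fairlearn's own API, to show what a per-group comparison looks like:

```python
from collections import defaultdict

def selection_rates(y_pred, sensitive_features):
    """Fraction of positive predictions within each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(y_pred, sensitive_features):
        counts[group][0] += pred
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_difference(y_pred, sensitive_features):
    """Largest gap in selection rate between groups (0 means perfect parity)."""
    rates = selection_rates(y_pred, sensitive_features)
    return max(rates.values()) - min(rates.values())

# Hypothetical binary predictions and a hypothetical sensitive feature.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(selection_rates(y_pred, groups))             # per-group selection rates
print(demographic_parity_difference(y_pred, groups))
```

On this toy data, group "a" is selected at a rate of 0.75 and group "b" at 0.25, so the demographic parity difference is 0.5, signalling a disparity worth investigating.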
Note: Fairness is fundamentally a sociotechnical challenge. Many aspects of fairness, such as justice and due process, are not captured by quantitative fairness metrics. Furthermore, there are many quantitative fairness metrics which cannot all be satisfied simultaneously. Our goal is to enable humans to assess different mitigation strategies and then make trade-offs appropriate to their scenario.
The Fairlearn Python package has two components:

- *Metrics* for assessing which groups are negatively impacted by a model, and for comparing multiple models in terms of various fairness and accuracy metrics.
- *Algorithms* for mitigating unfairness in a variety of AI tasks and under a variety of fairness definitions.
Check out our in-depth guide on the Fairlearn metrics.
For an overview of our algorithms please refer to our website.
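One simple family of mitigation approaches works by postprocessing: choosing a separate decision threshold per group so that the groups end up with comparable selection rates. The toy sketch below (plain Python with hypothetical data; this is not Fairlearn's implementation, whose postprocessing is considerably more sophisticated) illustrates the idea:

```python
def per_group_thresholds(scores, groups, selection_rate):
    """Pick, for each group, the score cutoff that selects roughly the
    same fraction (selection_rate) of that group's members."""
    by_group = {}
    for s, g in zip(scores, groups):
        by_group.setdefault(g, []).append(s)
    thresholds = {}
    for g, vals in by_group.items():
        vals = sorted(vals, reverse=True)
        k = max(1, round(len(vals) * selection_rate))
        thresholds[g] = vals[k - 1]  # accept the top-k scores in this group
    return thresholds

def predict(scores, groups, thresholds):
    """Apply each group's own threshold to its members' scores."""
    return [1 if s >= thresholds[g] else 0 for s, g in zip(scores, groups)]

# Hypothetical model scores; group "b" systematically receives lower scores.
scores = [0.9, 0.8, 0.6, 0.3, 0.7, 0.5, 0.2, 0.1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

thresholds = per_group_thresholds(scores, groups, selection_rate=0.5)
y_pred = predict(scores, groups, thresholds)
```

With a single shared threshold of 0.6, group "a" would be selected at 0.75 and group "b" at 0.25; the per-group cutoffs instead equalize both groups' selection rates at 0.5.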
For instructions on how to install Fairlearn check out our Quickstart guide.
For common usage refer to the Jupyter notebooks and our user guide.
Please note that our APIs are subject to change, so notebooks downloaded
from `main` may not be compatible with Fairlearn installed with
`pip`. In this case, please navigate the tags in the repository to
locate the version of the notebook that matches your installed release.
To contribute please check our contributor guide.
A list of current maintainers is on our website.
Pose questions and help answer them on Stack Overflow with the tag
`fairlearn`.
Issues are meant for bugs, feature requests, and documentation improvements. Please submit a report through GitHub issues. A maintainer will respond promptly as appropriate.
Maintainers will try to link duplicate issues when possible.
To report security issues please send an email to