Awesome Open Source
Search results for "jupyter notebook explainable ml"
Active filters: explainable-ml, jupyter-notebook
29 search results found
Tensorwatch (⭐ 3,333): Debugging, monitoring and visualization for Python Machine Learning and Data Science
Shapash (⭐ 2,547): 🔅 User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
Imodels (⭐ 1,229): Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
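Interpretable packages like Imodels restrict the hypothesis class to models a human can read end-to-end, so the fitted model *is* its own explanation. A stdlib-only sketch of that idea using a decision stump (hypothetical illustration, not the imodels API):

```python
# A one-split "decision stump": the learned rule can be printed verbatim,
# which is the essence of transparent modeling. Stdlib-only illustration;
# this is NOT the imodels API.

class DecisionStump:
    def fit(self, X, y):
        """Scan every (feature, threshold, label assignment) and keep
        the split with the highest training accuracy."""
        best = (0.0, 0, 0.0, 0, 1)  # acc, feature, threshold, left, right
        n_features = len(X[0])
        for f in range(n_features):
            for t in sorted({row[f] for row in X}):
                for left, right in ((0, 1), (1, 0)):
                    preds = [left if row[f] <= t else right for row in X]
                    acc = sum(p == yi for p, yi in zip(preds, y)) / len(y)
                    if acc > best[0]:
                        best = (acc, f, t, left, right)
        _, self.feature, self.threshold, self.left, self.right = best
        return self

    def predict(self, X):
        return [self.left if row[self.feature] <= self.threshold else self.right
                for row in X]

    def rule(self):
        """The whole model, as one human-readable sentence."""
        return (f"if x[{self.feature}] <= {self.threshold} "
                f"then {self.left} else {self.right}")
```

Fitting on a tiny separable dataset such as `X=[[1],[2],[8],[9]], y=[0,0,1,1]` yields the single rule `if x[0] <= 2 then 0 else 1`, which can be audited directly, unlike a black-box model.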
Responsible Ai Toolbox (⭐ 1,187): A suite of tools providing model and data exploration and assessment user interfaces and libraries for better understanding AI systems, empowering developers and stakeholders to develop and monitor AI more responsibly and to take better data-driven actions.
Trulens (⭐ 1,109): Evaluation and Tracking for LLM Experiments
Omnixai (⭐ 674): OmniXAI: A Library for eXplainable AI
Mli Resources (⭐ 405): H2O.ai Machine Learning Interpretability Resources
Explainx (⭐ 375): Explainable AI framework for data scientists. Explain and debug any black-box machine learning model with a single line of code. The maintainers are looking for co-authors to take the project forward; reach out at [email protected].
Datascience_artificialintelligence_utils (⭐ 369): Examples of Data Science projects and Artificial Intelligence use cases
Deep_xf (⭐ 84): A package for building explainable forecasting and nowcasting models with state-of-the-art deep neural networks and dynamic factor models on time-series datasets with a single line of code. Also provides utilities for time-series signal similarity matching and for removing noise from time-series signals.
Seggradcam (⭐ 83): SEG-GRAD-CAM: Interpretable Semantic Segmentation via Gradient-Weighted Class Activation Mapping
Acv00 (⭐ 73): ACV is a Python library that provides explanations for any machine learning model or data: local rule-based explanations for any model, and different Shapley values for tree-based models.
Potato (⭐ 44): XAI-based human-in-the-loop framework for automatic rule learning.
Clustershapley (⭐ 40): Explaining dimensionality-reduction results using SHAP values
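Several entries above (Shapash, Acv00, Clustershapley) rest on Shapley values, which attribute a prediction to individual features by averaging each feature's marginal contribution over all coalitions of the other features. For a handful of features the exact formula can be evaluated by brute force; a stdlib-only sketch (hypothetical helper, not the API of any listed library):

```python
# Exact Shapley attribution by enumerating all coalitions. Exponential in
# the number of features, so only viable for small n; real libraries
# (SHAP, ACV, ...) use model-specific or sampling approximations.
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Attribute predict(x) - predict(baseline) across the features of x.

    Features absent from a coalition are replaced by their baseline value.
    """
    n = len(x)

    def value(coalition):
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return predict(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(len(others) + 1):
            # Shapley weight for coalitions of size k not containing i
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            for S in combinations(others, k):
                total += w * (value(set(S) | {i}) - value(set(S)))
        phi.append(total)
    return phi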
Yolo Heatmaps (⭐ 32): A utility for generating heatmaps of YOLOv8 using Layerwise Relevance Propagation (LRP/CRP).
Strategic Decisions (⭐ 24): Code and data for decision making under strategic behavior
Artemis (⭐ 19): A Python package with explanation methods for extracting feature interactions from predictive models
Diabetes_use_case (⭐ 19): Sample use case for the Xavier AI in Healthcare conference: https://www.xavierhealth.org/ai-summit-day2/
Dlime_experiments (⭐ 16): Proposes a deterministic version of Local Interpretable Model-Agnostic Explanations (LIME); experimental results on three different medical datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME).
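LIME and its deterministic variant DLIME explain a single prediction by fitting a simple surrogate model to the black box's behavior in a neighborhood of that input; the surrogate's coefficients are the explanation. A minimal stdlib-only sketch for a one-dimensional input (hypothetical illustration, not the LIME/DLIME implementation):

```python
# LIME-style local surrogate: sample near x0, weight by proximity, fit a
# weighted least-squares line. The slope approximates the black box's
# local behavior. Stdlib-only illustration of the idea.
import random

def local_linear_explanation(f, x0, radius=0.5, n_samples=200, seed=0):
    """Return (intercept, slope) of a proximity-weighted linear surrogate
    for the scalar black box f around x0."""
    rng = random.Random(seed)
    xs = [x0 + rng.uniform(-radius, radius) for _ in range(n_samples)]
    ys = [f(x) for x in xs]
    ws = [1.0 / (1.0 + abs(x - x0)) for x in xs]  # proximity kernel

    # Closed-form weighted least squares for y ≈ a + b*x
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    b = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
         / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
    a = my - b * mx
    return a, b
```

On a smooth black box such as `f(x) = x**2` around `x0 = 1.0`, the fitted slope comes out close to the true local derivative 2; DLIME's contribution is replacing the random sampling step with a deterministic one so repeated runs give identical explanations.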
Xai Scholar (⭐ 16): Cross-field empirical trends analysis of XAI literature
Pysddr (⭐ 15): A Python package for semi-structured deep distributional regression
Explainableml Vision (⭐ 14): Introduces different Explainable AI approaches and demonstrates how they can be implemented with PyTorch and torchvision. Approaches used are Class Activation Mappings, LIME, and SHapley Additive exPlanations.
Article Information 2019 (⭐ 10): Article for the Special Edition of Information: Machine Learning with Python
Counterfactual Tpp (⭐ 10): Code and real data for the paper "Counterfactual Temporal Point Processes", NeurIPS 2022
Learning Scaffold (⭐ 9): Official implementation for the paper "Learning to Scaffold: Optimizing Model Explanations for Teaching"
Responsible Ai Workshop (⭐ 9): A series of tutorials and walkthroughs illustrating how to put responsible AI into practice
Iai Clinical Decision Rule (⭐ 8): Interpretable clinical decision rules for predicting intra-abdominal injury.
Counterfactual Explanations Mdp (⭐ 7): Code for "Counterfactual Explanations in Sequential Decision Making Under Uncertainty", NeurIPS 2021
Smace (⭐ 7): A new method for the interpretability of composite decision systems.
Long Medical Document Lms (⭐ 5): Explain and train language models that extract information from long medical documents with the Masked Sampling Procedure (MSP)
Copyright 2018-2024 Awesome Open Source. All rights reserved.