Project Name | Stars | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description
---|---|---|---|---|---|---|---|---
Adversarial Robustness Toolbox | 3,733 | 17 hours ago | 45 | July 01, 2022 | 125 | mit | Python | Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Pfl Non Iid | 555 | 4 days ago | | | 4 | gpl-2.0 | Python | Personalized federated learning simulation platform with non-IID and unbalanced dataset
Awesome Ml Privacy Attacks | 396 | 2 months ago | | | | | | An awesome list of papers on privacy attacks against machine learning
Ml_privacy_meter | 366 | 4 months ago | 1 | May 13, 2022 | 4 | mit | Jupyter Notebook | Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms.
Deep Spying | 173 | 6 years ago | | | | apache-2.0 | Python | Spying using Smartwatch and Deep Learning
Privacyraven | 165 | 2 months ago | | | 36 | apache-2.0 | Python | Privacy Testing for Deep Learning
Robustdg | 159 | 16 days ago | | | 11 | mit | Python | Toolkit for building machine learning models that generalize to unseen domains and are robust to privacy and other attacks.
Evaluatingdpml | 112 | 6 months ago | | | 1 | mit | Python | This project's goal is to evaluate the privacy leakage of differentially private machine learning models.
Privpkt | 81 | 4 months ago | | | 26 | mit | Python | Privacy Preserving Collaborative Encrypted Network Traffic Classification (Differential Privacy, Federated Learning, Membership Inference Attack, Encrypted Traffic Classification)
Mia | 81 | 2 years ago | 4 | September 27, 2018 | 15 | mit | Python | A library for running membership inference attacks against ML models
This repository contains a curated list of papers on privacy attacks against machine learning, with a link to the code when the authors have made it available. For corrections, suggestions, or missing papers, please open an issue or submit a pull request.
A curated list of more than 100 membership inference papers on machine learning models is available at this repository.
Reconstruction attacks also cover attacks known as model inversion and attribute inference.
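To make the membership inference category concrete, here is a minimal sketch of a confidence-threshold attack, the simplest member of the attack class that libraries such as Mia implement. Everything in it (the synthetic data, the tiny logistic-regression target model, the threshold `tau`) is an illustrative assumption, not taken from any of the listed libraries.

```python
# Hedged sketch: confidence-threshold membership inference.
# Intuition: an overfit model is more confident on its training
# points (members) than on fresh points (non-members).
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    """Synthetic binary classification task with 20% label noise."""
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
    flip = rng.random(n) < 0.2          # label noise encourages memorization
    return X, np.where(flip, 1 - y, y)

X_train, y_train = make_data(30)        # small training set -> overfitting
X_out, y_out = make_data(30)            # non-members, same distribution

# Overfit a tiny logistic-regression "target model" by gradient descent.
w, b = np.zeros(5), 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))
    w -= 0.5 * X_train.T @ (p - y_train) / len(y_train)
    b -= 0.5 * (p - y_train).mean()

def true_label_confidence(X, y):
    """Probability the target model assigns to the true label."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return np.where(y == 1, p, 1 - p)

# Attack rule: guess "member" when confidence exceeds a threshold.
tau = 0.5
guess_member_in = true_label_confidence(X_train, y_train) > tau
guess_member_out = true_label_confidence(X_out, y_out) > tau

# Balanced attack accuracy: members correctly flagged plus
# non-members correctly rejected; 0.5 means no leakage.
attack_acc = 0.5 * (guess_member_in.mean() + (1.0 - guess_member_out.mean()))
print(f"membership inference attack accuracy: {attack_acc:.2f}")
```

Real libraries replace the fixed threshold with shadow models or per-example calibration, but the signal they exploit is the same confidence gap shown here.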