csong27/ml-model-remember

Machine Learning Models that Remember Too Much

This repo contains example implementations of the attacks in the paper Machine Learning Models that Remember Too Much (https://arxiv.org/pdf/1709.07886.pdf). The examples are based on the CIFAR10 dataset.

Train a malicious model

python train.py --attack ATTACK

Available values for ATTACK are cap (capacity abuse attack), cor (correlated value encoding attack), and sgn (sign encoding attack).
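As a rough illustration of the idea behind the sign encoding attack (a minimal sketch, not the training-time penalty this repo actually uses): the attacker arranges for the sign of each model parameter to carry one secret bit, then recovers the secret later by simply reading off the signs. The helper names below are hypothetical.

```python
import numpy as np

def encode_bits_in_signs(params, bits):
    # Force the sign of each parameter to carry one secret bit:
    # bit 1 -> positive, bit 0 -> negative.
    # (Hypothetical helper; the paper encodes this via a penalty
    # term during training rather than by direct assignment.)
    signs = np.where(np.array(bits) == 1, 1.0, -1.0)
    return np.abs(params[: len(bits)]) * signs

def decode_bits_from_signs(params, n_bits):
    # Recover the secret by reading parameter signs.
    return [1 if p > 0 else 0 for p in params[:n_bits]]

rng = np.random.default_rng(0)
weights = rng.normal(size=8)          # stand-in for model parameters
secret = [1, 0, 1, 1, 0, 0, 1, 0]     # bits the attacker wants to exfiltrate
malicious = encode_bits_in_signs(weights, secret)
assert decode_bits_from_signs(malicious, len(secret)) == secret
```

Because only the signs change, the parameter magnitudes (and hence much of the model's behavior) can be left largely intact, which is what makes the attack hard to notice.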

Test attack quality

python test_model.py --attack ATTACK

About

Code for Machine Learning Models that Remember Too Much (in CCS 2017)
