Awesome Open Source
Search results for "adversarial attacks ai security"
Active filters: adversarial-attacks, ai-security
3 search results found
Narcissus ⭐ 61
Official implementation of the CCS '23 paper on Narcissus, a clean-label backdoor attack that needs only three images to poison a face-recognition dataset and achieves a 99.89% attack success rate.
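To make "clean-label" concrete: the poisoned images keep their correct labels, and only a small, budget-limited trigger perturbation is blended in. Below is a minimal numpy sketch of that blending step (all names and the fixed trigger are illustrative assumptions; the actual Narcissus attack also optimizes the trigger itself, which this sketch does not do):

```python
import numpy as np

def poison_clean_label(images, trigger, epsilon=8 / 255):
    """Blend a trigger into correctly-labeled target-class images under an
    L-infinity budget; the labels are never touched, which is what makes
    the poisoning 'clean-label'. Illustrative sketch only."""
    delta = np.clip(trigger, -epsilon, epsilon)    # enforce the perturbation budget
    return np.clip(images + delta, 0.0, 1.0)       # stay in the valid pixel range

rng = np.random.default_rng(0)
imgs = rng.random((3, 32, 32, 3))                  # three target-class images, per the paper's setup
trig = rng.uniform(-0.1, 0.1, size=(32, 32, 3))   # hypothetical fixed trigger pattern
poisoned = poison_clean_label(imgs, trig)
```

Because the blend is clipped to the epsilon ball, the poisoned images stay visually close to the originals, which is why such attacks survive manual dataset inspection.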
I-BAU ⭐ 36
Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient".
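The paper's core idea is a minimax problem: an inner loop searches for a shared input perturbation (a stand-in for the unknown backdoor trigger) that maximizes the loss, and an outer step updates the model to stay robust against it. The toy sketch below runs that alternation on logistic regression; it is an assumption-laden simplification, since the paper solves the outer step with implicit hypergradients rather than the plain alternating gradients used here:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unlearn_minimax(w, X, y, rounds=20, inner=5, eps=0.5, lr=0.1):
    """Toy minimax backdoor-unlearning loop on a logistic-regression model.
    Inner loop: gradient ascent on a shared perturbation delta (trigger
    surrogate). Outer step: gradient descent on the weights under that
    perturbation. Simplified stand-in for the paper's implicit-hypergradient
    solver."""
    delta = np.zeros(X.shape[1])
    for _ in range(rounds):
        for _ in range(inner):                        # inner maximization over delta
            p = sigmoid((X + delta) @ w)
            delta = np.clip(delta + lr * np.mean(p - y) * w, -eps, eps)
        p = sigmoid((X + delta) @ w)                  # outer minimization over w
        w = w - lr * (X + delta).T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(float)                       # synthetic labels for illustration
w0 = rng.normal(size=5)
w1 = unlearn_minimax(w0.copy(), X, y)
```

The gradient with respect to delta is `mean(p - y) * w` because delta enters every example's logit through the same weight vector, which is exactly why a single universal perturbation can serve as a trigger surrogate.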
AdvDrop ⭐ 22
Code for "Adversarial Attack by Dropping Information" (ICCV 2021).
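"Dropping information" here means degrading an image by quantizing away detail rather than adding a perturbation. The sketch below shows the underlying mechanism with a JPEG-style DCT quantization on an 8x8 block; it is only a hand-rolled illustration, and the actual AdvDrop attack additionally learns the quantization table so the dropped detail flips the classifier's prediction:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * k[:, None] * (2 * k[None, :] + 1) / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def drop_information(block, q=16.0):
    """Quantize a block's DCT coefficients and reconstruct, discarding
    low-amplitude (mostly high-frequency) detail, much like JPEG does."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T                 # forward 2-D DCT
    quantized = np.round(coeffs / q) * q     # drop fine detail via quantization
    return C.T @ quantized @ C               # inverse 2-D DCT

rng = np.random.default_rng(2)
block = rng.random((8, 8)) * 255.0           # one 8x8 grayscale block
degraded = drop_information(block, q=32.0)
```

Because the DCT matrix is orthonormal, the only change between `block` and `degraded` is the rounding of coefficients, i.e. information is removed, never injected.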
Related Searches
Python Adversarial Attacks (211)
Deep Learning Adversarial Attacks (105)
Pytorch Adversarial Attacks (72)
Python AI Security (10)
Deep Learning AI Security (8)
Machine Learning AI Security (7)
Artificial Intelligence AI Security (6)
Tensorflow AI Security (6)
Neural AI Security (5)
Backdoor Attacks AI Security (5)
Copyright 2018-2024 Awesome Open Source. All rights reserved.