Awesome Open Source
Search results for "backdoor attacks ai security" (filters: ai-security, backdoor-attacks): 5 results found
Backdoor Learning Resources (⭐ 888): A curated list of backdoor learning resources.
Narcissus (⭐ 61): The official implementation of the CCS '23 paper on the Narcissus clean-label backdoor attack: it takes only three images to poison a face recognition dataset in a clean-label way and achieves a 99.89% attack success rate.
I-BAU (⭐ 36): Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient".
Meta-Sift (⭐ 11): The official implementation of the USENIX Security '23 paper "Meta-Sift": ten minutes or less to find a clean subset of 1,000 or more samples in a poisoned dataset.
Imperio (⭐ 9): An LLM-powered backdoor attack that lets the adversary issue language-guided instructions to control the victim model's predictions for arbitrary targets.
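The projects above either mount or defend against data-poisoning backdoors. As background for the listing, here is a minimal, generic sketch of the classic trigger-patch poisoning idea (BadNets-style): stamp a small trigger onto a few training images and relabel them to an attacker-chosen class. All names and parameters are illustrative; this is not the method of any specific repository listed here.

```python
# Generic trigger-patch data poisoning sketch (illustrative only).
import numpy as np

def poison_samples(images, labels, target_label, poison_rate=0.05, seed=0):
    """Stamp a small white square (the trigger) onto a random subset of
    images and relabel those samples to the attacker's target class.
    Returns poisoned copies plus the indices that were modified."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(poison_rate * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i, -3:, -3:] = 1.0  # 3x3 trigger in the bottom-right corner
        labels[i] = target_label   # dirty-label: flip to the target class
    return images, labels, idx

# Usage: poison 5% of a toy dataset of 100 grayscale 8x8 images.
imgs = np.zeros((100, 8, 8))
labs = np.zeros(100, dtype=int)
p_imgs, p_labs, idx = poison_samples(imgs, labs, target_label=7)
```

A model trained on such data learns to associate the trigger with the target class; clean-label variants (e.g. the Narcissus entry above) achieve the same effect without changing any labels, which makes the poison much harder to spot.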
Related Searches
Python Backdoor Attacks (53)
Deep Learning Backdoor Attacks (18)
Python AI Security (10)
Attack Backdoor Attacks (8)
Deep Learning AI Security (8)
Machine Learning AI Security (7)
Artificial Intelligence AI Security (6)
TensorFlow AI Security (6)
Neural AI Security (5)
Adversarial Attacks AI Security (4)
Copyright 2018-2024 Awesome Open Source. All rights reserved.