Awesome Open Source
Search results for python backdoor attacks
35 search results found
Backdoorbox (⭐ 325): The open-sourced Python toolbox for backdoor attacks and defenses.
Trojanzoo (⭐ 260): TrojanZoo provides a universal PyTorch platform for security research (especially backdoor attacks/defenses) on image classification in deep learning.
Backdoors101 (⭐ 231): Backdoors framework for deep learning and federated learning. A lightweight tool to conduct your research on backdoors.
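Several of the toolkits above implement the classic dirty-label, pixel-trigger poisoning step in the BadNets style: stamp a small trigger on a fraction of training images and relabel them to an attacker-chosen target. A minimal sketch of that step; all names and parameters here are illustrative, not any listed toolkit's actual API.

```python
import numpy as np

def poison(images, labels, target_label, poison_rate=0.1, trigger_size=3, seed=0):
    """Stamp a white square trigger in the bottom-right corner of a random
    subset of images and flip their labels to `target_label` (hypothetical helper)."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -trigger_size:, -trigger_size:] = 1.0  # white patch trigger
    labels[idx] = target_label                         # dirty-label relabeling
    return images, labels, idx

# Tiny demo on random "images" of shape (N, H, W) with values in [0, 1)
x = np.random.default_rng(1).random((100, 28, 28))
y = np.zeros(100, dtype=int)
px, py, idx = poison(x, y, target_label=7, poison_rate=0.1)
```

A model trained on `(px, py)` would learn to associate the corner patch with class 7 while behaving normally on clean inputs, which is the threat model these toolkits study.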
Keres (⭐ 82): Persistent PowerShell backdoor tool.
Openbackdoor (⭐ 75): An open-source toolkit for textual backdoor attack and defense (NeurIPS 2022 D&B, Spotlight).
Awesome Backdoor In Deep Learning (⭐ 73): A curated list of papers & resources on backdoor attacks and defenses in deep learning.
Backdoor (⭐ 66): Code implementation of the paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks", IEEE Security and Privacy 2019.
Narcissus (⭐ 61): The official implementation of the CCS '23 paper on the Narcissus clean-label backdoor attack: it takes only three images to poison a face recognition dataset in a clean-label way and achieves a 99.89% attack success rate.
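The "attack success rate" quoted for Narcissus is the standard backdoor metric: the fraction of trigger-stamped test inputs, whose true class differs from the target, that the backdoored model nonetheless classifies as the target label. A minimal sketch of that metric with stand-in predictions (not the paper's code):

```python
import numpy as np

def attack_success_rate(pred_on_triggered, true_labels, target_label):
    """ASR = P(model predicts target | triggered input whose true class != target)."""
    mask = true_labels != target_label  # exclude inputs already in the target class
    if mask.sum() == 0:
        return 0.0
    return float((pred_on_triggered[mask] == target_label).mean())

preds = np.array([7, 7, 7, 2, 7])  # model outputs on trigger-stamped inputs
truth = np.array([0, 1, 7, 3, 4])  # original labels of those inputs
asr = attack_success_rate(preds, truth, target_label=7)  # 3 of 4 non-target inputs hit
```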
Nad (⭐ 52): An implementation demo, in PyTorch, of the ICLR 2021 paper [Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks](https://openreview.net/pdf?id=9l0K4OM-oX).
Warping Based_backdoor_attack Release (⭐ 42): WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021).
Rickrolling The Artist (⭐ 35): Source code for the ICCV 2023 paper "Rickrolling the Artist: Injecting Invisible Backdoors into Text-Guided Image Generation Models".
Federated Learning Backdoor (⭐ 33): ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341).
Anp_backdoor (⭐ 31): Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models".
Input Aware Backdoor Attack Release (⭐ 27): Input-Aware Dynamic Backdoor Attack (NeurIPS 2020).
Cognitivedistillation (⭐ 27): [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image.
Decree (⭐ 24): Official repository for the CVPR '23 paper "Detecting Backdoors in Pre-trained Encoders".
Flip (⭐ 24): FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning (ICLR '23; Best Paper Award at the ECCV '22 AROW Workshop).
Dfst (⭐ 23): Repository for the DFST paper "Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification".
Backdoor Lth (⭐ 18): [CVPR 2022] "Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free" by Tianlong Chen*, Zhenyu Zhang*, Yihua Zhang*, Shiyu Chang, Sijia Liu, and Zhangyang Wang.
Hiddenkiller (⭐ 18): Code and data for the ACL-IJCNLP 2021 paper "Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger".
Fl Analysis (⭐ 16)
Meta Sift (⭐ 11): The official implementation of the USENIX Security '23 paper "Meta-Sift": ten minutes or less to find a clean subset of 1,000 or more samples in a poisoned dataset.
Video Backdoor Attack (⭐ 10): Clean-Label Backdoor Attacks on Video Recognition Models (CVPR 2020).
Imperio (⭐ 9): Imperio is an LLM-powered backdoor attack. It allows the adversary to issue language-guided instructions to control the victim model's predictions for arbitrary targets.
Saturn Backdoor (⭐ 8): An easy-to-use `Ngrok` backdoor creator bound to an IP:PORT.
Baadd (⭐ 8): Code for "Backdoor Attacks Against Dataset Distillation".
Fine Pruning Defense (⭐ 7): Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018).
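Fine-Pruning builds on the observation that backdoor neurons stay largely dormant on clean inputs, so units can be ranked by their mean activation over a clean validation set and the least-active ones pruned (the paper's subsequent fine-tuning step is omitted). A hedged pure-numpy sketch of that pruning step; the function name and shapes are illustrative, not the paper's code:

```python
import numpy as np

def fine_prune_mask(clean_activations, prune_frac=0.2):
    """clean_activations: (n_samples, n_units) post-ReLU activations on clean data.
    Returns a 0/1 mask zeroing out the `prune_frac` least-active units."""
    mean_act = clean_activations.mean(axis=0)
    n_prune = int(len(mean_act) * prune_frac)
    mask = np.ones_like(mean_act)
    if n_prune > 0:
        mask[np.argsort(mean_act)[:n_prune]] = 0.0  # prune the most dormant units
    return mask

# Demo: 50 units, the first 10 made near-dormant to mimic backdoor neurons
acts = np.abs(np.random.default_rng(0).normal(size=(256, 50)))
acts[:, :10] *= 0.01
mask = fine_prune_mask(acts, prune_frac=0.2)
```

Applying `mask` elementwise to the layer's output disables the suspect units while leaving clean-data behavior largely intact, which is the defense's core trade-off.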
Argd (⭐ 7): An implementation demo, in PyTorch, of the IJCAI 2022 paper [Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation](https://arxiv.org/abs/2204.09975).
Defending Against Backdoors With Robust Learning Rate (⭐ 7): Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate".
Backdoor Suite (⭐ 7): A module-based repository for testing and evaluating backdoor attacks and defenses.
Rethinking Backdoor Attacks (⭐ 7)
Waba (⭐ 6): Backdoor Attacks for Remote Sensing Data with Wavelet Transform.
Anydesk Backdoor (⭐ 6): You should never use malware to infiltrate a target system; with the skill to write and exploit technical code, you can use better penetration methods. This project exists to test and improve the security of the open source code.
Non Adversarial_backdoor (⭐ 6): Implementation of "Beating Backdoor Attack at Its Own Game" (ICCV '23).
Target_identification (⭐ 5): CCS '22 paper: "Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation".
R4b (⭐ 5): R4B, a backdoor based on reverse port opening to avoid firewall detection.
Copyright 2018-2024 Awesome Open Source. All rights reserved.