Awesome Open Source
Search results for adversarial attacks
290 search results found
Sliver (⭐ 7,152): Adversary Emulation Framework
Adversarial Robustness Toolbox (⭐ 4,420): Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Nlpaug (⭐ 3,825): Data augmentation for NLP
Foolbox (⭐ 2,600): A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
Textattack (⭐ 2,597): TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP (https://textattack.readthedocs.io/en/master/)
Promptbench (⭐ 1,655): A unified evaluation framework for large language models
Adversarial Attacks Pytorch (⭐ 1,609): PyTorch implementation of adversarial attacks.
Taadpapers (⭐ 1,413): Must-read Papers on Textual Adversarial Attack and Defense
Advbox (⭐ 1,344): Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, and TensorFlow. It can also benchmark the robustness of machine learning models and offers a command-line tool to generate adversarial examples with zero coding.
Advertorch (⭐ 1,271): A Toolbox for Adversarial Robustness Research
Deeprobust (⭐ 904): A PyTorch adversarial library for attack and defense methods on images and graphs
Graph Adversarial Learning Literature (⭐ 772): A curated list of adversarial attacks and defenses papers on graph-structured data.
Ad_examples (⭐ 738): A collection of anomaly detection methods (iid/point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule-mining, and description for diversity/explanation/interpretability. Analyzes incorporating label feedback with ensemble and tree-based detectors; includes adversarial attacks with Graph Convolutional Networks.
Auto Attack (⭐ 587): Code for "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"
Openattack (⭐ 571): An Open-Source Package for Textual Adversarial Attack.
Natural Adv Examples (⭐ 559): A Harder ImageNet Test Set (CVPR 2021)
Graph Adversarial Learning (⭐ 519): A curated collection of adversarial attack and defense on graph data.
Photoguard (⭐ 431): Raising the Cost of Malicious AI-Powered Image Editing
Ares (⭐ 413): A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.
Textfooler (⭐ 376): A Model for Natural Language Attack on Text Classification and Inference
Adversarial Examples Pytorch (⭐ 353): Implementation of Papers on Adversarial Examples
Awesome Graph Attack Papers (⭐ 315): Adversarial attacks and defenses on Graph Neural Networks.
Aijack (⭐ 283): Security and Privacy Risk Simulator for Machine Learning
Trojanzoo (⭐ 260): TrojanZoo provides a universal PyTorch platform for conducting security research (especially backdoor attacks/defenses) on image classification in deep learning.
Adversarial Explainable Ai (⭐ 235): 💡 Adversarial attacks on explanations and how to defend them
Hednsextractor (⭐ 234): A suite for hunting suspicious targets, exposing domains, and phishing discovery
Pro Gnn (⭐ 213): Implementation of the KDD 2020 paper "Graph Structure Learning for Robust Graph Neural Networks"
Deepsec (⭐ 206): DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model
Aegis (⭐ 203): Self-hardening firewall for large language models
Nettack (⭐ 187): Implementation of the paper "Adversarial Attacks on Neural Networks for Graph Data".
Awesome Computer Vision (⭐ 186): Awesome Resources for Advanced Computer Vision Topics
Defensegan (⭐ 164): Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models (published in ICLR 2018)
Attack And Defense Methods (⭐ 152): A curated list of papers on adversarial machine learning (adversarial examples and defense methods).
Yopo You Only Propagate Once (⭐ 148): Code for the NeurIPS 2019 paper "You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle"
Anti Dreambooth (⭐ 140): Anti-DreamBooth: Protecting users from personalized text-to-image synthesis (ICCV'23)
Vigil Llm (⭐ 132): ⚡ Vigil ⚡ Detects prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
Adversarial Library (⭐ 123): Library containing PyTorch implementations of various adversarial attacks and resources
Fast_adversarial (⭐ 118): Code for the CVPR 2019 paper "Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses"
Dcnets (⭐ 114): Implementation of "Decoupled Networks" (CVPR'18).
Tiger (⭐ 108): Python toolbox to evaluate graph vulnerability and robustness (CIKM 2021)
Gnn Meta Attack (⭐ 106): Implementation of the paper "Adversarial Attacks on Graph Neural Networks via Meta Learning".
Fgsm (⭐ 99): Simple PyTorch implementation of FGSM and I-FGSM
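Several of the toolboxes listed here (Foolbox, Advertorch, Torchadver, this FGSM repo) implement variants of the fast gradient sign method, where the input is perturbed in the direction of the sign of the loss gradient. A minimal sketch of the idea, assuming a toy logistic-regression "model" in NumPy rather than any of the listed libraries:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step: x_adv = x + eps * sign(grad_x L)."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w             # dL/dx for sigmoid + binary cross-entropy
    return x + eps * np.sign(grad_x)

def i_fgsm(x, y, w, b, eps, alpha, steps):
    """Iterated FGSM: repeat small steps, projecting back into the eps-ball."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = fgsm(x_adv, y, w, b, alpha)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

w, b = np.array([2.0, -3.0]), 0.5    # made-up linear classifier weights
x, y = np.array([0.2, 0.4]), 1.0     # input and its true label
x_adv = fgsm(x, y, w, b, eps=0.1)    # → array([0.1, 0.5])
```

Here the perturbation lowers the model's confidence in the true label; the iterated variant trades a single large step for several clipped small ones, which the listed PyTorch implementations do per-pixel on images.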
Free_adv_train (⭐ 95): Official TensorFlow implementation of "Adversarial Training for Free!", which trains robust models at no extra cost compared to natural training.
Robust Classification (⭐ 94): CVPR 2022 Workshop on Robust Classification
Grb (⭐ 89): Graph Robustness Benchmark: a scalable, unified, modular, and reproducible benchmark for evaluating the adversarial robustness of graph machine learning.
Lm Ssp (⭐ 85): A reading list for large-model safety, security, and privacy.
S Attack (⭐ 85): [CVPR 2022] S-attack library. Official implementation of the papers "Vehicle trajectory prediction works, but not everywhere" and "Are socially-aware trajectory prediction models really socially-aware?".
Robust Physical Attack (⭐ 82): Physical adversarial attack for fooling the Faster R-CNN object detector
Dialogue Understanding (⭐ 82): PyTorch implementation of the baseline models from the paper "Utterance-level Dialogue Understanding: An Empirical Study"
Scratchai (⭐ 81): scratchai is a deep learning library that aims to collect implementations of deep learning algorithms, with easy calls for common AI tasks.
Fakebob (⭐ 81): Source code for the paper "Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems" (IEEE S&P 2021)
Infobert (⭐ 81): [ICLR 2021] "InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective" by Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, Jingjing Liu
Attackvlm (⭐ 79): Code for the paper "On Evaluating Adversarial Robustness of Large Vision-Language Models"
Plexiglass (⭐ 79): A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs).
Torchadver (⭐ 78): A PyTorch toolbox for creating adversarial examples that fool neural networks.
Generative_adversarial_perturbations (⭐ 78): Generative Adversarial Perturbations (CVPR 2018)
Faceoff (⭐ 76): Steps towards physical adversarial attacks on facial recognition
Transferattack (⭐ 76): TransferAttack is a PyTorch framework to boost the adversarial transferability for image classification.
Disrupting Deepfakes (⭐ 75): 🔥🔥 Defending Against Deepfakes Using Adversarial Attacks on Conditional Image Translation Networks
Greatx (⭐ 75): A graph reliability toolbox based on PyTorch and PyTorch Geometric (PyG).
Tog (⭐ 74): A suite of adversarial objectness gradient attacks, coined TOG, which can cause state-of-the-art deep object detection networks (a key DNN application in mission-critical systems) to suffer untargeted random attacks or even targeted attacks.
Robnets (⭐ 73): [CVPR 2020] When NAS Meets Robustness: In Search of Robust Architectures against Adversarial Attacks
Awesome Fools (⭐ 73): 💀 A collection of methods to fool deep neural networks 💀
Adversarial Learning Robustness (⭐ 71): Materials for workshops on adversarial robustness in deep learning.
Patch Wise Iterative Attack (⭐ 71): Patch-wise iterative attack (ECCV 2020) to improve the transferability of adversarial examples.
Awesome Adversarial Deep Learning (⭐ 68): A list of awesome resources for adversarial attack and defense methods in deep learning
Msc 2018 Final (⭐ 66)
Face Robustness Benchmark (⭐ 63): An adversarial robustness evaluation library for face recognition.
Stateadvdrl (⭐ 63): [NeurIPS 2020, Spotlight] Code for "Robust Deep Reinforcement Learning against Adversarial Perturbations on Observations"
Nfl_veripy (⭐ 63): Formal Verification of Neural Feedback Loops (NFLs)
Hyperion (⭐ 61): Python toolkit for speech processing
Narcissus (⭐ 61): Official implementation of the CCS'23 paper on the Narcissus clean-label backdoor attack, which needs only three images to poison a face recognition dataset in a clean-label way and achieves a 99.89% attack success rate.
Robust Ood Detection (⭐ 59): Robust Out-of-distribution Detection in Neural Networks
Teapot Nlp (⭐ 59): Tool for Evaluating Adversarial Perturbations on Text
Winn (⭐ 59): Wasserstein Introspective Neural Networks (CVPR 2018 Oral)
Sememepso Attack (⭐ 58): Code and data for the ACL 2020 paper "Word-level Textual Adversarial Attacking as Combinatorial Optimization"
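Word-level textual attacks like Sememepso, TextFooler, and OpenAttack search over word substitutions for replacements that degrade the victim model's score while preserving meaning. A minimal greedy sketch of that search, assuming a made-up keyword-counting "classifier" and synonym table (the ACL 2020 paper itself uses sememe-based particle swarm optimization, which this does not implement):

```python
def score(words):
    # Dummy "sentiment model" (an assumption standing in for a real
    # classifier's confidence in the true label): fraction of positive words.
    positives = {"good", "great", "excellent"}
    return sum(w in positives for w in words) / max(len(words), 1)

SYNONYMS = {"good": ["fine", "decent"], "great": ["grand", "big"]}  # toy table

def greedy_attack(words, target_drop=0.5):
    """Greedily swap words for synonyms whenever the swap hurts the model."""
    words, base = list(words), score(words)
    for i, w in enumerate(words):
        for cand in SYNONYMS.get(w, []):
            trial = words[:i] + [cand] + words[i + 1:]
            if score(trial) < score(words):  # keep swaps that lower the score
                words = trial
                break
        if base - score(words) >= target_drop * base:
            break                            # confidence dropped enough; stop
    return words

adv = greedy_attack(["a", "good", "and", "great", "movie"])
# adv == ["a", "fine", "and", "great", "movie"]
```

Real attacks replace the dummy scorer with query access to the victim model and add semantic-similarity and grammaticality constraints; the greedy loop above is only the simplest search strategy in that design space.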
Chop (⭐ 58): CHOP: an optimization library based on PyTorch, with applications to adversarial examples and structured neural network training.
Mtcnnattack (⭐ 57): The first real-world adversarial attack on the MTCNN face detection system to date
Awesome 3d Point Cloud Attacks (⭐ 56): List of state-of-the-art papers, code, and other resources
Rs Adversarial Learning (⭐ 52): A curated collection of adversarial attack and defense on recommender systems.
Diac2019 Adversarial Attack Share (⭐ 52): DIAC2019 competition on question-equivalence classification based on adversarial attacks
Kitanaqa (⭐ 47): KitanaQA: Adversarial training and data augmentation for neural question-answering models
Fooling Lime Shap (⭐ 47): Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP)
Flowattack (⭐ 46): Attacking Optical Flow (ICCV 2019)
Adversarial Attacks On Object Detectors Paperlist (⭐ 46): A paper list of adversarial attacks on object detection
Flat (⭐ 46): [ICCV 2021 Oral] Fooling LiDAR by Attacking GPS Trajectory
Onlinelabelsmoothing (⭐ 45): Official code for the paper "Delving Deep into Label Smoothing" (IEEE TIP 2021)
Bss_distillation (⭐ 45): Knowledge Distillation with Adversarial Samples Supporting Decision Boundary (AAAI 2019)
Adversarial_lab (⭐ 45): Web-based tool for visualising and generating adversarial examples by attacking ImageNet models such as VGG, AlexNet, and ResNet.
Hallucination Attack (⭐ 44): Attack to induce hallucinations in LLMs
Adversarial Examples Paper (⭐ 41): Paper list of adversarial examples
Proof Pudding (⭐ 40): Copy-cat model for Proofpoint
Procedural Advml (⭐ 40): Task-agnostic universal black-box attacks on computer vision neural networks via procedural noise (CCS'19)
Beyond Imagenet Attack (⭐ 40): Beyond ImageNet Attack (ICLR 2022): towards crafting adversarial examples for black-box domains.
Advtrajectoryprediction (⭐ 40): Implementation of the CVPR 2022 paper "On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles" (https://arxiv.org/abs/2201.05057)
Vafa (⭐ 39): [MICCAI 2023] Official code repository of the paper "Frequency Domain Adversarial Training for Robust Volumetric Medical Segmentation"
Advis.js (⭐ 39): [TensorFlow.js] AdVis: exploring real-time adversarial attacks in the browser with the Fast Gradient Sign Method.
Perceptual Advex (⭐ 39): Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models".
Ssah Adversarial Attack (⭐ 37): Code for the paper "Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity"
Adversarial Information Bottleneck (⭐ 37): Official PyTorch implementation of "Distilling Robust and Non-Robust Features in Adversarial Examples by Information Bottleneck" (NeurIPS 2021)
1-100 of 290 search results
Copyright 2018-2024 Awesome Open Source. All rights reserved.