Awesome Open Source
Search results for: python adversarial attacks
Filters: adversarial-attacks, python
178 search results found
Zoo_attack_pytorch (⭐ 16): This repository contains the PyTorch implementation of Zeroth Order Optimization Based Adversarial Black Box Attack (https://arxiv.org/abs/1708.03999)
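The core idea behind ZOO is that a black-box attacker can estimate input gradients purely from loss queries, using coordinate-wise finite differences. A minimal sketch of that estimator on a toy quadratic loss (not taken from the repository; the loss function here is purely illustrative):

```python
import numpy as np

def zoo_coordinate_gradient(f, x, i, h=1e-4):
    """Estimate df/dx_i by symmetric finite differences (the core of ZOO).

    Only requires query access to f, never its gradient."""
    e = np.zeros_like(x)
    e[i] = h
    return (f(x + e) - f(x - e)) / (2 * h)

# Toy "black-box" loss: we can query it but not differentiate it.
f = lambda x: float(np.sum(x ** 2))
x = np.array([1.0, -2.0, 3.0])
g = np.array([zoo_coordinate_gradient(f, x, i) for i in range(x.size)])
# True gradient of sum(x^2) is 2x, so g is close to [2, -4, 6].
```

The full attack then plugs these estimates into a gradient-based optimizer; the paper also adds coordinate sampling and dimension reduction to keep the query count manageable.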
Ga Attack (⭐ 16): SaTML 2023; 1st place in the CVPR'21 Security AI Challenger: Unrestricted Adversarial Attacks on ImageNet.
Mesh Attack (⭐ 16): Our code for the paper "3D Adversarial Attacks Beyond Point Cloud".
Domain Shift Robustness (⭐ 16): Code for the paper "Addressing Model Vulnerability to Distributional Shifts over Image Transformation Sets", ICCV 2019
Csa (⭐ 16): Official implementation of the CVPR 2020 paper "Cooling-Shrinking Attack"
Composite Adv (⭐ 15): [CVPR23] "Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations" by Lei Hsiung, Yun-Yun Tsai, Pin-Yu Chen, and Tsung-Yi Ho.
Vllm Safety Benchmark (⭐ 15): Official PyTorch implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs"
Leba (⭐ 15): [NeurIPS'20] Learning Black-Box Attackers with Transferable Priors and Query Feedback
Attack Imagenet (⭐ 15): 2nd-place solution of the Tianchi ImageNet Adversarial Attack Challenge.
Segmentandcomplete (⭐ 14): Official implementation of the Segment and Complete (SAC) defense.
Sga (⭐ 14): Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models. [ICCV 2023]
Augmented_lagrangian_adversarial_attacks (⭐ 14): Code for the ICCV 2021 paper "Augmented Lagrangian Adversarial Attacks"
Transfer_attack_rap (⭐ 14): Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022)
Chainer Adversarial Examples (⭐ 14): Adversarial attack methods, FGSM and TGSM, implemented in Chainer
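FGSM, which that repo implements in Chainer, perturbs the input one step along the sign of the input-gradient of the loss: x_adv = x + ε·sign(∇ₓL). A minimal self-contained sketch on a logistic-regression loss, where the gradient has a closed form (the weights and point below are illustrative, not from the repository):

```python
import numpy as np

def fgsm(x, y, w, eps):
    """Fast Gradient Sign Method on a logistic-regression loss.

    Loss: L = log(1 + exp(-y * w.x)); its input-gradient is
    -y * sigmoid(-y * w.x) * w, so the attack steps along its sign."""
    margin = y * w.dot(x)
    grad = -y * (1.0 / (1.0 + np.exp(margin))) * w  # dL/dx in closed form
    return x + eps * np.sign(grad)

w = np.array([1.0, -1.0])
x = np.array([0.5, -0.5])          # correctly classified as y = +1 (w.x = 1 > 0)
x_adv = fgsm(x, y=+1, w=w, eps=0.6)
# The perturbation pushes x against w: w.x_adv = -0.2, flipping the prediction.
```

TGSM (the targeted variant) instead steps along the negative gradient of the loss with respect to a chosen target label.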
Contranet (⭐ 14): Official implementation of ContraNet (NDSS 2022).
Pcfa (⭐ 13): [ECCV 2022 Oral] Source code for "A Perturbation-Constrained Adversarial Attack for Evaluating the Robustness of Optical Flow"
Mdattack (⭐ 13)
St Data (⭐ 13): Official source code of the paper "Exploring Effective Data for Surrogate Training Towards Black-box Attack", accepted to CVPR 2022
Dgslow (⭐ 13): Codebase for the ACL 2023 paper "White-Box Multi-Objective Adversarial Attack on Dialogue Generation".
Robustbnns (⭐ 12): Code for the paper "Robustness of Bayesian Neural Networks to Gradient-Based Attacks"
Adverserial_attack (⭐ 12): Different adversarial attack methods implemented in PyTorch on the CIFAR-10 dataset
Ssa (⭐ 12): Spectrum Simulation Attack (ECCV 2022 Oral) for boosting the transferability of adversarial examples
Robust Principles (⭐ 12): Robust Principles: Architectural Design Principles for Adversarially Robust CNNs
Contrastive Poisoning (⭐ 12): [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning
Adv Reid (⭐ 12): Metric Adversarial Attacks and Defense
Bert Adv Embed (⭐ 12): Adversarial perturbations on word embeddings of BERT
Fooling_network_interpretation (⭐ 12): Official PyTorch implementation for the ICCV 2019 paper "Fooling Network Interpretation in Image Classification"
Sparse Imperceivable Attacks (⭐ 12): Sparse and Imperceivable Adversarial Attacks (accepted to ICCV 2019).
Face Adversarial Attack (⭐ 12): An easy approach for the "Facial Adversary Examples" competition in TIANCHI
Simp Gcn (⭐ 12): Implementation of the WSDM 2021 paper "Node Similarity Preserving Graph Convolutional Networks"
3d Neural Network Adversarial Attacks (⭐ 11): Research on adversarial attacks and defenses for deep neural network 3D point cloud classifiers like PointNet and PointNet++.
Leveraging Adversarial Examples To Quantify Membership Information Leakage (⭐ 11)
Bev Attack (⭐ 10): [TMLR'24] On the Adversarial Robustness of Camera-based 3D Object Detection
Smoothfool (⭐ 10): SmoothFool: An Efficient Framework for Computing Smooth Adversarial Perturbations
Uae Rs (⭐ 10): Universal Adversarial Examples in Remote Sensing: Methodology and Benchmark
Grnn (⭐ 10): Official implementation of "GRNN: Generative Regression Neural Network - A Data Leakage Attack for Federated Learning"
Assuda (⭐ 10): Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation (ICCV 2021 Oral)
Nips 2018 Adversarial Vision Challenge (⭐ 10): Code, documents, and deployment configuration files related to our participation in the 2018 NIPS Adversarial Vision Challenge "Robust Model Track"
Sa_dqn (⭐ 10): [NeurIPS 2020, Spotlight] State-Adversarial DQN (SA-DQN) for robust deep reinforcement learning
Decepticonlp (⭐ 10): Python library for robustness monitoring and adversarial debugging of NLP models
Verinet (⭐ 10): The VeriNet toolkit for verification of neural networks
Saga (⭐ 10): SAGA: Spectral Adversarial Geometric Attack on 3D Meshes (ICCV 2023)
Stereoscopic Universal Perturbations (⭐ 9): PyTorch implementation of Stereoscopic Universal Perturbations across Different Architectures and Datasets (CVPR 2022)
Reap Benchmark (⭐ 9): REAP: A Large-Scale Realistic Adversarial Patch Benchmark
Tsfool (⭐ 9): Repository of the TSFool method proposed in the paper "TSFool: Crafting Highly-Imperceptible Adversarial Time Series through Multi-Objective Attack".
Sada (⭐ 9): SADA: Semantic Adversarial Diagnostic Attacks for Autonomous Applications (AAAI 2020)
White 2 Black (⭐ 9): Official code to reproduce results from the NAACL 2019 paper "White-to-Black: Efficient Distillation of Black-Box Adversarial Attacks"
Zeroe (⭐ 9): From Hero to Zéroe: A Benchmark of Low-Level Adversarial Attacks
Vanilla Adversarial Training (⭐ 8): Vanilla training and adversarial training in PyTorch
Meta Adversarial Training (⭐ 8): TensorFlow implementation of Meta Adversarial Training for Adversarial Patch Attacks on Tiny ImageNet.
Mair (⭐ 8): PyTorch implementation of adversarial defenses [Fantastic Robustness Measures: The Secrets of Robust Generalization, NeurIPS 2023].
Eegadversary (⭐ 8): A toolbox for constructing adversarial examples of EEG signals. Traditional EEG feature-extraction methods and classifiers are re-implemented in TensorFlow.
Featurespaceattack (⭐ 8): Code for the AAAI 2021 paper "Towards Feature Space Adversarial Attack".
Pytorch Gnn Meta Attack (⭐ 8): PyTorch implementation of the GNN meta attack (Mettack) from the paper "Adversarial Attacks on Graph Neural Networks via Meta Learning".
Linear Region Attack (⭐ 8): A powerful white-box adversarial attack that exploits knowledge about the geometry of neural networks to find minimal adversarial perturbations without doing gradient descent
It Defense (⭐ 8): Our code for the paper "The Art of Defense: Letting Networks Fool the Attacker", IEEE Transactions on Information Forensics and Security, 2023
Odi (⭐ 8): [CVPR 2022] Official implementation of the Object-based Diverse Input (ODI) method
Metaadvdet (⭐ 8): Official PyTorch implementation of the ACM MM 19 paper "MetaAdvDet: Towards Robust Detection of Evolving Adversarial Attacks"
Foolyourvllms (⭐ 8): Code for the paper "Fool Your (Vision and) Language Model With Embarrassingly Simple Permutations"
Attack_vae (⭐ 7): Diagnosing Vulnerability of Variational Auto-Encoders to Adversarial Attacks
Tth (⭐ 7): Source code of the ICASSP 2023 paper "Towards Making a Trojan-horse Attack on Text-to-Image Retrieval".
Sacnet (⭐ 7): Self-Attention Context Network: Addressing the Threat of Adversarial Attacks for Hyperspectral Image Classification
Transferattacksurrogates (⭐ 7): Official code of the IEEE S&P 2024 paper "Why Does Little Robustness Help? A Further Step Towards Understanding Adversarial Transferability". Studies how to train surrogate models to boost transfer attacks.
Fda (⭐ 7): Code of the recently published attack FDA: Feature Disruptive Attack. Colab notebook: https://colab.research.google.com/drive/1WhkKCrzFq
Defending Against Backdoors With Robust Learning Rate (⭐ 7): Code of the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate".
Adversarial_attack_on_rnn (⭐ 7): Performing the C&W attack on a recurrent neural network
Stereopagnosia (⭐ 7): PyTorch implementation of "Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations" (AAAI 2021)
Robustadversarialnetwork (⭐ 7): A PyTorch re-implementation of the paper "Towards Deep Learning Models Resistant to Adversarial Attacks"
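"Towards Deep Learning Models Resistant to Adversarial Attacks" is the paper that introduced PGD adversarial training: its inner loop repeatedly takes sign-gradient steps and projects back into an L∞ ball around the clean input. A minimal sketch of that inner loop on a toy linear loss (illustrative only, not the repository's code):

```python
import numpy as np

def pgd(grad_fn, x0, eps=0.3, alpha=0.1, steps=10):
    """Projected Gradient Descent attack: step along the gradient sign,
    then project back into the L-inf ball of radius eps around x0."""
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(grad_fn(x))
        x = np.clip(x, x0 - eps, x0 + eps)  # projection step
    return x

# Toy loss to maximize: L(x) = w.x, whose gradient is the constant w.
w = np.array([1.0, -2.0, 0.5])
x0 = np.zeros(3)
x_adv = pgd(lambda x: w, x0)
# Each coordinate saturates at +/- eps in the direction of sign(w):
# x_adv == [0.3, -0.3, 0.3]
```

Adversarial training then minimizes the outer training loss on these worst-case inputs instead of the clean ones.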
Sa_ppo (⭐ 6): [NeurIPS 2020 Spotlight] State-adversarial PPO for robust deep reinforcement learning
Semanticshield (⭐ 6): A security toolkit for managing generative AI (especially LLMs) and supervised learning processes (learning and inference).
Maya (⭐ 6): Code base for the EMNLP 2021 paper "Multi-granularity Textual Adversarial Attack with Behavior Cloning".
Inn (⭐ 6): Detecting Failure Modes in Image Reconstructions with Interval Neural Network Uncertainty
Adversarialconvex (⭐ 6): TensorFlow implementation for generating adversarial examples using convex programming
Non Adversarial_backdoor (⭐ 6): Implementation of "Beating Backdoor Attack at Its Own Game" (ICCV-23).
Learning To Break Deep Perceptual Hashing (⭐ 6): Source code for the ACM FAccT 2022 paper "Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash"
Gfcs (⭐ 6): Code for the ICLR 2022 paper "Attacking deep networks with surrogate-based adversarial black-box methods is easy"
Morphence (⭐ 6): An implementation of a moving-target defense against adversarial example attacks, demonstrated for image classification models trained on MNIST and CIFAR-10.
Capsule_network_tensorflow (⭐ 6): Capsule Network implementation in TensorFlow
Dnnf (⭐ 6): Deep Neural Network Falsification
Adversarial Machine Learning (⭐ 6): Hands-on tutorial on adversarial examples 😈. With Streamlit app ❤️.
Keras_adversarial_attack (⭐ 5): Implementation of "Explaining and Harnessing Adversarial Examples" (2014).
Transferable_perturbations (⭐ 5): [NeurIPS 2021] Code release of "Learning Transferable Perturbations"
Adversarial_robustness_zsl (⭐ 5): [ECCV 2020 AROW Workshop] A Deep Dive into Adversarial Robustness in Zero-Shot Learning
Linkteller (⭐ 5): LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis
Solution For Aisafety Cvpr2022 (⭐ 5): A simple and effective solution for AISafety CVPR 2022, ranked 5th
Composite Adv (⭐ 5): [CVPR23] "Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations" by Lei Hsiung, Yun-Yun Tsai, Pin-Yu Chen, and Tsung-Yi Ho.
Vbad (⭐ 5): Black-box Adversarial Attacks on Video Recognition Models (VBAD)
101-178 of 178 search results
Copyright 2018-2024 Awesome Open Source. All rights reserved.