| Project | Stars | Most Recent Commit | License | Language | Description |
|---|---|---|---|---|---|
| Transformers | 112,606 | 5 hours ago | apache-2.0 | Python | 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX. |
| Keras | 59,450 | 5 hours ago | apache-2.0 | Python | Deep Learning for humans |
| Real Time Voice Cloning | 47,152 | 6 days ago | other | Python | Clone a voice in 5 seconds to generate arbitrary speech in real-time |
| Ray | 27,952 | 5 hours ago | apache-2.0 | Python | Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads. |
| Netron | 24,090 | 19 hours ago | mit | JavaScript | Visualizer for neural network, deep learning, and machine learning models |
| D2l En | 18,967 | a month ago | other | Python | Interactive deep learning book with multi-framework code, math, and discussions. Adopted at 500 universities from 70 countries including Stanford, MIT, Harvard, and Cambridge. |
| Ncnn | 17,982 | a day ago | other | C++ | ncnn is a high-performance neural network inference framework optimized for the mobile platform |
| Datasets | 17,204 | 2 days ago | apache-2.0 | Python | 🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools |
| Onnx | 15,615 | 2 days ago | apache-2.0 | Python | Open standard for machine learning interoperability |
| Deeplearning Models | 15,594 | 7 months ago | mit | Jupyter Notebook | A collection of various deep learning architectures, models, and tips |
Foolbox is a Python library that lets you easily run adversarial attacks against machine learning models like deep neural networks. It is built on top of EagerPy and works natively with models in PyTorch, TensorFlow, and JAX.
## Design

Foolbox 3 has been rewritten from scratch using EagerPy instead of NumPy to achieve native performance on models developed in PyTorch, TensorFlow, and JAX, all with one code base and without code duplication.
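The single-code-base idea can be illustrated with a toy adapter (this is a hedged sketch of the concept, not EagerPy's actual API): the attack logic is written once against a small tensor interface, and each framework supplies its own implementation of that interface.

```python
import numpy as np

class NumPyAdapter:
    """Toy backend adapter; EagerPy plays this role for PyTorch/TF/JAX tensors."""
    def __init__(self, raw):
        self.raw = np.asarray(raw, dtype=np.float64)

    def add(self, other):
        return NumPyAdapter(self.raw + other)

    def clip(self, lo, hi):
        return NumPyAdapter(np.clip(self.raw, lo, hi))

def clipped_perturbation(x, delta, bounds=(0.0, 1.0)):
    # Written once against the adapter interface: add a perturbation,
    # then clip back into the model's valid input range.
    return x.add(delta).clip(*bounds)

x = NumPyAdapter([0.2, 0.9, 0.5])
print(clipped_perturbation(x, 0.3).raw)  # → [0.5 1.  0.8]
```

A second backend would only need to provide the same `add` and `clip` methods; the attack body stays untouched.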
## Installation

```shell
pip install foolbox
```
Foolbox is tested with Python 3.8 and newer; it will most likely also work with Python 3.6 and 3.7. To use it with PyTorch, TensorFlow, or JAX, the respective framework needs to be installed separately. These frameworks are not declared as dependencies because not everyone wants to install all of them, and because some of them ship different builds for different architectures and CUDA versions. Beyond that, all essential dependencies are installed automatically.
You can see the versions we currently use for testing in the Compatibility section below, but newer versions are in general expected to work.
## Example

```python
import foolbox as fb

model = ...  # any PyTorch model; inputs are expected to lie in [0, 1]
fmodel = fb.PyTorchModel(model, bounds=(0, 1))

# images and labels are framework-native tensors within the model's bounds,
# e.g. obtained via fb.samples(fmodel, dataset="imagenet", batchsize=16)
attack = fb.attacks.LinfPGD()
epsilons = [0.0, 0.001, 0.01, 0.03, 0.1, 0.3, 0.5, 1.0]
_, advs, success = attack(fmodel, images, labels, epsilons=epsilons)
```
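The `success` result is a boolean tensor with one row per epsilon and one column per sample; robust accuracy at each epsilon is the fraction of samples the attack failed on. A minimal NumPy sketch of that bookkeeping (the attack itself returns framework-native tensors, and the matrix below is hypothetical illustration data):

```python
import numpy as np

# Hypothetical success matrix: rows = epsilons, columns = samples.
# True means the attack found an adversarial example for that sample.
success = np.array([
    [False, False, False, False],  # eps = 0.0
    [True,  False, False, True],   # eps = 0.1
    [True,  True,  True,  True],   # eps = 0.3
])

# One robust-accuracy value per epsilon: 1.0, 0.5, 0.0
robust_accuracy = 1.0 - success.mean(axis=-1)
```

Plotting robust accuracy against epsilon gives the usual robustness curve for the model under this attack.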
More examples can be found in the examples folder, e.g. a full ResNet-18 example.
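`LinfPGD` is projected gradient descent under an L-infinity constraint. As a framework-free sketch of a single PGD step (not Foolbox's implementation; `grad_sign` would come from the sign of the loss gradient with respect to the input):

```python
import numpy as np

def linf_pgd_step(x, grad_sign, x_orig, step_size, epsilon, bounds=(0.0, 1.0)):
    # One projected-gradient step under an L-infinity constraint.
    x = x + step_size * grad_sign                       # ascend along the gradient sign
    x = np.clip(x, x_orig - epsilon, x_orig + epsilon)  # project into the eps-ball around the original
    return np.clip(x, *bounds)                          # stay within the model's input bounds
```

Repeating this step for a fixed number of iterations, recomputing the gradient each time, yields the full attack.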
## Citation

If you use Foolbox for your work, please cite our JOSS paper on Foolbox Native (i.e., Foolbox 3.0) and our ICML workshop paper on Foolbox using the following BibTeX entries:

```bibtex
@article{rauber2017foolboxnative,
  doi       = {10.21105/joss.02607},
  url       = {https://doi.org/10.21105/joss.02607},
  year      = {2020},
  publisher = {The Open Journal},
  volume    = {5},
  number    = {53},
  pages     = {2607},
  author    = {Jonas Rauber and Roland Zimmermann and Matthias Bethge and Wieland Brendel},
  title     = {Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX},
  journal   = {Journal of Open Source Software}
}
```

```bibtex
@inproceedings{rauber2017foolbox,
  title     = {Foolbox: A Python toolbox to benchmark the robustness of machine learning models},
  author    = {Rauber, Jonas and Brendel, Wieland and Bethge, Matthias},
  booktitle = {Reliable Machine Learning in the Wild Workshop, 34th International Conference on Machine Learning},
  year      = {2017},
  url       = {http://arxiv.org/abs/1707.04131}
}
```

## Contributions
We welcome contributions of all kinds; please have a look at our development guidelines. In particular, you are invited to contribute new adversarial attacks. If you would like to help, you can also look at the issues labeled "contributions welcome".
## Questions?
If you have a question or need help, feel free to open an issue on GitHub. Once GitHub Discussions becomes publicly available, we will switch to that.
## Performance
Foolbox 3.0 is much faster than Foolbox 1 and 2. A basic performance comparison can be found in the performance folder.
## Compatibility
We currently test with the following versions: