Project Name | Stars | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description |
---|---|---|---|---|---|---|---|---|
Cleverhans | 5,751 | 2 months ago | 6 | July 24, 2021 | 39 | mit | Jupyter Notebook | An adversarial example library for constructing attacks, building defenses, and benchmarking both |
Merlion | 2,921 | a day ago | 14 | June 28, 2022 | 14 | bsd-3-clause | Python | Merlion: A Machine Learning Framework for Time Series Intelligence |
Advertorch | 1,142 | 10 months ago | 10 | June 15, 2020 | 22 | lgpl-3.0 | Jupyter Notebook | A Toolbox for Adversarial Robustness Research |
Orion | 747 | a day ago | 26 | July 04, 2022 | 49 | mit | Python | A machine learning library for detecting anomalies in signals. |
Elki | 728 | 15 days ago | 3 | February 15, 2019 | 3 | agpl-3.0 | Java | ELKI Data Mining Toolkit |
Rliable | 546 | a month ago | 9 | June 22, 2022 | 1 | apache-2.0 | Jupyter Notebook | [NeurIPS'21 Outstanding Paper] Library for reliable evaluation on RL and ML benchmarks, even with only a handful of seeds. |
Moses | 493 | 2 years ago | | | 14 | mit | Python | Molecular Sets (MOSES): A Benchmarking Platform for Molecular Generation Models |
Kd_lib | 476 | 22 days ago | 8 | May 18, 2022 | 18 | mit | Python | A Pytorch Knowledge Distillation library for benchmarking and extending works in the domains of Knowledge Distillation, Pruning, and Quantization. |
Powerful Benchmarker | 410 | 13 days ago | 34 | September 19, 2020 | 2 | | Python | A library for ML benchmarking. It's powerful. |
Genrl | 375 | a year ago | 4 | March 31, 2020 | 52 | mit | Python | A PyTorch reinforcement learning library for generalizable and reproducible algorithm implementations with an aim to improve accessibility in RL |
Currently I can provide technical support (help with code, bug fixes, etc.) for the domain-adaptation branch only.
Clone this repo:
git clone https://github.com/KevinMusgrave/powerful-benchmarker.git
Then go into the folder and install the required packages:
cd powerful-benchmarker
pip install -r requirements.txt
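Since technical support is offered for the domain-adaptation branch only (see the note above), you will likely want to switch to that branch after cloning. This is a suggested step, and it assumes the branch exists on the remote under exactly that name:

```
git checkout domain-adaptation
```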
Set the following paths in constants.yaml:

- exp_folder: experiments will be saved as sub-folders inside of exp_folder
- dataset_folder: datasets will be downloaded here. For example, <dataset_folder>/mnistm
- conda_env: (optional) the conda environment that will be activated for slurm jobs
- slurm_folder: slurm logs will be saved to <exp_folder>/.../<slurm_folder>
- gdrive_folder: (optional) the google drive folder to which logs can be uploaded
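As a rough sketch, a filled-in constants.yaml might look like the following. The paths, environment name, and folder ID are placeholders for illustration only, not values that ship with this repo:

```yaml
exp_folder: /home/user/experiments       # experiments are saved as sub-folders in here
dataset_folder: /home/user/datasets      # datasets are downloaded here, e.g. /home/user/datasets/mnistm
conda_env: powerful-benchmarker          # optional: conda environment activated for slurm jobs
slurm_folder: slurm_logs                 # slurm logs go to <exp_folder>/.../<slurm_folder>
gdrive_folder: <google-drive-folder-id>  # optional: google drive folder that logs are uploaded to
```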
Visit each folder to view its readme file.

Folder | Description |
---|---|
latex | Code for creating latex tables from experiment data. |
notebooks | Jupyter notebooks |
powerful_benchmarker | Code for hyperparameter searches for training models. |
scripts | Various bash scripts, including scripts for uploading logs to google drive. |
unit_tests | Tests to check if there are bugs. |
validator_tests | Code for evaluating validation methods (validators). |
Delete all slurm logs:
python delete_slurm_logs.py --delete
Or delete slurm logs for specific experiment groups. For example, delete slurm logs for all experiment groups starting with "officehome":
python delete_slurm_logs.py --delete --exp_group_prefix officehome
Kill all model training jobs:
python kill_all.py
Or kill all validator test jobs:
python kill_all.py --validator_tests
Print how many hyperparameter trials are done:
python print_progress.py
Include a detailed summary of validator test jobs:
python print_progress.py --with_validator_progress
Save to progress.txt instead of printing to screen:
python print_progress.py --save_to_file progress.txt
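The two flags are only shown separately above, but presumably they can be combined to write the detailed validator summary to a file:

```
python print_progress.py --with_validator_progress --save_to_file progress.txt
```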
The simple_slurm.py script is a simple way to run a program via slurm.
For example, run collect_dfs.py for all experiment groups starting with "office31", using a separate slurm job for each experiment group:
python simple_slurm.py --command "python validator_tests/collect_dfs.py" --slurm_config_folder validator_tests \
--slurm_config a100 --job_name=collect_dfs --cpus-per-task=16 --exp_group_prefix office31
Or run a program without considering experiment groups at all:
python simple_slurm.py --command "python validator_tests/zip_dfs.py" --slurm_config_folder validator_tests \
--slurm_config a100 --job_name=zip_dfs --cpus-per-task=16
Upload slurm logs and experiment progress to a google drive folder at regular intervals (the default is every 2 hours):
python upload_logs.py
Set the google drive folder in constants.yaml.
Thanks to Jeff Musgrave for designing the logo.
To cite the paper associated with this repository:

@article{Musgrave2022ThreeNew,
title={Three New Validators and a Large-Scale Benchmark Ranking for Unsupervised Domain Adaptation},
author={Kevin Musgrave and Serge J. Belongie and Ser Nam Lim},
journal={ArXiv},
year={2022},
volume={abs/2208.07360}
}