Powerful Benchmarker

A library for ML benchmarking. It's powerful.

Which git branch should you check out?

Currently I can provide technical support (help with code, bug fixes, etc.) for the domain-adaptation branch only.

Installation

Clone this repo:

git clone https://github.com/KevinMusgrave/powerful-benchmarker.git

Then go into the folder and install the required packages:

cd powerful-benchmarker
pip install -r requirements.txt
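If you plan to use the domain-adaptation branch mentioned above, you may want to check it out before installing the requirements, since they could differ between branches:

git checkout domain-adaptation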

Set paths in constants.yaml

  • exp_folder: experiments will be saved as sub-folders inside of exp_folder
  • dataset_folder: datasets will be downloaded here. For example, <dataset_folder>/mnistm
  • conda_env: (optional) the conda environment that will be activated for slurm jobs
  • slurm_folder: slurm logs will be saved to <exp_folder>/.../<slurm_folder>
  • gdrive_folder: (optional) the Google Drive folder to which logs can be uploaded
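For example, a minimal constants.yaml might look like this (the values below are placeholders, not defaults; adjust them to your own setup):

exp_folder: /home/yourname/experiments
dataset_folder: /home/yourname/datasets
conda_env: powerful_benchmarker   # optional
slurm_folder: slurm_logs
gdrive_folder: <your Google Drive folder>   # optional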

Folder organization

Visit each folder to view its readme file.

  • latex: Code for creating LaTeX tables from experiment data.
  • notebooks: Jupyter notebooks.
  • powerful_benchmarker: Code for hyperparameter searches for training models.
  • scripts: Various bash scripts, including scripts for uploading logs to Google Drive.
  • unit_tests: Tests to check if there are bugs.
  • validator_tests: Code for evaluating validation methods (validators).

Useful top-level scripts

delete_slurm_logs.py

Delete all slurm logs:

python delete_slurm_logs.py --delete

Or delete slurm logs for specific experiment groups. For example, delete slurm logs for all experiment groups starting with "officehome":

python delete_slurm_logs.py --delete --exp_group_prefix officehome

kill_all.py

Kill all model training jobs:

python kill_all.py

Or kill all validator test jobs:

python kill_all.py --validator_tests

print_progress.py

Print how many hyperparameter trials are done:

python print_progress.py

Include a detailed summary of validator test jobs:

python print_progress.py --with_validator_progress

Save to progress.txt instead of printing to screen:

python print_progress.py --save_to_file progress.txt

simple_slurm.py

A simple way to run a program via slurm.

For example, run collect_dfs.py for all experiment groups starting with "office31", using a separate slurm job for each experiment group:

python simple_slurm.py --command "python validator_tests/collect_dfs.py" --slurm_config_folder validator_tests \
--slurm_config a100 --job_name=collect_dfs --cpus-per-task=16 --exp_group_prefix office31

Or run a program without considering experiment groups at all:

python simple_slurm.py --command "python validator_tests/zip_dfs.py" --slurm_config_folder validator_tests \
--slurm_config a100 --job_name=zip_dfs --cpus-per-task=16

upload_logs.py

Upload slurm logs and experiment progress to a Google Drive folder at regular intervals (the default is every 2 hours):

python upload_logs.py

Set the Google Drive folder in constants.yaml.
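Because the script keeps running and uploads at regular intervals, you may want to leave it running in the background. One standard way to do that from a shell (not specific to this repo) is:

nohup python upload_logs.py > upload_logs.out 2>&1 &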

Logo

Thanks to Jeff Musgrave for designing the logo.

Citing the paper

@article{Musgrave2022ThreeNew,
  title={Three New Validators and a Large-Scale Benchmark Ranking for Unsupervised Domain Adaptation},
  author={Kevin Musgrave and Serge J. Belongie and Ser Nam Lim},
  journal={ArXiv},
  year={2022},
  volume={abs/2208.07360}
}