Project | Stars | Downloads | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description |
---|---|---|---|---|---|---|---|---|---|
Evo | 2,474 | 2 | 18 days ago | 80 | May 09, 2022 | 16 | gpl-3.0 | Python | Python package for the evaluation of odometry and SLAM |
Gymfc | 242 | | a year ago | | | 8 | mit | Python | A universal flight control tuning framework |
Bark | 184 | 1 | 10 months ago | 48 | April 10, 2022 | 10 | mit | C++ | Open-source framework for development, simulation and benchmarking of behavior planning algorithms for autonomous driving |
Causalworld | 169 | | 5 months ago | | | 2 | mit | Python | CausalWorld: a robotic manipulation benchmark for causal structure and transfer learning |
Robel | 58 | | 3 years ago | | | 5 | apache-2.0 | Python | ROBEL: robotics benchmarks for learning with low-cost robots |
Form2fit | 57 | | 2 years ago | | | 4 | mit | Python | Train generalizable policies for kit assembly with self-supervised dense correspondence learning |
Policy Adaptation During Deployment | 46 | | 2 years ago | | | | | Python | Training code and evaluation benchmarks for the "Self-Supervised Policy Adaptation during Deployment" paper |
Design Bench | 30 | | a year ago | 27 | October 22, 2021 | 5 | mit | Python | Benchmarks for model-based optimization |
Safe Multi Agent Mujoco | 26 | | 5 days ago | | | 2 | mit | Python | Safe Multi-Agent MuJoCo benchmark for safe multi-agent reinforcement learning research |
Julia Robotics Paper Code | 24 | | 4 years ago | | | | | Jupyter Notebook | Code associated with the paper "Julia for Robotics: Simulation and Real-time Control in a High-level Programming Language" |
This repo contains the benchmarks for the paper "Collision Detection Accelerated: An Optimization Perspective", published at RSS 2022. You can find the paper here and the project page here. There are two main benchmarks: the ellipsoid benchmark (strictly-convex shapes) and the convex mesh benchmark (non-strictly-convex shapes), both intended to compare the GJK algorithm with our method, Nesterov-accelerated GJK.
These benchmarks call the HPP-FCL C++ library, in which both GJK and Nesterov-accelerated GJK are implemented.
For prototyping, we have also reimplemented GJK and Nesterov-accelerated GJK in Python.
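The paper studies GJK as a Frank-Wolfe-type method on the Minkowski difference of the two shapes, which Nesterov momentum then accelerates. As a toy illustration of that optimization view (a minimal NumPy sketch, not the HPP-FCL implementation nor the paper's exact algorithm; `support` and `fw_distance` are our own names):

```python
import numpy as np

def support(vertices, d):
    """Furthest vertex of a convex point cloud in direction d."""
    return vertices[np.argmax(vertices @ d)]

def fw_distance(A, B, n_iter=2000, tol=1e-10):
    """Distance between conv(A) and conv(B), computed by Frank-Wolfe
    iterations on the Minkowski difference D = conv(A) - conv(B).
    GJK solves the same projection problem with extra simplex
    bookkeeping; this sketch omits that and the Nesterov acceleration."""
    x = A[0] - B[0]  # any point of D works as a starting iterate
    for k in range(n_iter):
        # Support point of D in direction -x: one "support oracle" call,
        # the dominant cost per iteration in GJK-type methods.
        s = support(A, -x) - support(B, x)
        if x @ (x - s) <= tol:  # Frank-Wolfe duality gap ~ 0: x is optimal
            break
        gamma = 2.0 / (k + 2)   # standard Frank-Wolfe step size
        x = (1.0 - gamma) * x + gamma * s
    return np.linalg.norm(x)    # 0 when the shapes overlap
```

On two convex point clouds this converges to the separation distance. A boolean collision check corresponds to exiting the loop as soon as `-x @ s < 0`: the hyperplane through `s` with normal `-x` then separates the origin from the difference, proving non-collision.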
To make the install easy, we recommend using conda to isolate the required packages needed to run the benchmarks from your system.
```bash
git clone --recursive https://github.com/lmontaut/collision-detection-benchmark.git && cd collision-detection-benchmark
conda create -n collision_benchmark python=3.8 && conda activate collision_benchmark
conda install cmake pinocchio pandas tqdm qhull
```

For `pinocchio`, add the `conda-forge` channel: `conda config --add channels conda-forge`. Re-run `conda activate collision_benchmark` for the `cmake` path to take effect.

Then build and install HPP-FCL inside the environment:

```bash
mkdir hpp-fcl/build && cd hpp-fcl/build
git submodule update --init
cmake -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX -DCMAKE_BUILD_TYPE=Release -DHPP_FCL_HAS_QHULL=ON ..
make install
```

Finally, go back to the repository root (`cd ../..` if you are in `hpp-fcl/build`) and install this Python library in the conda environment: `pip install -e .`
This was successfully installed and tested on Manjaro 5.15.50 and Ubuntu 20.04. The tested compilers were g++ 9.4.0 and 12.1.0, and clang++ 13.0.1. The required version of Eigen is 3.4.0.
Please visit https://shapenet.org. Download `ShapeNetCore.v2` and place it in `exp/shapenet/data`. To generate a subset of ShapeNet to run the benchmarks, run `python exp/shapenet/generate_subshapenet.py`.
To launch a quick benchmark:

```bash
python exp/continuous_ellipsoids/ellipsoids_quick_benchmark.py [--opts]
python exp/shapenet/shapenet_quick_benchmark.py [--opts]
```

The parameters `--opts` can be:

- `--python`: also run the quick benchmark with the solvers written in Python (off by default)
- `--measure_time`: measure execution times (off by default)
- `--distance_category`: `overlapping`, `close-proximity` or `distant`
- `--num_pairs`: number of collision pairs
- `--num_poses`: number of relative poses between each collision pair

To compare the performance of Nesterov-accelerated GJK and vanilla GJK, we measure both boolean collision detection and distance computation.
We thus measure the following metrics for distance computation:

- `dist_to_vanilla`: distance of the solution found by the solver to the solution found by vanilla GJK
- `numit`: number of iterations to converge
- `execution_time`: execution time of the solver

The prefix `rel` denotes performance relative to vanilla GJK: given a solver, a metric and a collision problem P, we compute `metric of GJK on problem P / metric of solver on problem P`.
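As a concrete illustration, a `rel` column could be computed like this (the DataFrame layout and column names here are our assumption, not the benchmark's actual `.csv` schema):

```python
import pandas as pd

# Hypothetical results table: one row per (solver, collision problem).
df = pd.DataFrame({
    "solver":  ["GJK", "GJK", "Nesterov", "Nesterov"],
    "problem": [0, 1, 0, 1],
    "numit":   [40, 60, 20, 30],
})

# rel_numit = numit of vanilla GJK on problem P / numit of the solver on P.
baseline = df[df["solver"] == "GJK"].set_index("problem")["numit"]
df["rel_numit"] = df["problem"].map(baseline) / df["numit"]
```

With this convention, `rel_numit > 1` means the solver needed fewer iterations than vanilla GJK on that problem (2.0 for the hypothetical Nesterov rows here), and vanilla GJK itself always scores 1.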
We add the suffix `early` to `numit` and `execution_time` to track the performance of the boolean collision check (`early` because a boolean collision check is an early stop of distance computation).

The plots from the paper were obtained from the following benchmarks.
You will need `pandas` to save results to `.csv` files and `jupyter` to plot them: `conda install pandas jupyterlab`.

```bash
./ellipsoids_benchmark.sh
jupyter lab
```

Then go to `plot_exp/continuous_ellipsoids/continuous_ellipsoids_plots.ipynb` and run the notebook.

To cite Nesterov-accelerated GJK and/or the associated benchmarks, please use the following BibTeX lines:
```bibtex
@inproceedings{montaut2022GJKNesterov,
  title = {Collision Detection Accelerated: An Optimization Perspective},
  author = {Montaut, Louis and Le Lidec, Quentin and Petrik, Vladimir and Sivic, Josef and Carpentier, Justin},
  booktitle = {Robotics: Science and Systems},
  year = {2022}
}
```