MushroomRL: Reinforcement Learning Python library.
MushroomRL is a Python Reinforcement Learning (RL) library whose modularity makes it easy to use well-known Python libraries for tensor computation (e.g. PyTorch, TensorFlow) and RL benchmarks (e.g. OpenAI Gym, PyBullet, DeepMind Control Suite). It lets you run RL experiments in a simple way, providing both classical RL algorithms (e.g. Q-Learning, SARSA, FQI) and deep RL algorithms (e.g. DQN, DDPG, SAC, TD3, TRPO, PPO).
Full documentation and tutorials are available here.
You can do a minimal installation of MushroomRL with:
pip3 install mushroom_rl
MushroomRL also contains some optional components, e.g. support for OpenAI Gym environments, Atari 2600 games from the Arcade Learning Environment, and physics simulators such as PyBullet and MuJoCo.
Support for these classes is not enabled by default.
To install the whole set of features, additional packages are required. You can install everything by running:
pip3 install mushroom_rl[all]
This will install every dependency of MushroomRL except the Plots dependency. On Ubuntu > 20.04, you may need to install the pygame and gym dependencies:
sudo apt -y install libsdl-image1.2-dev libsdl-mixer1.2-dev libsdl-ttf2.0-dev \
libsdl1.2-dev libsmpeg-dev libportmidi-dev ffmpeg libswscale-dev \
libavformat-dev libavcodec-dev swig
Note that on other operating systems you may still need to install some of these dependencies, e.g. swig on macOS.
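Once the optional components are installed, these environments are created like any other MushroomRL environment. As a minimal sketch (the Gym wrapper is part of mushroom_rl.environments, but the constructor arguments shown here are assumptions and may differ between releases):

# Minimal sketch: build a Gym-backed environment through MushroomRL's wrapper
# (assumes the optional Gym extras are installed; argument names are assumptions)
from mushroom_rl.environments import Gym

mdp = Gym('CartPole-v1', horizon=500, gamma=0.99)
print(mdp.info.observation_space, mdp.info.action_space)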
Below are the commands you need to run to install the Plots dependencies:
sudo apt -y install python3-pyqt5
pip3 install mushroom_rl[plots]
You might need to install external dependencies first. For more information about the mujoco-py installation, follow the instructions on the project page.
WARNING! When using conda, there may be issues with Qt. You can fix them by adding the following lines to your code, replacing <conda_base_path> with the path to your conda distribution and <env_name> with the name of the conda environment you are using:
import os
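# Point Qt at the platform plugins shipped with the conda environment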
os.environ['QT_QPA_PLATFORM_PLUGIN_PATH'] = '<conda_base_path>/envs/<env_name>/bin/platforms'
To use the dm_control MushroomRL interface, install dm_control following the instructions that can be found here.
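Once dm_control is installed, a DeepMind Control Suite task can be wrapped as a MushroomRL environment. The sketch below is only an assumption-laden example: the module name mirrors the other environment wrappers and the argument names may differ between versions, so check the documentation:

# Hedged sketch: wrap a dm_control task ('walker' domain, 'stand' task);
# module path and argument names are assumptions, verify against your version
from mushroom_rl.environments.dm_control_env import DMControl

mdp = DMControl('walker', 'stand', horizon=1000, gamma=0.99)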
Habitat and iGibson are simulation platforms providing realistic and sensory-rich learning environments. In MushroomRL, the agent's default observations are RGB images, but RGBD, agent sensory data, and other information can also be used.
If you have previous versions of iGibson or Habitat already installed, we recommend removing them and doing a clean install.
For iGibson, follow the official guide and install its assets and datasets.
For <MUSHROOM_RL PATH>/mushroom-rl/examples/igibson_dqn.py, you need to run:
python -m igibson.utils.assets_utils --download_assets
python -m igibson.utils.assets_utils --download_demo_data
python -m igibson.utils.assets_utils --download_ig_dataset
You can also use third party datasets.
The scene details are defined in a YAML file that needs to be passed to the agent.
See <IGIBSON PATH>/igibson/test/test_house.YAML for an example.
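If you want to quickly see what such a scene file defines, a small sketch like the following can help (it assumes PyYAML is available; replace the placeholder path with the actual file):

# Inspect the top-level keys of an iGibson scene YAML (path is a placeholder)
import yaml

with open('<IGIBSON PATH>/igibson/test/test_house.YAML') as f:
    scene_cfg = yaml.safe_load(f)
print(list(scene_cfg.keys()))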
For Habitat, follow the official guide and do a full install with habitat_baselines. Then you can download interactive datasets following this and this. If you need to download other datasets, you can use this utility.
When you create a Habitat environment, you need to pass a wrapper name and two YAML files: Habitat(wrapper, config_file, base_config_file).
The wrapper has to be among the ones defined in <MUSHROOM_RL PATH>/mushroom-rl/environments/habitat_env.py, and takes care of converting actions and observations to a gym-like format. If your task / robot requires it, you may need to define new wrappers.
The YAML files define every detail: the Habitat environment, the scene, the sensors available to the robot, the rewards, the action discretization, and any additional information you may need. The second YAML file is optional, and overwrites whatever was already defined in the first YAML.
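As a hedged sketch of the call pattern described above (the wrapper name and YAML paths below are placeholders, not actual values; check habitat_env.py for the wrappers available in your version):

# Placeholder wrapper name and config paths, for illustration only
from mushroom_rl.environments.habitat_env import Habitat

mdp = Habitat('HabitatNavigationWrapper',   # a wrapper defined in habitat_env.py
              'my_task_config.yaml',        # main YAML config
              base_config_file='/absolute/path/to/base_task_config.yaml')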
If you use YAMLs from habitat-lab, check if they define a YAML for BASE_TASK_CONFIG_PATH. If they do, you need to pass it as base_config_file to Habitat(). habitat-lab YAMLs, in fact, use relative paths, and calling them from outside its root folder will cause errors.
If you use a dataset, be sure that the path defined in the YAML file is correct, especially if you use relative paths: habitat-lab YAMLs use relative paths, and by default the path defined in the YAML file will be relative to where you launched the python code. If your data folder is somewhere else, you may also create a symbolic link.
For the rearrange task example (habitat_rearrange_sac.py), first download the ReplicaCAD dataset (--data-path data downloads it in the folder from where you are launching your code):
python -m habitat_sim.utils.datasets_download --uids replica_cad_dataset --data-path data
This example uses <HABITAT_LAB PATH>/habitat_baselines/config/rearrange/rl_pick.yaml.
This YAML defines BASE_TASK_CONFIG_PATH: configs/tasks/rearrange/pick.yaml, and since this is a relative path we need to overwrite it by passing its absolute path as the base_config_file argument to Habitat(). pick.yaml defines the dataset to be used with respect to <HABITAT_LAB PATH>.
If you have not used the --data-path argument with the previous download command, the ReplicaCAD dataset is now in <HABITAT_LAB PATH>/data and you need to make a link to it:
ln -s <HABITAT_LAB PATH>/data/ <MUSHROOM_RL PATH>/mushroom-rl/examples/habitat
Then run the example with:
python habitat_rearrange_sac.py
For the navigation task example (habitat_nav_dqn.py), download and extract the Replica scenes.
WARNING! The dataset is very large!
sudo apt-get install pigz
git clone https://github.com/facebookresearch/Replica-Dataset.git
cd Replica-Dataset
./download.sh replica-path
For this task, we only use the custom YAML file pointnav_apartment-0.yaml.
DATA_PATH: "replica_{split}_apartment-0.json.gz" defines the JSON file with some scene details, such as the agent's initial position and orientation. The {split} value is defined in the SPLIT key.
If you want to try new positions, you can sample some from the set of the scene's navigable points. After initializing a Habitat environment, for example mdp = Habitat(...), run mdp.env._env._sim.sample_navigable_point().
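For instance, a short sketch that prints a few candidate positions (assuming mdp is the Habitat environment created above):

# Sample a few candidate start positions from the scene's navigable points
for _ in range(5):
    print(mdp.env._env._sim.sample_navigable_point())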
SCENES_DIR: "Replica-Dataset/replica-path/apartment_0" defines the scene. As said before, this path is relative to where you launch the script, thus we need to link the Replica folder. If you launch habitat_nav_dqn.py from its example folder, run the following link command and then the script:
ln -s <PATH TO>/Replica-Dataset/ <MUSHROOM_RL PATH>/mushroom-rl/examples/habitat
python habitat_nav_dqn.py
You can also perform a local editable installation by using:
pip install --no-use-pep517 -e .
To also install the optional dependencies:
pip install --no-use-pep517 -e .[all]
To run experiments, MushroomRL requires a script file that provides the necessary information for the experiment. Follow the scripts in the "examples" folder to get an idea of how an experiment can be run.
For instance, to run a quick experiment with one of the provided example scripts, run:
python3 examples/car_on_hill_fqi.py
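To give a feel for what such a script contains, here is a minimal sketch in the spirit of the tutorials: a tabular Q-Learning agent on a small grid world. The class names follow the MushroomRL documentation, but the constructor arguments and module paths shown are assumptions and may differ between releases.

# Minimal experiment sketch; values are assumptions, see the MushroomRL
# tutorials for a complete, up-to-date example
from mushroom_rl.algorithms.value import QLearning
from mushroom_rl.core import Core
from mushroom_rl.environments import GridWorld
from mushroom_rl.policy import EpsGreedy
from mushroom_rl.utils.parameters import Parameter

mdp = GridWorld(width=3, height=3, goal=(2, 2), start=(0, 0))  # environment
policy = EpsGreedy(epsilon=Parameter(value=1.))                # exploration policy
agent = QLearning(mdp.info, policy, learning_rate=Parameter(value=.6))

core = Core(agent, mdp)                       # ties agent and environment together
core.learn(n_steps=10000, n_steps_per_fit=1)  # train
dataset = core.evaluate(n_steps=100)          # collect evaluation transitions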
If you are using MushroomRL for your scientific publications, please cite:
@article{JMLR:v22:18-056,
author = {Carlo D'Eramo and Davide Tateo and Andrea Bonarini and Marcello Restelli and Jan Peters},
title = {MushroomRL: Simplifying Reinforcement Learning Research},
journal = {Journal of Machine Learning Research},
year = {2021},
volume = {22},
number = {131},
pages = {1-5},
url = {http://jmlr.org/papers/v22/18-056.html}
}
For any questions, drop us an e-mail at [email protected].
Follow us on Twitter @Mushroom_RL!