| Project Name | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language |
|---|---|---|---|---|---|---|---|---|---|---|
| Stable Baselines3 | 6,995 | | 79 | | a day ago | 80 | November 17, 2023 | 81 | mit | Python |
| *PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.* | | | | | | | | | | |
| Keras RL | 5,348 | | 51 | 3 | a year ago | 8 | June 01, 2018 | 43 | mit | Python |
| *Deep Reinforcement Learning for Keras.* | | | | | | | | | | |
| CleanRL | 3,724 | | | | 15 hours ago | | | 44 | other | Python |
| *High-quality single-file implementations of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG).* | | | | | | | | | | |
| Stable Baselines | 3,064 | | 25 | 10 | 3 years ago | 31 | April 06, 2021 | | mit | Python |
| *A fork of OpenAI Baselines, implementations of reinforcement learning algorithms.* | | | | | | | | | | |
| MuZero General | 2,203 | | | | 3 months ago | | | 54 | mit | Python |
| *MuZero* | | | | | | | | | | |
| Awesome AI Books | 1,086 | | | | 8 months ago | | | | mit | Jupyter Notebook |
| *Some awesome AI-related books and PDFs for learning and downloading, along with some playground models for learning.* | | | | | | | | | | |
| Rex Gym | 759 | | | | 2 years ago | 17 | October 08, 2020 | 9 | apache-2.0 | Python |
| *OpenAI Gym environments for an open-source quadruped robot (SpotMicro).* | | | | | | | | | | |
| Deep Learning Wizard | 664 | | | | a month ago | | | 2 | mit | HTML |
| *Open-source guides/code for mastering deep learning through to deploying deep learning in production, in PyTorch, Python, C++ and more.* | | | | | | | | | | |
| SUMO-RL | 467 | | | | 4 months ago | 8 | June 15, 2023 | 15 | mit | Python |
| *Reinforcement Learning environments for Traffic Signal Control with SUMO. Compatible with Gymnasium, PettingZoo, and popular RL libraries.* | | | | | | | | | | |
| AgileRL | 438 | | | | 13 hours ago | | | 1 | apache-2.0 | Python |
| *Streamlining reinforcement learning with RLOps.* | | | | | | | | | | |
Overview paper | Reinforcement learning paper | Quickstart | Install guide | Reference docs | Release notes
The gym-electric-motor (GEM) package is a Python toolbox for the simulation and control of various electric motors. It is built upon Farama Gymnasium environments and can therefore be used both for classical control simulation and for reinforcement learning experiments. It allows you to construct a typical drive train from the usual building blocks, i.e., supply voltages, converters, electric motors and load models, and provides not only a closed-loop simulation of this physical structure, but also a rich interface for plugging in any decision-making algorithm, from linear feedback control to Deep Deterministic Policy Gradient agents.
An easy way to get started with GEM is by playing around with the following interactive notebooks in Google Colaboratory. They showcase the most important features of GEM as well as application demonstrations, giving engineers in industry and academia a kickstart.
There is a list of standalone example scripts as well for minimalistic demonstrations.
A basic routine is as simple as:

```python
import gym_electric_motor as gem

if __name__ == '__main__':
    env = gem.make("Finite-CC-PMSM-v0")  # instantiate a discretely controlled PMSM
    env.reset()
    for _ in range(10000):
        (states, references), rewards, done, _ = \
            env.step(env.action_space.sample())  # pick random control actions
        if done:
            (states, references), _ = env.reset()
    env.close()
```
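The random-action loop above generalizes to any policy. As an illustration only, the sketch below wraps that loop in a helper that collects the return of each finished episode; it assumes the `(states, references), reward, done, info` step signature shown above, and `ToyEnv` / `ToyActionSpace` are hypothetical stand-ins for this sketch, not part of GEM:

```python
import random

class ToyActionSpace:
    """Hypothetical stand-in for an environment's action space."""
    def sample(self):
        return random.choice([0, 1])

class ToyEnv:
    """Hypothetical stand-in mimicking the step/reset interface used above."""
    action_space = ToyActionSpace()

    def __init__(self, episode_len=100):
        self.episode_len = episode_len
        self.t = 0

    def reset(self):
        self.t = 0
        return (None, None), {}

    def step(self, action):
        self.t += 1
        done = self.t >= self.episode_len
        return (None, None), 1.0, done, {}

def run_random_policy(env, n_steps=1000):
    """Run random actions, collecting the return of every finished episode."""
    returns, ep_ret = [], 0.0
    env.reset()
    for _ in range(n_steps):
        (states, references), reward, done, _ = env.step(env.action_space.sample())
        ep_ret += reward
        if done:
            returns.append(ep_ret)
            ep_ret = 0.0
            env.reset()
    return returns

print(run_random_policy(ToyEnv()))  # ten episodes, each with return 100.0
```

Replacing `env.action_space.sample()` with the output of a controller or learned agent turns this into a simple evaluation loop.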
A release version can be installed via pip:

```
pip install gym-electric-motor
```

Alternatively, install the latest development version from source:

```
git clone [email protected]:upb-lea/gym-electric-motor.git
cd gym-electric-motor
# Then either
python setup.py install
# or alternatively
pip install -e .
```
A GEM environment consists of the following building blocks:
Among various DC-motor models, the following AC motors, together with their power electronic counterparts, are available:
The converters can be driven by means of a duty cycle (continuous control set) or switching commands (finite control set).
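The relation between the two control sets can be sketched with a toy carrier-comparison PWM: a continuous duty cycle is compared against a sawtooth carrier to produce finite on/off switching commands whose average over one carrier period recovers the duty cycle. This is a conceptual illustration only, not GEM's internal converter model:

```python
def pwm_switch_commands(duty_cycle, n_samples=100):
    """Compare a continuous duty cycle against a rising sawtooth carrier,
    yielding finite on/off switching commands over one carrier period."""
    return [1 if duty_cycle > k / n_samples else 0 for k in range(n_samples)]

commands = pwm_switch_commands(0.25)
print(sum(commands) / len(commands))  # mean switching state equals the duty cycle: 0.25
```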
A white paper for the general toolbox in the context of drive simulation and control prototyping can be found in the Journal of Open Source Software (JOSS). Please use the following BibTeX entry for citing it:
```bibtex
@article{Balakrishna2021,
  doi = {10.21105/joss.02498},
  url = {https://doi.org/10.21105/joss.02498},
  year = {2021},
  publisher = {The Open Journal},
  volume = {6},
  number = {58},
  pages = {2498},
  author = {Praneeth {Balakrishna} and Gerrit {Book} and Wilhelm {Kirchgässner} and Maximilian {Schenke} and Arne {Traue} and Oliver {Wallscheid}},
  title = {gym-electric-motor (GEM): A Python toolbox for the simulation of electric drive systems},
  journal = {Journal of Open Source Software}
}
```
A white paper on the utilization of this framework within reinforcement learning is available at IEEE Xplore (preprint: arxiv.org/abs/1910.09434). Please use the following BibTeX entry for citing it:
```bibtex
@article{9241851,
  author = {Traue, Arne and Book, Gerrit and Kirchgässner, Wilhelm and Wallscheid, Oliver},
  journal = {IEEE Transactions on Neural Networks and Learning Systems},
  title = {Toward a Reinforcement Learning Environment Toolbox for Intelligent Electric Motor Control},
  year = {2022},
  volume = {33},
  number = {3},
  pages = {919-928},
  doi = {10.1109/TNNLS.2020.3029573}
}
```
To run the unit tests, `pytest` is required. All tests can be found in the `tests` folder. Execute pytest in the project's root folder:

```
>>> pytest
```

or with test coverage:

```
>>> pytest --cov=./
```
All tests shall pass.