| Project Name | Description | Stars | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language |
|---|---|---|---|---|---|---|---|---|---|
| Tianshou | An elegant PyTorch deep reinforcement learning library. | 7,125 | 10 | 8 months ago | 33 | August 22, 2023 | 97 | MIT | Python |
| Deep Reinforcement Learning With Pytorch | PyTorch implementations of DQN, AC, ACER, A2C, A3C, PG, DDPG, TRPO, PPO, SAC, TD3 and .... | 2,741 | | a year ago | | | 26 | MIT | Python |
| Pytorch Rl | PyTorch implementations of policy-gradient methods (TRPO, PPO, A2C) and Generative Adversarial Imitation Learning (GAIL), including a fast Fisher-vector-product TRPO. | 638 | | 4 years ago | | | 6 | MIT | Python |
| Hands On Reinforcement Learning With Python | Master reinforcement learning and deep reinforcement learning using OpenAI Gym and TensorFlow. | 596 | | 4 years ago | | | 2 | | Jupyter Notebook |
| Modular_rl | Implementations of TRPO and related algorithms. | 523 | | 6 years ago | | | 10 | MIT | Python |
| Reinforcement Learning Algorithms | PyTorch implementations of classic deep reinforcement learning algorithms: DQN, DDQN, Dueling Network, DDPG, SAC, A2C, PPO, and TRPO (more in progress). | 407 | | 4 years ago | | | 4 | | Python |
| Reinforcement Implementation | Implementations of benchmark RL algorithms. | 380 | | 2 years ago | | | 1 | | Python |
| Deep_rl | PyTorch implementations of deep reinforcement learning algorithms. | 372 | | 3 years ago | | | 1 | MIT | Python |
| Machine Learning Is All You Need | 《Machine Learning 格物志》: basic ML, DL, and RL code and notes using sklearn, PyTorch, TensorFlow, and Keras, and, most importantly, from scratch. | 337 | | a year ago | | | | | Python |
| Pg_travel | Policy-gradient algorithms (REINFORCE, NPG, TRPO, PPO). | 243 | | 5 years ago | | | 7 | MIT | Python |
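Most of the projects above implement some flavor of policy-gradient learning (REINFORCE, NPG, TRPO, PPO). As a common point of reference, the sketch below shows a minimal REINFORCE training loop in plain PyTorch. It is a generic illustration, not code from any listed repository, and it assumes the `gymnasium` package and the `CartPole-v1` environment.

```python
import gymnasium as gym
import torch
import torch.nn as nn

# Generic REINFORCE sketch; assumes gymnasium and CartPole-v1.
env = gym.make("CartPole-v1")
obs_dim = env.observation_space.shape[0]
n_actions = env.action_space.n

# Small policy network: maps an observation to action logits.
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
gamma = 0.99

for _ in range(500):
    obs, _info = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        # Sample an action from the current stochastic policy.
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, terminated, truncated, _info = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated

    # Discounted returns, accumulated backward from the end of the episode.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns = torch.tensor(list(reversed(returns)))
    # Normalizing returns is a standard variance-reduction trick.
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)

    # Policy-gradient loss: minimize -sum(log_prob * return).
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

TRPO and PPO refine this same gradient: TRPO constrains each update to a trust region (using Fisher-vector products, as in Pytorch Rl above), while PPO clips a surrogate objective to keep updates stable.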