Project Name | Description | Stars | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language
---|---|---|---|---|---|---|---|---|---|---
Keras Rl | Deep Reinforcement Learning for Keras. | 5,348 | 51 | 2 | 5 months ago | 7 | June 01, 2018 | 43 | mit | Python
Deep Q Learning | Minimal Deep Q Learning (DQN & DDQN) implementations in Keras | 894 | | | 3 years ago | | | 12 | mit | Python
Deep Rl Keras | Keras Implementation of popular Deep RL Algorithms (A3C, DDQN, DDPG, Dueling DDQN) | 485 | | | 3 years ago | | | 12 | | Python
Keras Flappybird | Using Keras and Deep Q-Network to Play FlappyBird | 392 | | | 4 years ago | | | 12 | | Python
Openai_lab | An experimentation framework for Reinforcement Learning using OpenAI Gym, Tensorflow, and Keras. | 314 | | | 5 years ago | | | | mit | Python
Machine Learning Is All You Need | 🔥🌟《Machine Learning 格物志》: ML + DL + RL basic codes and notes by sklearn, PyTorch, TensorFlow, Keras & the most important, from scratch!💪 This repository is ALL You Need! | 253 | | | a year ago | | | | | Python
Dqn | Basic DQN implementation | 194 | | | 5 years ago | | | 12 | mit | Python
Dqn | DQN implementation in Keras + TensorFlow + OpenAI Gym | 116 | | | 5 years ago | | | 8 | | Python
Openaigym | Solving OpenAI Gym problems. | 86 | | | 2 years ago | | | 2 | | Python
Tensorflow Practice | Tutorials of Tensorflow for beginners with popular data sets and projects. Let's have fun learning Machine Learning with Tensorflow. | 80 | | | 3 years ago | | | | mit | Jupyter Notebook
# Introduction to Making a Simple Game AI with Deep Reinforcement Learning
Minimal and Simple Deep Q Learning Implementation in Keras and Gym. Under 100 lines of code!
The explanation for the `dqn.py` code is covered in the blog article <https://keon.io/deep-q-learning/>.
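For orientation, here is a minimal sketch of the kind of agent that article walks through: an epsilon-greedy DQN with a bounded replay buffer and a small Keras network. It is illustrative rather than the repository's exact code, and the hyperparameters and layer sizes are assumptions.

```python
# Minimal DQN agent sketch (illustrative, not the repo's exact dqn.py).
# States are expected as 2-D arrays of shape (1, state_size).
import random
from collections import deque

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam


class DQNAgent:
    def __init__(self, state_size, action_size):
        self.state_size = state_size
        self.action_size = action_size
        self.memory = deque(maxlen=2000)  # bounded replay buffer
        self.gamma = 0.95                 # discount factor
        self.epsilon = 1.0                # exploration rate
        self.epsilon_min = 0.01
        self.epsilon_decay = 0.995
        self.model = self._build_model()

    def _build_model(self):
        # Small fully connected net mapping a state to one Q-value per action.
        model = Sequential()
        model.add(Dense(24, input_dim=self.state_size, activation='relu'))
        model.add(Dense(24, activation='relu'))
        model.add(Dense(self.action_size, activation='linear'))
        model.compile(loss='mse', optimizer=Adam(lr=0.001))
        return model

    def remember(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def act(self, state):
        if np.random.rand() <= self.epsilon:
            return random.randrange(self.action_size)   # explore
        return np.argmax(self.model.predict(state)[0])  # exploit

    def replay(self, batch_size):
        # Fit the network toward the Bellman target on a random minibatch.
        for state, action, reward, next_state, done in random.sample(
                self.memory, batch_size):
            target = reward
            if not done:
                # r + gamma * max_a' Q(s', a')
                target += self.gamma * np.amax(self.model.predict(next_state)[0])
            target_f = self.model.predict(state)
            target_f[0][action] = target
            self.model.fit(state, target_f, epochs=1, verbose=0)
        if self.epsilon > self.epsilon_min:
            self.epsilon *= self.epsilon_decay
```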
I made minor tweaks to this repository, such as adding `load` and `save` functions for convenience.
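Concretely, `load` and `save` can be thin wrappers around Keras's standard `load_weights`/`save_weights` calls; something like this on the agent class sketched above (the method bodies are a guess at the shape, not the repo's exact code):

```python
    def load(self, name):
        # Restore previously saved weights into an identically built network.
        self.model.load_weights(name)

    def save(self, name):
        # Persist the current weights to disk as an HDF5 file.
        self.model.save_weights(name)
```

A training script can then call something like `agent.save("cartpole-dqn.h5")` at the end of a run and `agent.load("cartpole-dqn.h5")` to resume later (the file name here is made up).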
I also made the `memory` a deque instead of just a list, in order to limit the maximum number of elements held in memory.
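A deque created with `maxlen` enforces that cap by itself: appending past capacity silently evicts the oldest entry. A toy illustration (the capacity here is arbitrary):

```python
from collections import deque

memory = deque(maxlen=3)
for transition in range(5):
    memory.append(transition)

print(memory)  # deque([2, 3, 4], maxlen=3) -- items 0 and 1 were evicted
```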
The training might be unstable for `dqn.py`. This problem is mitigated in `ddqn.py`.
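Much of that instability comes from the max operator in the Q-learning target: the same network both picks and scores the best next action, so errors compound into overestimated Q-values. Double DQN decouples the two roles, which is the standard fix the `ddqn.py` name suggests. The sketch below contrasts the two targets; the function names and the separate `target_model` are illustrative, not necessarily the repo's exact structure.

```python
import numpy as np

def dqn_target(model, reward, next_state, gamma, done):
    # Vanilla DQN: one network both selects and evaluates the next action,
    # which tends to overestimate Q-values.
    if done:
        return reward
    return reward + gamma * np.amax(model.predict(next_state)[0])

def ddqn_target(model, target_model, reward, next_state, gamma, done):
    # Double DQN: the online network selects the action, a slowly updated
    # target network evaluates it, damping the overestimation.
    if done:
        return reward
    best_action = np.argmax(model.predict(next_state)[0])
    return reward + gamma * target_model.predict(next_state)[0][best_action]
```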
I'll cover `ddqn` in the next article.