Project Name | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description
---|---|---|---|---|---|---|---|---|---|---|---
Cs Video Courses | 60,304 | | | | 2 days ago | | | 2 | | | List of Computer Science courses with video lectures.
Keras | 59,445 | | | 578 | 10 hours ago | 80 | June 27, 2023 | 98 | apache-2.0 | Python | Deep Learning for humans
Scikit Learn | 55,980 | | 18,944 | 9,755 | 7 hours ago | 71 | June 30, 2023 | 2,253 | bsd-3-clause | Python | scikit-learn: machine learning in Python
100 Days Of Ml Code | 41,216 | | | | 3 months ago | | | 61 | mit | | 100 Days of ML Coding
Deepspeed | 28,611 | | | 53 | 9 hours ago | 68 | July 17, 2023 | 799 | apache-2.0 | Python | DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Coursera Ml Andrewng Notes | 28,489 | | | | a month ago | | | 60 | | HTML | Personal notes on Andrew Ng's machine learning course
Ray | 27,922 | | 80 | 298 | 6 hours ago | 87 | July 24, 2023 | 3,428 | apache-2.0 | Python | Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI libraries for accelerating ML workloads.
Machine Learning For Software Engineers | 26,824 | | | | 5 months ago | | | 24 | cc-by-sa-4.0 | | A complete daily plan for studying to become a machine learning engineer.
Data Science Ipython Notebooks | 25,242 | | | | 3 months ago | | | 34 | other | Python | Data science Python notebooks: Deep learning (TensorFlow, Theano, Caffe, Keras), scikit-learn, Kaggle, big data (Spark, Hadoop MapReduce, HDFS), matplotlib, pandas, NumPy, SciPy, Python essentials, AWS, and various command lines.
Handson Ml | 25,030 | | | | 3 months ago | | | 139 | apache-2.0 | Jupyter Notebook | ⛔️ DEPRECATED – See https://github.com/ageron/handson-ml3 instead.
Have you heard about the amazing results achieved by DeepMind with AlphaGo Zero and by OpenAI in Dota 2? It's all about deep neural networks and reinforcement learning. Do you want to know more about it?
This is the right opportunity for you to finally learn Deep RL and use it on new and exciting projects and applications.
Here you'll find an in-depth introduction to these algorithms: you'll learn Q-learning, deep Q-learning (DQN), PPO, and actor-critic methods, and implement them using Python and PyTorch.
> *The ultimate aim is to use these general-purpose technologies and apply them to all sorts of important real-world problems.* - Demis Hassabis
This repository contains:
Lectures (& other content), primarily from the DeepMind and Berkeley YouTube channels.
Algorithms (like DQN, A2C, and PPO) implemented in PyTorch and tested on OpenAI Gym: RoboSchool & Atari.
Stay tuned and follow #60DaysRLChallenge
We now also have a Slack channel. To get an invitation, email me at [email protected]. Also, email me if you have any ideas, suggestions, or improvements.
To learn Deep Learning, Computer Vision, or Natural Language Processing, check out my 1-Year-ML-Journey
To learn Reinforcement Learning and Deep RL in more depth, check out my book Reinforcement Learning Algorithms with Python!
Table of Contents
> *Those who cannot remember the past are condemned to repeat it.* - George Santayana
This week, we will learn the basic building blocks of reinforcement learning, starting from the definition of the problem all the way through the estimation and optimization of the functions used to express the quality of a policy or state.
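The Bellman optimality backup at the heart of these value functions can be sketched with value iteration on a hypothetical two-state MDP; the transition and reward tables below are made up purely for illustration:

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (illustration only).
# P[s, a] -> deterministic next state, R[s, a] -> reward.
P = np.array([[0, 1],
              [0, 1]])
R = np.array([[0.0, 1.0],
              [2.0, 0.0]])
gamma = 0.9

def value_iteration(P, R, gamma, iters=500):
    """Repeated Bellman backup: V(s) = max_a [R(s,a) + gamma * V(s')]."""
    V = np.zeros(P.shape[0])
    for _ in range(iters):
        V = np.max(R + gamma * V[P], axis=1)
    return V

V = value_iteration(P, R, gamma)
```

Because the backup is a contraction, the loop converges to the optimal state values regardless of the initial `V`.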
Q-learning applied to FrozenLake - As an exercise, you can solve the game using SARSA or implement Q-learning yourself. In the former case, only a few changes are needed.
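A tabular Q-learning loop, the same idea used for FrozenLake, can be sketched like this; to keep the example dependency-free, the environment below is a made-up five-state corridor rather than the Gym FrozenLake:

```python
import numpy as np

# Tabular Q-learning on a made-up five-state corridor (a stand-in for
# FrozenLake): states 0..4, actions 0 = left / 1 = right, reward 1 for
# reaching state 4, which ends the episode.
n_states, n_actions = 5, 2
rng = np.random.default_rng(0)

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    done = s2 == n_states - 1
    return s2, float(done), done

Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.99, 0.1

for _ in range(2000):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the greedy value of the next state.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) * (not done) - Q[s, a])
        s = s2
```

Switching this to SARSA only means bootstrapping from the action actually taken in `s2` instead of the greedy `np.max(Q[s2])`, which is why only a few changes are needed.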
This week we'll learn more advanced concepts and apply deep neural networks to Q-learning algorithms.
DQN and some variants applied to Pong - This week the goal is to develop a DQN algorithm to play an Atari game. To make it more interesting, I developed four extensions of DQN: Double Q-learning, multi-step learning, Dueling networks, and Noisy Nets. Play with them, and if you feel confident, you can implement Prioritized replay or Distributional RL. To learn more about these improvements, read the papers!
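The difference between the standard DQN target and the Double Q-learning target can be illustrated with plain arrays standing in for the online and target networks' outputs (all numbers here are arbitrary):

```python
import numpy as np

# Toy Q-values for a batch of 4 next-states and 3 actions; these arrays
# stand in for the online and target networks of a real DQN.
rng = np.random.default_rng(1)
q_online = rng.normal(size=(4, 3))
q_target = rng.normal(size=(4, 3))
rewards = np.array([1.0, 0.0, 0.0, 1.0])
done = np.array([0.0, 0.0, 1.0, 0.0])
gamma = 0.99

# Standard DQN: select AND evaluate the next action with the target network.
y_dqn = rewards + gamma * (1 - done) * q_target.max(axis=1)

# Double DQN: select with the online network, evaluate with the target network,
# which reduces the overestimation bias of the max operator.
a_star = q_online.argmax(axis=1)
y_ddqn = rewards + gamma * (1 - done) * q_target[np.arange(4), a_star]
```

Note that the Double DQN target can never exceed the DQN target, since evaluating any chosen action under the target network is at most its maximum.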
Week 4 introduces policy gradient methods, a class of algorithms that optimize the policy directly. You'll also learn about actor-critic algorithms, which combine policy gradient (the actor) with a value function (the critic).
Vanilla PG and A2C applied to CartPole - This week's exercise is to implement a policy gradient method or a more sophisticated actor-critic. In the repository you can find implemented versions of PG and A2C. Bug alert! Note that A2C gives me strange results. If you find the PG and A2C implementations easy, you can try the asynchronous version of A2C (A3C).
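The core REINFORCE update behind vanilla PG can be sketched on a two-armed bandit instead of CartPole; the softmax policy and reward distributions below are invented for the example:

```python
import numpy as np

# REINFORCE on a 2-armed bandit: a softmax policy over two actions,
# where arm 1 pays more on average, so its probability should grow.
rng = np.random.default_rng(0)
theta = np.zeros(2)                               # policy logits
probs = lambda: np.exp(theta) / np.exp(theta).sum()
lr = 0.1

for _ in range(3000):
    p = probs()
    a = rng.choice(2, p=p)
    r = rng.normal(1.0 if a == 1 else 0.0, 0.1)   # arm 1 is better
    # grad of log pi(a) for a softmax policy: one_hot(a) - p
    grad_log = -p
    grad_log[a] += 1.0
    theta += lr * r * grad_log                    # REINFORCE: step along r * grad log pi

p_final = probs()
```

An actor-critic replaces the raw return `r` with an advantage estimate from a learned critic, which lowers the variance of exactly this update.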
This week is about advanced policy gradient methods that improve the stability and convergence of the "vanilla" policy gradient methods. You'll learn and implement PPO, an RL algorithm developed by OpenAI and adopted in OpenAI Five.
PPO applied to BipedalWalker - This week, you have to implement PPO or TRPO. I suggest PPO given its simplicity (compared to TRPO). In the project folder Week5 you'll find an implementation of PPO that learns to play BipedalWalker. Furthermore, in the folder you can find other resources that will help you develop the project. Have fun!
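The heart of PPO is the clipped surrogate objective; here is a minimal sketch on toy numbers (the ratios and advantages below are arbitrary, chosen only to show the clipping):

```python
import numpy as np

# PPO clipped surrogate objective: the probability ratio is clipped to
# [1 - eps, 1 + eps], so a single update cannot move the policy too far
# from the one that collected the data.
eps = 0.2
ratio = np.array([0.5, 0.9, 1.0, 1.3, 2.0])     # pi_new(a|s) / pi_old(a|s)
advantage = np.array([1.0, -1.0, 0.5, 1.0, 1.0])

unclipped = ratio * advantage
clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantage
objective = np.minimum(unclipped, clipped)      # PPO maximizes this
```

Taking the minimum makes the bound pessimistic: large ratios stop contributing gradient once they leave the trust interval, which is what stabilizes training compared to vanilla PG.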
To learn more about PPO, read the paper and take a look at the Arxiv Insights video.
In the last year, Evolution Strategies (ES) and Genetic Algorithms (GA) have been shown to achieve results comparable to RL methods. They are derivative-free, black-box algorithms that require more data than RL to learn, but are able to scale up across thousands of CPUs. This week we'll look at these black-box algorithms.
Evolution Strategies applied to LunarLander - This week's project is to implement an ES or a GA. In the Week6 folder you can find a basic implementation of the paper Evolution Strategies as a Scalable Alternative to Reinforcement Learning that solves LunarLanderContinuous. You can modify it to tackle more difficult environments or add your own ideas.
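The ES update from that paper can be sketched on a toy quadratic fitness instead of LunarLander (the target parameters below are arbitrary): perturb the parameters with Gaussian noise, weight each perturbation by the return it obtained, and step along the noise-weighted average.

```python
import numpy as np

# Minimal OpenAI-style evolution strategy on a toy quadratic fitness.
rng = np.random.default_rng(0)
target = np.array([3.0, -2.0])
fitness = lambda x: -np.sum((x - target) ** 2)   # maximized at x == target

x = np.zeros(2)                                  # parameters being evolved
sigma, lr, pop = 0.1, 0.02, 50

for _ in range(300):
    noise = rng.normal(size=(pop, 2))
    returns = np.array([fitness(x + sigma * n) for n in noise])
    # Normalize returns so the step size is invariant to reward scale.
    adv = (returns - returns.mean()) / (returns.std() + 1e-8)
    x += lr / (pop * sigma) * noise.T @ adv      # noise-weighted gradient estimate
```

Note that only scalar returns cross between workers, which is why this estimator parallelizes so cheaply across CPUs.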
The algorithms studied up to now are model-free: they only choose the best action given a state, without an explicit model of the environment. These algorithms achieve very good performance but require a lot of training data. Model-based algorithms, by contrast, learn a model of the environment and plan the next actions according to that model. These methods are more sample-efficient than model-free ones but overall achieve worse performance. This week you'll learn the theory behind these methods and implement one of the latest algorithms.
MB-MF applied to RoboschoolAnt - This week I chose to implement the model-based algorithm described in this paper. You can find my implementation here. NB: Instead of running it on Mujoco as in the paper, I used RoboSchool, an open-source simulator for robots, integrated with OpenAI Gym.
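The planning side of such a model-based method can be sketched with random-shooting MPC on a made-up one-dimensional dynamics model; in MB-MF the model is learned from data, while here it is hand-coded purely for illustration:

```python
import numpy as np

# Random-shooting model predictive control: sample candidate action
# sequences, roll each one through the dynamics model, and execute the
# first action of the cheapest sequence. Toy task: drive the state to 0.
rng = np.random.default_rng(0)

def model(s, a):
    return s + a                   # stand-in for a learned dynamics model

def plan(s, horizon=5, n_candidates=200):
    seqs = rng.uniform(-1, 1, size=(n_candidates, horizon))
    costs = np.zeros(n_candidates)
    for i, seq in enumerate(seqs):
        x = s
        for a in seq:
            x = model(x, a)
            costs[i] += x ** 2     # quadratic cost around the origin
    return seqs[np.argmin(costs)][0]

s = 4.0
for _ in range(10):                # replan at every step (MPC)
    s = model(s, plan(s))
```

Replanning at every step is what makes the controller robust to model error: only the first action of each plan is ever trusted.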
This last week is about advanced RL concepts and a project of your choice.
Here you can find some project ideas.
Congratulations on completing the 60 Days RL Challenge!! Let me know if you enjoyed it, and share it!
See you!
📚 Reinforcement Learning: An Introduction - by Sutton & Barto. The "Bible" of reinforcement learning. Here you can find the PDF draft of the second edition.
📚 Deep Reinforcement Learning Hands-On - by Maxim Lapan
📚 Deep Learning - Ian Goodfellow
📺 Deep Reinforcement Learning - UC Berkeley class by Levine, check here their site.
📺 Reinforcement Learning course - by David Silver, DeepMind. Great introductory lectures by Silver, a lead researcher on AlphaGo. They follow the book Reinforcement Learning by Sutton & Barto.
📚 Awesome Reinforcement Learning. A curated list of resources dedicated to reinforcement learning
📚 GroundAI on RL. Papers on reinforcement learning
Any contribution is highly appreciated! Cheers!