| Project Name | Stars | Most Recent Commit | Open Issues | License | Language | Description |
|---|---|---|---|---|---|---|
| Deepc | 448 | 7 months ago | 26 | apache-2.0 | C++ | Vendor-independent TinyML deep learning library, compiler and inference framework for microcomputers and micro-controllers |
| Eloquentarduino | 132 | 10 months ago | 7 | | C++ | IO, scheduling, utils, machine learning... for Arduino |
| Colabs | 87 | 7 days ago | 2 | other | Jupyter Notebook | This repository holds the Google Colabs for the EdX TinyML Specialization |
| Gestures Ml Js | 61 | 4 years ago | 2 | gpl-3.0 | JavaScript | [WIP] Gesture recognition using hardware and TensorFlow.js |
| Rgb Neural Net | 60 | 5 years ago | | gpl-3.0 | Jupyter Notebook | Physical visualisation of neural network learning using RGB LEDs, Arduino and Raspberry Pi |
| Esp32_cloudspeech | 40 | 5 years ago | 1 | mit | C++ | Transcribe your voice with Google's Cloud Speech-to-Text API on the ESP32 |
| Machine Learning For Physical Computing | 37 | a year ago | | | JavaScript | Repository for the "Machine Learning for Physical Computing" class at ITP, NYU |
| Arduino Library | 27 | 4 months ago | 1 | other | C++ | This repository holds the Arduino Library for the EdX TinyML Specialization |
| Esp32 Autonomous Car | 25 | 7 months ago | | mit | Python | Autonomous car using ESP32 |
| Nindamani The Weed Removal Robot | 23 | 2 years ago | | mit | Python | Nindamani, the AI-based mechanical weed-removal robot |
In this repository I tried to replicate a cool project by a Japanese scientist, who built a machine that achieved 100% accuracy in defeating humans at the game of stone-paper-scissors using convolutional neural networks and computer vision. I used OpenCV for the computer vision and Keras for the CNNs. Link to the video tutorial: https://www.youtube.com/watch?v=ecSDKWkktOw
Install the dependencies, picking the requirements file that matches your machine:

```shell
pip install -r requirements_gpu.txt   # if you have a GPU
pip install -r requirements_cpu.txt   # CPU-only
```
Place the camera and don't move it. As soon as the camera starts, perform only one gesture at a time; the numbered images of that gesture will be stored in the root directory (you can modify the code to append the path to whichever directory you want). Gather data for all the classes in the same way.
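The numbered-image naming described above can be sketched as follows. This is a minimal illustration using only the standard library; the function name and `gesture_N.jpg` naming scheme are assumptions, not the repository's actual code, and the real script would pass each path to something like `cv2.imwrite`.

```python
from pathlib import Path

def next_image_path(directory, gesture):
    """Return the next numbered file name for a gesture,
    e.g. stone_0.jpg, stone_1.jpg, ... (hypothetical scheme)."""
    folder = Path(directory)
    folder.mkdir(parents=True, exist_ok=True)
    # Count how many images of this gesture were already saved.
    n = len(list(folder.glob(f"{gesture}_*.jpg")))
    return folder / f"{gesture}_{n}.jpg"

# Each captured frame would then be written to the returned path,
# e.g. cv2.imwrite(str(next_image_path(".", "stone")), frame)
```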
I used my own laptop for training purposes, but you can use AWS, Google Colab, Azure, etc.
For training:

1. Modify the paths to the stone, paper and scissors folders in hand_gesture_creating_model.py
2. Run hand_gesture_creating_model.py
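The data-loading step that hand_gesture_creating_model.py presumably performs can be sketched like this: walk the three class folders and pair each image path with an integer label for the CNN. The folder names, label order, and function name here are assumptions, not the script's actual code.

```python
from pathlib import Path

# Assumed class folders and label order; adjust to match your own layout.
CLASS_DIRS = {"stone": 0, "paper": 1, "scissors": 2}

def list_labeled_images(root):
    """Collect (image_path, class_index) pairs from the gesture folders."""
    samples = []
    for name, label in CLASS_DIRS.items():
        for img in sorted(Path(root, name).glob("*.jpg")):
            samples.append((str(img), label))
    return samples
```

The resulting pairs would then be shuffled, the images loaded, resized and normalised, and fed to the Keras model for training.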
After the model is trained, you are ready to run it:

1. Modify the path to the model file in predicting.py
2. Run predicting.py
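Conceptually, the prediction step maps the model's output class back to a gesture and then plays the move that beats it. A minimal sketch of that post-processing (the label order and function name are assumptions; they must match whatever order the model was trained with):

```python
# Hypothetical label order: must match the training labels.
GESTURES = ["stone", "paper", "scissors"]

# The move that beats each human gesture.
COUNTER = {"stone": "paper", "paper": "scissors", "scissors": "stone"}

def winning_move(class_index):
    """Map the model's predicted class index to the machine's winning response."""
    human = GESTURES[class_index]
    return human, COUNTER[human]

# Example: if the model predicts class 0 ("stone"), the machine plays "paper".
```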
Congratulations, you just made your very own human-defeating machine.

Give the repository a star if you liked it.
If you have any doubts, you can comment under my YouTube video or post them on my Facebook page, REACTOR SCIENCE.

Thank you!