Project Name | Stars | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description
---|---|---|---|---|---|---|---|---|---|---
DeepSpeech | 21,963 | 29 | 11 | 4 days ago | 100 | December 19, 2020 | 128 | mpl-2.0 | C++ | DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.
Awesome Embedded | 3,412 | | | 5 days ago | | | 1 | unlicense | | A curated list of awesome embedded programming.
Tensorflow On Raspberry Pi | 2,077 | | | 5 years ago | | | 73 | other | Python | TensorFlow for Raspberry Pi.
Project_alias | 1,421 | | | 3 years ago | 1 | June 02, 2018 | 11 | gpl-3.0 | Python | Alias is a teachable "parasite" designed to give users more control over their smart assistants, both for customisation and privacy. Through a simple app, the user can train Alias to react to a custom wake word/sound; once trained, Alias can take control of your home assistant by activating it for you.
DeepCamera | 1,133 | | | 4 months ago | | | 11 | mit | JavaScript | Open-source AI camera. Empower any camera/CCTV with state-of-the-art AI, including facial recognition, person re-identification (re-ID), car detection, fall detection and more.
Tensorflow On Arm | 791 | | | 2 years ago | | | 19 | mit | Shell | TensorFlow for Arm.
Enclosure Picroft | 768 | | | 5 months ago | | | 22 | lgpl-3.0 | Shell | Mycroft interface for the Raspberry Pi environment.
ActionAI | 607 | | | 7 months ago | | | 26 | gpl-3.0 | Python | Real-time spatio-temporally localized activity detection by tracking body keypoints.
UltimateALPR SDK | 499 | | | 19 days ago | | | 18 | other | C++ | World's fastest ANPR/ALPR implementation for CPUs, GPUs, VPUs and NPUs using deep learning (TensorFlow, TensorFlow Lite, TensorRT, OpenVX, OpenVINO). Multi-charset (Latin, Korean, Chinese), multi-OS (Jetson, Android, Raspberry Pi, Linux, Windows), multi-arch (ARM, x86).
Self Driving Toy Car | 487 | | | 6 years ago | | | 1 | | Jupyter Notebook | A self-driving toy car using end-to-end learning.
ActionAI is a Python library for training machine learning models to classify human actions. It is a generalization of our yoga smart personal trainer, which is included in this repo as an example.
These instructions will show how to prepare your image data, train a model, and deploy that model to classify human actions from image samples. See deployment for notes on how to deploy the project on a live stream.
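Before the setup steps, here is a minimal, self-contained sketch of the general idea (not the repo's actual training code): each image sample is reduced to pose keypoints, and a lightweight classifier is trained on those keypoint features. The keypoint count, class count, classifier choice, and random placeholder data below are illustrative assumptions.

```python
# Sketch only: train a simple classifier on flattened pose-keypoint features.
# In the real pipeline the keypoints come from a pose estimator (e.g. trt_pose,
# used below) and the labels come from your own image samples.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

NUM_SAMPLES = 200    # labelled image samples
NUM_KEYPOINTS = 14   # body joints reported by the pose estimator (assumed)
NUM_CLASSES = 2      # e.g. "squat" vs. "spin"

# Each sample: (x, y) coordinates for every keypoint, flattened into one vector.
X = rng.random((NUM_SAMPLES, NUM_KEYPOINTS * 2))
y = rng.integers(0, NUM_CLASSES, size=NUM_SAMPLES)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

Any classifier could sit on top of these features, including the LSTM used by the bundled example model mentioned later.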
Docker installation is recommended:
The included Dockerfile builds for Jetson devices running JetPack 4.6.1. To build, `cd` into the `docker/` directory and run:

```bash
docker build -f jetson-deployment.dockerfile -t actionai:j4.6.1 .
```
You can also pull a prebuilt image hosted on Docker Hub:

```bash
docker pull smellslikeml/actionai:j4.6.1
```
```bash
# --runtime=nvidia              for GPU support
# --env=QT_X11_NO_MITSHM=1      for visualization
# --device /dev/input/js0       for a PS3 controller
# -v /dev/bus/usb:/dev/bus/usb  for a depthai camera
docker run -itd --rm \
    --net=host \
    --privileged \
    --env=DISPLAY \
    --runtime=nvidia \
    --env=QT_X11_NO_MITSHM=1 \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    --device /dev/input/js0 \
    -v /run/udev/data:/run/udev/data \
    -v /dev/bus/usb:/dev/bus/usb \
    --device-cgroup-rule='c *:* rmw' \
    -v /path/to/ActionAI:/app/ \
    smellslikeml/actionai:j4.6.1-latest /bin/bash
```
Alternatively, use a virtual environment to avoid any conflicts with your system's global configuration; the required dependencies can then be installed via pip.
We use the trt_pose repo to extract pose estimates. Please refer to that repo to install its required dependencies.
You will also need to download these zipped model assets and unzip the package into the `models/` directory.
```bash
# Assuming your python path points to python 3.x
$ pip install -r requirements.txt
```
We've provided a sample inference script, `inference.py`, that reads input from a webcam, an mp4 file, or an RTSP stream, runs inference on each frame, and prints the inference results.
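As a rough illustration of what such a frame-by-frame loop looks like, here is a sketch assuming OpenCV is available; `estimate_pose` and `classify` are hypothetical placeholders, not the repo's API.

```python
# Sketch of a webcam / video / RTSP inference loop.
import sys
import cv2

def estimate_pose(frame):
    # Placeholder: a real implementation would run the pose estimator here.
    return frame

def classify(features):
    # Placeholder: a real implementation would run the trained action model here.
    return "unknown"

source = sys.argv[1] if len(sys.argv) > 1 else "0"
cap = cv2.VideoCapture(int(source) if source.isdigit() else source)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    features = estimate_pose(frame)
    print(classify(features))

cap.release()
```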
If you are running on a Jetson Nano, you can try running the `iva.py` script, which performs multi-person tracking and activity recognition like the demo gif in the Getting Started section above. Make sure you have followed the Jetson Nano installation instructions above, then simply run:
```bash
$ python iva.py 0

# or, if you have a video file
$ python iva.py /path/to/file.mp4
```
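For intuition on the multi-person tracking part, here is a deliberately simplified sketch (not `iva.py` itself): each detected person is greedily matched to the nearest existing track by centroid distance, otherwise a new track is started. Real trackers add motion models, timeouts, and appearance cues.

```python
# Toy nearest-centroid tracker for illustration only.
import itertools
import numpy as np

_track_ids = itertools.count()
tracks = {}  # track_id -> last known centroid (x, y)

def update_tracks(detections, max_dist=75.0):
    """detections: list of (x, y) person centroids for the current frame."""
    assignments = {}
    for det in detections:
        det = np.asarray(det, dtype=float)
        best_id, best_dist = None, max_dist
        for tid, centroid in tracks.items():
            dist = np.linalg.norm(det - centroid)
            if dist < best_dist and tid not in assignments.values():
                best_id, best_dist = tid, dist
        if best_id is None:
            best_id = next(_track_ids)   # no close track: start a new one
        tracks[best_id] = det
        assignments[tuple(map(float, det))] = best_id
    return assignments

print(update_tracks([(100, 200), (400, 220)]))  # two new tracks: 0 and 1
print(update_tracks([(105, 198)]))              # matched back to track 0
```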
If specified, `iva.py` will write a labeled video as `out.mp4`. This demo uses a sample model called `lstm_spin_squat.h5` to classify spinning vs. squatting. Change the model and motion dictionary under the `RUNSECONDARY` flag to run your own classifier.
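The snippet below sketches the kind of edit that involves, using hypothetical file names, labels, and variable names rather than the script's actual ones: load your own Keras model and map its output indices to your own labels.

```python
# Hypothetical example of pointing the secondary classifier at your own model.
# "models/my_actions.h5" and the label map are illustrative, not iva.py's API.
from tensorflow.keras.models import load_model

RUNSECONDARY = True

if RUNSECONDARY:
    secondary_model = load_model("models/my_actions.h5")        # your trained classifier
    motion_dict = {0: "jumping_jack", 1: "lunge", 2: "idle"}     # model output index -> label
```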
We've also included a script under the experimental folder, `online_finetune.py`, that supports labelling samples with a PS3 controller on a Jetson Nano and training in real time from a webcam stream. This requires a few extra dependencies.
To test it, run:
```bash
# Using a webcam
$ python experimental/online_finetune.py /dev/video0

# Using a video asset
$ python experimental/online_finetune.py /path/to/file.mp4
```
This script will also write the labelled data to a CSV file in the `data/` directory and produce a video asset, `out.mp4`.
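As a conceptual sketch of the online-training idea (not the experimental script itself), one can label incoming samples on the fly and update an incremental classifier with scikit-learn's `partial_fit`; the feature extraction and labelling source below are placeholders for the pose estimator and the PS3 controller.

```python
# Sketch: label samples as they arrive and update an incremental classifier.
import numpy as np
from sklearn.linear_model import SGDClassifier

CLASSES = np.array([0, 1])              # e.g. two actions to distinguish (assumed)
clf = SGDClassifier(loss="log_loss")

def feature_vector():
    # Placeholder for pose keypoints extracted from the current frame.
    return np.random.random((1, 28))

for step in range(100):
    x = feature_vector()
    label = step % 2                    # stand-in for the button the user pressed
    clf.partial_fit(x, [label], classes=CLASSES)

print("prediction for a new sample:", clf.predict(feature_vector()))
```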
Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests to us.
This project is licensed under the GNU General Public License v3.0; see the LICENSE.md file for details.