Project Name | Stars | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description
---|---|---|---|---|---|---|---|---
Yolov7 | 10,255 | a day ago | | | 1,282 | gpl-3.0 | Jupyter Notebook | Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors
Keras Yolo3 | 7,059 | 7 months ago | | | 519 | mit | Python | A Keras implementation of YOLOv3 (Tensorflow backend)
Pytorch Yolov3 | 7,023 | 9 days ago | 23 | December 31, 2021 | 100 | gpl-3.0 | Python | Minimal PyTorch implementation of YOLOv3
Chineseocr | 4,953 | 9 months ago | | | 424 | mit | Python | yolo3+ocr
Pytorch Yolov4 | 4,156 | 6 months ago | | | 330 | apache-2.0 | Python | PyTorch, ONNX and TensorRT implementation of YOLOv4
Map | 2,685 | a month ago | | | 99 | apache-2.0 | Python | mean Average Precision - This code evaluates the performance of your neural net for object recognition.
Yad2k | 2,604 | 2 years ago | | | 120 | other | Python | YAD2K: Yet Another Darknet 2 Keras
Yolov3 Tf2 | 2,480 | 2 months ago | | | 168 | mit | Jupyter Notebook | YoloV3 Implemented in Tensorflow 2.0
Yolor | 1,872 | 3 months ago | | | 213 | gpl-3.0 | Python | Implementation of paper - You Only Learn One Representation: Unified Network for Multiple Tasks (https://arxiv.org/abs/2105.04206)
Yolo_tensorflow | 1,638 | 4 years ago | | | 37 | other | Python | Tensorflow implementation of 'YOLO: Real-Time Object Detection'
This is a repository for an object detection inference API using the Yolov4 Darknet framework.
This repository is also cross-compatible with Yolov3 darknet models, and supports state-of-the-art Yolov4 models.
This repo is based on AlexeyAB's darknet repository.
The inference REST API works on GPU and is supported only on Linux operating systems.
Models trained using our Yolov4 and Yolov3 training automation repository can be deployed in this API. Several object detection models can be loaded and used at the same time.
To use Yolov4 instead of Yolov3, simply change the inference engine name in the config.json inside your model folder.
This repo can be deployed using either docker or docker swarm.
Please use docker swarm only if you need to:

- Provide redundancy in terms of API containers: if a container goes down, incoming requests will be redirected to another running instance.
- Coordinate between the containers: swarm will orchestrate between the APIs and choose one of them to listen to the incoming request.
- Scale up the inference service in order to get faster predictions, especially if there is traffic on the service (a scaling example follows below).

If none of the aforementioned requirements are needed, simply use docker.
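As a rough illustration of the scaling point above, once the API is deployed as a swarm service it can be scaled with a single command. The service name `yolov4_inference_api_gpu` here is a placeholder; use whatever name you gave the service at deploy time:

```sh
# Scale the (hypothetically named) inference service to 3 replicas;
# swarm load-balances incoming requests across the replicas.
docker service scale yolov4_inference_api_gpu=3
```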
To check if you have docker-ce installed:

```sh
docker --version
```

To check if you have nvidia-docker installed:

```sh
nvidia-docker --version
```

To check your nvidia drivers version, open your terminal and type:

```sh
nvidia-smi
```

Use the following command to install docker on Ubuntu:

```sh
chmod +x install_prerequisites.sh && source install_prerequisites.sh
```

Install NVIDIA Drivers (410.x or higher) and NVIDIA Docker for GPU by following the official docs.
In order to build the project, run the following command from the project's root directory:

```sh
sudo docker build -t yolov4_inference_api_gpu -f ./docker/dockerfile .
```

If you are behind a proxy, pass your proxy settings as build arguments:

```sh
sudo docker build --build-arg http_proxy='' --build-arg https_proxy='' -t yolov4_inference_api_gpu -f ./docker/dockerfile .
```
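Once the build completes, you can verify that the image exists:

```sh
# List local docker images and filter for the one just built
sudo docker images | grep yolov4_inference_api_gpu
```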
As mentioned before, this container can be deployed using either docker or docker swarm.
If you wish to deploy this API using docker, please issue the docker run command below.
If you wish to deploy this API using docker swarm, please refer to the docker swarm documentation. After deploying the API with docker swarm, please consider returning to this documentation for further information about the API endpoints and the model structure sections.
To run the API, go to the API's directory and run the following:

```sh
sudo NV_GPU=0 nvidia-docker run -itv $(pwd)/models:/models -v $(pwd)/models_hash:/models_hash -p <docker_host_port>:1234 yolov4_inference_api_gpu
```

The <docker_host_port> can be any unique port of your choice.
The API file will be run automatically, and the service will listen to http requests on the chosen port.
NV_GPU defines which GPU the API runs on. If you want the API to run on multiple GPUs, enter multiple numbers separated by commas (NV_GPU=0,1 for example).
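For example, to serve the API on host port 4343 using GPUs 0 and 1 (4343 is an arbitrary choice; any free port works):

```sh
# Run the container on GPUs 0 and 1, exposing the API on host port 4343
sudo NV_GPU=0,1 nvidia-docker run -itv $(pwd)/models:/models -v $(pwd)/models_hash:/models_hash -p 4343:1234 yolov4_inference_api_gpu
```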
To see all available endpoints, open your favorite browser and navigate to:

http://<machine_IP>:<docker_host_port>/docs

The 'predict_batch' endpoint is not shown on swagger, since the list-of-files input is not yet supported there.
P.S: If you are using custom endpoints like /load, /detect, and /get_labels, you should always call the /load endpoint first and then use /detect or /get_labels.
The available endpoints are:

- /load (GET): Loads all available models and returns every model with its hashed value. Loaded models are stored and aren't loaded again.
- /detect (POST): Performs inference on the specified model and image, and returns the bounding boxes.
- /get_labels (POST): Returns all of the specified model's labels with their hashed values.
- /models/{model_name}/predict_image (POST): Performs inference on the specified model and image, draws the bounding boxes on the image, and returns the resulting image as the response.
- /models (GET): Lists all available models.
- /models/{model_name}/load (GET): Loads the specified model. Loaded models are stored and aren't loaded again.
- /models/{model_name}/predict (POST): Performs inference on the specified model and image, and returns the bounding boxes.
- /models/{model_name}/labels (GET): Returns all of the specified model's labels.
- /models/{model_name}/config (GET): Returns the specified model's configuration.
- /models/{model_name}/predict_batch (POST): Performs inference on the specified model and a list of images, and returns the bounding boxes.
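As a rough sketch of that load-then-detect flow with curl (the multipart field name `input_data`, the port 4343, and the model name `my_yolo_model` are assumptions for illustration; check the Swagger page for the exact request schema):

```sh
# 1. Load all available models (must be called before /detect or /get_labels)
curl -X GET http://localhost:4343/load

# 2. Run inference on one image with a specific model;
#    field and model names here are placeholders
curl -X POST http://localhost:4343/models/my_yolo_model/predict \
     -F "input_data=@test.jpg"
```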
The folder "models" contains subfolders of all the models to be loaded. Inside each subfolder there should be a:
~/BMW-YOLOv4-Inference-API-GPU/models/<name-of-the-model>/config.json
~/BMW-YOLOv4-Inference-API-GPU/models/<name-of-the-model>/obj.data
~/BMW-YOLOv4-Inference-API-GPU/models/<name-of-the-model>/obj.names
~/BMW-YOLOv4-Inference-API-GPU/models/<name-of-the-model>/yolo-obj.cfg
~/BMW-YOLOv4-Inference-API-GPU/models/<name-of-the-model>/yolo-obj.weights
Cfg file (yolo-obj.cfg): contains the configuration of the model
data file (obj.data): contains number of classes and names file path
classes=<number_of_classes>
names=/models/<model_name>/obj.names
Weights file (yolo-obj.weights)
Names file (obj.names) : contains the names of the classes
Config.json (This is a json file containing information about the model)
{
"inference_engine_name": "yolov4_darknet_detection",
"detection_threshold": 0.6,
"nms_threshold": 0.45,
"hier_threshold": 0.5,
"framework": "yolo",
"type": "detection",
"network": "network_name"
}
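Putting the pieces together, a model folder might be scaffolded like this. This is a minimal sketch: the model name `my_yolo_model`, the class count, and the source file locations are placeholders, and the cfg/weights/names files are assumed to come from your own training run:

```sh
# Hypothetical example: set up a model folder for a 2-class detector
mkdir -p models/my_yolo_model
cp yolo-obj.cfg yolo-obj.weights obj.names models/my_yolo_model/

# obj.data points at the names file as seen from inside the container
cat > models/my_yolo_model/obj.data <<'EOF'
classes=2
names=/models/my_yolo_model/obj.names
EOF

# config.json mirrors the template above, with placeholder network name
cat > models/my_yolo_model/config.json <<'EOF'
{
  "inference_engine_name": "yolov4_darknet_detection",
  "detection_threshold": 0.6,
  "nms_threshold": 0.45,
  "hier_threshold": 0.5,
  "framework": "yolo",
  "type": "detection",
  "network": "my_yolo_model"
}
EOF
```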
Inference time benchmark on Ubuntu:

Network\Hardware | Intel Xeon CPU 2.3 GHz | Intel Core i9-7900 3.3 GHz | Tesla V100
---|---|---|---
COCO Dataset | 0.259 seconds/image | 0.281 seconds/image | 0.0691 seconds/image
Antoine Charbel, inmind.ai, Beirut, Lebanon
Charbel El Achkar, Beirut, Lebanon
Hadi Koubeissy, Beirut, Lebanon