Awesome Open Source
Search results for tensorrt openvino
25 search results found
Yolox (⭐ 8,759): YOLOX is a high-performance anchor-free YOLO, exceeding YOLOv3~v5, with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO supported. Documentation: https://yolox.readthedocs.io/

Tnn (⭐ 4,210): TNN: a uniform deep learning inference framework for mobile, desktop and server, developed by Tencent Youtu Lab and Guangying Lab. TNN is distinguished by several outstanding features, including cross-platform capability, high performance, model compression and code pruning. Based on ncnn and Rapidnet, TNN further strengthens support and performance optimization for mobile devices, and also draws on the good extensibility and high performance of existing open source efforts.

Fastdeploy (⭐ 2,547): ⚡️ An easy-to-use and fast deep learning model deployment toolkit for ☁️ cloud, 📱 mobile and 📹 edge. Covers 20+ mainstream image, video, text and audio scenarios and 150+ SOTA models with end-to-end optimization, multi-platform and multi-framework support.

Mmdeploy (⭐ 2,371): OpenMMLab model deployment framework.

Berrynet (⭐ 1,563): Deep learning gateway on Raspberry Pi and other edge devices.

Bisenet (⭐ 1,130): My implementation of BiSeNet, with BiSeNetV2 added.

Adlik (⭐ 688): Adlik: toolkit for accelerating deep learning inference.

Yolou (⭐ 608): YOLOv3, YOLOv4, YOLOv5, YOLOv5-Lite, YOLOv6-v1, YOLOv6-v2, FastestDet, YOLOv5-SPD, TensorRT, NCNN, Tengine, Ope…

Nndeploy (⭐ 303): nndeploy is a cross-platform, high-performance and straightforward AI model deployment framework. We strive to deliver a consistent and user-friendly experience across various inference framework backends in complex deployment environments, with a focus on performance.

Openvino2tensorflow (⭐ 262): This script converts ONNX/OpenVINO IR models to TensorFlow saved_model, tflite, h5, tfjs, tftrt (TensorRT), CoreML, EdgeTPU, ONNX and pb: PyTorch (NCHW) -> ONNX (NCHW) -> OpenVINO (NCHW) -> openvino2tensorflow -> TensorFlow/Keras (NHWC/NCHW) -> TFLite (NHWC/NCHW). Also converts .pb to saved_model, saved_model to .pb, .pb to .tflite, saved_model to .tflite and saved_model to ONNX. Support for building environments with Docker. It is possible to directly access the host PC GUI and the camera to verify the operation.

Yolox Ros (⭐ 209): YOLOX + ROS2 object detection package (C++ support only).

Vs Mlrt (⭐ 189): Efficient CPU/GPU/Vulkan ML runtimes for VapourSynth (with built-in support for waifu2x, DPIR, RealESRGANv2/v3, Real-CUGAN, RIFE and more!).

Tflite2tensorflow (⭐ 186): Generates saved_model, tfjs, tf-trt, EdgeTPU, CoreML, quantized tflite, ONNX, OpenVINO, Myriad Inference Engine blob and .pb from .tflite. Support for building environments with Docker. It is possible to directly access the host PC GUI and the camera to verify the operation. NVIDIA GPU (dGPU) support. Intel iHD GPU (iGPU) support. Supports inverse quantization of INT8 quantized models.

Ros Yolo Sort (⭐ 144): YOLO v3, v4, v5, v6, v7 + SORT tracking + ROS platform. Supports YOLO with Darknet, OpenCV (DNN), OpenVINO and TensorRT (tkDNN). SORT supports Python (original) and C++. (Not Deep SORT.)

Lightglue Onnx (⭐ 141): ONNX-compatible LightGlue: Local Feature Matching at Light Speed. Supports TensorRT and OpenVINO.

Fast Pathology (⭐ 108): ⚡ Open-source software for deep learning-based digital pathology.

Opti_models (⭐ 38): PyTorch optimizations and benchmarking.

Infery Examples (⭐ 28): A collection of demo apps and inference scripts for various deep learning frameworks using infery (Python).

Model Inference Deployment (⭐ 21): A curated list of awesome inference deployment frameworks for artificial intelligence models.

Onnx Runtime With Tensorrt And Openvino (⭐ 21): Docker scripts for building ONNX Runtime with TensorRT and OpenVINO in a manylinux environment.

Yolact_edge_onnx_tensorrt_myriad (⭐ 19): Provides a conversion flow for YOLACT_Edge to models compatible with ONNX, TensorRT, OpenVINO and Myriad (OAK). My own implementation of post-processing allows for end-to-end inference. Support for Multi-Class NonMaximumSuppression and CombinedNonMaxSuppression.

Dedode Onnx Tensorrt (⭐ 17): ONNX-compatible DeDoDe 🎶 Detect, Don't Describe - Describe, Don't Detect, for local feature matching. Supports TensorRT 🚀

Mtomo (⭐ 17): Multiple types of NN model optimization environments. It is possible to directly access the host PC GUI and the camera to verify the operation. Intel iHD GPU (iGPU) support. NVIDIA GPU (dGPU) support.

Sit4onnx (⭐ 17): Simple inference test for ONNX: tools for quick inference testing using the TensorRT, CUDA, OpenVINO (CPU/GPU) and CPU providers.

Image Animation Turbo Boost (⭐ 13): Aims to accelerate image-animation-model inference through inference frameworks such as ONNX, TensorRT and OpenVINO.
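Several of the entries above (the ONNX Runtime build scripts, Sit4onnx, Image Animation Turbo Boost) revolve around picking an ONNX Runtime execution provider such as TensorRT or OpenVINO at inference time. As a minimal sketch of that pattern: the provider-name strings below are the standard ONNX Runtime identifiers, but the fallback-ordering helper and the model filename are illustrative assumptions, not part of any listed project.

```python
# Hedged sketch: prefer TensorRT, then OpenVINO, then plain CPU when creating
# an ONNX Runtime session. Provider names are standard ORT identifiers; the
# helper itself is an illustration, not an API from any project listed above.

PREFERRED = [
    "TensorrtExecutionProvider",   # NVIDIA GPU via TensorRT
    "OpenVINOExecutionProvider",   # Intel CPU/iGPU/VPU via OpenVINO
    "CPUExecutionProvider",        # always-available fallback
]

def pick_providers(available):
    """Return the preferred providers that are actually present,
    keeping priority order; fall back to CPU if none match."""
    chosen = [p for p in PREFERRED if p in available]
    return chosen or ["CPUExecutionProvider"]

# Typical usage (requires onnxruntime and a model file, e.g. "model.onnx"):
#   import onnxruntime as ort
#   sess = ort.InferenceSession(
#       "model.onnx",
#       providers=pick_providers(ort.get_available_providers()))
```

ONNX Runtime silently skips providers that were not compiled in, which is why builds like the "Onnx Runtime With Tensorrt And Openvino" Docker scripts matter: without them, the session quietly falls back to CPU.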
Copyright 2018-2024 Awesome Open Source. All rights reserved.