Awesome Open Source
Search results for tensorrt triton inference server
8 search results found
- Bisenet (⭐ 1,130): My implementation of BiSeNet; adds BiSeNetV2.
- Generativeaiexamples (⭐ 458): Generative AI reference workflows optimized for accelerated infrastructure and a microservice architecture.
- Yolov4 Triton Tensorrt (⭐ 184): Deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server.
- Torchpipe (⭐ 91): Boosts DL service throughput 1.5-4x via ensemble pipeline serving with concurrent CUDA streams, using PyTorch/LibTorch frontends and TensorRT/CVCUDA (and other) backends.
- Isaac_ros_dnn_inference (⭐ 88): Hardware-accelerated DNN model inference ROS 2 packages using NVIDIA Triton/TensorRT, for both Jetson and x86_64 with a CUDA-capable GPU.
- Stable Diffusion Tritonserver (⭐ 49): Deploys a Stable Diffusion model with ONNX/TensorRT + Triton Inference Server.
- Setup Deeplearning Tools (⭐ 41): Sets up CI and DL tooling (CUDA, cuDNN, TensorRT, onnx2trt, onnxruntime, onnxsim, PyTorch, Triton Inference Server, Bazel, Tesseract, PaddleOCR, NVIDIA Docker, MinIO, Supervisord) on AGX or a PC from scratch.
- Yolov5_optimization_on_triton (⭐ 41): Compares multiple optimization methods on Triton to improve model-serving performance.
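Several of the results above follow the same pattern: serialize a model as a TensorRT engine and serve it from a Triton model repository. A minimal sketch of that layout and its config.pbtxt is shown below; the model name, tensor names, and dimensions are illustrative assumptions and must match the bindings of the actual engine.

```
# Triton model repository layout ("model.plan" is Triton's default file
# name for a serialized TensorRT engine):
#
#   model_repository/
#   └── yolov4/
#       ├── 1/                  <- version directory
#       │   └── model.plan
#       └── config.pbtxt
#
# config.pbtxt -- tensor names and dims are illustrative:
name: "yolov4"
platform: "tensorrt_plan"
max_batch_size: 8
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, 608, 608 ]
  }
]
output [
  {
    name: "detections"
    data_type: TYPE_FP32
    dims: [ -1, 6 ]
  }
]
```

Triton is then pointed at the repository root (e.g. `tritonserver --model-repository=/path/to/model_repository`) and loads each model directory it finds.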
Related Searches
Python Tensorrt (122)
C Plus Plus Tensorrt (115)
Pytorch Tensorrt (79)
Deep Learning Tensorrt (72)
Onnx Tensorrt (64)
Yolov5 Tensorrt (45)
Ros Tensorrt (19)
Jupyter Notebook Tensorrt (17)
Deployment Tensorrt (15)
Gpu Tensorrt (15)
Copyright 2018-2024 Awesome Open Source. All rights reserved.