Awesome Open Source
Search results for: deployment, triton-inference-server
2 search results found
- Torchpipe (⭐ 91): Boosts DL service throughput 1.5-4x via ensemble pipeline serving with concurrent CUDA streams, for PyTorch/LibTorch frontends and TensorRT/CVCUDA (and other) backends.
- Fastdeploy (⭐ 90): Deploy DL/ML inference pipelines with minimal extra code.
- Stable Diffusion Tritonserver (⭐ 49): Deploy a Stable Diffusion model with ONNX/TensorRT + Triton Inference Server.
- Yolov8 Triton (⭐ 11): Provides an ensemble model to deploy a YOLOv8 ONNX model to Triton.
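The last entry uses Triton's ensemble scheduler, which chains several models server-side so the client makes a single request. A minimal sketch of such a `config.pbtxt` is shown below; the model names, tensor names, and dimensions are illustrative assumptions, not taken from the listed repository:

```
# Hypothetical ensemble config.pbtxt: routes a raw image through a
# preprocessing model and then a YOLOv8 ONNX model.
# All names and dims below are assumptions for illustration.
name: "yolov8_ensemble"
platform: "ensemble"
max_batch_size: 8
input [
  { name: "raw_image", data_type: TYPE_UINT8, dims: [ -1 ] }
]
output [
  { name: "detections", data_type: TYPE_FP32, dims: [ -1, 6 ] }
]
ensemble_scheduling {
  step [
    {
      # Step 1: decode/resize the image (hypothetical model name).
      model_name: "preprocess"
      model_version: -1
      input_map  { key: "INPUT",  value: "raw_image" }
      output_map { key: "OUTPUT", value: "preprocessed" }
    },
    {
      # Step 2: run the YOLOv8 ONNX model on the preprocessed tensor.
      model_name: "yolov8_onnx"
      model_version: -1
      input_map  { key: "images",  value: "preprocessed" }
      output_map { key: "output0", value: "detections" }
    }
  ]
}
```

The `input_map`/`output_map` entries wire each step's tensors to the ensemble-level names, so intermediate tensors never leave the server.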
Related Searches
- Javascript Deployment (34,231)
- Reactjs Deployment (33,256)
- Python Deployment (3,235)
- Typescript Deployment (2,481)
- Docker Deployment (2,458)
- Deployment Amazon Web Services (2,056)
- Html Deployment (1,982)
- Ruby Deployment (1,807)
- Css Deployment (1,684)
- Deployment Heroku (1,673)
Copyright 2018-2024 Awesome Open Source. All rights reserved.