Project | Stars | Last Commit | Total Releases | Latest Release | Open Issues | License | Language | Description
---|---|---|---|---|---|---|---|---
Server | 6,445 | a day ago | 64 | October 27, 2023 | 334 | bsd-3-clause | Python | The Triton Inference Server provides an optimized cloud and edge inferencing solution.
Deepcamera | 1,506 | 3 months ago | | | 15 | mit | JavaScript | Open-source AI camera. Empower any camera/CCTV with state-of-the-art AI, including facial recognition, person re-identification (Re-ID), car detection, fall detection, and more.
Edgeml | 1,499 | 6 days ago | 2 | July 22, 2019 | 33 | other | C++ | This repository provides code for machine learning algorithms for edge devices developed at Microsoft Research India.
Jraph | 1,141 | 7 months ago | 4 | August 12, 2022 | 11 | apache-2.0 | Python | A graph neural network library in JAX.
Aoe | 847 | 8 months ago | 4 | February 05, 2020 | 11 | apache-2.0 | C++ | AoE (AI on Edge) is an on-device AI integrated runtime environment (IRE) that helps developers work more efficiently.
Practical Deep Learning Book | 675 | 3 months ago | | | 17 | mit | Jupyter Notebook | Official code repo for the O'Reilly book Practical Deep Learning for Cloud, Mobile & Edge.
Model_server | 603 | 19 hours ago | 6 | September 18, 2023 | 42 | apache-2.0 | C++ | A scalable inference server for models optimized with OpenVINO™.
Datasets | 521 | 6 months ago | | | | mit | | A repository of datasets collected for network science and machine learning research.
Awesome Federated Learning | 482 | 9 months ago | | | | mit | Shell | All the materials you need for federated learning: blogs, videos, papers, software, etc.
Awesome Federated Computing | 448 | 4 months ago | | | | cc0-1.0 | | A collection of research papers, code, tutorials, and blogs on federated computing/learning.
LATEST RELEASE: You are currently on the main branch which tracks under-development progress towards the next release. The current release is version 2.38.0 and corresponds to the 23.09 container release on NVIDIA GPU Cloud (NGC).
Triton Inference Server is open-source inference serving software that streamlines AI inferencing. Triton enables teams to deploy any AI model from multiple deep learning and machine learning frameworks, including TensorRT, TensorFlow, PyTorch, ONNX, OpenVINO, Python, RAPIDS FIL, and more. Triton Inference Server supports inference across cloud, data center, edge, and embedded devices on NVIDIA GPUs, x86 and ARM CPUs, or AWS Inferentia. Triton Inference Server delivers optimized performance for many query types, including real-time, batched, ensemble, and audio/video streaming. Triton Inference Server is part of NVIDIA AI Enterprise, a software platform that accelerates the data science pipeline and streamlines the development and deployment of production AI.
Major features include concurrent model execution, dynamic batching, sequence batching and implicit state management for stateful models, model ensembles, a backend API for adding custom backends and pre/post-processing operations, HTTP/REST and gRPC inference protocols based on the community-developed KServe protocol, and metrics covering GPU utilization, server throughput, and latency.
New to Triton Inference Server? Make use of these tutorials to begin your Triton journey!
Join the Triton and TensorRT community and stay current on the latest product updates, bug fixes, content, best practices, and more. Need enterprise support? NVIDIA global support is available for Triton Inference Server with the NVIDIA AI Enterprise software suite.
# Step 1: Create the example model repository
git clone -b r23.10 https://github.com/triton-inference-server/server.git
cd server/docs/examples
./fetch_models.sh
# Step 2: Launch triton from the NGC Triton container
docker run --gpus=1 --rm --net=host -v ${PWD}/model_repository:/models nvcr.io/nvidia/tritonserver:23.10-py3 tritonserver --model-repository=/models
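# (Optional) Confirm the server is ready before sending requests; Triton's
# HTTP endpoint listens on port 8000 by default, so adjust if you changed it.
curl -v localhost:8000/v2/health/ready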
# Step 3: Send an Inference Request
# In a separate console, launch the image_client example from the NGC Triton SDK container
docker run -it --rm --net=host nvcr.io/nvidia/tritonserver:23.10-py3-sdk
/workspace/install/bin/image_client -m densenet_onnx -c 3 -s INCEPTION /workspace/images/mug.jpg
# Inference should return the following
Image '/workspace/images/mug.jpg':
15.346230 (504) = COFFEE MUG
13.224326 (968) = CUP
10.422965 (505) = COFFEEPOT
Please read the QuickStart guide for additional information regarding this example. The QuickStart guide also contains an example of how to launch Triton on CPU-only systems. New to Triton and wondering where to get started? Watch the Getting Started video.
Check out NVIDIA LaunchPad for free access to a set of hands-on labs with Triton Inference Server hosted on NVIDIA infrastructure.
End-to-end examples for popular models such as ResNet, BERT, and DLRM are located on the NVIDIA Deep Learning Examples page on GitHub. The NVIDIA Developer Zone contains additional documentation, presentations, and examples.
The recommended way to build and use Triton Inference Server is with Docker images.
The first step in using Triton to serve your models is to place one or more models into a model repository. Depending on the type of the model and on what Triton capabilities you want to enable for the model, you may need to create a model configuration for the model.
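As a rough illustration, a minimal repository for a single hypothetical ONNX model might be laid out as below. The model name, tensor names, shapes, and batch size here are placeholders rather than values taken from this example; the model repository and model configuration documentation describe the full layout and schema.

model_repository/
  my_model/                # hypothetical model name
    config.pbtxt           # model configuration
    1/                     # numeric version subdirectory
      model.onnx           # the model itself

# config.pbtxt (Triton model configuration in protobuf text format)
name: "my_model"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  {
    name: "INPUT0"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "OUTPUT0"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]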
A Triton client application sends inference and other requests to Triton. The Python and C++ client libraries provide APIs to simplify this communication.
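For instance, a minimal sketch using the Python HTTP client from the tritonclient package (installable with pip install tritonclient[http]) could look like the following. It assumes a server listening on localhost:8000 and uses the hypothetical model and tensor names ("my_model", "INPUT0", "OUTPUT0") from the configuration sketch above, not the densenet_onnx model from the quick start.

# Minimal Triton Python client sketch; names and shapes are placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Check that the server and the model are ready before sending requests.
assert client.is_server_ready()
assert client.is_model_ready("my_model")

# Build the request: one input tensor and one requested output.
batch = np.random.rand(8, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("INPUT0", batch.shape, "FP32")
infer_input.set_data_from_numpy(batch)
requested_output = httpclient.InferRequestedOutput("OUTPUT0")

# Send the inference request and read the result back as a NumPy array.
response = client.infer("my_model", inputs=[infer_input], outputs=[requested_output])
print(response.as_numpy("OUTPUT0").shape)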
Triton Inference Server's architecture is specifically designed for modularity and flexibility.
Contributions to Triton Inference Server are more than welcome. To contribute, please review the contribution guidelines. If you have a backend, client, example, or similar contribution that does not modify the core of Triton, you should file a PR in the contrib repo.
We appreciate any feedback, questions, or bug reports regarding this project. When posting issues in GitHub, follow the process outlined in the Stack Overflow document. Ensure posted examples are minimal, complete, and verifiable.
For issues, please use the provided bug report and feature request templates.
For questions, we recommend posting in our community GitHub Discussions.
Please refer to the NVIDIA Developer Triton page for more information.