Project Name | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language
---|---|---|---|---|---|---|---|---|---|---
Jina | 18,506 | 2 | | | 2 hours ago | 2,019 | July 06, 2022 | 22 | apache-2.0 | Python
🔮 Build multimodal AI services via cloud native technologies | | | | | | | | | |
Recommenders | 15,799 | 2 | | | 3 hours ago | 11 | April 01, 2022 | 165 | mit | Python
Best Practices on Recommendation Systems | | | | | | | | | |
Awesome Kubernetes | 13,893 | | | | 20 days ago | | | 9 | other | Shell
A curated list for awesome kubernetes sources :ship::tada: | | | | | | | | | |
Argo Workflows | 12,983 | 24 | 31 | | 2 hours ago | 423 | June 23, 2022 | 871 | apache-2.0 | Go
Workflow engine for Kubernetes | | | | | | | | | |
Kubeflow | 12,620 | 2 | | | a day ago | 112 | April 13, 2021 | 455 | apache-2.0 | TypeScript
Machine Learning Toolkit for Kubernetes | | | | | | | | | |
Computervision Recipes | 8,950 | | | | 4 months ago | | | 65 | mit | Jupyter Notebook
Best Practices, code samples, and documentation for Computer Vision. | | | | | | | | | |
Metaflow | 6,693 | | | | 3 hours ago | 57 | September 17, 2022 | 270 | apache-2.0 | Python
:rocket: Build and manage real-life data science projects with ease! | | | | | | | | | |
Fate | 5,040 | | | | 6 hours ago | 1 | May 06, 2020 | 734 | apache-2.0 | Python
An Industrial Grade Federated Learning Framework | | | | | | | | | |
Bentoml | 4,979 | 4 | 10 | | 11 hours ago | 72 | July 13, 2021 | 176 | apache-2.0 | Python
Unified Model Serving Framework 🍱 | | | | | | | | | |
Pixie | 4,625 | | | | 17 hours ago | 88 | April 24, 2021 | 231 | apache-2.0 | C++
Instant Kubernetes-Native Application Observability | | | | | | | | | |
KServe provides a Kubernetes Custom Resource Definition for serving machine learning (ML) models on arbitrary frameworks. It aims to solve production model serving use cases by providing performant, high-level interfaces for common ML frameworks such as TensorFlow, XGBoost, Scikit-learn, PyTorch, and ONNX.
It encapsulates the complexity of autoscaling, networking, health checking, and server configuration to bring cutting-edge serving features like GPU autoscaling, scale-to-zero, and canary rollouts to your ML deployments. It enables a simple, pluggable, and complete story for production ML serving, including prediction, pre-processing, post-processing, and explainability. KServe is used across many organizations.
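As a concrete sketch of what this looks like, the manifest below deploys a Scikit-learn model through the `InferenceService` custom resource; the resource name and storage bucket path are illustrative placeholders, not values from this document:

```yaml
# Minimal KServe InferenceService sketch (v1beta1 API).
# "sklearn-iris" and the storageUri are illustrative placeholders.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris
spec:
  predictor:
    sklearn:
      # Model artifact in object storage, loaded by the predictor at startup
      storageUri: gs://example-bucket/models/sklearn/iris
```

Applying a manifest like this with `kubectl apply -f` creates a predictor service; the KServe controller then provisions the networking and autoscaling (including scale-to-zero) described above and exposes a prediction endpoint for the model.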
For more details, visit the KServe website.
As of version 0.7, KFServing has been rebranded to KServe. The RTS release 0.6.x is still supported; please refer to the corresponding release branch for its documentation.
To learn more about KServe, how to use its various supported features, and how to participate in the KServe community, please follow the KServe website documentation. Additionally, we have compiled a list of presentations and demos that dive into the details.
KServe is an important add-on component of Kubeflow. Learn more from the Kubeflow KServe documentation, and follow KServe with Kubeflow on AWS to learn how to use KServe on AWS.