Project Name | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description
---|---|---|---|---|---|---|---|---|---|---|---
Ray | 27,844 | 80 | 298 | | 6 hours ago | 87 | July 24, 2023 | 3,423 | apache-2.0 | Python | Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI libraries for accelerating ML workloads.
Gradio | 22,179 | 1 | 124 | | 6 hours ago | 479 | July 26, 2023 | 424 | apache-2.0 | Python | Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work!
Openllm | 6,109 | 2 | | | 11 hours ago | 72 | August 08, 2023 | 54 | apache-2.0 | Python | Operating LLMs in production
Bentoml | 5,698 | 10 | | | 12 hours ago | 110 | August 01, 2023 | 178 | apache-2.0 | Python | Build production-grade AI applications
Fate | 5,200 | 1 | | | 7 days ago | 30 | April 18, 2022 | 786 | apache-2.0 | Python | An industrial-grade federated learning framework
Seldon Core | 3,914 | 6 | | | 9 hours ago | 197 | July 12, 2023 | 107 | apache-2.0 | HTML | An MLOps framework to package, deploy, monitor, and manage thousands of production machine learning models
Orchest | 3,876 | | | | 4 months ago | 19 | December 13, 2022 | 125 | apache-2.0 | TypeScript | Build data pipelines, the easy way 🛠️
Production Level Deep Learning | 3,241 | | | | 2 years ago | | | 6 | | | A guideline for building practical production-level deep learning systems to be deployed in real-world applications
Opyrator | 2,864 | | | | 2 months ago | 11 | May 04, 2021 | 5 | mit | Python | 🪄 Turns your machine learning code into microservices with web API, interactive GUI, and more
Transformer Deploy | 1,475 | | | | 2 months ago | | | 51 | apache-2.0 | Python | Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI libraries for simplifying ML compute:
Learn more about the Ray AI Libraries, or about Ray Core and its key abstractions, in the Ray documentation.
Monitor and debug Ray applications and clusters using the Ray dashboard.
Ray runs on any machine, cluster, cloud provider, and Kubernetes, and features a growing ecosystem of community integrations.
Install Ray with `pip install ray`. For nightly wheels, see the Installation page.
Today's ML workloads are increasingly compute-intensive. As convenient as they are, single-node development environments such as your laptop cannot scale to meet these demands.
Ray is a unified way to scale Python and AI applications from a laptop to a cluster.
With Ray, you can seamlessly scale the same code from a laptop to a cluster. Ray is designed to be general-purpose, meaning it can run any kind of workload with high performance. If your application is written in Python, you can scale it with Ray; no other infrastructure is required.
Platform | Purpose | Estimated Response Time | Support Level
---|---|---|---
Discourse Forum | For discussions about development and questions about usage. | < 1 day | Community
GitHub Issues | For reporting bugs and filing feature requests. | < 2 days | Ray OSS Team
Slack | For collaborating with other Ray users. | < 2 days | Community
StackOverflow | For asking questions about how to use Ray. | 3-5 days | Community
Meetup Group | For learning about Ray projects and best practices. | Monthly | Ray DevRel
 | For staying up-to-date on new features. | Daily | Ray DevRel