OpenPAI v1.8.1 has been released!
With the release of v1.0, OpenPAI is switching to a more robust, powerful, and lightweight architecture. OpenPAI is also becoming increasingly modular, so that the platform can be easily customized and extended to suit new needs. OpenPAI also provides many user-friendly AI features, making it easier for end users and administrators to complete daily AI tasks.
The platform incorporates a mature design with a proven track record in Microsoft's large-scale production environment.
OpenPAI is a full-stack solution. OpenPAI not only supports on-premises, hybrid, and public-cloud deployment, but also supports single-box deployment for trial users.
It offers pre-built Docker images for popular AI frameworks, makes it easy to include heterogeneous hardware, and supports distributed training, such as distributed TensorFlow.
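OpenPAI jobs are described with a YAML protocol. As a hedged sketch of what a distributed TensorFlow job with separate parameter-server and worker task roles might look like, the image tag, script name, and resource numbers below are all illustrative rather than prescriptive:

```yaml
protocolVersion: 2
name: distributed_tf_example
type: job
prerequisites:
  - type: dockerimage
    name: tf_image
    uri: openpai/standard:python_3.6-tensorflow_1.15.0-gpu  # illustrative image tag
taskRoles:
  ps:                          # parameter servers
    instances: 1
    dockerImage: tf_image
    resourcePerInstance:
      cpu: 4
      memoryMB: 8192
      gpu: 0
    commands:
      - python train.py --role=ps      # train.py is a hypothetical entry point
  worker:                      # training workers
    instances: 2
    dockerImage: tf_image
    resourcePerInstance:
      cpu: 4
      memoryMB: 8192
      gpu: 1
    commands:
      - python train.py --role=worker
```

Each task role gets its own Docker image, resource quota, and command list, which is what lets heterogeneous roles of one distributed job coexist in a single submission.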
OpenPAI is a complete solution for deep learning: it supports virtual clusters, is compatible with the Kubernetes ecosystem, provides a complete training pipeline within one cluster, and more. OpenPAI is architected in a modular way: different modules can be plugged in as appropriate. Here is the architecture of OpenPAI, highlighting the technical innovations of the platform.
OpenPAI manages computing resources and is optimized for deep learning. Through Docker technology, the computing hardware is decoupled from the software, making it easy to run distributed jobs, switch between different deep learning frameworks, or run other kinds of jobs in consistent environments.
As OpenPAI is a platform, there are typically two different roles: cluster users and cluster administrators.
OpenPAI provides end-to-end manuals for both cluster users and administrators.
The admin manual is a comprehensive guide for cluster administrators; it covers (but is not limited to) the following topics:
If you are considering an upgrade from an older version to the latest v1.0.0, please refer to the table below for a brief comparison between v0.14.0 and v1.0.0. More detail about upgrade considerations can be found in the upgrade guide.
| | v0.14.0 | v1.0.0 |
|---|---|---|
| Architecture | Kubernetes + Hadoop YARN | Kubernetes |
| Scheduler | YARN Scheduler | HiveD / K8S default |
| Job Orchestrating | YARN Framework Launcher | Framework Controller |
| RESTful API | v1 + v2 | pure v2 |
| Storage | Team-wise storage plugin | PV/PVC storage sharing |
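The PV/PVC storage sharing in v1.0.0 builds on standard Kubernetes primitives. As an illustration of the underlying mechanism (the NFS server address, path, and capacity below are hypothetical), an NFS share can be exposed to the cluster through a PersistentVolume and a matching PersistentVolumeClaim:

```yaml
# Hypothetical NFS share exposed as a PersistentVolume.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-data-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany          # shared read-write across many pods
  nfs:
    server: 10.0.0.1         # hypothetical NFS server address
    path: /data
---
# Claim that binds to the volume above; jobs reference the claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
```

Because the claim, not the volume, is what jobs consume, administrators can swap the backing storage without changing user-facing configuration.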
Basic cluster management. Through the web portal and the command-line tool
paictl, administrators can perform cluster management tasks, such as adding (or removing) nodes, monitoring nodes and services, and setting up storage and permission control.
Users and groups management. Administrators can manage users and groups easily.
Alerts management. Administrators can customize alert rules and actions.
Customization. Administrators can customize the cluster with plugins. Administrators can also upgrade (or downgrade) a single component (e.g., the REST server) to address customized application demands.
The user manual is a guide for cluster users, who can train and serve deep learning (and other) workloads on OpenPAI.
Job submission and monitoring. The quick start tutorial is a good starting point for learning how to train models on OpenPAI. More examples and support for multiple mainstream frameworks (with out-of-the-box Docker images) can be found here. OpenPAI also provides support for good debuggability and advanced job functionalities.
Data management. Users can use cluster-provisioned storage and custom storage in their jobs. Cluster-provisioned storage is well integrated and easy to configure in a job (refer to here).
Collaboration and sharing. OpenPAI provides facilities for collaboration within teams and organizations. Cluster-provisioned storage is organized by teams (groups), and users can easily share their work (e.g., jobs) in the marketplace, where others can discover and reproduce (clone) it with one click.
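Cluster-provisioned (team) storage is typically referenced from the job's YAML rather than configured inside the container. A hedged sketch of such a fragment follows; the storage name `confignfs` and the mount path are assumed examples of names an administrator would have provisioned, and the exact fields may vary by OpenPAI version:

```yaml
# Fragment of a job's YAML: request an admin-provisioned storage.
extras:
  storages:
    - name: confignfs    # storage name configured by the administrator (assumed)
      mountPath: /data   # where the storage appears inside the job container
```

With such a fragment in place, every instance of the job sees the shared data under the same mount path, which is what makes team-level sharing and reproduction straightforward.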
Besides the web portal, OpenPAI provides a VS Code extension and a command-line tool (preview). The VS Code extension is a friendly, GUI-based client tool for OpenPAI, and it is highly recommended. It can submit jobs, simulate jobs locally, manage multiple OpenPAI environments, and so on.
Since the v1.0.0 release, OpenPAI has adopted a more modularized component design and re-organized the code structure into one main repo together with 7 standalone key component repos. pai is the main repo, and the 7 component repos are:
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
We are working on a set of major feature improvements and refactorings; anyone who is familiar with these features is encouraged to join the design review and discussion in the corresponding issue ticket.
One key purpose of OpenPAI is to support the highly diversified requirements from academia and industry. OpenPAI is completely open: it is released under the MIT license. This makes OpenPAI particularly attractive for evaluating various research ideas, which include but are not limited to the components.
OpenPAI operates in an open model. It was initially designed and developed by Microsoft Research (MSR) and the Microsoft Software Technology Center Asia (STCA) platform team. We are glad to have Peking University, Xi'an Jiaotong University, Zhejiang University, University of Science and Technology of China, and SHANGHAI INESA AI INNOVATION CENTER (SHAIIC) join us in developing the platform. Contributions from academia and industry are all highly welcome.