Polyaxon

MLOps Tools For Managing & Orchestrating The Machine Learning LifeCycle

Reproduce, Automate, Scale your data science

Welcome to Polyaxon, a platform for building, training, and monitoring large-scale deep learning applications. We are building a system to solve reproducibility, automation, and scalability for machine learning applications.

Polyaxon deploys into any data center or cloud provider, or can be hosted and managed by Polyaxon, and it supports all major deep learning frameworks such as TensorFlow, MXNet, Caffe, and Torch.

Polyaxon makes it faster, easier, and more efficient to develop deep learning applications by managing workloads with smart container and node management. It turns GPU servers into shared, self-service resources for your team or organization.




Install

TL;DR;

  • Install CLI

    # Install Polyaxon CLI
    $ pip install -U polyaxon
    
  • Create a deployment

    # Create a namespace
    $ kubectl create namespace polyaxon
    
    # Add Polyaxon charts repo
    $ helm repo add polyaxon https://charts.polyaxon.com
    
    # Deploy Polyaxon
    $ polyaxon admin deploy -f config.yaml
    
    # Access API
    $ polyaxon port-forward
    

Please check the Polyaxon installation guide for more details.
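
The polyaxon admin deploy step above reads a deployment config file (config.yaml). A minimal sketch is shown below; the keys are illustrative assumptions based on typical Polyaxon deployment configs, and the installation guide is the authoritative reference for your version:

    # config.yaml -- minimal deployment config sketch (keys are illustrative)
    deploymentChart: platform   # assumption: deploying the full platform chart
    namespace: polyaxon         # must match the namespace created above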

Quick start

TL;DR;

  • Start a project

    # Create a project
    $ polyaxon project create --name=quick-start --description='Polyaxon quick start.'
    
  • Train and track logs & resources

    # Upload code and start experiments
    $ polyaxon run -f experiment.yaml -u -l
    
  • Dashboard

    # Start Polyaxon dashboard
    $ polyaxon dashboard
    
    Dashboard page will now open in your browser. Continue? [Y/n]: y
    



  • Notebook
    # Start Jupyter notebook for your project
    $ polyaxon run --hub notebook
    



  • Tensorboard
    # Start TensorBoard for a run's output
    $ polyaxon run --hub tensorboard -P uuid=UUID
    



Please check our quick start guide to start training your first experiment.
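
The polyaxon run step above expects a polyaxonfile such as experiment.yaml. A minimal sketch, assuming the v1.1 component schema and a hypothetical train.py script uploaded alongside it with the -u flag:

    # experiment.yaml -- minimal polyaxonfile sketch (hypothetical image and script)
    version: 1.1
    kind: component
    name: quick-start-experiment
    run:
      kind: job
      container:
        image: python:3.9                  # hypothetical base image
        command: ["python", "train.py"]    # hypothetical training script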

Distributed job

Polyaxon supports and simplifies distributed jobs. Depending on the framework you are using, you need to deploy the corresponding operator, adapt your code to enable distributed training, and update your polyaxonfile.

As an example, a distributed TensorFlow job can be described directly in the polyaxonfile.
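
A minimal sketch, assuming the v1.1 schema, Polyaxon's tfjob runtime, and a hypothetical image and training script; the TensorFlow training operator must already be deployed on the cluster:

    # Distributed TFJob sketch (hypothetical image and script)
    version: 1.1
    kind: component
    run:
      kind: tfjob
      worker:
        replicas: 2                              # two worker replicas
        container:
          image: my-tf-image                     # hypothetical image with the training code
          command: ["python", "train_dist.py"]   # hypothetical distributed training script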

Hyperparameters tuning

Polyaxon has a concept for suggesting hyperparameters and managing their results, very similar to Google Vizier, called experiment groups. An experiment group in Polyaxon defines a search algorithm, a search space, and a model to train.
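
A minimal sketch of a grid-search operation, assuming the v1.1 schema and a hypothetical training image and script; the matrix section defines the search algorithm and search space, and the inline component defines the model to train:

    # Grid search over two hyperparameters (hypothetical image and script)
    version: 1.1
    kind: operation
    matrix:
      kind: grid                    # search algorithm
      params:                       # search space
        lr:
          kind: choice
          value: [0.001, 0.01, 0.1]
        dropout:
          kind: choice
          value: [0.2, 0.5]
    component:
      inputs:
        - name: lr
          type: float
        - name: dropout
          type: float
      run:
        kind: job
        container:
          image: my-training-image  # hypothetical
          command: ["python", "train.py", "--lr={{ lr }}", "--dropout={{ dropout }}"]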

Parallel executions

You can run your processing or model training jobs in parallel; Polyaxon provides a mapping abstraction to manage concurrent jobs.
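
A minimal sketch of the mapping abstraction, assuming the v1.1 schema and a hypothetical processing image and script; each entry in values produces one job, and concurrency caps how many run at the same time:

    # Run the same processing job over several inputs in parallel (hypothetical image and script)
    version: 1.1
    kind: operation
    matrix:
      kind: mapping
      concurrency: 2                 # at most two jobs at a time
      values:
        - partition: part-1
        - partition: part-2
        - partition: part-3
    component:
      inputs:
        - name: partition
          type: str
      run:
        kind: job
        container:
          image: my-processing-image # hypothetical
          command: ["python", "process.py", "--partition={{ partition }}"]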

DAGs and workflows

Polyaxon DAGs is a tool that provides a container-native engine for running machine learning pipelines. A DAG manages multiple operations with dependencies. Each operation is defined by a component runtime. This means that operations in a DAG can be jobs, services, distributed jobs, parallel executions, or nested DAGs.
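
A minimal sketch of a DAG, assuming the v1.1 schema and hypothetical component references; the train operation only starts once preprocess succeeds:

    # Two jobs wired as a DAG (component references are hypothetical)
    version: 1.1
    kind: component
    run:
      kind: dag
      operations:
        - name: preprocess
          hubRef: my-org/preprocess   # hypothetical component
        - name: train
          hubRef: my-org/train        # hypothetical component
          dependencies: [preprocess]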

Architecture

Polyaxon architecture

Documentation

Check out our documentation to learn more about Polyaxon.

Dashboard

Polyaxon comes with a dashboard that shows the projects and experiments created by you and your team members.

To start the dashboard, run the following command in your terminal:

$ polyaxon dashboard -y

Project status

Polyaxon is stable and it's running in production at many startups and Fortune 500 companies.

Contributions

Please follow the contribution guidelines: Contribute to Polyaxon.

Research

If you use Polyaxon in your academic research, we would be grateful if you could cite it.

Feel free to contact us; we would love to learn about your project and see how we can support your custom needs.
