What is Argo Workflows?
Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. Argo
Workflows is implemented as a Kubernetes CRD (Custom Resource Definition).
- Define workflows where each step in the workflow is a container.
- Model multi-step workflows as a sequence of tasks or capture the dependencies between tasks using a directed acyclic graph (DAG).
- Easily run compute-intensive jobs for machine learning or data processing in a fraction of the time using Argo Workflows on Kubernetes.
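As a sketch of the container-per-step model described above, a minimal Workflow manifest might look like the following (the template name, image, and generateName follow the upstream hello-world example; adapt them to your own containers):

```yaml
# A minimal Argo Workflow: one step, one container.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-    # the controller appends a random suffix
spec:
  entrypoint: whalesay          # the template to run first
  templates:
  - name: whalesay
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["hello world"]
```

Because a Workflow is just a Kubernetes custom resource, it can be created with kubectl or the argo CLI like any other manifest.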
Argo is a Cloud Native Computing Foundation (CNCF) hosted project.
Use Cases
- Machine Learning pipelines
- Data and batch processing
- Infrastructure automation
Why Argo Workflows?
- Argo Workflows is the most popular workflow execution engine for Kubernetes.
- It can run thousands of workflows a day, each with thousands of concurrent tasks.
- Our users say it is lighter-weight, faster, more powerful, and easier to use.
- Designed from the ground up for containers without the overhead and limitations of legacy VM and server-based environments.
- Cloud agnostic and can run on any Kubernetes cluster.
Read what people said in our latest survey
Try Argo Workflows
Access the demo environment (log in using GitHub)
Argo Workflows Catalog
Check out our Java, Golang and Python SDKs.
Quick start:
kubectl create namespace argo
kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/stable/manifests/install.yaml
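Once the controller is installed, a typical next step is submitting an example workflow with the argo CLI. A sketch, assuming the CLI is installed locally and the upstream example URL is still current:

```shell
# Submit the canonical hello-world example and watch it run to completion
argo submit -n argo --watch https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-world.yaml

# List workflows in the namespace, then fetch logs from the most recent one
argo list -n argo
argo logs -n argo @latest
```

These commands require a running Kubernetes cluster with Argo Workflows installed, so outputs will vary by environment.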
Who uses Argo Workflows?
Official Argo Workflows user list
Features
- UI to visualize and manage Workflows
- Artifact support (S3, Artifactory, Alibaba Cloud OSS, HTTP, Git, GCS, raw)
- Workflow templating to store commonly used Workflows in the cluster
- Archiving Workflows after executing for later access
- Scheduled workflows using cron
- Server interface with REST API (HTTP and GRPC)
- DAG or Steps based declaration of workflows
- Step level input & outputs (artifacts/parameters)
- Timeouts (step & workflow level)
- Retry (step & workflow level)
- Resubmit (memoized)
- Suspend & Resume
- K8s resource orchestration
- Exit Hooks (notifications, cleanup)
- Garbage collection of completed workflows
- Scheduling (affinity/tolerations/node selectors)
- Volumes (ephemeral/existing)
- Parallelism limits
- Daemoned steps
- DinD (docker-in-docker)
- Script steps
- Event emission
- Prometheus metrics
- Multiple executors
- Multiple pod and workflow garbage collection strategies
- Automatically calculated resource usage per step
- Java/Golang/Python SDKs
- Pod Disruption Budget support
- Single-sign on (OAuth2/OIDC)
- Webhook triggering
- Out-of-the-box and custom Prometheus metrics
- Windows container support
- Embedded widgets
- Multiplex log viewer
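To illustrate the DAG-based declaration mentioned in the feature list, a sketch of the classic diamond-shaped dependency graph (A fans out to B and C, which both feed D; names and images follow the upstream dag-diamond example):

```yaml
# A DAG workflow: tasks B and C run in parallel after A; D waits for both.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-diamond-
spec:
  entrypoint: diamond
  templates:
  - name: echo                       # reusable step: print its input parameter
    inputs:
      parameters:
      - name: message
    container:
      image: alpine:3.7
      command: [echo, "{{inputs.parameters.message}}"]
  - name: diamond
    dag:
      tasks:
      - name: A
        template: echo
        arguments:
          parameters: [{name: message, value: A}]
      - name: B
        dependencies: [A]            # B starts only after A succeeds
        template: echo
        arguments:
          parameters: [{name: message, value: B}]
      - name: C
        dependencies: [A]
        template: echo
        arguments:
          parameters: [{name: message, value: C}]
      - name: D
        dependencies: [B, C]         # D joins the two parallel branches
        template: echo
        arguments:
          parameters: [{name: message, value: D}]
```

The same four steps could also be written with the Steps syntax as a list of sequential groups; DAG form is usually preferred when the dependency structure is not strictly linear.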
Community Meetings
We host monthly community meetings where we and the community showcase demos and discuss the current and future state of
the project. Feel free to join us! For Community Meeting information, minutes and recordings
please see here.
Participation in the Argo Workflows project is governed by
the CNCF Code of Conduct.
Community Blogs and Presentations