Prometheus Operator

Prometheus Operator creates/configures/manages Prometheus clusters atop Kubernetes



The Prometheus Operator provides Kubernetes native deployment and management of Prometheus and related monitoring components. The purpose of this project is to simplify and automate the configuration of a Prometheus based monitoring stack for Kubernetes clusters.

The Prometheus operator includes, but is not limited to, the following features:

  • Kubernetes Custom Resources: Use Kubernetes custom resources to deploy and manage Prometheus, Alertmanager, and related components.

  • Simplified Deployment Configuration: Configure the fundamentals of Prometheus like versions, persistence, retention policies, and replicas from a native Kubernetes resource.

  • Prometheus Target Configuration: Automatically generate monitoring target configurations based on familiar Kubernetes label queries; no need to learn a Prometheus specific configuration language.
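
As an illustration of the simplified deployment configuration, a minimal Prometheus custom resource might look like the following sketch (the field values and label names are placeholders, not recommendations):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  version: v2.36.0        # pin a specific Prometheus version (placeholder)
  replicas: 2             # run two replicas for availability
  retention: 24h          # keep samples for one day
  serviceAccountName: prometheus   # assumes this ServiceAccount already exists
  serviceMonitorSelector:
    matchLabels:
      team: frontend      # only pick up ServiceMonitors labeled team: frontend
```

Applying this resource is all it takes; the operator translates it into the underlying StatefulSet, ConfigMaps, and Secrets.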

For an introduction to the Prometheus Operator, see the getting started guide.

Project Status

The operator itself is considered production-ready. Please refer to the Custom Resource Definition (CRD) versions for the status of each CRD:

  • stable: the CRDs and API are stable; changes are made in a backward-compatible way.
  • beta: the CRDs and API may change, but the team is focused on avoiding it. We encourage usage in production for users that accept the risk of breaking changes.
  • alpha: the CRDs and API may change frequently, and we suggest avoiding their usage in mission-critical environments.

Prometheus Operator vs. kube-prometheus vs. community helm chart

Prometheus Operator

The Prometheus Operator uses Kubernetes custom resources to simplify the deployment and configuration of Prometheus, Alertmanager, and related monitoring components.


kube-prometheus

kube-prometheus provides example configurations for a complete cluster monitoring stack based on Prometheus and the Prometheus Operator. This includes deployment of multiple Prometheus and Alertmanager instances, metrics exporters such as the node_exporter for gathering node metrics, scrape target configuration linking Prometheus to various metrics endpoints, and example alerting rules for notification of potential issues in the cluster.

helm chart

The prometheus-community/kube-prometheus-stack helm chart provides a similar feature set to kube-prometheus. This chart is maintained by the Prometheus community. For more information, please see the chart's readme.


Prerequisites

Version >=0.39.0 of the Prometheus Operator requires a Kubernetes cluster of version >=1.16.0. If you are just starting out with the Prometheus Operator, it is highly recommended to use the latest version.

If you have an older version of Kubernetes and the Prometheus Operator running, we recommend upgrading Kubernetes first and then the Prometheus Operator.


CustomResourceDefinitions

A core feature of the Prometheus Operator is to monitor the Kubernetes API server for changes to specific objects and ensure that the current Prometheus deployments match these objects. The Operator acts on the following Custom Resource Definitions (CRDs):

  • Prometheus, which defines a desired Prometheus deployment.

  • PrometheusAgent, which defines a desired Prometheus deployment, but running in Agent mode.

  • Alertmanager, which defines a desired Alertmanager deployment.

  • ThanosRuler, which defines a desired Thanos Ruler deployment.

  • ServiceMonitor, which declaratively specifies how groups of Kubernetes services should be monitored. The Operator automatically generates Prometheus scrape configuration based on the current state of the objects in the API server.

  • PodMonitor, which declaratively specifies how groups of pods should be monitored. The Operator automatically generates Prometheus scrape configuration based on the current state of the objects in the API server.

  • Probe, which declaratively specifies how groups of ingresses or static targets should be monitored. The Operator automatically generates Prometheus scrape configuration based on the definition.

  • ScrapeConfig, which declaratively specifies scrape configurations to be added to Prometheus. This CustomResourceDefinition helps with scraping resources outside the Kubernetes cluster.

  • PrometheusRule, which defines a desired set of Prometheus alerting and/or recording rules. The Operator generates a rule file, which can be used by Prometheus instances.

  • AlertmanagerConfig, which declaratively specifies subsections of the Alertmanager configuration, allowing routing of alerts to custom receivers, and setting inhibit rules.
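
To make the label-query mechanism concrete, here is a sketch of a ServiceMonitor that selects all Services carrying a hypothetical app: example-app label and scrapes their web port (all names here are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    team: frontend        # lets a Prometheus resource select this monitor
spec:
  selector:
    matchLabels:
      app: example-app    # matches Services labeled app: example-app
  endpoints:
  - port: web             # named port on the Service to scrape
    interval: 30s         # scrape every 30 seconds
```

A Prometheus resource whose serviceMonitorSelector matches the team: frontend label would pick this up and generate the corresponding scrape configuration automatically.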

The Prometheus operator automatically detects changes in the Kubernetes API server to any of the above objects, and ensures that matching deployments and configurations are kept in sync.
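
For instance, routing alerts to a custom receiver can be expressed with an AlertmanagerConfig resource along these lines (the receiver name and webhook URL are placeholders):

```yaml
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: example-routing
  labels:
    alertmanagerConfig: example   # lets an Alertmanager resource select this config
spec:
  route:
    receiver: webhook
    groupBy: ['job']
  receivers:
  - name: webhook
    webhookConfigs:
    - url: http://example-webhook.default.svc:8080/   # hypothetical receiver endpoint
```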

To learn more about the CRDs introduced by the Prometheus Operator have a look at the design page.

To automate the validation of your CRD configuration files, see the linting page.

Dynamic Admission Control

To prevent invalid Prometheus alerting and recording rules from causing failures in a deployed Prometheus instance, an admission webhook is provided to validate PrometheusRule resources upon initial creation or update.
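
As an example of the kind of resource the webhook validates, a minimal PrometheusRule with one alerting rule might look like this (the rule name and threshold are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-rules
spec:
  groups:
  - name: example.rules
    rules:
    - alert: InstanceDown
      expr: up == 0       # the webhook rejects rules whose PromQL does not parse
      for: 5m
      labels:
        severity: warning
```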

For more information on this feature, see the user guide.


Quickstart

Note: this quickstart does not provision an entire monitoring stack; if that is what you are looking for, see the kube-prometheus project. If you want the whole stack, but have already applied the bundle.yaml, delete the bundle first (kubectl delete -f bundle.yaml).

To quickly try out just the Prometheus Operator inside a cluster, choose a release and run the following command:

kubectl create -f bundle.yaml

Note: make sure to adapt the namespace in the ClusterRoleBinding if deploying in a namespace other than the default namespace.
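
For example, the bundle's ClusterRoleBinding binds the operator's ServiceAccount in the default namespace; when deploying to another namespace (say monitoring), the subject must point there instead. The names below follow the bundle's conventions but may differ between releases:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-operator
subjects:
- kind: ServiceAccount
  name: prometheus-operator
  namespace: monitoring   # change from "default" to your target namespace
```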

To run the Operator outside of a cluster:

scripts/ <kubectl cluster name>


Removal

To remove the operator and Prometheus, first delete any custom resources you created in each namespace. The operator will automatically shut down and remove the Prometheus and Alertmanager pods, and the associated ConfigMaps.

for n in $(kubectl get namespaces -o jsonpath={.items[*].metadata.name}); do
  kubectl delete --all --namespace=$n prometheus,servicemonitor,podmonitor,alertmanager
done

After a couple of minutes you can go ahead and remove the operator itself.

kubectl delete -f bundle.yaml

The operator automatically creates services in each namespace where you created Prometheus or Alertmanager resources, and defines several custom resource definitions. You can clean these up now.

for n in $(kubectl get namespaces -o jsonpath={.items[*].metadata.name}); do
  kubectl delete --ignore-not-found --namespace=$n service prometheus-operated alertmanager-operated
done

kubectl delete --ignore-not-found customresourcedefinitions \
  prometheuses.monitoring.coreos.com \
  servicemonitors.monitoring.coreos.com \
  podmonitors.monitoring.coreos.com \
  alertmanagers.monitoring.coreos.com \
  prometheusrules.monitoring.coreos.com



Development

  • golang environment
  • docker (used for creating container images, etc.)
  • kind (optional)


Testing

Ensure that you're running tests in the following path: $GOPATH/src/ as tests expect paths to match. If you're working from a fork, just add the forked repo as a remote and pull against your local prometheus-operator checkout before running tests.

Running unit tests:

make test-unit

Running end-to-end tests on local kind cluster:

  1. kind create cluster --image=kindest/node:<latest> (e.g. a v1.23.0 node image).

Note: In case you are running kind on macOS using podman, it is recommended to create the podman machine with 4 CPUs and 8 GiB of memory. Fewer resources might cause the end-to-end tests to fail due to a lack of resources in the cluster.

podman machine init --cpus=4 --memory=8192 --rootful --now

  2. kubectl cluster-info --context kind-kind (requires kind version >= 0.6.x).
  3. make image - build the Prometheus Operator docker image locally.

Note: In case you are running kind using podman, step 3 won't work for you. You will need to override the container CLI variable when invoking the Makefile:

CONTAINER_CLI=podman make image

  4. Publish the locally built images so that they are accessible inside kind:

    for n in "prometheus-operator" "prometheus-config-reloader" "admission-webhook"; do kind load docker-image "$n:$(git rev-parse --short HEAD)"; done;

Note: In case you are running kind using podman, the docker-image command won't work. You need to use image archives instead:

for n in "prometheus-operator" "prometheus-config-reloader" "admission-webhook"; do podman save --quiet -o images/$n.tar "$n:$(git rev-parse --short HEAD)"; kind load image-archive images/$n.tar; done

  5. make test-e2e

Running end-to-end tests on local minikube cluster:

  1. minikube start --kubernetes-version=stable --memory=4096 --extra-config=apiserver.authorization-mode=RBAC
  2. eval $(minikube docker-env) && make image - build the Prometheus Operator docker image using minikube's docker daemon.
  3. make test-e2e




Security

If you find a security vulnerability related to the Prometheus Operator, please do not report it by opening a GitHub issue, but instead send an e-mail to the maintainers of the project found in the file.


Troubleshooting

Check the troubleshooting documentation for common issues and frequently asked questions (FAQ).


Acknowledgements

The prometheus-operator organization logo was created and contributed by Bianca Cheng Costanzo.
