UI for Apache Kafka is a free, open-source tool curated by Provectus and built and supported by the open-source community. It will remain free and open-source: Provectus has no plans to add paid features or subscription plans, so that everyone can have a better experience observing their data. UI for Apache Kafka is part of the Provectus NextGen Data Platform, so check it out! Also, learn more about Professional Services for Apache Kafka if you want to start handling your Kafka clusters and streaming apps with the help of Provectus Kafka experts.
UI for Apache Kafka is a simple tool that makes your data flows observable, helps you find and troubleshoot issues faster, and helps you deliver optimal performance. Its lightweight dashboard makes it easy to track key metrics of your Kafka clusters: Brokers, Topics, Partitions, Production, and Consumption.
Set up UI for Apache Kafka with just a couple of easy commands to visualize your Kafka data in a comprehensible way. You can run the tool locally or in the cloud.
UI for Apache Kafka wraps major functions of Apache Kafka with an intuitive user interface.
UI for Apache Kafka makes it easy to create topics right in your browser in just a few clicks: set your own parameters, and view the new topic in the list.
For more convenient navigation, it's possible to jump from the connectors view to the corresponding topics, and from a topic to its consumers (back and forth), as well as to a connector's overview and a topic's settings.
Let's say we want to produce messages for a topic. With UI for Apache Kafka, we can write messages to Kafka topics without effort: specify the parameters, send the message, and view it in the list.
Three schema types are supported: Avro®, JSON Schema, and Protobuf.
Before producing Avro-encoded messages, you have to add an Avro schema for the topic in Schema Registry. With UI for Apache Kafka, all of these steps take just a few clicks in a user-friendly interface.
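For illustration, a minimal Avro value schema registered for such a topic might look like the following (the record and field names are placeholders, not a schema from this project):

```json
{
  "type": "record",
  "name": "Message",
  "fields": [
    { "name": "id", "type": "string" },
    { "name": "payload", "type": ["null", "string"], "default": null }
  ]
}
```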
To run UI for Apache Kafka, you can use a pre-built Docker image or build it locally.
We have plenty of docker-compose files as examples. They're built for various configuration stacks.
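As an illustration, a minimal docker-compose file for running the UI against an existing broker might look like this (a sketch assuming a Kafka broker is reachable at kafka:9092 inside the compose network; the bundled example files cover more complete stacks):

```yaml
version: '2'
services:
  kafka-ui:
    image: provectuslabs/kafka-ui:latest
    ports:
      - "8080:8080"
    environment:
      # cluster display name shown in the UI (assumed value)
      - KAFKA_CLUSTERS_0_NAME=local
      # broker address inside the compose network (assumed value)
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
```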
Example of how to configure clusters in the application-local.yml configuration file:
```yaml
kafka:
  clusters:
    - name: local
      bootstrapServers: localhost:29091
      schemaRegistry: http://localhost:8085
      schemaRegistryAuth:
        username: username
        password: password
      # schemaNameTemplate: "%s-value"
      jmxPort: 9997
    -
```
- name: cluster name
- bootstrapServers: where to connect
- schemaRegistry: Schema Registry's address
- schemaRegistryAuth.username: Schema Registry's basic authentication username
- schemaRegistryAuth.password: Schema Registry's basic authentication password
- schemaNameTemplate: how keys are saved to Schema Registry
- jmxPort: open JMX port of a broker
- readOnly: enable read-only mode
Configure as many clusters as you need by adding their configs below, separated with `-`.
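For example, two clusters configured side by side could look like this (the second cluster's name and address are placeholders):

```yaml
kafka:
  clusters:
    - name: local
      bootstrapServers: localhost:29091
    # a second, hypothetical cluster entry
    - name: secondLocal
      bootstrapServers: localhost:29092
```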
The official Docker image for UI for Apache Kafka is hosted here: hub.docker.com/r/provectuslabs/kafka-ui.
Launch Docker container in the background:
```sh
docker run -p 8080:8080 \
	-e KAFKA_CLUSTERS_0_NAME=local \
	-e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092 \
	-d provectuslabs/kafka-ui:latest
```
If you prefer to use docker-compose, please refer to the documentation.
The Helm chart can be found under the charts/kafka-ui directory.
Quick-start instructions are available here.
The liveness and readiness endpoint is at
The info endpoint (build info) is located at
Alternatively, each variable of the .yml file can be set with an environment variable.
For example, if you want to use an environment variable to set the name parameter, you can write it like this:
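As an illustration, the first cluster's name maps to the environment variable below, following the same KAFKA_CLUSTERS_0_* convention used in the Docker example earlier (the value `local` is an assumed placeholder):

```sh
KAFKA_CLUSTERS_0_NAME=local
```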
| Name | Description |
|------|-------------|
| | Setting log level (trace, debug, info, warn, error). Default: info |
| | Setting log level (trace, debug, info, warn, error). Default: debug |
| | Port for the embedded server. Default: |
| | Kafka API timeout in ms. Default: |
| | Address where to connect |
| | KSQL DB server address |
| | Security protocol to connect to the brokers. For an SSL connection, use "SSL"; for a plaintext connection, don't set this environment variable |
| | Schema Registry's basic authentication username |
| | Schema Registry's basic authentication password |
| | How keys are saved to Schema Registry |
| | Open JMX port of a broker |
| | Enable read-only mode. Default: false |
| | Disable collecting segments information. Should be true for Confluent Cloud. Default: false |
| | Given name for the Kafka Connect cluster |
| | Address of the Kafka Connect service endpoint |
| | Kafka Connect cluster's basic authentication username |
| | Kafka Connect cluster's basic authentication password |
| | Enable SSL for JMX (true/false) |
| | Username for JMX authentication |
| | Password for JMX authentication |
| | Time delay between topic deletion and topic creation attempts for topic-recreate functionality. Default: 1 |
| | Number of topic creation attempts after topic deletion for topic-recreate functionality. Default: 15 |
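Putting a few of these settings together, a read-only UI instance pointing at one cluster might be configured with environment variables along these lines (a sketch only: the variable names below follow the KAFKA_CLUSTERS_0_* convention used earlier, but the exact names for each table row are assumptions, and the addresses are placeholders):

```sh
# display name and broker address for the first cluster (placeholder values)
KAFKA_CLUSTERS_0_NAME=local
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=localhost:9092
# assumed variable name for the read-only-mode setting from the table above
KAFKA_CLUSTERS_0_READONLY=true
```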