Project Name | Stars | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description
---|---|---|---|---|---|---|---|---
Flink Learning | 13,198 | 24 days ago | | | | apache-2.0 | Java | Flink learning blog (http://www.54tianzhisheng.cn/) covering Flink basics, concepts, internals, hands-on practice, performance tuning, and source-code analysis. Includes learning examples for Flink Connector, Metrics, Library, DataStream API, and Table API & SQL, plus large real-world Flink project case studies (PV/UV, log storage, real-time deduplication of tens of billions of records, monitoring and alerting). The author also invites readers to support the column "Big Data Real-Time Computing Engine Flink: Practice and Performance Optimization".
Redpanda | 6,356 | 5 hours ago | 343 | April 25, 2021 | 1,242 | | C++ | Redpanda is a streaming data platform for developers. Kafka API compatible. 10x faster. No ZooKeeper. No JVM!
Ksql | 5,448 | 5 hours ago | | | 1,260 | other | Java | The database purpose-built for stream processing applications.
Materialize | 4,922 | 5 hours ago | 2 | August 12, 2022 | 1,853 | other | Rust | Materialize is a fast, distributed SQL database built on streaming internals.
Jocko | 4,516 | a year ago | | | 61 | mit | Go | Kafka implemented in Golang with built-in coordination (no ZooKeeper dependency, single binary install, cloud native).
Liftbridge | 2,413 | a month ago | 67 | September 09, 2022 | 44 | apache-2.0 | Go | Lightweight, fault-tolerant message streams.
Examples | 1,693 | 11 days ago | 573 | July 07, 2022 | 96 | apache-2.0 | Shell | Apache Kafka and Confluent Platform examples and demos.
Flink Streaming Platform Web | 1,550 | 5 days ago | | | 27 | mit | Java | A web platform for real-time stream computing based on Flink.
Killrweather | 1,174 | 6 years ago | | | 23 | apache-2.0 | Scala | KillrWeather is a reference application (work in progress) showing how to easily integrate streaming and batch data processing with Apache Spark Streaming, Apache Cassandra, Apache Kafka and Akka for fast, streaming computations on time series data in asynchronous event-driven environments.
Stream Reactor | 922 | 4 days ago | 1 | December 27, 2018 | 77 | apache-2.0 | Scala | Streaming reference architecture for ETL with Kafka and Kafka Connect. See http://lenses.io for more on a unified solution to manage your connectors, the most advanced SQL engine for Kafka and Kafka Streams, cluster monitoring and alerting, and more.

This is a simple dashboard example built with Kafka and Spark Streaming.
Java 1.8 or newer is required because lambda expressions are used in a few places.
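You can confirm the installed Java version before building (a minimal check; the exact version string varies by vendor):
$ java -version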
First, clone the Git repository:
$ git clone git@github.com:trK54Ylmz/kafka-spark-streaming-example.git
then use Maven to build the uber JAR files:
$ mvn clean package -DskipTests
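If the build succeeds, the uber JARs used in the run steps further below should now exist in each module's target directory (paths taken from the commands later in this guide):
$ ls streaming/target/spark-streaming-0.1.jar producer/target/kafka-producer-0.1.jar web/target/web-0.1.jar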
The JAR files are now built; next, install Kafka and MySQL:
$ wget http://www-us.apache.org/dist/kafka/0.10.1.0/kafka_2.11-0.10.1.0.tgz
$ # or wget http://www-eu.apache.org/dist/kafka/0.10.1.0/kafka_2.11-0.10.1.0.tgz
$ tar -xf kafka_2.11-0.10.1.0.tgz
$ cd kafka_2.11-0.10.1.0
$ nohup ./bin/zookeeper-server-start.sh ./config/zookeeper.properties > /tmp/kafka-zookeeper.out 2>&1 &
$ nohup ./bin/kafka-server-start.sh ./config/server.properties > /tmp/kafka-server.out 2>&1 &
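Optionally, create the Kafka topic up front and check that the broker is reachable. The topic name below is only an example; the topic actually used by the producer and the streaming job is presumably set in config/common.conf:
$ ./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
$ ./bin/kafka-topics.sh --list --zookeeper localhost:2181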
Kafka is ready, so we can continue by installing MySQL:
$ sudo apt-get install mysql-server # for Ubuntu, Debian
$ sudo yum install mysql-server && sudo systemctl start mysqld # for CentOS, RHEL
$ brew install mysql && mysql.server restart # for macOS
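Once the server is up, you should be able to connect with the mysql client (this assumes the default root account; adjust the user and password for your setup):
$ mysql -u root -p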
and finally create the MySQL database and table:
CREATE DATABASE IF NOT EXISTS dashboard_test;
USE dashboard_test;
CREATE TABLE IF NOT EXISTS events (
    market VARCHAR(24) NOT NULL DEFAULT '',
    rate FLOAT DEFAULT NULL,
    dt DATETIME NOT NULL,
    PRIMARY KEY (market, dt)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
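Note that the composite primary key (market, dt) means each market/timestamp pair maps to a single row. A quick sanity check that the schema is in place (run from the shell, assuming the root account):
$ mysql -u root -p dashboard_test -e "DESCRIBE events;"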
1 - Start the Spark Streaming service; it will process events from the Kafka topic into MySQL:
$ cd kafka-spark-streaming-example
$ java -Dconfig=./config/common.conf -jar streaming/target/spark-streaming-0.1.jar
2 - Start the Kafka producer; it will write events to the Kafka topic:
$ java -Dconfig=./config/common.conf -jar producer/target/kafka-producer-0.1.jar
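To verify that events are actually reaching Kafka, you can attach a console consumer (the topic name here is an assumption; use the one configured in config/common.conf):
$ ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning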
3 - Start the web server so you can see the dashboard:
$ java -Dconfig=./config/common.conf -jar web/target/web-0.1.jar
4 - If everything looks fine, open the dashboard address:
$ open http://localhost:8080 # the default port is 8080
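As a final end-to-end check, you can confirm that processed events are landing in MySQL (a minimal query, assuming the root account and the schema created above):
$ mysql -u root -p dashboard_test -e "SELECT market, rate, dt FROM events ORDER BY dt DESC LIMIT 5;"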