Project Name | Description | Stars | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language
---|---|---|---|---|---|---|---|---
Cqrs Manager For Distributed Reactive Services | Experimental CQRS and Event Sourcing service | 278 | 4 years ago | | | 4 | apache-2.0 | Java
Kukulcan | A REPL for Apache Kafka | 91 | 2 years ago | | | | apache-2.0 | Scala
Apps | | 30 | 2 months ago | 73 | February 01, 2023 | 8 | | Scala
Thimble | Clojure sandbox for Streaming Data Platforms | 19 | 2 years ago | 12 | February 09, 2020 | | epl-2.0 | Clojure
Messenjer | A simple project to demonstrate building a streaming messaging app with Clojure, Kafka, Re-Frame, Pedestal, and Component; corresponds to a walkthrough published on LambdaStew.com | 8 | 5 years ago | | | 1 | apache-2.0 | Clojure
Franzy Examples | Examples and more for Franzy, a suite of Kafka libraries including producers, consumers, admin, validation, config, and more | 8 | 7 years ago | | | 2 | epl-1.0 | Clojure
Rotera | Persistent rotating queue | 8 | 4 years ago | | | 5 | other | Haskell
Streampunk | Polyglot Programmable Shell for Kafka | 7 | 10 months ago | | | 1 | apache-2.0 | Java
Scoop | Speaking Kafka's protocol from the Clojure REPL | 7 | 5 years ago | | | | | Clojure
Kc Repl | An interactive, command line tool for exploring Kafka clusters | 3 | 6 months ago | | | 10 | epl-1.0 | Clojure
Example of Summingbird in hybrid mode.
I'm currently improving it to be used as a production-ready bootstrap for any job.
See the logged issues for planned improvements, and feel free to add your own!
Download Kafka, unzip it anywhere, and follow the quickstart instructions up to step 4: http://kafka.apache.org/07/quickstart.html
To test it, run a producer and a consumer in two separate shell windows and type anything into the producer window; whatever you send should appear in the consumer window.
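For reference, in Kafka 0.7 the console producer and consumer from the quickstart are started roughly like this (the topic name test is just an example; check the linked quickstart for the exact flags):
bin/kafka-console-producer.sh --zookeeper localhost:2181 --topic test
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning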
Note: the example is currently incompatible with Kafka 0.8.x because KafkaSpout is only available for version 0.7.x. Some examples for 0.8.x exist, but they are not production-ready.
On OS X you can use Homebrew to get memcached:
brew install memcached
First, get Summingbird from my forked repository, sdjamaa/summingbird. This fork uses version 0.8.0 of the Storehaus library because of a compatibility issue with the Memcached store.
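Assuming the fork is hosted on GitHub under that name, cloning it looks like:
git clone https://github.com/sdjamaa/summingbird.git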
Build the project with the following commands:
cd summingbird
./sbt update compile
cd summingbird-example
./sbt update compile
Configuration strings are located in the src/main/resources/application.properties file.
All configuration strings are retrieved through package objects.
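As a rough sketch of that pattern (the package and property names here are illustrative, not the project's actual ones), a package object can load application.properties from the classpath once and expose its entries as vals:

package com.company

import java.util.Properties

package object summingbird {
  // Load application.properties from the classpath once, at first access.
  private val props: Properties = {
    val p = new Properties()
    val in = getClass.getResourceAsStream("/application.properties")
    require(in != null, "application.properties not found on the classpath")
    try p.load(in) finally in.close()
    p
  }

  // Illustrative keys; the real names live in application.properties.
  val kafkaZkConnect: String = props.getProperty("kafka.zk.connect", "localhost:2181")
  val memcachedHost: String = props.getProperty("memcached.host", "localhost:11211")
}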
Start ZooKeeper, the Kafka server, and a Kafka producer: follow steps 2 and 3 of the quickstart (http://kafka.apache.org/07/quickstart.html).
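With Kafka 0.7 those steps boil down to something like the following, run from the unzipped Kafka directory (plus the console producer command shown earlier):
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties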
Start the memcached service: go to the memcached folder and run the memcached command.
Start the example "service":
./sbt "company-hybrid-example/run --local"
This starts the Storm and Scalding platforms.
Then open a Scala REPL:
./sbt "company-hybrid-example/console"
You can now send messages from your Kafka producer; they will be consumed by Summingbird.
To test the Storm consumer, type in the Scala REPL:
scala> import com.company.summingbird.client._
import com.company.summingbird.client._
scala> HybridClient.stormLookup("timestamp") // This tests the Storm store linked to the ClientStore
res1: Option[Long] = Some(2)
scala> HybridClient.processHadoop // This processes a file containing 2 lines with timestamp and 1 line with another JSON key/value pair
[... lots of logs ...]
scala> HybridClient.lookup("timestamp") // This queries the ClientStore to retrieve the merged values from the Storm and Scalding stores
res2: Option[Long] = Some(4)
The merged count is 4 because the Storm store already held 2 and the Scalding batch contributed the file's 2 timestamp lines.
To test the Scalding consumer, type in the Scala REPL (to be replaced by a real consumer app as well):
scala> HybridClient.processHadoop
[... some logging ...]
scala> HybridClient.hadoopLookup
Results : Right((BatchID.11636597,ReaderFn(<function1>)))
scala> ScaldingRunner.queryFiles()
14/04/02 00:36:35 INFO compress.CodecPool: Got brand-new decompressor
14/04/02 00:36:35 INFO compress.CodecPool: Got brand-new decompressor
14/04/02 00:36:35 INFO compress.CodecPool: Got brand-new decompressor
lolstamp : 1
timestamp : 2
For Storm: make sure everything is already running (Kafka + memcached).
For Scalding: change the directories in the Scalding package object and create a file to be consumed (in JSON format) somewhere, as in the sample below.
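For example, an input file consistent with the run above could look like this (assuming, as the output suggests, that the job counts top-level JSON keys; the values are arbitrary):
{"timestamp": 1396390595}
{"timestamp": 1396390596}
{"lolstamp": 1396390597}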