Project Name | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language |
---|---|---|---|---|---|---|---|---|---|---|
Practical.cleanarchitecture | 1,532 | | | | 5 days ago | | | 25 | | C# |
Full-stack .Net 7 Clean Architecture (Microservices + Dapr, Modular Monolith, Monolith), Blazor, Angular 15, React 18, Vue 3, Domain-Driven Design, CQRS, SOLID, Asp.Net Core Identity Custom Storage, Identity Server, Entity Framework Core, Selenium, SignalR, Hosted Services, Health Checks, Rate Limiting, Cloud (Azure, AWS) Services, ... | | | | | | | | | | |
Eventhorizon | 1,463 | | 5 | 13 | 3 days ago | 68 | January 03, 2022 | 48 | apache-2.0 | Go |
Event Sourcing for Go! | | | | | | | | | | |
Seldon Server | 1,420 | | | | 3 years ago | 44 | June 28, 2017 | 26 | apache-2.0 | Java |
Machine Learning Platform and Recommendation Engine built on Kubernetes | | | | | | | | | | |
Devops Bash Tools | 1,250 | | | | a day ago | | | 1 | mit | Shell |
1000+ DevOps Bash Scripts - AWS, GCP, Kubernetes, Docker, CI/CD, APIs, SQL, PostgreSQL, MySQL, Hive, Impala, Kafka, Hadoop, Jenkins, GitHub, GitLab, BitBucket, Azure DevOps, TeamCity, Spotify, MP3, LDAP, Code/Build Linting, pkg mgmt for Linux, Mac, Python, Perl, Ruby, NodeJS, Golang, Advanced dotfiles: .bashrc, .vimrc, .gitconfig, .screenrc, tmux.. | | | | | | | | | | |
Nagios Plugins | 1,095 | | | | 23 days ago | | | 64 | other | Python |
450+ AWS, Hadoop, Cloud, Kafka, Docker, Elasticsearch, RabbitMQ, Redis, HBase, Solr, Cassandra, ZooKeeper, HDFS, Yarn, Hive, Presto, Drill, Impala, Consul, Spark, Jenkins, Travis CI, Git, MySQL, Linux, DNS, Whois, SSL Certs, Yum Security Updates, Kubernetes, Cloudera etc... | | | | | | | | | | |
Sitewhere | 854 | | 6 | 19 | 2 years ago | 18 | June 19, 2017 | 112 | other | Java |
SiteWhere is an industrial strength open-source application enablement platform for the Internet of Things (IoT). It provides a multi-tenant microservice-based infrastructure that includes device/asset management, data ingestion, big-data storage, and integration through a modern, scalable architecture. SiteWhere provides REST APIs for all system functionality. SiteWhere provides SDKs for many common device platforms including Android, iOS, Arduino, and any Java-capable platform such as Raspberry Pi rapidly accelerating the speed of innovation. | | | | | | | | | | |
Chronos | 633 | | | | 15 days ago | | | 21 | mit | TypeScript |
📊 📊 📊 Monitors the health and web traffic of servers, microservices, Kubernetes/Kafka clusters, containers, and AWS services with real-time data monitoring and receive automated notifications over Slack or email. | | | | | | | | | | |
Agile_data_code_2 | 435 | | | | 2 months ago | | | 7 | mit | Jupyter Notebook |
Code for Agile Data Science 2.0, O'Reilly 2017, Second Edition | | | | | | | | | | |
Kafka Connect Storage Cloud | 239 | | | | 18 hours ago | | | 156 | other | Java |
Kafka Connect suite of connectors for Cloud storage (Amazon S3) | | | | | | | | | | |
Firecamp | 188 | | | | 3 years ago | 8 | May 08, 2018 | 22 | apache-2.0 | Go |
Serverless Platform for the stateful services | | | | | | | | | | |
THIS REPO IS ARCHIVED due to security issues (SEC ticket #SEC-2988).
Forked from the awesome kafka-connect-hdfs.
StreamX is a Kafka Connect based connector that copies data from Kafka to object stores such as Amazon S3, Google Cloud Storage, and Azure Blob Store. It focuses on reliable and scalable data copying. It can write the data out in different formats (such as Parquet, so that it can readily be used by analytical tools) and with different partitioning requirements.
## Features
StreamX inherits a rich set of features from kafka-connect-hdfs.
In addition to these, we have made changes to make it work efficiently with S3.
## Getting Started
Prerequisite: StreamX is based on the Kafka Connect framework, which is part of the Apache Kafka project. Kafka Connect was added in Kafka 0.9, so StreamX can only be used with Kafka version >= 0.9. Kafka binaries can be downloaded from the Apache Kafka website.
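For example, a minimal download-and-unpack sketch (the release version, Scala build, and archive URL here are assumptions; pick whichever Kafka >= 0.9 release matches your environment):

```
# Fetch and unpack a Kafka binary release from the Apache archive
# (adjust version and Scala build as needed)
wget https://archive.apache.org/dist/kafka/0.10.2.0/kafka_2.11-0.10.2.0.tgz
tar -xzf kafka_2.11-0.10.2.0.tgz
cd kafka_2.11-0.10.2.0
```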
Clone: `git clone https://github.com/qubole/streamx.git`

Branch: for Kafka 0.9, use the `2.x` branch; for Kafka 0.10 and above, use the `master` branch.

Build: `mvn -DskipTests package`

Once the build succeeds, StreamX packages all required jars under `target/streamx-0.1.0-SNAPSHOT-development/share/java/streamx/*` in the StreamX repo. This directory needs to be on the classpath.

Add the connector to the Kafka Connect classpath:

```
export CLASSPATH=$CLASSPATH:`pwd`/target/streamx-0.1.0-SNAPSHOT-development/share/java/streamx/*
```
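To confirm that the build produced the connector jars in that directory:

```
ls target/streamx-0.1.0-SNAPSHOT-development/share/java/streamx/
```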
In Kafka, change the following in `config/connect-distributed.properties` or `config/connect-standalone.properties`, depending on which mode you want to use:

```
bootstrap.servers=<Kafka endpoint, e.g. localhost:9092>
key.converter=com.qubole.streamx.ByteArrayConverter
value.converter=com.qubole.streamx.ByteArrayConverter
```

Use `ByteArrayConverter` to copy data from Kafka as-is, without any changes (e.g. JSON/CSV).
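For reference, a sketch of a complete standalone worker configuration; the internal converter and offset-file settings shown are the stock values shipped in the Kafka distribution's `connect-standalone.properties`, so only `bootstrap.servers` and the key/value converters need to change for StreamX:

```
# config/connect-standalone.properties (sketch; placeholder values)
bootstrap.servers=localhost:9092

# Copy Kafka records through as raw bytes
key.converter=com.qubole.streamx.ByteArrayConverter
value.converter=com.qubole.streamx.ByteArrayConverter

# Stock settings from the Kafka distribution
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
```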
Set `s3.url` and `hadoop-conf` in StreamX's `config/quickstart-s3.properties`. StreamX packages a hadoop-conf directory at `config/hadoop-conf` for ease of use. Set the S3 access and secret keys in `config/hadoop-conf/hdfs-site.xml`.
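The exact contents of `config/quickstart-s3.properties` depend on your setup; a minimal sketch using the same property names and values as the REST example further below (bucket, topic, and paths are placeholders):

```
# config/quickstart-s3.properties (sketch; placeholder values)
name=clickstream
connector.class=com.qubole.streamx.s3.S3SinkConnector
format.class=com.qubole.streamx.SourceFormat
tasks.max=1
topics=adclicks
flush.size=2
s3.url=s3://streamx/demo
hadoop.conf.dir=/path/to/streamx/config/hadoop-conf
```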
In Kafka, run:

```
bin/connect-standalone.sh config/connect-standalone.properties \
/path/to/streamx/config/quickstart-s3.properties
```

You are done. Check S3 for the ingested data!
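As a quick sanity check (assuming the AWS CLI is configured and `s3.url` points at `s3://streamx/demo` as in the sketch above), list the ingested files:

```
aws s3 ls --recursive s3://streamx/demo/topics/
```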
To run Kafka Connect in distributed mode instead, start:

```
bin/connect-distributed.sh config/connect-distributed.properties
```

This starts the Kafka Connect framework with the S3 connector added to the classpath. Kafka Connect starts a REST server (the `rest.port` property in `connect-distributed.properties`) listening for Connect job requests. A copy job can be submitted by hitting the REST endpoint with curl or any other REST client.
For example, to submit a copy job from Kafka to S3:

```
curl -i -X POST \
   -H "Accept:application/json" \
   -H "Content-Type:application/json" \
   -d \
'{"name":"clickstream",
  "config":
  {
    "name":"clickstream",
    "connector.class":"com.qubole.streamx.s3.S3SinkConnector",
    "format.class":"com.qubole.streamx.SourceFormat",
    "tasks.max":"1",
    "topics":"adclicks",
    "flush.size":"2",
    "s3.url":"s3://streamx/demo",
    "hadoop.conf.dir":"/Users/pseluka/src/streamx/hadoop-conf"
  }}' \
'http://localhost:8083/connectors'
```
`hadoop.conf.dir` is the directory where `hdfs-site.xml` and the other Hadoop configuration files reside. StreamX packages the Hadoop dependencies, so it does not need the Hadoop project/jars on its classpath. Create a directory containing Hadoop config files such as `core-site.xml` and `hdfs-site.xml`, and provide the location of this directory in `hadoop.conf.dir` when submitting the copy job. (StreamX provides a default `hadoop-conf` directory under `config/hadoop-conf`; set your S3 access key and secret key there and provide the full path in `hadoop.conf.dir`.)

Once you have submitted the job, check S3 for output files. For the above copy job, it will create `s3://streamx/demo/topics/adclicks/partition=x/files.xyz`. Note that a single copy job can consume from multiple topics and write to topic-specific directories.
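To check on a submitted job, the Kafka Connect REST API (Kafka 0.10.0 and later) also exposes a status endpoint; for the `clickstream` connector above:

```
curl -i -X GET \
   -H "Accept:application/json" \
   'http://localhost:8083/connectors/clickstream/status'
```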
To delete a Connect job:

```
curl -i -X DELETE \
   -H "Accept:application/json" \
   -H "Content-Type:application/json" \
   'http://localhost:8083/connectors/clickstream'
```
To list all Connect jobs:

```
curl -i -X GET \
   -H "Accept:application/json" \
   -H "Content-Type:application/json" \
   'http://localhost:8083/connectors'
```
Restarting Connect jobs: all Connect jobs are stored in Kafka, so restarting Kafka Connect will restart all the connectors that were submitted to it.
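In distributed mode this persistence lives in the storage topics configured in `connect-distributed.properties`; a sketch of the relevant settings (the topic names are common defaults and can be changed; `status.storage.topic` requires Kafka 0.10.0 or later):

```
# Where Kafka Connect persists connector configs, offsets, and status
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status
```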
## Docker
StreamX supports Docker, but only in distributed mode. To build the image:

```
docker build -t qubole/streamx .
```

When you run the container, you can override all the properties in the `connect-distributed.properties` file with environment variables. Environment variables have the form `CONNECT_BOOTSTRAP_SERVERS`, corresponding to `bootstrap.servers`; the convention is to prefix the env vars with `CONNECT_` and replace dots with underscores. Example of how to run the container:

```
docker run -d -p 8083:8083 --env CONNECT_BOOTSTRAP_SERVERS=public_dns:9092 --env CONNECT_AWS_ACCESS_KEY=youraccesskey --env CONNECT_AWS_SECRET_KEY=yoursecretkey qubole/streamx
```

You can also use the Avro/Parquet format. Example:

```
docker run -d -p 8083:8083 --env CONNECT_BOOTSTRAP_SERVERS=public_dns:9092 --env CONNECT_AWS_ACCESS_KEY=youraccesskey --env CONNECT_AWS_SECRET_KEY=yoursecretkey --env CONNECT_KEY_CONVERTER=io.confluent.connect.avro.AvroConverter --env CONNECT_VALUE_CONVERTER=io.confluent.connect.avro.AvroConverter --env CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL=http://your.schema.registry.com:8081 --env CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL=http://your.schema.registry.com:8081 qubole/streamx
```
## Roadmap