|Project Name|Stars|Downloads|Repos Using This|Packages Using This|Most Recent Commit|Total Releases|Latest Release|Open Issues|License|Language|Description|
|---|---|---|---|---|---|---|---|---|---|---|---|
|Scio|2,496|37| | |a day ago|96|November 21, 2023|142|apache-2.0|Scala|A Scala API for Apache Beam and Google Cloud Dataflow.|
|Fluvio|2,207|17| | |a day ago|22|September 28, 2023|137|apache-2.0|Rust|Lean and mean distributed stream processing system written in Rust and WebAssembly.|
|Examples|1,802| | | |a month ago|642|November 09, 2023|100|apache-2.0|Shell|Apache Kafka and Confluent Platform examples and demos.|
|Spring Cloud Dataflow|1,043|75|3| |4 days ago|30|February 08, 2019|277|apache-2.0|Java|Microservices-based streaming and batch data processing for Cloud Foundry and Kubernetes.|
|Kickflip Android Sdk|632|5| | |7 years ago|10|May 26, 2015|33|apache-2.0|Java|Kickflip Android SDK: live video streaming to the cloud.|
|Airy|345|2| | |a month ago|303|April 23, 2021|140|apache-2.0|Java|💬 Open-source app framework to build streaming apps with real-time data: 💎 build real-time data pipelines and make real-time data universally accessible; 🤖 join historical and real-time data in the stream to create smarter ML and AI applications; ⚡ standardize complex data ingestion and stream data to apps with pre-built connectors.|
|H5stream|337| | | |5 years ago| | |3| |C|HTML5 RTSP player and HTML5 streaming server.|
|Hivemq Mqtt Tensorflow Kafka Realtime Iot Machine Learning Training Inference|159| | | |3 years ago| | |4|apache-2.0|Jupyter Notebook|Real-time big data / IoT machine learning (model training and inference) with HiveMQ (MQTT), TensorFlow I/O and Apache Kafka; no additional data store like S3, HDFS or Spark required.|
|Dataflowpythonsdk|157| | | |7 years ago| | |20| | |Google Cloud Dataflow provides a simple, powerful model for building both batch and streaming parallel data processing pipelines.|
Airy Core is an open-source streaming app framework to train ML models and supply them with historical and real-time data. With Airy you can process data from a variety of sources and then put that data to work in your applications.
Since Airy's infrastructure is built around Apache Kafka, it can process a large amount of events simultaneously and stream the relevant real-time and historical data to wherever you need it.
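As a minimal sketch of that idea (this is not Airy's actual API; the event shapes and the in-memory "table" below are invented for illustration, standing in for Kafka topics), joining historical state with a real-time event stream can look like this:

```python
# Hypothetical sketch: enriching real-time events with historical state.
# In Airy, both sides would live in Apache Kafka topics; here plain Python
# structures stand in so the example is self-contained.

historical = {  # e.g. a compacted topic materialized as a lookup table
    "user-1": {"lifetime_orders": 12},
    "user-2": {"lifetime_orders": 3},
}

realtime_events = [  # e.g. messages arriving on a streaming topic
    {"user_id": "user-1", "action": "checkout", "amount": 40.0},
    {"user_id": "user-2", "action": "checkout", "amount": 15.0},
]

def enrich(event, table):
    """Join one real-time event with its historical state for downstream ML."""
    history = table.get(event["user_id"], {})
    return {**event, **history}

enriched = [enrich(e, historical) for e in realtime_events]
print(enriched[0]["lifetime_orders"])  # prints 12
```

The same stream-table join is what a Kafka Streams or ksqlDB pipeline performs continuously, event by event, instead of over a finished list.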
- **What does Airy do?** Learn more on our Website
- **I'm new to Airy.** Get Started with Airy
- **I'd like to read the detailed docs.** Read the Docs
- **I'm ready to install Airy.** Installation
- **I'm ready for the Airy Quickstart.** Quickstart
- **I have a question.** The Airy Community will help
Airy Core comes with all the components you need to stream historical and real-time data.
Airy ingests all real-time events and continuously processes, aggregates and joins them in the stream, which significantly reduces development time. Through pre-built, easily configured connectors, it consumes events from any source, including business systems such as ERPs and CRMs, conversational sources, and third-party APIs. Airy also ships with an SDK for building custom connectors to any source.
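A custom connector's core job is normalization: turning source-specific records into one common event shape. The sketch below illustrates that idea only; Airy's real connector SDK and event schema are different, and every field name here is invented:

```python
# Hypothetical connector sketch: normalize raw records from a business
# system (a CRM stands in here) into one common event shape. The field
# names and schema are illustrative, not Airy's actual SDK.

from typing import Iterator

def crm_source() -> Iterator[dict]:
    """Stand-in for a source system emitting raw, source-specific records."""
    yield {"ContactId": "42", "Msg": "Hello", "Ts": 1700000000}

def to_common_event(raw: dict) -> dict:
    """Map one raw CRM record onto the shared event schema."""
    return {
        "source": "crm",
        "sender_id": raw["ContactId"],
        "payload": raw["Msg"],
        "timestamp": raw["Ts"],
    }

events = [to_common_event(r) for r in crm_source()]
print(events[0]["sender_id"])  # prints 42
```

Downstream consumers then only ever deal with the shared schema, regardless of which system produced the event.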
- An API to access data through blazing-fast HTTP endpoints.
- A WebSocket server that lets clients receive near-real-time updates about data flowing through the system.
- A webhook integration server that lets users create actionable workflows: the webhook integration exposes events that users can "listen" to and react to programmatically.
- No-code interfaces to manage and control Airy, your connectors and your streams.
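The webhook pattern described above can be sketched in miniature: subscribers register for event types and react programmatically when one fires. Airy's actual webhook server delivers events over HTTP; in this self-contained sketch the delivery is simulated in-process, and the event type name is invented:

```python
# Hypothetical sketch of the webhook pattern: register callbacks for event
# types, then dispatch events to them. Real webhook servers deliver events
# as HTTP requests; here delivery happens in-process for illustration.

from collections import defaultdict

class WebhookDispatcher:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, callback):
        """Register a callback that 'listens' to a given event type."""
        self._subscribers[event_type].append(callback)

    def publish(self, event_type, payload):
        """Deliver an event to every subscriber of its type."""
        for callback in self._subscribers[event_type]:
            callback(payload)

received = []
dispatcher = WebhookDispatcher()
dispatcher.subscribe("message.created", received.append)  # invented event type
dispatcher.publish("message.created", {"text": "hi"})
print(received)  # prints [{'text': 'hi'}]
```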
We welcome (and love) every form of contribution! If you're not sure where to start, open a new issue and we'll gladly help you get started.