Data Accelerator for Apache Spark democratizes streaming of big data using Spark by offering several key features, such as a no-code experience for setting up a data pipeline and a fast dev-test loop for creating complex logic. Our team has been using the project within Microsoft for two years, processing streamed data across many internal deployments and handling data volumes at Microsoft scale. It offers an easy-to-use platform for learning and evaluating your streaming needs and requirements. We are thrilled to share this project with the wider community as open source!
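To make the "streaming with Spark" part concrete, below is a minimal, self-contained Spark Structured Streaming job in Scala. This is not Data Accelerator's own API or generated code; the socket source, the 5-second window, and the class name are illustrative assumptions, meant only to sketch the kind of streaming aggregation that a pipeline built with Data Accelerator runs for you without hand-writing this boilerplate.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Hypothetical sketch: a plain Spark Structured Streaming job of the sort a
// Data Accelerator pipeline would set up and manage on your behalf.
object StreamingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("streaming-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Read a stream of text lines from a local socket. In a real deployment
    // the source would be a cloud event stream rather than a socket.
    val lines = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", 9999)
      .load()

    // Count events per 5-second window -- the kind of aggregation the
    // no-code and SQL experiences let you express without writing Scala.
    val counts = lines
      .withColumn("ts", current_timestamp())
      .groupBy(window($"ts", "5 seconds"))
      .count()

    counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```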
Azure Friday: We are now featured on Azure Friday! See the video here.
Data Accelerator offers three levels of experience:
The data-accelerator repository contains everything needed to set up an end-to-end data pipeline. There are many ways you can participate in the project:
We have also enabled a "hello world" experience that you can try out locally by running a Docker container. When running locally there are no dependencies on Azure; however, the functionality is very limited and is only intended to give you a cursory overview of Data Accelerator.
To run Data Accelerator in local mode, deploy locally and then check out the local mode tutorials.
Data Accelerator for Spark runs on the following:
See the wiki pages for further information on how to build, diagnose, and maintain data pipelines built with Data Accelerator for Spark.
If you are interested in fixing issues and contributing to the code base, we would love to partner with you. Try things out, join in the design conversations and make pull requests.
Please also see our Code of Conduct.
Security issues and bugs should be reported privately, via email, to the Microsoft Security Response Center (MSRC) at [email protected]. You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Further information, including the MSRC PGP key, can be found in the Security TechCenter.
This repository is licensed under the MIT license.