|Project Name|Stars|Downloads|Repos Using This|Packages Using This|Most Recent Commit|Total Releases|Latest Release|Open Issues|License|Language|Description|
|---|---|---|---|---|---|---|---|---|---|---|---|
|Superset|54,380|||16|7 hours ago|6|April 18, 2023|1,523|apache-2.0|TypeScript|Apache Superset is a Data Visualization and Data Exploration Platform|
|Spark|36,844||2,394|903|8 hours ago|46|May 09, 2021|248|apache-2.0|Scala|Apache Spark - A unified analytics engine for large-scale data processing|
|Flink|22,018|||38|8 hours ago|11|September 14, 2022|1,089|apache-2.0|Java||
|Beam|7,159|||13|7 hours ago|557|July 11, 2023|4,270|apache-2.0|Java|Apache Beam is a unified programming model for Batch and Streaming data processing.|
|Hive|5,095||||6 hours ago|||105|apache-2.0|Java||
|Ignite|4,548||15|3|7 hours ago|36|May 04, 2023|718|apache-2.0|Java||
|Calcite|4,039||390|114|a day ago|1,713|July 21, 2023|310|apache-2.0|Java||
|Flink Training Course|2,815||||3 years ago|||17||||
|Datastation|2,760||||3 months ago|||36|other|TypeScript|App to easily query, script, and visualize data from every database, file, and API.|
|Drill|1,837||23|11|a month ago|23|April 19, 2023|86|apache-2.0|Java|Apache Drill is a distributed MPP query layer for self-describing data|
Spark is a unified analytics engine for large-scale data processing. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, pandas API on Spark for pandas workloads, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for stream processing.
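To give a flavor of those higher-level tools, here is a minimal sketch of the DataFrame and Spark SQL APIs, assuming the `spark` SparkSession that Spark's interactive shells create automatically:

// Sketch only: a tiny DataFrame, a transformation, and the same data via SQL.
val df = spark.range(1, 6).toDF("n")           // one-column DataFrame with values 1..5
df.selectExpr("n", "n * n AS square").show()   // DataFrame-style transformation
df.createOrReplaceTempView("numbers")          // register the DataFrame for SQL access
spark.sql("SELECT sum(n) FROM numbers").show() // query the same data with Spark SQL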
You can find the latest Spark documentation, including a programming guide, on the project web page. This README file only contains basic setup instructions.
Spark is built using Apache Maven. To build Spark and its example programs, run:
./build/mvn -DskipTests clean package
(You do not need to do this if you downloaded a pre-built package.)
More detailed documentation is available from the project site, at "Building Spark".
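Spark also ships an sbt launcher; as a sketch (the profiles and flags you need may differ from a plain build), the equivalent sbt invocation is:

./build/sbt package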
For general development tips, including info on developing Spark using an IDE, see "Useful Developer Tools".
The easiest way to start using Spark is through the Scala shell:

./bin/spark-shell

Try the following command, which should return 1,000,000,000:
scala> spark.range(1000 * 1000 * 1000).count()
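As a further illustrative snippet in the same shell session (it assumes the README.md from the Spark source tree is in the current directory):

scala> spark.read.textFile("README.md").count()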
Alternatively, if you prefer Python, you can use the Python shell:

./bin/pyspark

And run the following command, which should also return 1,000,000,000:
>>> spark.range(1000 * 1000 * 1000).count()
Spark also comes with several sample programs in the examples directory. To run one of them, use ./bin/run-example <class> [params]. For example:

./bin/run-example SparkPi

will run the Pi example locally.
You can set the MASTER environment variable when running examples to submit examples to a cluster. This can be a spark:// URL, "yarn" to run on YARN, "local" to run locally with one thread, or "local[N]" to run locally with N threads. You can also use an abbreviated class name if the class is in the examples package. For instance:
MASTER=spark://host:7077 ./bin/run-example SparkPi
Many of the example programs print usage help if no params are given.
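As another sketch, the same example can be run locally on four threads with an argument (SparkPi accepts an optional number of partitions; 100 here is illustrative):

MASTER=local[4] ./bin/run-example SparkPi 100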
Testing first requires building Spark. Once Spark is built, tests can be run using:

./dev/run-tests
Please see the guidance on how to run tests for a module, or individual tests.
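For instance, a single suite in the core module can be run through the sbt launcher; this is a sketch, and the suite name is just an example:

./build/sbt "core/testOnly *DAGSchedulerSuite"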
There is also a Kubernetes integration test; see resource-managers/kubernetes/integration-tests/README.md.
Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed in different versions of Hadoop, you must build Spark against the same version that your cluster runs.
Please refer to the build documentation at "Specifying the Hadoop Version and Enabling YARN" for detailed guidance on building for a particular distribution of Hadoop, including building for particular Hive and Hive Thriftserver distributions.
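As a sketch of such a build (the Hadoop version shown is illustrative; use the one your cluster runs):

./build/mvn -Pyarn -Dhadoop.version=3.3.4 -DskipTests clean package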
Please refer to the Configuration Guide in the online documentation for an overview on how to configure Spark.
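For a flavor of what configuration looks like, properties can be set in conf/spark-defaults.conf; the values below are illustrative, not recommendations:

spark.master                  spark://host:7077
spark.executor.memory         4g
spark.sql.shuffle.partitions  200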
Please review the Contribution to Spark guide for information on how to get started contributing to the project.