Project Name | Description | Stars | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language |
---|---|---|---|---|---|---|---|---|---|---|
Spark | Apache Spark - A unified analytics engine for large-scale data processing | 35,348 | 2,394 | 882 | a day ago | 46 | May 09, 2021 | 223 | apache-2.0 | Scala |
Cookbook | The Data Engineering Cookbook | 11,362 | | | 3 months ago | | | 108 | apache-2.0 | |
God Of Bigdata | Focused on big data learning and interview preparation; the road to big data mastery starts here. Flink/Spark/Hadoop/HBase/Hive... | 7,992 | | | 4 days ago | | | 2 | | |
Zeppelin | Web-based notebook that enables data-driven, interactive data analytics and collaborative documents with SQL, Scala and more. | 5,981 | 32 | 23 | 7 days ago | 2 | June 21, 2017 | 134 | apache-2.0 | Java |
Sparkinternals | Notes talking about the design and implementation of Apache Spark | 4,665 | | | a year ago | | | 27 | | |
Bigdl | Fast, distributed, secure AI for Big Data | 4,181 | 10 | | a day ago | 16 | April 19, 2021 | 717 | apache-2.0 | Jupyter Notebook |
Iceberg | Apache Iceberg | 4,076 | | | a day ago | 4 | May 23, 2022 | 1,312 | apache-2.0 | Java |
Tensorflowonspark | TensorFlowOnSpark brings TensorFlow programs to Apache Spark clusters. | 3,849 | 5 | | 21 days ago | 32 | April 21, 2022 | 11 | apache-2.0 | Python |
Koalas | Koalas: pandas API on Apache Spark | 3,228 | 1 | 12 | 4 months ago | 47 | October 19, 2021 | 109 | apache-2.0 | Python |
Spark Nlp | State of the Art Natural Language Processing | 3,163 | 2 | 2 | a day ago | 90 | March 05, 2021 | 36 | apache-2.0 | Scala |
Apache Spark is an open-source cluster-computing framework. It can access diverse data sources including HDFS, Cassandra, HBase, and S3, and it can be used interactively from the Scala, Python, and R shells. You can run Spark in its standalone cluster mode, on an IaaS, on Hadoop YARN, or on cluster managers such as Apache Mesos.
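For instance, an interactive session from the Scala shell might look like the following minimal sketch; the file name and column are placeholders rather than part of this journey:

```scala
// A minimal interactive sketch from spark-shell, which predefines `spark`.
// The file name and column name are placeholders, not part of this journey.
val df = spark.read.option("header", "true").csv("customers.csv")
df.printSchema()                      // inspect the inferred schema
df.groupBy("GENDER").count().show()   // a simple interactive aggregation
```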
z/OS is an extremely scalable, secure, high-performance operating system based on the 64-bit z/Architecture. It is highly reliable for running mission-critical applications and supports web- and Java-based applications.
In this journey we demonstrate running an analytics application using Spark on z/OS. Apache Spark on z/OS provides in-place, optimized, real-time analysis of structured and unstructured enterprise data, and it is powered by the z Systems Community Cloud.
z/OS Platform for Apache Spark includes a supported version of the Apache Spark open-source capabilities, consisting of the Apache Spark core, Spark SQL, Spark Streaming, the Machine Learning Library (MLlib), and GraphX. It also includes optimized data access to a broad set of structured and unstructured data sources through Spark APIs. With this capability, traditional z/OS data sources, such as IMS™, VSAM, IBM DB2® for z/OS, PDSE, or SMF data, can be accessed in a performance-optimized manner with Spark.
This analytics example uses data stored in DB2 and VSAM tables, and a machine learning application written in Scala. It also uses the open-source Jupyter Notebook to write and submit Scala code to your Spark instance and to view the output within a web GUI. Jupyter Notebook is commonly used in the data analytics space for data cleaning and transformation, numerical simulation, statistical modeling, machine learning, and much more.
The scenarios are accomplished by using the z Systems Community Cloud services described in the following steps.
Register at z Systems Community Cloud for a trial account. You will receive an email containing credentials to access the self-service portal. This is where you can start exploring all the available services.
1. Open a web browser and enter the URL to access the z Systems Community Cloud self-service portal.
2. Enter your Portal User ID and Portal Password, and click "Sign In".
3. You will see the home page for the z Systems Community Cloud self-service portal.
4. You will now see a dashboard, which shows the status of your Apache Spark on z/OS instance.
At the top of the screen, notice the "z/OS Status" indicator, which should show the status of your instance as "OK".
In the middle of the screen, the "Spark Instance", "Status", "Data management", and "Operations" sections will be displayed. The "Spark Instance" section contains your individual Spark username and IP address.
Below the field headings, you will see buttons for the operations that can be applied to your instance.
5. If this is your first time using the Analytics Service on z/OS, you must set a new Spark password.
6. Confirm your instance is Active. If it is "Stopped", click "Start" to start it.
1. Go to cloud4z/spark and download all the sample files.
2. Load the DB2 data file:
"Upload Success" will appear in the dashboard when the data load is complete. The VSAM data for this exercise has already been loaded for you. However, this step may be repeated by loading the VSAM copybook and VSAM data file you downloaded from your local system.
Submit a prepared Scala program to analyze the data.
"JOB Submitted" will appear in the dashboard when the program completes. This Scala program accesses DB2 and VSAM data, performs transformations on the data, joins the two tables in a Spark dataframe, and stores the result back to DB2.
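The program itself is prepared for you, but a minimal sketch of its final write-back step might look like the following. The IBM JDBC driver class is real; the URL, table name, and credentials are placeholders:

```scala
// Hypothetical sketch of the write-back step in the submitted Scala program.
// The JDBC URL, schema/table name, and credentials are placeholders.
import java.util.Properties
import org.apache.spark.sql.{DataFrame, SaveMode}

def writeBackToDb2(clientJoin: DataFrame): Unit = {
  val props = new Properties()
  props.setProperty("user", "<your Spark username>")
  props.setProperty("password", "<your Spark password>")
  props.setProperty("driver", "com.ibm.db2.jcc.DB2Driver")

  // Overwrite the hypothetical result table with the joined dataframe.
  clientJoin.write
    .mode(SaveMode.Overwrite)
    .jdbc("jdbc:db2://<zOS_IP>:<port>/<location>", "SPARKDB.CLIENT_JOIN", props)
}
```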
Launch your individual Spark worker output GUI to view the job you just submitted.
The Jupyter Notebook tool is installed in the dashboard. This tool allows you to write and submit Scala code to your Spark instance, and to view the output within a web GUI.
1. Launch the Jupyter Notebook service in your browser from your dashboard.
The prepared Scala program in this level will access DB2 and VSAM data, perform transformations on the data, join these two tables in a Spark dataframe, and store the result back to DB2. It will also perform a logistic regression analysis and plot the output.
The Jupyter Notebook will connect to your Spark on z/OS instance automatically and will be in the ready state when the "Apache Toree - Scala" indicator in the top right-hand corner of the screen is clear.
The Jupyter Notebook environment is divided into input cells labelled with "In [#]:".
Run cell #1 - The Scala code in the first cell loads the VSAM data (customer information) into Spark and performs a data transformation. A sketch of the kind of code this cell contains is shown below.
Before running the code, update the connection details in the cell (the host IP address, Spark username, and Spark password from your dashboard) to match your instance.
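The exact cell contents depend on your instance; the following is only a sketch. The data-service JDBC driver class, URL format, table, and column names are assumptions, not taken from this journey:

```scala
// Hypothetical sketch of cell #1: loading VSAM customer data through the
// z/OS data service JDBC driver. Driver class, URL, table, and column
// names are placeholders; the notebook predefines the `spark` session.
val vsamDF = spark.read
  .format("jdbc")
  .option("driver", "com.rs.jdbc.dv.DvDriver")            // assumed MDS driver class
  .option("url", "jdbc:rs:dv://<zOS_IP>:<port>;DBTY=DVS") // assumed URL format
  .option("dbtable", "CLIENT_INFO")                       // hypothetical virtual VSAM table
  .option("user", "<your Spark username>")
  .option("password", "<your Spark password>")
  .load()

// Example transformation: keep only the columns used later in the join.
val clientInfoDF = vsamDF.select("CUST_ID", "AGE", "CHURN")
println(s"Input VSAM rows: ${clientInfoDF.count()}")
```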
The Jupyter Notebook connection to your Spark instance is in the busy state when the "Apache Toree - Scala" indicator in the top right-hand corner of the screen is grey.
When this indicator turns clear, the cell run has completed and the notebook has returned to the ready state.
The output appears below the cell once the run completes.
Run cell #2 - The Scala code in the second cell loads the DB2 data (transaction data) into Spark and performs a data transformation.
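A comparable sketch for the DB2 load, again with placeholder connection details and a hypothetical transaction table name:

```scala
// Hypothetical sketch of cell #2: loading DB2 transaction data over JDBC.
// The IBM JDBC driver class is real; URL, table, and columns are placeholders.
val txnDF = spark.read
  .format("jdbc")
  .option("driver", "com.ibm.db2.jcc.DB2Driver")
  .option("url", "jdbc:db2://<zOS_IP>:<port>/<location>")
  .option("dbtable", "SPARKDB.TRANSACTIONS")   // hypothetical transaction table
  .option("user", "<your Spark username>")
  .option("password", "<your Spark password>")
  .load()

// Example transformation: aggregate transactions per customer as an activity level.
val activityDF = txnDF.groupBy("CUST_ID").count().withColumnRenamed("count", "TXN_COUNT")
```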
Run cell #3 - The Scala code in the third cell joins the VSAM and DB2 data into a new "client_join" dataframe in Spark.
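Under the same assumptions as the sketches above, the join cell could be as simple as:

```scala
// Sketch of cell #3: join customer information with transaction activity
// on the hypothetical customer key used in the earlier sketches.
val client_join = clientInfoDF.join(activityDF, Seq("CUST_ID"))
client_join.printSchema()
client_join.show(5)
```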
Run cell #4 - The Scala code in the fourth cell performs a logistic regression to evaluate the probability of customer churn as a function of customer activity level. The "result_df" dataframe is also created, which is used to plot the results on a line graph.
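A minimal sketch of this regression step, assuming the Spark 2.x spark.ml API and the hypothetical TXN_COUNT and CHURN columns carried over from the sketches above:

```scala
// Hypothetical sketch of cell #4: logistic regression of churn on activity level.
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.functions.col

// Assemble the single activity-level feature into a vector column.
val assembler = new VectorAssembler()
  .setInputCols(Array("TXN_COUNT"))
  .setOutputCol("features")

val training = assembler.transform(client_join)
  .withColumn("label", col("CHURN").cast("double"))

val lrModel = new LogisticRegression().setMaxIter(10).fit(training)

// Churn probability per activity level, used for the line graph in cell #5.
val result_df = lrModel.transform(training).select("TXN_COUNT", "probability")
```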
Run cell #5 - The Scala code in the fifth cell plots the "plot_df" dataframe.
In the output, note the number of rows in the input VSAM dataset.
IBM z/OS Platform for Apache Spark - http://www-03.ibm.com/systems/z/os/zos/apache-spark.html