Spark Streaming With Google Cloud Example

An example of integrating Spark Streaming with Google Pub/Sub and Google Datastore.

Example to Integrate Spark Streaming with Google Cloud at Scale

This is an example of integrating Spark Streaming with Google Cloud products. The streaming application pulls messages directly from Google Pub/Sub, without Kafka, using custom receivers. While running, the application can read entities from Google Datastore and write entities back to Datastore.
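The custom-receiver approach can be sketched roughly as follows. This is a minimal sketch of Spark's Receiver API, not the actual code in this repository; the class name and the commented-out Pub/Sub client calls are assumptions for illustration.

```scala
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

// Hypothetical sketch: a receiver that pulls messages from a
// Pub/Sub subscription and hands their payloads to Spark.
class PubsubReceiver(projectId: String, subscription: String)
  extends Receiver[String](StorageLevel.MEMORY_AND_DISK_SER_2) {

  def onStart(): Unit = {
    // Pull on a separate thread so that onStart() returns immediately.
    new Thread("pubsub-receiver") {
      override def run(): Unit = receive()
    }.start()
  }

  def onStop(): Unit = ()  // nothing to do; the loop below checks isStopped()

  private def receive(): Unit = {
    while (!isStopped()) {
      // With a Pub/Sub client (omitted here), pull a batch, store each
      // payload into Spark, then acknowledge the messages:
      //   val received = client.pull(subscription, maxMessages = 1000)
      //   received.foreach(m => store(m.payload))
      //   client.acknowledge(subscription, received.map(_.ackId))
    }
  }
}
```

A DStream is then created from the receiver with `ssc.receiverStream(new PubsubReceiver(projectId, subscription))`.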

What I want to show here is that we can be freed from managing big data infrastructure such as Kafka and Cassandra. We data scientists can focus on implementing logic with Spark Streaming.

Spark Streaming with Google Cloud

Requirements

  • Google Cloud account
    • Needs privileges to use Google Pub/Sub topics and subscriptions, Google Datastore, and Google Dataproc.
  • Google Cloud SDK
    • version 147.0.0 or later
    • The gcloud command is required

How to Run

  1. Create a Google Pub/Sub topic/subscription
  • gcloud beta pubsub topics create $TOPIC_NAME
    
  2. Create a Spark cluster on Google Dataproc with make create-dataproc-cluster
  • gcloud dataproc clusters create $CLUSTER_NAME \
        --zone="us-central1-a" \
        --image-version=1.1 \
        --master-machine-type="n1-standard-4" \
        --num-workers=2 \
        --worker-machine-type="n1-standard-4" \
        --scopes=https://www.googleapis.com/auth/pubsub,https://www.googleapis.com/auth/datastore,https://www.googleapis.com/auth/bigquery,https://www.googleapis.com/auth/devstorage.read_write,https://www.googleapis.com/auth/logging.write
    
  3. Create a JAR file of the project
  • make assembly
    
  4. Submit a Spark Streaming job to Google Dataproc
  • ./bin/submit-streaming-job.sh $GOOGLE_PROJECT_ID $PUBSUB_TOPIC
    
  5. Send Pub/Sub messages
  • ./bin/send-pubsub-messages.sh $GOOGLE_PROJECT_ID $PUBSUB_TOPIC
    
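Note that step 1 also needs a subscription attached to the topic, since the receivers pull from a subscription. With the same SDK, it can be created like this ($SUBSCRIPTION_NAME is a placeholder of your choice):

```shell
# Attach a pull subscription to the topic created in step 1.
gcloud beta pubsub subscriptions create $SUBSCRIPTION_NAME \
    --topic=$TOPIC_NAME
```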

Please make sure to delete the Pub/Sub topic and the Dataproc cluster when you are done.
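Cleanup can be done with gcloud as well; the names must match what you created above.

```shell
# Tear down the resources so they stop incurring charges.
gcloud beta pubsub topics delete $TOPIC_NAME
gcloud dataproc clusters delete $CLUSTER_NAME
```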

Appendix: Price Estimation for Google Pub/Sub and Google Datastore

As you know, the Google Cloud Platform Pricing Calculator allows us to estimate how much it costs to run a system. I would like to show an example cost of using Google Pub/Sub and Google Datastore. Of course, running Spark Streaming on Google Dataproc costs extra.

According to the calculator, the estimated amount is just $22.20 per month. From my perspective, that is much more reasonable than the cost of running our own Kafka and Cassandra clusters.

The cost was estimated under the following conditions:

  • Google Datastore: $16.60
    • Stored data: 50 GB
    • Entity Reads: 10,000,000
    • Entity Writes: 5,000,000
    • Entity Deletes: 1,000,000
  • Cloud Pub/Sub: $6.00
    • Volume of message data: 100 GB

Appendix: How to Avoid Conflicts on protobuf-java Versions

Apache Spark 2.0.2 uses protobuf-java 2.5. Meanwhile, we need protobuf-java 3.0 or later for Google Datastore. We can't avoid this conflict on protobuf-java even with dependency eviction.

Instead of eviction, sbt-assembly offers an excellent feature, shading, to resolve this issue. Please take a look at ./build.sbt in the repository.

assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("com.google.protobuf.*" -> "shadedproto.@1").inAll
)
