Spark R Notebooks

R on Apache Spark (SparkR) tutorials for Big Data analysis and Machine Learning as IPython / Jupyter notebooks

This is a collection of Jupyter notebooks intended to train the reader on different Apache Spark concepts, from basic to advanced, by using the R language.

If you are interested in an introduction to some basic Data Science Engineering concepts and applications, you might find this series of tutorials interesting. There we explain different concepts and applications using Python and R. Additionally, if you are interested in using Python with Spark, have a look at our pySpark notebooks.


For this series of notebooks, we have used Jupyter with the IRkernel R kernel. You can find installation instructions for your specific setup here. Also have a look at Andrie de Vries' post Using R with Jupyter Notebooks, which includes instructions for installing Jupyter and IRkernel together.

A good way to use these notebooks is to first clone the repo and then start Jupyter in pySpark mode. For example, if we have a standalone Spark installation running on localhost with a maximum of 6GB per node assigned to IPython:

MASTER="spark://" SPARK_EXECUTOR_MEMORY="6G" IPYTHON_OPTS="notebook --pylab inline" ~/spark-1.5.0-bin-hadoop2.6/bin/pyspark

Notice that the path to the pyspark command will depend on your specific installation. As a requirement, you need Spark installed on the same machine where you are going to start the IPython notebook server.

For more Spark options see here. In general, the rule is that an option described in the form spark.executor.memory is passed as SPARK_EXECUTOR_MEMORY when calling IPython/pySpark.
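As a sketch of that naming rule (the helper function here is our own illustration, not part of Spark): the property name is upper-cased and the dots become underscores.

```shell
# Convert a Spark property name to the environment variable name
# used when launching IPython/pySpark.
to_env_var() {
  # upper-case the letters, then replace dots with underscores
  echo "$1" | tr '[:lower:]' '[:upper:]' | tr '.' '_'
}

to_env_var "spark.executor.memory"   # prints SPARK_EXECUTOR_MEMORY
to_env_var "spark.driver.memory"     # prints SPARK_DRIVER_MEMORY
```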


2013 American Community Survey dataset.

Every year, the US Census Bureau runs the American Community Survey. In this survey, approximately 3.5 million households are asked detailed questions about who they are and how they live. Many topics are covered, including ancestry, education, work, transportation, internet use, and residency. You can go directly to the source to learn more about the data and to get files for different years, longer periods, individual states, etc.

In any case, the start-up notebook will download the 2013 data locally for later use with the rest of the notebooks.

The idea of using this dataset came from its recent announcement on Kaggle as part of their Kaggle scripts datasets. There you will be able to analyse the dataset on site while sharing your results with other Kaggle users. Highly recommended!


Downloading data and starting with SparkR

Where we download our data locally and start up a SparkR cluster.
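As a rough sketch of what that start-up looks like with the SparkR 1.5 API (the SPARK_HOME path and memory setting are illustrative, not prescriptive):

```r
# Point R at the local Spark installation (illustrative path)
Sys.setenv(SPARK_HOME = "~/spark-1.5.0-bin-hadoop2.6")
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))

library(SparkR)

# Start a Spark context and a SQL context (SparkR 1.5 API)
sc <- sparkR.init(master = "local[*]",
                  sparkEnvir = list(spark.executor.memory = "6g"))
sqlContext <- sparkRSQL.init(sc)
```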

SparkSQL basics with SparkR

About loading our data into SparkSQL data frames using SparkR.
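A minimal sketch of loading data into a SparkSQL data frame with SparkR 1.5, assuming sc and sqlContext have already been initialised; the JSON file path refers to the example data shipped with Spark distributions:

```r
library(SparkR)
sc <- sparkR.init(master = "local[*]")
sqlContext <- sparkRSQL.init(sc)

# Load a JSON file into a SparkSQL DataFrame and inspect its schema
people <- read.df(sqlContext, "examples/src/main/resources/people.json", "json")
printSchema(people)
head(people)
```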

Data frame operations with SparkSQL and SparkR

Different operations we can use with SparkR and DataFrame objects, such as data selection and filtering, aggregations, and sorting. The basis for exploratory data analysis and machine learning.
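The kinds of operations covered can be sketched as follows with the SparkR 1.5 API, using R's built-in faithful dataset as a stand-in for the survey data:

```r
library(SparkR)
sc <- sparkR.init(master = "local[*]")
sqlContext <- sparkRSQL.init(sc)

# Create a Spark DataFrame from a local R data frame
df <- createDataFrame(sqlContext, faithful)

# Selection and filtering
waiting <- select(df, df$waiting)
long_eruptions <- filter(df, df$eruptions > 3)

# Aggregation and sorting
counts <- summarize(groupBy(df, df$waiting), count = n(df$waiting))
head(arrange(counts, desc(counts$count)))
```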

Exploratory Data Analysis with SparkR and ggplot2

How to explore different types of variables using SparkR and ggplot2 charts.
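The general pattern is to aggregate in Spark and then collect the (small) summary into a local R data frame for plotting; a sketch, again using the built-in faithful dataset:

```r
library(SparkR)
library(ggplot2)
sc <- sparkR.init(master = "local[*]")
sqlContext <- sparkRSQL.init(sc)
df <- createDataFrame(sqlContext, faithful)

# Aggregate in Spark, then collect the result locally for ggplot2
waiting_counts <- collect(summarize(groupBy(df, df$waiting),
                                    count = n(df$waiting)))
ggplot(waiting_counts, aes(x = waiting, y = count)) +
  geom_bar(stat = "identity")
```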

Linear Models with SparkR

About linear models using SparkR, its uses and current limitations in v1.5.
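In SparkR 1.5, linear models are fitted with a glm formula interface over Spark DataFrames; a minimal sketch using the built-in faithful dataset:

```r
library(SparkR)
sc <- sparkR.init(master = "local[*]")
sqlContext <- sparkRSQL.init(sc)
df <- createDataFrame(sqlContext, faithful)

# Fit a linear (gaussian) model on a Spark DataFrame
model <- glm(waiting ~ eruptions, data = df, family = "gaussian")
summary(model)

# Predict on the training data; predictions come back as a DataFrame
preds <- predict(model, df)
head(select(preds, "waiting", "prediction"))
```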


Exploring geographical data with SparkR and ggplot2

An Exploratory Data Analysis of the 2013 American Community Survey dataset, specifically its geographical features.


Contributions are welcome! For bug reports or requests please submit an issue.


Feel free to contact me to discuss any issues, questions, or comments.


This repository contains a variety of content; some developed by Jose A. Dianes, and some from third-parties. The third-party content is distributed under the license provided by those parties.

The content developed by Jose A. Dianes is distributed under the following license:

Copyright 2016 Jose A Dianes

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.