
Joblib Apache Spark Backend

This library provides an Apache Spark backend for joblib to distribute tasks on a Spark cluster.


joblibspark requires Python 3.6+, joblib>=0.14 and pyspark>=2.4 to run. To install joblibspark, run:

pip install joblibspark

The installation does not install PySpark because for most users, PySpark is already installed. If you do not have PySpark installed, you can install pyspark together with joblibspark:

pip install "pyspark>=3.0.0" joblibspark

If you want to use joblibspark with scikit-learn, please install scikit-learn>=0.21.


Run the following example code in a pyspark shell:

from sklearn.utils import parallel_backend
from sklearn.model_selection import cross_val_score
from sklearn import datasets
from sklearn import svm
from joblibspark import register_spark

register_spark() # register spark backend

iris = datasets.load_iris()
clf = svm.SVC(kernel='linear', C=1)
with parallel_backend('spark', n_jobs=3):
    scores = cross_val_score(clf, iris.data, iris.target, cv=5)
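
If you do not have a Spark cluster at hand, the same joblib pattern can be exercised with one of joblib's built-in backends. This sketch reuses the names from the example above, swapping 'spark' for the built-in 'threading' backend; only the backend string changes:

```python
from sklearn.utils import parallel_backend
from sklearn.model_selection import cross_val_score
from sklearn import datasets
from sklearn import svm

iris = datasets.load_iris()
clf = svm.SVC(kernel='linear', C=1)

# Same pattern as the Spark example, but using joblib's built-in
# threading backend -- only the backend name differs.
with parallel_backend('threading', n_jobs=3):
    scores = cross_val_score(clf, iris.data, iris.target, cv=5)

print(scores)  # five accuracy values, one per CV fold
```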



joblibspark does not generally support running model inference or feature engineering in parallel. For example:

from sklearn.feature_extraction import FeatureHasher

h = FeatureHasher(n_features=10)
with parallel_backend('spark', n_jobs=3):
    # This will not run in parallel on Spark; it still runs locally.
    h.transform([{'dog': 1, 'cat': 2}])

from sklearn import linear_model

regr = linear_model.LinearRegression()
regr.fit(X_train, y_train)

with parallel_backend('spark', n_jobs=3):
    # This will not run in parallel on Spark; it still runs locally.
    regr.predict(X_test)
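
What the Spark backend does distribute are tasks submitted through joblib.Parallel, which is what scikit-learn's cross-validation and search utilities use internally. A minimal sketch of the mechanism, run here with joblib's built-in 'threading' backend (under parallel_backend('spark'), with joblibspark registered, the same calls would become Spark tasks):

```python
from joblib import Parallel, delayed, parallel_backend

def work(x):
    # Stand-in for a CV fold fit or any other independent task.
    return x * x

# Each delayed() call becomes one task for the active joblib backend.
# With joblibspark registered and parallel_backend('spark') active,
# these tasks would be shipped to Spark executors instead.
with parallel_backend('threading', n_jobs=2):
    results = Parallel()(delayed(work)(i) for i in range(5))

print(results)  # [0, 1, 4, 9, 16]
```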

Note: sklearn.ensemble.RandomForestClassifier exposes an n_jobs parameter, which means the algorithm supports parallel model training and inference. However, its inference implementation binds directly to joblib's built-in backends, so the Spark backend does not take effect in that case.
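
By contrast, scikit-learn's hyperparameter-search utilities dispatch each fit through the active joblib backend, so they do honor a registered backend. A sketch of the pattern, demonstrated here with the built-in 'threading' backend (after register_spark(), substituting 'spark' would distribute the fits across the cluster):

```python
from sklearn.utils import parallel_backend
from sklearn.model_selection import GridSearchCV
from sklearn import datasets
from sklearn import svm

iris = datasets.load_iris()
param_grid = {'C': [0.1, 1, 10]}

# GridSearchCV submits each candidate fit through joblib, so the
# active backend ('threading' here, 'spark' with joblibspark) runs them.
search = GridSearchCV(svm.SVC(kernel='linear'), param_grid, cv=3)
with parallel_backend('threading', n_jobs=3):
    search.fit(iris.data, iris.target)

print(search.best_params_)
```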
