


✨✨ Note to new users: ✨✨

Version 3 of Splink is in development. It will be simpler and more intuitive to use, and it removes the need for PySpark for smaller data linkages of up to around 1 million records. You can find the documentation here. You can try it by installing a pre-release, or in the new demos here. For new users, it may make sense to work with the new version because it is quicker to learn. However, note that the new code is not yet fully tested.
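
For example, a version 3 pre-release can be installed with pip's --pre flag (assuming the pre-releases are published to PyPI as pre-release versions):

pip install --pre splink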

Splink: Probabilistic record linkage and deduplication at scale

splink implements Fellegi-Sunter's canonical model of record linkage in Apache Spark, including the Expectation-Maximisation (EM) algorithm to estimate the model's parameters.


  • Works at much greater scale than current open source implementations (100 million records+).

  • Runs quickly, with runtimes of less than an hour.

  • Has a highly transparent methodology; match scores can be easily explained both graphically and in words.

  • Is highly accurate.

It is assumed that users of Splink are familiar with probabilistic record linkage theory, and the Fellegi-Sunter model in particular. A series of interactive articles explores the theory behind Splink.

The statistical model behind splink is the same as that used in the R fastLink package. An academic paper accompanying fastLink describes this model, and it is the best starting point for users wanting to understand the theory behind how splink works.
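
As a minimal illustration of the model (the numbers here are invented): each column comparison contributes a Bayes factor m/u, where m is the probability of the observed agreement among true matches and u the probability among non-matches, and these factors update the prior odds that two records refer to the same entity.

```python
import math

# Hypothetical m and u probabilities for an "exact match on first name" scenario:
# m: P(first names agree | records are a true match)
# u: P(first names agree | records are not a match)
m, u = 0.9, 0.01

bayes_factor = m / u                    # evidence in favour of a match
match_weight = math.log2(bayes_factor)  # the same evidence on a log2 scale

prior = 0.001                           # assumed prior probability of a match
prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * bayes_factor
posterior = posterior_odds / (1 + posterior_odds)

print(f"Bayes factor: {bayes_factor:.0f}, match weight: {match_weight:.2f}")
print(f"Posterior match probability: {posterior:.3f}")
```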

Data Matching, a book by Peter Christen, is another excellent resource.


splink is a Python package. It uses the Spark Python API to execute data linking jobs in a Spark cluster. It has been tested against Apache Spark 2.3, 2.4 and 3.1.

Install splink using:

pip install splink

Note that Splink requires pyspark and a working Spark installation. These are not specified as explicit dependencies because it is assumed users have an existing pyspark setup they wish to use.
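
As a rough sketch of what a linkage job looks like (the settings values, column names and input path below are invented for illustration; the demo notebooks linked below are the authoritative, tested examples):

```python
from pyspark.sql import SparkSession
from splink import Splink

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("input_records.parquet")  # hypothetical input path

# Illustrative settings: deduplicate a single dataset, generating candidate
# pairs only where records share a first name
settings = {
    "link_type": "dedupe_only",
    "blocking_rules": ["l.first_name = r.first_name"],
    "comparison_columns": [
        {"col_name": "first_name"},
        {"col_name": "surname"},
        {"col_name": "dob"},
    ],
}

linker = Splink(settings, df, spark)
df_scored = linker.get_scored_comparisons()  # pairwise comparisons with match probabilities
```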

Interactive demo

You can run demos of splink in an interactive Jupyter notebook environment.

The best documentation is currently the series of demonstration notebooks in the splink_demos repo.

Other tools in the Splink family

Splink Graph

splink_graph is a graph utility library for use in Apache Spark. It computes graph metrics on the outputs of data linking; the repo is here. Use cases include:

  • Quality assurance of linkage results and identifying false positive links
  • Computing quality metrics associated with groups (clusters) of linked records
  • Automatically identifying possible false positive links in clusters
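
To illustrate the kind of metric involved (this sketch uses networkx on a driver-sized edge list rather than splink_graph's Spark API, and the linked pairs are invented): a cluster whose records are all pairwise linked is more trustworthy than a sparsely connected one, and graph density captures this.

```python
import networkx as nx

# Hypothetical record pairs linked above a match probability threshold
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
g = nx.Graph(edges)

for cluster in nx.connected_components(g):
    sub = g.subgraph(cluster)
    # Density of 1.0 means every record links to every other; low density can
    # indicate a false positive link joining two distinct entities.
    print(sorted(cluster), "density:", round(nx.density(sub), 2))
```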

Splink Comparison Viewer

splink_comparison_viewer produces interactive dashboards that help you rapidly understand and quality assure the outputs of record linkage. A tutorial video is available here.

Splink Cluster Studio

splink_cluster_studio creates an interactive html dashboard from Splink output that allows you to visualise and analyse a sample of clusters from your record linkage. The repo is here.

Splink Synthetic Data

This code generates realistic test datasets for linkage using the WikiData Query Service.

It has been used to benchmark the accuracy of various Splink models.

Interactive settings editor with autocomplete

We also provide an interactive splink settings editor and example settings here.

Starting parameter generation tools

A tool to generate custom m and u probabilities can be found here.
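
For context, starting m and u probabilities can also be written directly into the settings dictionary. A hedged sketch follows (the key names reflect my understanding of the Splink 2 settings schema; verify them against the interactive settings editor above):

```python
# Illustrative comparison column with user-supplied starting m and u
# probabilities (one value per comparison level, lowest level first;
# each list should sum to 1)
comparison_column = {
    "col_name": "surname",
    "num_levels": 2,
    "m_probabilities": [0.1, 0.9],  # P(level | records are a true match)
    "u_probabilities": [0.9, 0.1],  # P(level | records are not a match)
}
```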


You can read a short blog post about splink here.


You can find an introductory video showcasing Splink's features and running through a demo of its functionality here.

How to make changes to Splink

(Steps 5 onwards for repo admins only)

  1. Raise a new issue or target an existing issue
  2. Create a new branch (usually off master), or a fork for external contributors
  3. Make changes, commit and push to GitHub
  4. Make a pull request, referencing the issue
  5. Wait for tests to pass
  6. Review the pull request
  7. Bump the Splink version in pyproject.toml as part of the pull request
  8. Merge
  9. Create a tagged release on GitHub. This will trigger automatic publishing to PyPI


We are very grateful to ADR UK (Administrative Data Research UK) for providing funding for this work as part of the Data First project.

We are also very grateful to colleagues at the UK's Office for National Statistics for their expert advice and peer review of this work.
