🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools
Hugging Face Datasets Library

Datasets is a lightweight library providing two main features:

  • one-line dataloaders for many public datasets: one-liners to download and pre-process any of the major public datasets (image datasets, audio datasets, text datasets in 467 languages and dialects, etc.) provided on the HuggingFace Datasets Hub. With a simple command like squad_dataset = load_dataset("squad"), get any of these datasets ready to use in a dataloader for training/evaluating an ML model (NumPy/pandas/PyTorch/TensorFlow/JAX),
  • efficient data pre-processing: simple, fast and reproducible data pre-processing for the public datasets as well as your own local datasets in CSV, JSON, text, PNG, JPEG, WAV, MP3, Parquet, etc. With simple commands like processed_dataset = dataset.map(process_example), efficiently prepare the dataset for inspection and ML model evaluation and training (see the sketch below).
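
To make the two features concrete, here is a minimal sketch (process_example and the added column name are illustrative helpers, not part of the library API):

from datasets import load_dataset

# One-line dataloader: download and prepare the SQuAD dataset
squad_dataset = load_dataset("squad")

# process_example is a hypothetical helper defined just for this sketch
def process_example(example):
    return {"context_length": len(example["context"])}

# Efficient, cached pre-processing with map()
processed_dataset = squad_dataset.map(process_example)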

Documentation · Colab tutorial

Find a dataset in the Hub · Add a new dataset to the Hub

Datasets is designed to let the community easily add and share new datasets.

Datasets has many additional interesting features:

  • Thrive on large datasets: Datasets naturally frees you from RAM limitations; all datasets are memory-mapped using an efficient zero-serialization-cost backend (Apache Arrow).
  • Smart caching: never wait for your data to be processed several times.
  • Lightweight and fast, with a transparent and Pythonic API (multi-processing/caching/memory-mapping).
  • Built-in interoperability with NumPy, pandas, PyTorch, TensorFlow 2 and JAX (see the sketch after this list).
  • Native support for audio and image data.
  • Enable streaming mode to save disk space and start iterating over the dataset immediately.
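
As a quick illustration of this interoperability, here is a minimal sketch (the dataset choice is arbitrary):

from datasets import load_dataset

dataset = load_dataset("squad", split="train")

# Ask for framework-native types on access; the same Arrow table backs each view
torch_dataset = dataset.with_format("torch")
numpy_dataset = dataset.with_format("numpy")

# Materialize the whole split as a pandas DataFrame
df = dataset.to_pandas()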

Datasets originated from a fork of the awesome TensorFlow Datasets, and the HuggingFace team wants to deeply thank the TensorFlow Datasets team for building this amazing library. More details on the differences between Datasets and tfds can be found in the section Main differences between Datasets and tfds.

Installation

With pip

Datasets can be installed from PyPI and should be installed in a virtual environment (venv or conda, for instance):

pip install datasets

With conda

Datasets can be installed using conda as follows:

conda install -c huggingface -c conda-forge datasets

Follow the installation pages of TensorFlow and PyTorch to see how to install them with conda.

For more details on installation, check the installation page in the documentation.

Installation to use with PyTorch/TensorFlow/pandas

If you plan to use Datasets with PyTorch (1.0+), TensorFlow (2.2+) or pandas, you should also install PyTorch, TensorFlow or pandas.
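
For example, assuming a pip-based environment (pandas already ships as a dependency of Datasets, so only the deep learning framework needs to be added):

pip install datasets torch        # to use Datasets with PyTorch
pip install datasets tensorflow   # to use Datasets with TensorFlow 2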

For more details on using the library with NumPy, pandas, PyTorch or TensorFlow, check the quick start page in the documentation.

Usage

Datasets is made to be very simple to use. The main methods are:

  • datasets.list_datasets() to list the available datasets
  • datasets.load_dataset(dataset_name, **kwargs) to instantiate a dataset

This library can be used for text, image, audio, etc. datasets. Here is a quick example loading a text dataset:

from datasets import list_datasets, load_dataset

# Print all the available datasets
print(list_datasets())

# Load a dataset and print the first example in the training set
squad_dataset = load_dataset('squad')
print(squad_dataset['train'][0])

# Process the dataset - add a column with the length of the context texts
dataset_with_length = squad_dataset.map(lambda x: {"length": len(x["context"])})

# Process the dataset - tokenize the context texts (using a tokenizer from the Transformers library)
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')

tokenized_dataset = squad_dataset.map(lambda x: tokenizer(x['context']), batched=True)

If your dataset is bigger than your disk or if you don't want to wait to download the data, you can use streaming:

# If you want to use the dataset immediately and efficiently stream the data as you iterate over the dataset
image_dataset = load_dataset('cifar100', streaming=True)
for example in image_dataset["train"]:
    break

For more details on using the library, check the quick start page and the specific guides in the documentation.

Another introduction to Datasets is the tutorial on Google Colab.

Add a new dataset to the Hub

We have a very detailed step-by-step guide for adding a new dataset to the datasets already provided on the HuggingFace Datasets Hub.

Main differences between Datasets and tfds

If you are familiar with the great TensorFlow Datasets, here are the main differences between Datasets and tfds:

  • the scripts in Datasets are not provided within the library but are queried, downloaded/cached and dynamically loaded upon request
  • Datasets also provides evaluation metrics in a similar fashion to the datasets, i.e. as dynamically installed scripts with a unified API. This gives access to a benchmark dataset and its benchmark metric as a pair, for instance for benchmarks like SQuAD or GLUE.
  • the backend serialization of Datasets is based on Apache Arrow instead of TF Records and leverages Python dataclasses for info and features, with some diverging features (we mostly don't do encoding and store the raw data as much as possible in the backend serialization cache).
  • the user-facing dataset object of Datasets is not a tf.data.Dataset but a built-in framework-agnostic dataset class with methods inspired by what we like in tf.data (like a map() method). It basically wraps a memory-mapped Arrow table cache (see the sketch below).
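
A minimal sketch of this framework-agnostic object (the dataset choice is arbitrary):

from datasets import load_dataset

dataset = load_dataset("squad", split="train")

# A datasets Dataset backed by a memory-mapped Arrow table, not a tf.data.Dataset
print(type(dataset))

# Dict-like row access and tf.data-inspired methods such as map() and select()
first_example = dataset[0]
small_subset = dataset.select(range(10))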

Disclaimers

Similar to TensorFlow Datasets, Datasets is a utility library that downloads and prepares public datasets. We do not host or distribute most of these datasets, vouch for their quality or fairness, or claim that you have license to use them. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.

Moreover, Datasets may run Python code defined by the dataset authors to parse certain data formats or structures. For security reasons, we ask users to:

  • check the dataset scripts they're going to run beforehand and
  • pin the revision of the repositories they use (see the sketch below).
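
As a sketch of the second point, pinning a dataset repository to a fixed revision might look like this ("some_dataset" and the revision value are hypothetical placeholders):

from datasets import load_dataset

# Pin the dataset repository to a specific git revision (tag, branch, or commit sha)
# "some_dataset" and "ab1c2d3" are placeholders for illustration
dataset = load_dataset("some_dataset", revision="ab1c2d3")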

If you're a dataset owner and wish to update any part of it (description, citation, license, etc.), or do not want your dataset to be included in the Hugging Face Hub, please get in touch by opening a discussion or a pull request in the Community tab of the dataset page. Thanks for your contribution to the ML community!

Citation

If you want to cite our Datasets library, you can use our paper:

@inproceedings{lhoest-etal-2021-datasets,
    title = "Datasets: A Community Library for Natural Language Processing",
    author = "Lhoest, Quentin  and
      Villanova del Moral, Albert  and
      Jernite, Yacine  and
      Thakur, Abhishek  and
      von Platen, Patrick  and
      Patil, Suraj  and
      Chaumond, Julien  and
      Drame, Mariama  and
      Plu, Julien  and
      Tunstall, Lewis  and
      Davison, Joe  and
      {\v{S}}a{\v{s}}ko, Mario  and
      Chhablani, Gunjan  and
      Malik, Bhavitvya  and
      Brandeis, Simon  and
      Le Scao, Teven  and
      Sanh, Victor  and
      Xu, Canwen  and
      Patry, Nicolas  and
      McMillan-Major, Angelina  and
      Schmid, Philipp  and
      Gugger, Sylvain  and
      Delangue, Cl{\'e}ment  and
      Matussi{\`e}re, Th{\'e}o  and
      Debut, Lysandre  and
      Bekman, Stas  and
      Cistac, Pierric  and
      Goehringer, Thibault  and
      Mustar, Victor  and
      Lagunas, Fran{\c{c}}ois  and
      Rush, Alexander  and
      Wolf, Thomas",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-demo.21",
    pages = "175--184",
    abstract = "The scale, variety, and quantity of publicly-available NLP datasets has grown rapidly as researchers propose new tasks, larger models, and novel benchmarks. Datasets is a community library for contemporary NLP designed to support this ecosystem. Datasets aims to standardize end-user interfaces, versioning, and documentation, while providing a lightweight front-end that behaves similarly for small datasets as for internet-scale corpora. The design of the library incorporates a distributed, community-driven approach to adding datasets and documenting usage. After a year of development, the library now includes more than 650 unique datasets, has more than 250 contributors, and has helped support a variety of novel cross-dataset research projects and shared tasks. The library is available at https://github.com/huggingface/datasets.",
}

If you need to cite a specific version of our Datasets library for reproducibility, you can use the corresponding version Zenodo DOI from this list.
