

A Framework for Textual Entailment based Zero Shot text classification


This repository contains the code for out-of-the-box, ready-to-use zero-shot classifiers for different tasks, such as Topic Labelling or Relation Extraction. It is built on top of the 🤗 HuggingFace Transformers library, so you are free to choose among hundreds of models. You can either use a dataset-specific classifier or define one yourself with just label descriptions or templates! The repository contains the code for the following publications:

To get started with the repository consider reading the new documentation!

Demo 🕹️

We have released a demo on Zero-Shot Information Extraction using Textual Entailment (ZS4IE: A toolkit for Zero-Shot Information Extraction with simple Verbalizations), accepted in the Demo Track of NAACL 2022. The code is publicly available on its own GitHub repository: ZS4IE.


Installation

Using pip (check the latest release)

```bash
pip install a2t
```

By cloning the repository

```bash
git clone
cd Ask2Transformers
pip install .
```

Or directly by

```bash
pip install git+
```


Available models

By default, the roberta-large-mnli checkpoint is used to perform the inference. You can try different models to perform the zero-shot classification, but they need to be fine-tuned on an NLI task and be compatible with the AutoModelForSequenceClassification class from Transformers. For example:

  • roberta-large-mnli
  • joeddav/xlm-roberta-large-xnli
  • facebook/bart-large-mnli
  • microsoft/deberta-v2-xlarge-mnli

Coming soon: support for generative models like t5-large.
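To make the entailment-based scheme concrete, here is a minimal, model-free sketch of the underlying idea: each label is verbalized into an NLI hypothesis via a template, and the label whose hypothesis is most entailed by the input wins. The `toy_entailment_score` function below is a stand-in heuristic, not the library's API; a real setup would replace it with an NLI model such as roberta-large-mnli.

```python
def fill_template(template: str, label: str) -> str:
    """Verbalize a label into an NLI hypothesis."""
    return template.format(label=label)

def toy_entailment_score(premise: str, hypothesis: str) -> float:
    """Stand-in for P(entailment | premise, hypothesis).
    A real system would query an NLI model here; this toy version
    just measures word overlap so the sketch stays self-contained."""
    hypothesis_words = set(hypothesis.lower().replace(".", "").split())
    overlap = sum(1 for w in premise.lower().split() if w in hypothesis_words)
    return overlap / max(len(hypothesis_words), 1)

def zero_shot_classify(premise: str, labels: list, template: str) -> str:
    """Return the label whose verbalized hypothesis is most entailed."""
    scores = {
        label: toy_entailment_score(premise, fill_template(template, label))
        for label in labels
    }
    return max(scores, key=scores.get)

labels = ["politics", "sports", "economy"]
template = "The domain of the sentence is about {label}."
print(zero_shot_classify("Stocks fell as the economy slowed.", labels, template))
# → economy
```

Note that no labeled training data for the target labels is needed: adding a new label only requires adding its name (or description) to the list.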

Pre-trained models 🆕

We now provide (task specific) pre-trained entailment models to: (1) reproduce the results of the papers and (2) reuse them for new schemas of the same tasks. The models are publicly available on the 🤗 HuggingFace Models Hub.

The model name describes the configuration used for training as follows:


  • pretrained_model: The checkpoint used for initialization. For example: RoBERTa-large.
  • NLI_datasets: The NLI datasets used for pivot training.
    • S: Stanford Natural Language Inference (SNLI) dataset.
    • M: Multi Natural Language Inference (MNLI) dataset.
    • F: Fever-nli dataset.
    • A: Adversarial Natural Language Inference (ANLI) dataset.
  • finetune_datasets: The datasets used for fine-tuning the entailment model. Note that when more than one dataset is listed, training was performed sequentially. For example: ACE-arg.
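Assuming the name components are underscore-separated as in the example model above, the naming scheme can be decoded programmatically (`parse_a2t_name` is a hypothetical helper for illustration, not part of the library):

```python
def parse_a2t_name(name: str) -> dict:
    """Decode an A2T model name of the form
    [namespace/]A2T_<pretrained>_<NLI letters>_<finetune datasets...>."""
    nli_letters = {"S": "SNLI", "M": "MNLI", "F": "Fever-nli", "A": "ANLI"}
    repo = name.split("/")[-1]  # drop the "HiTZ/" namespace if present
    _, pretrained, nli, *finetune = repo.split("_")
    return {
        "pretrained_model": pretrained,
        "nli_datasets": [nli_letters[c] for c in nli],
        "finetune_datasets": finetune,
    }

print(parse_a2t_name("HiTZ/A2T_RoBERTa_SMFA_ACE-arg"))
```

For the example model this yields RoBERTa as the initialization checkpoint, SNLI + MNLI + Fever-nli + ANLI as the pivot NLI training data, and ACE-arg as the fine-tuning dataset.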

Some models, like HiTZ/A2T_RoBERTa_SMFA_ACE-arg, have been trained with some information marked between square brackets ('[[' and ']]'), such as the event trigger span. Make sure you follow the same preprocessing to obtain the best results.
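For illustration, marking a span could look like the sketch below (`mark_span` is a hypothetical helper; the actual preprocessing code lives in this repository):

```python
def mark_span(tokens, start, end, open_tok="[[", close_tok="]]"):
    """Wrap tokens[start:end] in the bracket markers used at training
    time, e.g. around the event trigger span."""
    marked = tokens[:start] + [open_tok] + tokens[start:end] + [close_tok] + tokens[end:]
    return " ".join(marked)

tokens = "He was elected president in 2008".split()
print(mark_span(tokens, 2, 3))
# → He was [[ elected ]] president in 2008
```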

Training your own models

There is no special script for fine-tuning your own entailment-based models. In our experiments, we used the publicly available Python script from HuggingFace Transformers. To train your own model, you first need to convert your dataset into NLI-formatted data; we recommend having a look at the script that serves as an example.
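As a sketch of what such a conversion can look like, assuming one hypothesis per label, with the gold label mapped to entailment and all other labels to neutral (`to_nli_examples` is illustrative, not the repository's conversion script):

```python
def to_nli_examples(sentence, gold_label, labels, template):
    """Turn one classification example into premise/hypothesis pairs:
    the gold label produces an 'entailment' pair, every other label a
    'neutral' pair, so an NLI model can be fine-tuned on them."""
    examples = []
    for label in labels:
        examples.append({
            "premise": sentence,
            "hypothesis": template.format(label=label),
            "label": "entailment" if label == gold_label else "neutral",
        })
    return examples

pairs = to_nli_examples(
    "Stocks fell as the economy slowed.",
    gold_label="economy",
    labels=["sports", "economy"],
    template="The domain of the sentence is about {label}.",
)
for p in pairs:
    print(p["label"], "|", p["hypothesis"])
```

Whether negatives are labeled neutral or contradiction (and how many negatives to sample per example) is a design choice; see the papers for the configurations actually used.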

Tutorials (Notebooks)

Coming soon!

Results and evaluation

To obtain the results reported in the papers, run the script with the corresponding configuration files. A configuration file containing the task and evaluation information should look like this:

```json
{
    "name": "BabelDomains",
    "task_name": "topic-classification",
    "features_class": "a2t.tasks.text_classification.TopicClassificationFeatures",
    "hypothesis_template": "The domain of the sentence is about {label}.",
    "nli_models": [
        "roberta-large-mnli"
    ],
    "labels": [
        "Art, architecture, and archaeology",
        "Business, economics, and finance",
        "Chemistry and mineralogy",
        "Culture and society",
        "Royalty and nobility",
        "Sport and recreation",
        "Textile and clothing",
        "Transport and travel",
        "Warfare and defense"
    ],
    "preprocess_labels": true,
    "dataset": "babeldomains",
    "test_path": "data/babeldomains.domain.gloss.tsv",
    "use_cuda": true,
    "half": true
}
```
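A minimal illustration of how such a config can be consumed, using only the `json` standard library and the fields shown above (this is not the repository's evaluation script):

```python
import json

# A trimmed-down config with only the fields needed for this sketch.
config_text = """{
    "name": "BabelDomains",
    "hypothesis_template": "The domain of the sentence is about {label}.",
    "labels": ["Sport and recreation", "Warfare and defense"]
}"""

config = json.loads(config_text)

# Verbalize every label into the hypothesis the NLI model would score.
hypotheses = [
    config["hypothesis_template"].format(label=label)
    for label in config["labels"]
]
print(hypotheses[0])
# → The domain of the sentence is about Sport and recreation.
```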

See the papers for the complete results.

About legacy code

The old code of this repository has been moved to the a2t.legacy module and is only intended for experimental reproducibility. Please consider moving to the new code. If you need help, read the new documentation or open an Issue on GitHub.


Citation

If you use this work, please consider citing at least one of the following papers. You can find the BibTeX files on their corresponding ACL Anthology pages.

Oscar Sainz, Haoling Qiu, Oier Lopez de Lacalle, Eneko Agirre, and Bonan Min. 2022. ZS4IE: A toolkit for Zero-Shot Information Extraction with simple Verbalizations. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations, pages 27–38, Hybrid: Seattle, Washington + Online. Association for Computational Linguistics.

Oscar Sainz, Itziar Gonzalez-Dios, Oier Lopez de Lacalle, Bonan Min, and Eneko Agirre. 2022. Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 2439–2455, Seattle, United States. Association for Computational Linguistics.

Oscar Sainz, Oier Lopez de Lacalle, Gorka Labaka, Ander Barrena, and Eneko Agirre. 2021. Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1199–1212, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Oscar Sainz and German Rigau. 2021. Ask2Transformers: Zero-Shot Domain labelling with Pretrained Language Models. In Proceedings of the 11th Global Wordnet Conference, pages 44–52, University of South Africa (UNISA). Global Wordnet Association.
