OpenPrompt

An Open-Source Framework for Prompt-learning.



Overview


Prompt-learning is the latest paradigm for adapting pre-trained language models (PLMs) to downstream NLP tasks: it modifies the input text with a textual template and directly uses the PLM to perform its pre-training task. This library provides a standard, flexible and extensible framework for deploying the prompt-learning pipeline. OpenPrompt supports loading PLMs directly from huggingface transformers. In the future, we will also support PLMs implemented by other libraries. For more resources about prompt-learning, please check our paper list.

What Can You Do via OpenPrompt?


  • Use the implementations of current prompt-learning approaches. We have implemented a variety of prompting methods, including templating, verbalizing and optimization strategies, under a unified standard. You can easily call and understand these methods.
  • Design your own prompt-learning work. With the extensibility of OpenPrompt, you can quickly practice your prompt-learning ideas.


Using Pip

Our repo is tested on Python 3.6+ and PyTorch 1.8.1+. Install OpenPrompt using pip as follows:

pip install openprompt

To play with the latest features, you can also install OpenPrompt from the source.

Using Git

Clone the repository from github:

git clone https://github.com/thunlp/OpenPrompt.git
cd OpenPrompt
pip install -r requirements.txt
python setup.py install

Modify the code

python setup.py develop

Use OpenPrompt

Base Concepts

A PromptModel object contains a PLM, one (or multiple) Template and one (or multiple) Verbalizer. The Template class wraps the original input with templates, and the Verbalizer class constructs a projection between labels and target words in the current vocabulary. The PromptModel object is what actually participates in training and inference.
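To make the division of labor concrete, here is a minimal, framework-free sketch of these roles. The function names and hard-coded scores below are purely illustrative, not the OpenPrompt API:

```python
# Illustrative sketch of the prompt-learning data flow.
# None of these names are OpenPrompt classes; they only mirror the concepts.

def apply_template(text_a: str) -> str:
    """Template role: wrap the original input with a textual pattern."""
    return f"{text_a} It was [MASK]."

def verbalize(mask_word_scores: dict) -> dict:
    """Verbalizer role: project scores over label words onto class scores."""
    label_words = {"negative": ["bad"], "positive": ["good", "great"]}
    return {
        cls: max(mask_word_scores.get(w, 0.0) for w in words)
        for cls, words in label_words.items()
    }

# A real PLM would score vocabulary words at the [MASK] position;
# here we hard-code scores just to show the flow through the pipeline.
prompt = apply_template("The film was badly made.")
scores = verbalize({"bad": 0.9, "good": 0.05, "great": 0.05})
prediction = max(scores, key=scores.get)
print(prompt)      # The film was badly made. It was [MASK].
print(prediction)  # negative
```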

Introduction by a Simple Example

With the modularity and flexibility of OpenPrompt, you can easily develop a prompt-learning pipeline.

Step 1: Define a task

The first step is to determine the current NLP task; think about what your data looks like and what you want from the data. That is, the essence of this step is to determine the classes and the InputExample of the task. For simplicity, we use Sentiment Analysis as an example.

from openprompt.data_utils import InputExample
classes = [ # There are two classes in Sentiment Analysis, one for negative and one for positive
    "negative",
    "positive",
]
dataset = [ # For simplicity, there are only two examples
    # text_a is the input text of the data; some other datasets may have multiple input sentences in one example.
    InputExample(
        guid = 0,
        text_a = "Albert Einstein was one of the greatest intellects of his time.",
    ),
    InputExample(
        guid = 1,
        text_a = "The film was badly made.",
    ),
]

Step 2: Define a Pre-trained Language Model (PLM) as backbone.

Choose a PLM to support your task. Different models have different attributes; we encourage you to use OpenPrompt to explore the potential of various PLMs. OpenPrompt is compatible with models on huggingface.

from openprompt.plms import load_plm
plm, tokenizer, model_config, WrapperClass = load_plm("bert", "bert-base-cased")

Step 3: Define a Template.

A Template is a modifier of the original input text and is also one of the most important modules in prompt-learning. We have defined text_a in Step 1.

from openprompt.prompts import ManualTemplate
promptTemplate = ManualTemplate(
    text = '{"placeholder":"text_a"} It was {"mask"}',
    tokenizer = tokenizer,
)
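For intuition, the pattern splices each example into a natural-language sentence with a mask slot. Here is a plain-string illustration of that substitution (not the tokenized output OpenPrompt actually produces):

```python
# Plain-string stand-in for the ManualTemplate pattern
# '{"placeholder":"text_a"} It was {"mask"}'; the real template also
# handles tokenization and mask bookkeeping.
pattern = "{text_a} It was {mask}"

examples = [
    "Albert Einstein was one of the greatest intellects of his time.",
    "The film was badly made.",
]
wrapped = [pattern.format(text_a=t, mask="[MASK]") for t in examples]
print(wrapped[0])
# Albert Einstein was one of the greatest intellects of his time. It was [MASK]
```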

Step 4: Define a Verbalizer

A Verbalizer is another important (but not necessary) module in prompt-learning, which projects the original labels (we have defined them as classes, remember?) to a set of label words. Here is an example that projects the negative class to the word bad, and the positive class to the words good, wonderful, great.

from openprompt.prompts import ManualVerbalizer
promptVerbalizer = ManualVerbalizer(
    classes = classes,
    label_words = {
        "negative": ["bad"],
        "positive": ["good", "wonderful", "great"],
    },
    tokenizer = tokenizer,
)
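Because a class may map to several label words, the verbalizer must aggregate the PLM's scores over those words into one score per class. A minimal sketch of the idea, with hard-coded mask-position scores and a mean aggregation chosen purely for illustration:

```python
# Hypothetical scores a PLM might assign to label words at the mask position.
mask_scores = {"bad": 0.2, "good": 0.5, "wonderful": 0.1, "great": 0.4}

label_words = {
    "negative": ["bad"],
    "positive": ["good", "wonderful", "great"],
}

# Collapse each class's label-word scores into one class score
# (mean here, just to illustrate the projection).
class_scores = {
    cls: sum(mask_scores[w] for w in words) / len(words)
    for cls, words in label_words.items()
}
print(class_scores)  # negative -> 0.2, positive -> mean of its three words
```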

Step 5: Combine them into a PromptModel

Given the task, we now have a PLM, a Template and a Verbalizer, and we combine them into a PromptModel. Note that although this example naively combines the three modules, you can actually define more complicated interactions among them.

from openprompt import PromptForClassification
promptModel = PromptForClassification(
    template = promptTemplate,
    plm = plm,
    verbalizer = promptVerbalizer,
)

Step 6: Define a DataLoader

A PromptDataLoader is basically a prompt-based version of the PyTorch DataLoader, which also includes a Tokenizer, a Template and a TokenizerWrapper.

from openprompt import PromptDataLoader
data_loader = PromptDataLoader(
    dataset = dataset,
    tokenizer = tokenizer,
    template = promptTemplate,
    tokenizer_wrapper_class = WrapperClass,
)

Step 7: Train and inference

Done! We can conduct training and inference the same as with any other PyTorch pipeline.

import torch

# making zero-shot inference using pretrained MLM with prompt
promptModel.eval()
with torch.no_grad():
    for batch in data_loader:
        logits = promptModel(batch)
        preds = torch.argmax(logits, dim = -1)
# predictions would be 1, 0 for classes 'positive', 'negative'
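For supervised training, the same objects plug into a standard PyTorch loop. Since running the full pipeline requires downloading a PLM, here is a self-contained sketch of that loop with a stand-in linear model and a synthetic batch; in practice `promptModel` and `data_loader` from the steps above replace them:

```python
import torch
from torch import nn

torch.manual_seed(0)

# Stand-in for promptModel: any module mapping inputs to class logits.
model = nn.Linear(4, 2)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=0.1)

# Synthetic batch standing in for PromptDataLoader output.
features = torch.randn(8, 4)
labels = torch.randint(0, 2, (8,))

model.train()
first_loss = None
for step in range(20):
    logits = model(features)
    loss = loss_fn(logits, labels)
    if first_loss is None:
        first_loss = loss.item()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(first_loss > loss.item())  # the loss should have decreased
```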

Please refer to our tutorial scripts and documentation for more details.


Datasets

We provide a series of download scripts in the dataset/ folder; feel free to use them to download benchmarks.

Performance Report

There are many possible combinations powered by OpenPrompt. We are trying our best to test the performance of different methods as soon as possible, and the results will be constantly updated in these tables. We also encourage users to find the best hyper-parameters for their own tasks and report the results by making a pull request.

Known Issues

Major improvements and enhancements are planned for the future.

  • We made some major changes since the last version, so parts of the docs are outdated. We will fix them soon.


Citation

Please cite our paper if you use OpenPrompt in your work:

@article{ding2021openprompt,
  title={OpenPrompt: An Open-source Framework for Prompt-learning},
  author={Ding, Ning and Hu, Shengding and Zhao, Weilin and Chen, Yulin and Liu, Zhiyuan and Zheng, Hai-Tao and Sun, Maosong},
  journal={arXiv preprint arXiv:2111.01998},
  year={2021}
}


We thank all the contributors to this project; more contributors are welcome!
