multi_task_NLP is a utility toolkit that enables NLP developers to easily train and infer a single model for multiple tasks. We support various data formats for the majority of NLU tasks and multiple transformer-based encoders (e.g. BERT, DistilBERT, ALBERT, RoBERTa, XLNet).

For complete documentation of this library, please refer to the documentation.

What is multi_task_NLP about?

Any conversational AI system involves building multiple components to perform various tasks and a pipeline to stitch them together. Given the recent effectiveness of transformer-based models in NLP, it's very common to build a transformer-based model to solve your use case. But having multiple such models running together in a conversational AI system can lead to expensive resource consumption, increased prediction latencies, and a system that is difficult to manage. This poses a real challenge for anyone who wants to build a conversational AI system in a simple way.

multi_task_NLP gives you the capability to define multiple tasks together and train a single model that learns on all defined tasks simultaneously. This means you can perform multiple tasks with latency and resource consumption equivalent to that of a single task.
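The idea above is often called hard parameter sharing: one shared encoder feeds several lightweight task heads, so inference cost stays close to that of a single model. The following is a conceptual sketch only (toy functions, not this library's implementation); the encoder stand-in and head names are invented for illustration.

```python
# Conceptual sketch of hard parameter sharing (NOT the library's code):
# a single shared encoder is computed once, then each task head reads
# the shared representation.
def shared_encoder(sentence):
    # Stand-in for a transformer encoder; here just a toy token count.
    return len(sentence.split())

# Toy per-task heads keyed by task name (names are illustrative).
task_heads = {
    "TaskA": lambda feat: "labelA" if feat > 3 else "labelB",
    "TaskB": lambda feat: "short" if feat < 5 else "long",
}

def infer(sentence, tasks):
    feat = shared_encoder(sentence)  # encoder runs once, not per task
    return {task: task_heads[task](feat) for task in tasks}

print(infer("book a table for two", ["TaskA", "TaskB"]))
# → {'TaskA': 'labelA', 'TaskB': 'long'}
```

The point of the sketch: both task predictions reuse one encoder pass, which is why latency stays roughly constant as tasks are added.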


To use multi-task-NLP, clone the repository to the desired location on your system with the following terminal commands.

$ cd /desired/location/
$ git clone
$ cd multi-task-NLP
$ pip install -r requirements.txt 

NOTE:- The library is built and tested with Python 3.7.3. It is recommended to install the requirements in a virtual environment.

Quickstart Guide

A quick guide showing how a model can be trained for single or multiple NLU tasks in just 3 simple steps, with no coding required!

Follow these 3 simple steps to train your multi-task model!

Step 1 - Define your task file

The task file is a YAML file where you define all the tasks for which you want to train a multi-task model.

    TaskA:
        model_type: BERT
        config_name: bert-base-uncased
        dropout_prob: 0.05
        label_map_or_file:
        - label1
        - label2
        - label3
        metrics:
        - accuracy
        loss_type: CrossEntropyLoss
        task_type: SingleSenClassification
        file_names:
        - taskA_train.tsv
        - taskA_dev.tsv
        - taskA_test.tsv

    TaskB:
        model_type: BERT
        config_name: bert-base-uncased
        dropout_prob: 0.3
        label_map_or_file: data/taskB_train_label_map.joblib
        metrics:
        - seq_f1
        - seq_precision
        - seq_recall
        loss_type: NERLoss
        task_type: NER
        file_names:
        - taskB_train.tsv
        - taskB_dev.tsv
        - taskB_test.tsv
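Before running data preparation, it can help to sanity-check each task entry for the required parameters. The sketch below is illustrative: the required-key set is inferred from the sample task file above, not taken from the library's authoritative schema, and the validation function is a hypothetical helper.

```python
# Hedged sketch: check task-file entries for required parameters.
# REQUIRED_KEYS is an assumption drawn from the sample task file above.
REQUIRED_KEYS = {"model_type", "config_name", "task_type", "loss_type"}

def validate_task(name, params):
    """Return a message per missing required key for one task entry."""
    missing = sorted(REQUIRED_KEYS - set(params))
    return ["%s: missing '%s'" % (name, key) for key in missing]

tasks = {
    "TaskA": {
        "model_type": "BERT",
        "config_name": "bert-base-uncased",
        "dropout_prob": 0.05,
        "loss_type": "CrossEntropyLoss",
        "task_type": "SingleSenClassification",
    },
    # TaskB below deliberately omits loss_type to show a failure.
    "TaskB": {
        "model_type": "BERT",
        "config_name": "bert-base-uncased",
        "task_type": "NER",
    },
}

problems = []
for name, params in tasks.items():
    problems.extend(validate_task(name, params))

print(problems)  # → ["TaskB: missing 'loss_type'"]
```

Catching a malformed entry this way is cheaper than discovering it mid-way through data preparation or training.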

To learn how to compose your own task file, refer to task file parameters.

Step 2 - Run data preparation

After defining the task file, run the following command to prepare the data.

$ python data_preparation.py \
    --task_file 'sample_task_file.yml' \
    --data_dir 'data' \
    --max_seq_len 50

To know more about the script and its arguments, refer to running data preparation.
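The --max_seq_len argument caps the number of tokens kept per sample. The real preparation step tokenizes with the chosen encoder's subword tokenizer; the whitespace sketch below only illustrates the truncation effect of that cap and is not the library's implementation.

```python
# Illustrative only: whitespace tokenization standing in for the
# encoder's subword tokenizer, to show what --max_seq_len does.
def truncate(sentence, max_seq_len):
    tokens = sentence.split()
    return tokens[:max_seq_len]  # samples longer than the cap are cut

print(truncate("book a spot for ten at a top-rated restaurant", 5))
# → ['book', 'a', 'spot', 'for', 'ten']
```

Choose max_seq_len to cover most of your samples; anything beyond the cap is silently dropped from the model's view.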

Step 3 - Run train

Finally, you can start training with the following command.

$ python train.py \
    --data_dir 'data/bert-base-uncased_prepared_data' \
    --task_file 'sample_task_file.yml' \
    --out_dir 'sample_out' \
    --epochs 5 \
    --train_batch_size 4 \
    --eval_batch_size 8 \
    --grad_accumulation_steps 2 \
    --log_per_updates 25 \
    --save_per_updates 1000 \
    --eval_while_train True \
    --test_while_train True \
    --max_seq_len 50 \
    --silent True 

To know more about the script and its arguments, refer to running train.
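Note how --train_batch_size and --grad_accumulation_steps interact. Under the usual definition of gradient accumulation (assumed here; not verified against this library's internals), parameters are updated once every grad_accumulation_steps batches, so the command above trains with a larger effective batch size:

```python
import math

# Values from the train command above.
train_batch_size = 4
grad_accumulation_steps = 2

# Gradients from 2 batches of 4 are accumulated before each update.
effective_batch_size = train_batch_size * grad_accumulation_steps
print(effective_batch_size)  # → 8

# With, say, 10_000 training samples (illustrative count), one epoch
# performs this many parameter updates:
num_samples = 10_000
updates_per_epoch = math.ceil(num_samples / effective_batch_size)
print(updates_per_epoch)  # → 1250
```

This is a common way to simulate larger batches when GPU memory limits the per-step batch size.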

How to Infer?

Once you have a multi-task model trained on your tasks, we provide a convenient and easy-to-use inference pipeline for getting predictions on samples.

To run inference on samples using a trained model for, say, TaskA, TaskB and TaskC, import the inferPipeline class and load the corresponding multi-task model by creating an object of this class.

>>> from infer_pipeline import inferPipeline
>>> pipe = inferPipeline(modelPath = 'sample_out_dir/', maxSeqLen = 50)

The infer function can be called to get predictions on input samples for the specified tasks.

>>> samples = [ ['sample_sentence_1'], ['sample_sentence_2'] ]
>>> tasks = ['TaskA', 'TaskB']
>>> pipe.infer(samples, tasks)

To know more about the infer_pipeline, refer to infer.
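A quick sanity check before calling pipe.infer(samples, tasks) can catch shape mistakes early. The contract assumed below (each sample is a non-empty list of strings, tasks is a list of task names) is inferred from the snippet above, not from the library's reference docs, and the checker function is a hypothetical helper.

```python
# Hedged sketch: validate input shapes for the inference pipeline.
# Assumed contract (from the example above): samples is a list of
# non-empty lists, tasks is a non-empty list of task-name strings.
def check_infer_inputs(samples, tasks):
    ok_samples = all(isinstance(s, list) and len(s) > 0 for s in samples)
    ok_tasks = bool(tasks) and all(isinstance(t, str) for t in tasks)
    return ok_samples and ok_tasks

samples = [['sample_sentence_1'], ['sample_sentence_2']]
tasks = ['TaskA', 'TaskB']
print(check_infer_inputs(samples, tasks))  # → True
```

Failing fast on malformed inputs is easier to debug than an opaque error from deep inside the model's forward pass.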


Examples

Here you can find various conversational AI tasks as examples, and you can train multi-task models for them by following the simple steps described in the notebooks.

Example-1 Intent detection, NER, Fragment detection

(Setup : Multi-task , Task type : Multiple)

Intent Detection (Task type : single sentence classification)

Query: I need a reservation for a bar in bangladesh on feb the 11th 2032

Intent: BookRestaurant

NER (Task type : sequence labelling)

Query: ['book', 'a', 'spot', 'for', 'ten', 'at', 'a', 'top-rated', 'caucasian', 'restaurant', 'not', 'far', 'from', 'selmer']

NER tags: ['O', 'O', 'O', 'O', 'B-party_size_number', 'O', 'O', 'B-sort', 'B-cuisine', 'B-restaurant_type', 'B-spatial_relation', 'I-spatial_relation', 'O', 'B-city']
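The NER output above pairs each query token with a BIO tag, position by position. A quick way to read such output is to zip the tokens with their tags and keep the non-O entries:

```python
# Pair each token with its BIO tag and keep only tagged entities.
# Data copied verbatim from the NER example above.
query = ['book', 'a', 'spot', 'for', 'ten', 'at', 'a', 'top-rated',
         'caucasian', 'restaurant', 'not', 'far', 'from', 'selmer']
tags = ['O', 'O', 'O', 'O', 'B-party_size_number', 'O', 'O', 'B-sort',
        'B-cuisine', 'B-restaurant_type', 'B-spatial_relation',
        'I-spatial_relation', 'O', 'B-city']

entities = [(tok, tag) for tok, tag in zip(query, tags) if tag != 'O']
for tok, tag in entities:
    print(tok, tag)  # e.g. first line: ten B-party_size_number
```

B- marks the beginning of an entity and I- its continuation, so 'not far' above forms a single spatial_relation span.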

Fragment Detection (Task type : single sentence classification)

Query: a reservation for

Label: fragment

Notebook :- intent_ner_fragment

Transform file :- transform_file_snips

Tasks file :- tasks_file_snips

Example-2 Entailment detection

(Setup : single-task , Task type : sentence pair classification)

Query1: An old man with a package poses in front of an advertisement.

Query2: A man poses in front of an ad.

Label: entailment

Query1: An old man with a package poses in front of an advertisement.

Query2: A man poses in front of an ad for beer.

Label: non-entailment

Notebook :- entailment_snli

Transform file :- transform_file_snli

Tasks file :- tasks_file_snli

Example-3 Answerability detection

(Setup : single-task , Task type : sentence pair classification)

Query: how much money did evander holyfield make

Context: Evander Holyfield Net Worth. How much is Evander Holyfield Worth? Evander Holyfield Net Worth: Evander Holyfield is a retired American professional boxer who has a net worth of $500 thousand. A professional boxer, Evander Holyfield has fought at the Heavyweight, Cruiserweight, and Light-Heavyweight Divisions, and won a Bronze medal a the 1984 Olympic Games.

Label: answerable

Notebook :- answerability_detection_msmarco

Transform file :- transform_file_answerability

Tasks file :- tasks_file_answerability

Example-4 Query type detection

(Setup : single-task , Task type : single sentence classification)

Query: what's the distance between destin florida and birmingham alabama?


Query: who is suing scott wolter


Notebook :- query_type_detection

Transform file :- transform_file_querytype

Tasks file :- tasks_file_querytype

Example-5 POS tagging, NER tagging

(Setup : Multi-task , Task type : sequence labelling)

Query: ['Despite', 'winning', 'the', 'Asian', 'Games', 'title', 'two', 'years', 'ago', ',', 'Uzbekistan', 'are', 'in', 'the', 'finals', 'as', 'outsiders', '.']

NER tags: ['O', 'O', 'O', 'I-MISC', 'I-MISC', 'O', 'O', 'O', 'O', 'O', 'I-LOC', 'O', 'O', 'O', 'O', 'O', 'O', 'O']

POS tags: ['I-PP', 'I-VP', 'I-NP', 'I-NP', 'I-NP', 'I-NP', 'B-NP', 'I-NP', 'I-ADVP', 'O', 'I-NP', 'I-VP', 'I-PP', 'I-NP', 'I-NP', 'I-SBAR', 'I-NP', 'O']

Notebook :- ner_pos_tagging_conll

Transform file :- transform_file_conll

Tasks file :- tasks_file_conll

Example-6 Query correctness

(Setup : single-task , Task type : single sentence classification)

Query: What places have the oligarchy government ?

Label: well-formed

Query: What day of Diwali in 1980 ?

Label: not well-formed

Notebook :- query_correctness

Transform file :- transform_file_query_correctness

Tasks file :- tasks_file_query_correctness

Example-7 Query similarity

(Setup : single-task , Task type : single sentence classification)

Query1: What is the most used word in Malayalam?

Query2: What is meaning of the Malayalam word "thumbatthu"?

Label: not similar

Query1: Which is the best compliment you have ever received?

Query2: What's the best compliment you've got?

Label: similar

Notebook :- query_similarity

Transform file :- transform_file_qqp

Tasks file :- tasks_file_qqp

Example-8 Sentiment Analysis

(Setup : single-task , Task type : single sentence classification)

Review: What I enjoyed most in this film was the scenery of Corfu, being Greek I adore my country and I liked the flattering director's point of view. Based on a true story during the years when Greece was struggling to stand on her own two feet through war, Nazis and hardship. An Italian soldier and a Greek girl fall in love but the times are hard and they have a lot of sacrifices to make. Nicholas Cage looking great in a uniform gives a passionate account of this unfulfilled (in the beginning) love. I adored Christian Bale playing Mandras the heroine's husband-to-be, he looks very very good as a Greek, his personality matched the one of the Greek patriot! A true fighter in there, or what! One of the movies I would like to buy and keep it in my collection...for ever!

Label: positive

Notebook :- IMDb_sentiment_analysis

Transform file :- transform_file_imdb

Tasks file :- tasks_file_imdb
