

This repository contains the source code for the models used in the DataStories team's submission for SemEval-2017 Task 4 "Sentiment Analysis in Twitter". The model is described in the paper "DataStories at SemEval-2017 Task 4: Deep LSTM with Attention for Message-level and Topic-based Sentiment Analysis".


Citation:

  @InProceedings{baziotis-etal-2017-datastories,
    author    = {Baziotis, Christos and Pelekis, Nikos and Doulkeridis, Christos},
    title     = {DataStories at SemEval-2017 Task 4: Deep LSTM with Attention for Message-level and Topic-based Sentiment Analysis},
    booktitle = {Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)},
    month     = {August},
    year      = {2017},
    address   = {Vancouver, Canada},
    publisher = {Association for Computational Linguistics},
    pages     = {747--754}
  }

  • MSA: the message-level sentiment analysis model, for Subtask A.

  • TSA: the target-based sentiment analysis model, for Subtasks B, C, D, E.


  • If you are just interested in the source code for the model, see models/neural/.
  • The models were trained with Keras 1.2. In order for the project to work with Keras 2, some minor changes have to be made.
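To give a sense of what those minor changes look like, here is a hypothetical helper (not part of the repository) mapping a few well-known Keras 1 keyword arguments to their Keras 2 names; an actual port would still need a per-layer review of the model code.

```python
# Hypothetical helper sketching the kind of argument renames needed to port
# Keras 1.2 code to Keras 2. The mapping covers a few well-known renames only.
KERAS1_TO_KERAS2 = {
    "nb_epoch": "epochs",              # model.fit(...)
    "output_dim": "units",             # Dense / LSTM layer size
    "init": "kernel_initializer",
    "W_regularizer": "kernel_regularizer",
    "dropout_W": "dropout",            # LSTM input dropout
    "dropout_U": "recurrent_dropout",  # LSTM recurrent dropout
}

def port_kwargs(kwargs):
    """Rename Keras 1 keyword arguments to their Keras 2 equivalents."""
    return {KERAS1_TO_KERAS2.get(name, name): value
            for name, value in kwargs.items()}
```

For example, a Keras 1 call like `LSTM(output_dim=150, dropout_U=0.25)` would become `LSTM(units=150, recurrent_dropout=0.25)` in Keras 2.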


1 - Install Requirements

pip install -r /datastories-semeval2017-task4/requirements.txt


sudo apt-get install graphviz

Windows: Install graphviz from here:

2 - Download pre-trained Word Embeddings

The models were trained on top of word embeddings pre-trained on a large collection of Twitter messages: 330M English tweets posted from 12/2012 to 07/2016. The word embeddings were trained with GloVe. The tweets were preprocessed with ekphrasis, which is also one of the requirements of this project.
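As a rough illustration of the kind of normalization such preprocessing performs (this is not the ekphrasis API; the real library also does spell correction, word segmentation, emoticon and hashtag annotation, and more), a minimal sketch in plain Python might look like this. The `<url>` and `<user>` placeholder tokens are assumptions for the sketch:

```python
import re

def normalize_tweet(text):
    """Minimal tweet normalization: lowercase, replace links and mentions."""
    text = text.lower()
    text = re.sub(r"https?://\S+", "<url>", text)   # replace links
    text = re.sub(r"@\w+", "<user>", text)          # replace user mentions
    text = re.sub(r"\s+", " ", text).strip()        # collapse whitespace
    return text
```

The key point is that the same normalization must be applied at training and at inference time, so that tokens in new tweets match the vocabulary of the pre-trained embeddings.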

You can download one of the following word embeddings:

Place the file(s) in the /embeddings folder so that the program can find them.


Word Embeddings

In order to specify which word embeddings file you want to use, you have to set the values of WV_CORPUS and WV_DIM in the training scripts. The default values are:

WV_CORPUS = "datastories.twitter"
WV_DIM = 300

The convention we use to identify each file is {corpus}.{dim}d.txt.

This means that if you want to use another file, for instance GloVe Twitter word embeddings with 200 dimensions, you have to place a file named glove.200d.txt inside the /embeddings folder and set:

WV_CORPUS = "glove"
WV_DIM = 200
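A minimal sketch of how this naming convention can be resolved and the embeddings loaded. The function name and folder argument are assumptions, not code from this repository; the file format assumed is the usual GloVe text format (a word followed by its space-separated vector values on each line):

```python
import os

def load_embeddings(wv_corpus, wv_dim, folder="embeddings"):
    """Load {corpus}.{dim}d.txt from the embeddings folder into a dict."""
    path = os.path.join(folder, "{}.{}d.txt".format(wv_corpus, wv_dim))
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word, values = parts[0], [float(v) for v in parts[1:]]
            assert len(values) == wv_dim, "dimension mismatch for " + word
            vectors[word] = values
    return vectors
```

With the defaults above, this would look for embeddings/datastories.twitter.300d.txt; with the GloVe example, embeddings/glove.200d.txt.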

Model Training

You will find the scripts for training the Keras models in the /models folder.

│ neural/ : contains the Keras models
│ : script for training the model for Subtask A
│ : script for training the models for Subtasks B and D
