Code for NAACL 2019 paper: Adversarial Category Alignment Network for Cross-domain Sentiment Classification

Dataset & pretrained word embeddings

You can download the datasets (amazon-benchmark) at [Download]. Decompress the zip file and place its contents in the root directory.

Download the pretrained GloVe vectors []. Decompress the zip file and put the txt file (glove.840B.300d.txt) in the root directory.
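As a quick sanity check after downloading, the embedding file can be read into a word-to-vector map. This is a minimal Python 3 sketch (the repository itself targets Python 2.7), and `load_glove` is a hypothetical helper for illustration, not a function from this repo:

```python
# Sketch: load GloVe vectors from the decompressed txt file into a dict.
# File format: one token per line, followed by its vector components,
# all separated by single spaces.
import numpy as np

def load_glove(path, vocab=None):
    """Map each word to its embedding vector; restrict to vocab if given."""
    embeddings = {}
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split(" ")
            word = parts[0]
            # Skipping out-of-vocabulary words keeps memory manageable
            # for the 840B-token GloVe file.
            if vocab is not None and word not in vocab:
                continue
            embeddings[word] = np.asarray(parts[1:], dtype="float32")
    return embeddings
```

Restricting to the task vocabulary is advisable in practice, since the full glove.840B.300d.txt file holds roughly 2.2M vectors.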

Train & evaluation

Arguments and hyper-parameters, along with their default values, are defined in the training code.

Under code/, use the following command for training any source-target pair from the amazon benchmark:

--emb ../glove.840B.300d.txt \
--dataset amazon \
--source $source \
--target $target \
--n-class 2  \
--lamda1 -0.1 --lamda2 0.1 --lamda3 5 --lamda4 1.5 \
--epochs 30 

where --emb is the path to the pre-trained word embeddings. $source and $target are domains from the amazon benchmark, both in ['book', 'dvd', 'electronics', 'kitchen']. --n-class, the number of output classes, is set to 2 since only binary classification (positive or negative) is considered on this dataset. All other hyper-parameters keep their default values.
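To cover all twelve ordered cross-domain pairs, a small shell loop can generate the commands. This is a sketch: `train.py` is an assumed entry-point name, so substitute the actual script under code/:

```shell
# Sweep over all ordered source-target pairs from the amazon benchmark.
# NOTE: "train.py" is an assumed script name; replace with the real entry point.
DOMAINS="book dvd electronics kitchen"
PAIRS=0
for source in $DOMAINS; do
  for target in $DOMAINS; do
    # Skip the in-domain case; the task is cross-domain.
    [ "$source" = "$target" ] && continue
    echo "python train.py --emb ../glove.840B.300d.txt --dataset amazon" \
         "--source $source --target $target --n-class 2" \
         "--lamda1 -0.1 --lamda2 0.1 --lamda3 5 --lamda4 1.5 --epochs 30"
    PAIRS=$((PAIRS + 1))
  done
done
```

The loop only prints the commands; drop the `echo` to actually launch the runs.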


The code was only tested under the environment below:

  • Python 2.7
  • Keras 2.1.2
  • tensorflow 1.4.1
  • numpy 1.13.3


If you use the code, please cite the following paper:

  @inproceedings{acan-naacl19,
    author    = {Qu, Xiaoye and Zou, Zhikang and Cheng, Yu and Yang, Yang and Zhou, Pan},
    title     = {Adversarial Category Alignment Network for Cross-domain Sentiment Classification},
    booktitle = {Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics},
    year      = {2019},
    publisher = {Association for Computational Linguistics}
  }