Project Name | Stars | Most Recent Commit | Open Issues | License | Language | Description
---|---|---|---|---|---|---
Absa Pytorch | 1,446 | a year ago | 74 | mit | Python | Aspect Based Sentiment Analysis, PyTorch Implementations.
Sentiment Discovery | 997 | 3 years ago | 44 | other | Python | Unsupervised Language Modeling at scale for robust sentiment classification
Treelstm | 782 | 6 years ago | 9 | gpl-2.0 | Lua | Tree-structured Long Short-Term Memory networks (http://arxiv.org/abs/1503.00075)
Sentiment_analysis_fine_grain | 504 | 4 years ago | 8 | | Jupyter Notebook | Multi-label Classification with BERT; Fine Grained Sentiment Analysis from AI challenger
Aspect Based Sentiment Analysis | 288 | 2 years ago | | mit | | A paper list for aspect based sentiment analysis.
Nsc | 280 | 5 years ago | 6 | mit | Python | Neural Sentiment Classification
Absapapers | 268 | a year ago | 2 | | | Worth-reading papers and related awesome resources on aspect-based sentiment analysis (ABSA).
Finnlp Progress | 254 | a year ago | | | | NLP progress in Fintech. A repository to track the progress in Natural Language Processing (NLP) related to the domain of Finance, including the datasets, papers, and current state-of-the-art results for the most popular tasks.
Td Lstm | 252 | 6 years ago | | | Python | Attention-based Aspect-term Sentiment Analysis implemented by tensorflow.
Indic Bert | 234 | a year ago | 12 | mit | Python | BERT-based Multilingual Model for Indian Languages
Code for the NAACL 2019 paper "Adversarial Category Alignment Network for Cross-domain Sentiment Classification".
You can download the datasets (amazon-benchmark) at [Download]. Decompress the zip file and put its contents in the root directory.
Download the pretrained GloVe vectors [glove.840B.300d.zip], decompress the zip file, and put the txt file in the root directory.
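As a quick sanity check of the embedding file, here is a minimal sketch of parsing a GloVe text file such as glove.840B.300d.txt. It assumes the standard GloVe format (one line per token: the token followed by its `dim` space-separated float components), which is not spelled out in this README:

```python
# Minimal sketch of parsing a GloVe text file (assumption: standard GloVe
# format, one token per line followed by `dim` float components).
def load_glove(path, dim=300):
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split(" ")
            # A few "tokens" in the 840B vocabulary contain spaces, so split
            # the vector off the end rather than the word off the front.
            word = " ".join(parts[:-dim])
            vectors[word] = [float(x) for x in parts[-dim:]]
    return vectors
```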
Arguments and hyper-parameters, together with their default values, are defined in train_batch.py.
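For orientation, here is a hypothetical sketch of the kind of argument definitions train_batch.py would contain. The flag names are taken from the training command below, but the function name and the defaults shown are assumptions, not necessarily the repository's actual defaults:

```python
# Hypothetical sketch of train_batch.py's argument definitions; flag names come
# from the README command, the defaults are assumptions.
import argparse

def build_parser():
    p = argparse.ArgumentParser(description="ACAN training (sketch)")
    p.add_argument("--emb", type=str, required=True,
                   help="path to pre-trained word embeddings (GloVe txt)")
    p.add_argument("--dataset", type=str, default="amazon")
    p.add_argument("--source", type=str, help="source domain")
    p.add_argument("--target", type=str, help="target domain")
    p.add_argument("--n-class", type=int, default=2,
                   help="number of output classes")
    p.add_argument("--lamda1", type=float, default=-0.1)
    p.add_argument("--lamda2", type=float, default=0.1)
    p.add_argument("--lamda3", type=float, default=5.0)
    p.add_argument("--lamda4", type=float, default=1.5)
    p.add_argument("--epochs", type=int, default=30)
    return p
```

Note that argparse exposes `--n-class` as `args.n_class` (hyphens become underscores).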
Under code/, use the following command for training any source-target pair from the amazon benchmark:
CUDA_VISIBLE_DEVICES="0" python train_batchs.py \
--emb ../glove.840B.300d.txt \
--dataset amazon \
--source $source \
--target $target \
--n-class 2 \
--lamda1 -0.1 --lamda2 0.1 --lamda3 5 --lamda4 1.5 \
--epochs 30
where --emb is the path to the pre-trained word embeddings, and $source and $target are domains from the amazon benchmark, each one of ['book', 'dvd', 'electronics', 'kitchen']. --n-class, the number of output classes, is set to 2 because we only consider binary classification (positive or negative) on this dataset. All other hyper-parameters are left at their defaults.
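To run every source-target pair in one sweep, a small driver like the following could generate the twelve commands. This is a sketch: the script name and flag values are copied verbatim from the command above, and the commands are only assembled, not executed:

```python
# Sketch: enumerate the 12 source→target training commands over the four
# amazon-benchmark domains (pairs with source == target are skipped).
import itertools

DOMAINS = ["book", "dvd", "electronics", "kitchen"]

def training_commands(script="train_batchs.py"):
    cmds = []
    for source, target in itertools.permutations(DOMAINS, 2):
        cmds.append(
            f"CUDA_VISIBLE_DEVICES=0 python {script} "
            f"--emb ../glove.840B.300d.txt --dataset amazon "
            f"--source {source} --target {target} --n-class 2 "
            f"--lamda1 -0.1 --lamda2 0.1 --lamda3 5 --lamda4 1.5 --epochs 30"
        )
    return cmds
```

Each returned string can then be launched with `subprocess.run(cmd, shell=True)` or written to a shell script.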
The code was tested only in the environment below:
If you use the code, please cite the following paper:
@InProceedings{qu-etal-2019-adversarial,
  author    = {Qu, Xiaoye and Zou, Zhikang and Cheng, Yu and Yang, Yang and Zhou, Pan},
  title     = {Adversarial Category Alignment Network for Cross-domain Sentiment Classification},
  booktitle = {Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics},
  year      = {2019},
  publisher = {Association for Computational Linguistics}
}