| Project Name | Stars | Most Recent Commit | Open Issues | License | Language | Description |
|---|---|---|---|---|---|---|
| Danet | 1,556 | 3 years ago | 33 | MIT | Python | Dual Attention Network for Scene Segmentation (CVPR 2019) |
| Deepvoice3_pytorch | 1,449 | 2 years ago | 29 | Other | Python | PyTorch implementation of convolutional neural network-based text-to-speech synthesis models |
| Image_segmentation | 880 | 3 years ago | 20 | | Python | PyTorch implementation of U-Net, R2U-Net, Attention U-Net, and Attention R2U-Net |
| Transformer | 644 | 2 months ago | 3 | | Python | PyTorch implementation of "Attention Is All You Need" |
| Vad | 632 | 2 years ago | 32 | | MATLAB | Voice activity detection (VAD) toolkit including DNN, bDNN, LSTM, and ACAM-based VAD, with a directly recorded dataset |
| Moran_v2 | 593 | 4 months ago | 24 | MIT | Python | MORAN: A Multi-Object Rectified Attention Network for Scene Text Recognition |
| Transformer | 583 | 6 months ago | 7 | GPL-3.0 | Jupyter Notebook | Implementation of the Transformer model (from "Attention Is All You Need") applied to time series |
| Attention Networks For Classification | 477 | 3 years ago | 8 | | Jupyter Notebook | Hierarchical Attention Networks for Document Classification in PyTorch |
| Long Range Arena | 456 | 6 months ago | 18 | Apache-2.0 | Python | Long Range Arena for benchmarking efficient Transformers |
| Mac Network | 445 | 2 years ago | 9 | Apache-2.0 | Python | Implementation of "Compositional Attention Networks for Machine Reasoning" (Hudson and Manning, ICLR 2018) |
This is the repository for the Self-Attentive Hawkes Processes paper, in which self-attention is used to adapt the intensity function of a Hawkes process.
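As a minimal sketch of the idea (not the repository's actual code; the function and parameter names are illustrative), the attention encoder summarizes the event history into parameters from which a positive, time-decaying intensity is computed:

```python
import torch
import torch.nn.functional as F

def intensity(mu, eta, gamma, t, t_last):
    """Sketch of an attention-modulated Hawkes intensity.

    mu, eta, gamma: tensors produced by the self-attention encoder
    over the event history (names are illustrative, not the repo's).
    t: query time; t_last: time of the most recent event (t > t_last).
    The intensity decays from eta toward the baseline mu as time
    passes since the last event; softplus keeps it positive.
    """
    decay = torch.exp(-gamma * (t - t_last))
    return F.softplus(mu + (eta - mu) * decay)
```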
The real-world datasets are available on this [Google Drive](https://drive.google.com/drive/folders/0BwqmV0EcoUc8UklIR1BKV25YR1U), while the synthetic dataset is at this [link](https://drive.google.com/file/d/1lRUIJx5UIPMx4TMwKy6GiAiP-k2vwvDc/view?usp=sharing). To run the model, download them to the parent directory of the source code, in a folder named `data`.
To make the data format consistent, first run the script `convert_realdata_syntheform.py`.
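The exact on-disk layout after conversion is determined by that script and is not documented here; purely as a hypothetical illustration, the model ultimately consumes event sequences of (event type, timestamp) pairs, which one might load along these lines:

```python
import pickle

def load_sequences(path):
    """Hypothetical loader: assumes a pickled list of event sequences,
    each a list of (event_type, timestamp) pairs sorted by timestamp.
    The real format is whatever convert_realdata_syntheform.py emits."""
    with open(path, "rb") as f:
        return pickle.load(f)
```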
Python 3.5 or later is required; the code runs with torch 0.4.1.
The code is organized as follows:

- `models` defines the self-attentive Hawkes model, multi-head attention, and related modules (a sketch of the attention building block follows this list).
- `main_func.py` is the entry point for running the experiments; the hyper-parameters are set here.
- `utils` contains utility functions.
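As a minimal sketch of the attention building block (not the repository's actual implementation; shapes and names are illustrative), each event embedding attends only to itself and earlier events, so the intensity at any time depends only on the past history:

```python
import math
import torch

def causal_self_attention(x):
    """Single-head scaled dot-product attention over event embeddings.

    x: (batch, seq_len, d_model) embeddings of the event history.
    A causal mask stops each position from attending to future events,
    as a point-process intensity requires. Illustrative only.
    """
    d_model = x.size(-1)
    scores = x @ x.transpose(-2, -1) / math.sqrt(d_model)
    future = torch.triu(
        torch.ones(x.size(1), x.size(1), dtype=torch.bool), diagonal=1
    )
    scores = scores.masked_fill(future, float("-inf"))
    return torch.softmax(scores, dim=-1) @ x
```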
To run the model: `python main_func.py`
If you use this code, please cite:

    @article{zhang2019self,
      title={Self-attentive Hawkes processes},
      author={Zhang, Qiang and Lipani, Aldo and Kirnap, Omer and Yilmaz, Emine},
      journal={arXiv preprint arXiv:1907.07561},
      year={2019}
    }