PyTorch Graph Attention Network

This is a PyTorch implementation of the Graph Attention Network (GAT) model presented by Veličković et al. (2017).
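For orientation, the core attention mechanism from the paper computes e_ij = LeakyReLU(a^T [Wh_i || Wh_j]) and then softmax-normalizes over each node's neighborhood. A minimal plain-Python sketch of that computation (illustrative only, not the repo's actual code; `gat_attention` and its argument layout are hypothetical names):

```python
import math

def leaky_relu(x, slope=0.2):
    # LeakyReLU with the paper's negative slope of 0.2
    return x if x > 0 else slope * x

def gat_attention(Wh, a, neighbors):
    """Attention coefficients following eqs. (1)-(3) of the GAT paper.

    Wh        : list of transformed node features (each a list of floats, i.e. W·h_i)
    a         : attention vector of length 2*F', applied to the concatenation [Wh_i || Wh_j]
    neighbors : dict mapping node i -> list of neighbor indices j
    Returns a dict (i, j) -> alpha_ij, softmax-normalized over each neighborhood.
    """
    alpha = {}
    for i, nbrs in neighbors.items():
        # e_ij = LeakyReLU(a^T [Wh_i || Wh_j])
        e = [leaky_relu(sum(ak * x for ak, x in zip(a, Wh[i] + Wh[j]))) for j in nbrs]
        m = max(e)  # subtract the max before exponentiating, for numerical stability
        exp_e = [math.exp(v - m) for v in e]
        s = sum(exp_e)
        for j, v in zip(nbrs, exp_e):
            alpha[(i, j)] = v / s
    return alpha
```

For example, with three 2-dimensional transformed features and node 0 attending over {0, 1, 2}, the returned coefficients are positive and sum to 1 over the neighborhood.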

This repo was initially forked from tkipf/pygcn. The official TensorFlow repository for GAT is available at PetarV-/GAT. If you take advantage of the pyGAT model in your research, please cite the following:

@article{velickovic2018graph,
  title="{Graph Attention Networks}",
  author={Veli{\v{c}}kovi{\'{c}}, Petar and Cucurull, Guillem and Casanova, Arantxa and Romero, Adriana and Li{\`{o}}, Pietro and Bengio, Yoshua},
  journal={International Conference on Learning Representations},
  year={2018},
  note={accepted as poster},
}

The branch master contains the implementation from the paper. The branch similar_impl_tensorflow contains the implementation from the official TensorFlow repository.


For the branch master, training on the transductive Cora task takes ~0.9 sec per epoch on a Titan Xp, and 10-15 minutes for the whole training (~800 epochs). The final accuracy is between 84.2 and 85.3 (obtained over 5 different runs). For the branch similar_impl_tensorflow, training takes less than 1 minute and reaches ~83.0 accuracy.

A small note about the initial sparse matrix operations from tkipf/pygcn: they have been removed. As a result, the current model takes ~7 GB of GPU RAM.

Sparse version GAT

We developed a sparse version of GAT in PyTorch. It can be numerically unstable because of the softmax function, so you need to initialize carefully. To use the sparse version, add the flag --sparse. The performance of the sparse version is similar to the TensorFlow implementation; on a Titan Xp it takes 0.08-0.14 sec per epoch.
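The instability mentioned above is the standard overflow problem in softmax: exponentiating large logits overflows, while shifting by the maximum first gives a mathematically identical result that stays bounded. A minimal plain-Python sketch of the trick (illustrative only, not the repo's code):

```python
import math

def softmax_naive(xs):
    # Overflows for large inputs: math.exp(1000) raises OverflowError.
    exps = [math.exp(x) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def softmax_stable(xs):
    # Shift by the max first: every exponent is <= 0, so exp(...) <= 1
    # and nothing overflows; the ratios (and thus the output) are unchanged.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]
```

For example, `softmax_stable([1000.0, 1001.0])` returns valid probabilities, while `softmax_naive` raises an OverflowError on the same input. Careful weight initialization matters for the same reason: it keeps the attention logits in a range where the exponentials are well behaved.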


pyGAT relies on Python 3.5 and PyTorch 0.4.1 (due to torch.sparse_coo_tensor).
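torch.sparse_coo_tensor stores a sparse matrix in COO (coordinate) format: one array of row indices, one of column indices, and one of the corresponding nonzero values. A plain-Python sketch of a sparse matrix-vector product over that layout, just to illustrate the format (the function name and argument shapes here are hypothetical, not PyTorch's API):

```python
def coo_spmv(indices, values, shape, dense_vec):
    """Multiply a sparse matrix in COO form by a dense vector.

    indices   : pair ([row0, row1, ...], [col0, col1, ...])
    values    : nonzero entries, aligned position-by-position with indices
    shape     : (n_rows, n_cols)
    dense_vec : dense vector of length n_cols
    """
    rows, cols = indices
    out = [0.0] * shape[0]
    # Each stored triple (r, c, v) contributes v * x[c] to output row r;
    # absent entries are implicit zeros and cost nothing.
    for r, c, v in zip(rows, cols, values):
        out[r] += v * dense_vec[c]
    return out
```

For a graph adjacency matrix this is why the sparse version is cheap: the work is proportional to the number of edges, not to N^2 as in the dense attention case.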

Issues / Pull Requests / Feedback

Don't hesitate to reach out with any feedback, or to create issues and pull requests.
