Keras Graph Attention Network

This implementation of GAT is no longer actively maintained and may not work with modern versions of Tensorflow and Keras. Check out Spektral and its GAT example for a Tensorflow/Keras implementation of GAT.

This is a Keras implementation of the Graph Attention Network (GAT) model by Veličković et al. (2017, [arXiv link]).
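The core idea of the GAT layer, in brief: each node's features are linearly transformed, an attention coefficient is computed for every edge with a shared LeakyReLU-activated mechanism, the coefficients are normalised with a softmax over each node's neighbourhood, and the neighbours' features are aggregated with those weights. The following is a minimal single-head sketch in plain NumPy for illustration only (it is not the repo's Keras code, and all names in it are mine):

```python
import numpy as np

def gat_attention(H, A, W, a, slope=0.2):
    """Single-head GAT layer sketch (illustrative, not the repo's implementation).

    H: (N, F) node features, A: (N, N) adjacency matrix with self-loops,
    W: (F, Fp) shared weight matrix, a: (2*Fp,) attention vector.
    """
    Wh = H @ W                                   # (N, Fp) transformed features
    Fp = Wh.shape[1]
    # e[i, j] = LeakyReLU(a^T [Wh_i || Wh_j]), computed for all pairs at once
    e = (Wh @ a[:Fp])[:, None] + (Wh @ a[Fp:])[None, :]
    e = np.where(e > 0, e, slope * e)            # LeakyReLU
    e = np.where(A > 0, e, -1e9)                 # mask out non-neighbours
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att = att / att.sum(axis=1, keepdims=True)   # softmax over each neighbourhood
    return att @ Wh                              # (N, Fp) attention-weighted sum
```

In the paper this computation is repeated for several attention heads, whose outputs are concatenated (or averaged in the final layer).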


I have no affiliation with the authors of the paper, and I implemented this code for non-commercial reasons.
The authors published their reference Tensorflow implementation here, so check it out for something that is guaranteed to work as intended. Keep in mind that their implementation differs slightly from mine. If you use any of this code for your research, you should cite the paper:

  @article{velickovic2018graph,
  title="{Graph Attention Networks}",
  author={Veli{\v{c}}kovi{\'{c}}, Petar and Cucurull, Guillem and Casanova, Arantxa and Romero, Adriana and Li{\`{o}}, Pietro and Bengio, Yoshua},
  journal={International Conference on Learning Representations},
  year={2018},
  note={Accepted as poster},
  }

If you would like to give me credit, feel free to link to my Github profile, blog, or Twitter.

I also copied the code almost verbatim from this repo by Thomas Kipf, whom I thank sincerely for sharing his work on GCNs and GAEs, and for giving me a few pointers on how to split the data into train/test/val sets.

Thanks to mawright, matthias-samwald, and vermaMachineLearning for helping me out with bugs, performance improvements, and running experiments.


I do not own any rights to the datasets distributed with this code, but they are publicly available online.


Installation

To install as a module:

$ git clone
$ cd keras-gat
$ pip install .
$ python
>>> from keras_gat import GraphAttention

Or you can just copy and paste the source into your project.

Replicating experiments

To replicate the experimental results of the paper, simply run:

$ python examples/