TensorFlow Implementation of Attentional Factorization Machine
This is our implementation for the paper:

Jun Xiao, Hao Ye, Xiangnan He, Hanwang Zhang, Fei Wu and Tat-Seng Chua (2017). Attentional Factorization Machines: Learning the Weight of Feature Interactions via Attention Networks. In Proceedings of IJCAI 2017, Melbourne, Australia, August 19-25, 2017.
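At its core, AFM replaces FM's uniform sum of pairwise feature interactions with an attention-weighted sum: each element-wise product of two feature embeddings gets a learned importance score from a small attention network. A minimal NumPy sketch of that idea (all sizes and variable names here are illustrative placeholders, not taken from this repository):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 3 active one-hot features, embedding size k=8,
# attention hidden size t=4 -- chosen only for illustration.
k, t = 8, 4
v = rng.normal(size=(3, k))   # embeddings of the active features
W = rng.normal(size=(t, k))   # attention network weight
b = rng.normal(size=t)        # attention network bias
h = rng.normal(size=t)        # attention projection vector
p = rng.normal(size=k)        # prediction vector

# Pair-wise interaction layer: element-wise products of all feature pairs
pairs = np.array([v[i] * v[j] for i in range(3) for j in range(i + 1, 3)])

# Attention scores: a_ij = softmax(h^T relu(W (v_i * v_j) + b))
scores = np.array([h @ np.maximum(W @ e + b, 0.0) for e in pairs])
att = np.exp(scores) / np.exp(scores).sum()

# AFM interaction term: p^T sum_ij a_ij (v_i * v_j)
interaction = p @ (att[:, None] * pairs).sum(axis=0)
```

When all attention weights are equal, this collapses to (a scaled version of) the standard FM interaction term, which is why FM pretraining is a natural initialization for AFM.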

We have additionally released our TensorFlow implementation of Factorization Machines under our proposed neural network framework.

Please cite our IJCAI'17 paper if you use our code. Thanks!

Author: Xiangnan He ([email protected]) and Hao Ye ([email protected])


Environment Requirement

  • Tensorflow (version: 1.0.1)
  • numpy
  • sklearn


Dataset

We use the same input format as the LibFM toolkit. In this instruction we use MovieLens. The MovieLens data has been used for personalized tag recommendation; it contains 668,953 tag applications of users on movies. We convert each tag application (user ID, movie ID and tag) to a feature vector using one-hot encoding, obtaining 90,445 binary features. The following examples are based on this dataset, which is referred to as ml-tag in file names and in the code. When the dataset is ready, the current directory should look like this:

  • code
  • data
    • ml-tag
      • ml-tag.train.libfm
      • ml-tag.validation.libfm
      • ml-tag.test.libfm
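Each .libfm file stores one instance per line in LibFM's sparse format: a target value followed by index:value pairs for the active one-hot features. A small parser sketch (the feature indices below are made up for illustration):

```python
def parse_libfm_line(line):
    """Parse one LibFM-format line: '<label> <idx>:<val> <idx>:<val> ...'."""
    parts = line.strip().split()
    label = float(parts[0])
    features = {}
    for token in parts[1:]:
        idx, val = token.split(":")
        features[int(idx)] = float(val)
    return label, features

# Hypothetical one-hot instance: three active features (user, movie, tag)
label, feats = parse_libfm_line("1 12:1 40321:1 88754:1")
```

For this one-hot encoded dataset every active feature has value 1, so each line simply lists the label and the three active indices.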

Quick Example with Optimal Parameters

Use the following command to train the model with the optimal parameters:

# step into the code folder
cd code
# train FM model and save as pretrain file
python FM.py --dataset ml-tag --epoch 100 --pretrain -1 --batch_size 4096 --hidden_factor 256 --lr 0.01 --keep 0.7
# train AFM model using the pretrained weights from FM
python AFM.py --dataset ml-tag --epoch 100 --pretrain 1 --batch_size 4096 --hidden_factor [8,256] --keep [1.0,0.5] --lamda_attention 2.0 --lr 0.1

The meaning of each command-line argument is documented in the code (see the parse_args function).

The current implementation supports the regression task, optimizing RMSE.
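For reference, the RMSE being optimized is just the square root of the mean squared prediction error over the evaluated instances; a minimal sketch:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error between targets and predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

rmse([1.0, -1.0, 1.0], [0.5, -0.5, 1.0])  # ~0.408
```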

Performance Comparison


For a quick demonstration of the improvement of our AFM model over the original FM, we set the dimension of the embedding factor to 16 (instead of 256 as in our paper) and the number of epochs to 20.


Step into the code folder and train FM and AFM as follows. This will train our AFM model on the ml-tag dataset based on the pretrained FM model. The parameters have been initialized optimally according to our experiments. Training loops for 20 epochs and prints the best epoch according to the validation result.

# step into the code folder
cd code
# train FM model with optimal parameters
python FM.py --dataset ml-tag --epoch 20 --pretrain -1 --batch_size 4096 --hidden_factor 16 --lr 0.01 --keep 0.7
# train AFM model with optimal parameters
python AFM.py --dataset ml-tag --epoch 20 --pretrain 1 --batch_size 4096 --hidden_factor [16,16] --keep [1.0,0.5] --lamda_attention 100.0 --lr 0.1

After the training processes finish, the trained models will be saved into the pretrain folder, which should look like this:

  • pretrain
    • afm_ml-tag_16
      • checkpoint
      • ml-tag_16.index
      • ml-tag_16.meta
    • fm_ml-tag_16
      • checkpoint
      • ml-tag_16.index
      • ml-tag_16.meta


Now it's time to evaluate the pretrained models on the test data, which can be done by running FM.py and AFM.py with --process evaluate as follows:

# evaluate the pretrained FM model
python FM.py --dataset ml-tag --epoch 20 --batch_size 4096 --lr 0.01 --keep 0.7 --process evaluate
# evaluate the pretrained AFM model
python AFM.py --dataset ml-tag --epoch 20 --pretrain 1 --batch_size 4096 --hidden_factor [16,16] --keep [1.0,0.5] --lamda_attention 100.0 --lr 0.1 --process evaluate

Last Update Date: Aug 2, 2017
