A highly efficient implementation of Gaussian Processes in PyTorch
GPyTorch is a Gaussian process library implemented using PyTorch. GPyTorch is designed for creating scalable, flexible, and modular Gaussian process models with ease.

Internally, GPyTorch differs from many existing approaches to GP inference by performing most inference operations using numerical linear algebra techniques like preconditioned conjugate gradients. Implementing a scalable GP method is as simple as providing a matrix multiplication routine with the kernel matrix and its derivative via our LinearOperator interface, or by composing many of our already existing LinearOperators. This allows not only for easy implementation of popular scalable GP techniques, but often also for significantly improved utilization of GPU computing compared to solvers based on the Cholesky decomposition.
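To illustrate the MVM-based approach, the core primitive is a conjugate-gradients solve that touches the kernel matrix only through matrix-vector products. The sketch below is plain PyTorch, not GPyTorch's actual (preconditioned, batched) implementation; the 50-by-50 random SPD matrix is a stand-in for a kernel matrix:

```python
# Sketch only: solve K x = y given nothing but a routine v -> K @ v.
# This is the idea behind MVM-based GP inference; GPyTorch's real solver
# adds preconditioning, batching, and Lanczos-based variance estimates.
import torch

def conjugate_gradients(matmul, y, max_iter=500, tol=1e-6):
    """Solve A x = y for symmetric positive-definite A, using only matvecs."""
    x = torch.zeros_like(y)
    r = y - matmul(x)        # residual
    p = r.clone()            # search direction
    rs_old = r.dot(r)
    for _ in range(max_iter):
        Ap = matmul(p)
        alpha = rs_old / p.dot(Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r.dot(r)
        if rs_new.sqrt() < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

torch.manual_seed(0)
B = torch.randn(50, 50, dtype=torch.float64)
K = B @ B.T + torch.eye(50, dtype=torch.float64)  # SPD "kernel" matrix
y = torch.randn(50, dtype=torch.float64)

x = conjugate_gradients(lambda v: K @ v, y)
print(torch.allclose(K @ x, y, atol=1e-3))  # True
```

Because the solver only ever calls `matmul`, swapping in a structured kernel matrix (or a composition of LinearOperators) changes the cost of each iteration without changing the algorithm.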

GPyTorch provides (1) significant GPU acceleration (through MVM based inference); (2) state-of-the-art implementations of the latest algorithmic advances for scalability and flexibility (SKI/KISS-GP, stochastic Lanczos expansions, LOVE, SKIP, stochastic variational deep kernel learning, ...); (3) easy integration with deep learning frameworks.
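For orientation, the quantity an exact GP regression model ultimately computes is the posterior mean k(x*, X)(K + noise I)^-1 y. The from-scratch sketch below uses plain PyTorch with a toy dataset and an assumed RBF kernel; GPyTorch's own API wraps this computation in model and kernel modules, so treat it as pedagogy rather than the library's interface:

```python
# Minimal sketch (plain PyTorch, not GPyTorch's API) of exact GP regression:
# posterior mean at test points under an RBF kernel with observation noise.
import torch

def rbf_kernel(x1, x2, lengthscale=1.0, outputscale=1.0):
    # Squared-exponential kernel k(x, x') = s * exp(-(x - x')^2 / (2 l^2))
    d2 = (x1.unsqueeze(-1) - x2.unsqueeze(-2)) ** 2
    return outputscale * torch.exp(-0.5 * d2 / lengthscale**2)

# Toy 1-D training data
train_x = torch.linspace(0, 1, 20)
train_y = torch.sin(2 * torch.pi * train_x)

noise = 1e-2
K = rbf_kernel(train_x, train_x) + noise * torch.eye(20)  # (K + noise I)

# Posterior mean at test points: k(x*, X) (K + noise I)^{-1} y
test_x = torch.linspace(0, 1, 5)
K_star = rbf_kernel(test_x, train_x)
mean = K_star @ torch.linalg.solve(K, train_y)
print(mean.shape)  # torch.Size([5])
```

The direct `torch.linalg.solve` here is exactly the step GPyTorch replaces with the iterative, MVM-based solvers described above, which is what makes the library scale beyond a few thousand training points.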

Examples, Tutorials, and Documentation

See our documentation, examples, and tutorials for how to construct all sorts of models in GPyTorch.



Installation

GPyTorch requires:

  • Python >= 3.8
  • PyTorch >= 1.11

Install GPyTorch using pip or conda:

pip install gpytorch
conda install gpytorch -c gpytorch

(To use packages globally but install GPyTorch as a user-only package, use pip install --user above.)

Latest (Unstable) Version

To upgrade to the latest (unstable) version, run

pip install --upgrade git+
pip install --upgrade git+

Development version

If you are contributing a pull request, it is best to perform a manual installation:

git clone
cd gpytorch
pip install -e .[dev,docs,examples,keops,pyro,test]  # keops and pyro are optional

ArchLinux Package

Note: Experimental AUR package. For most users, we recommend installation by conda or pip.

GPyTorch is also available on the ArchLinux User Repository (AUR). You can install it with an AUR helper, like yay, as follows:

yay -S python-gpytorch

To discuss any issues related to this AUR package refer to the comments section of python-gpytorch.

Citing Us

If you use GPyTorch, please cite the following paper:

Gardner, Jacob R., Geoff Pleiss, David Bindel, Kilian Q. Weinberger, and Andrew Gordon Wilson. "GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration." In Advances in Neural Information Processing Systems (2018).

@inproceedings{gardner2018gpytorch,
  title={GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration},
  author={Gardner, Jacob R and Pleiss, Geoff and Bindel, David and Weinberger, Kilian Q and Wilson, Andrew Gordon},
  booktitle={Advances in Neural Information Processing Systems},
  year={2018}
}

Contributing

See the contributing guidelines for information on submitting issues and pull requests.

The Team

GPyTorch is primarily maintained by:

We would like to thank our other contributors including (but not limited to) Eytan Bakshy, Wesley Maddox, Ke Alexander Wang, Ruihan Wu, Sait Cakmak, David Eriksson, Sam Daulton, Martin Jankowiak, Sam Stanton, Zitong Zhou, David Arbour, Karthik Rajkumar, Bram Wallace, Jared Frank, and many more!


Acknowledgements

Development of GPyTorch is supported by funding from the Bill and Melinda Gates Foundation, the National Science Foundation, SAP, the Simons Foundation, and the Gatsby Charitable Trust.


License

GPyTorch is MIT licensed.
