
Deep Bayesian Quadrature Policy Optimization

Akella Ravi Tej¹, Kamyar Azizzadenesheli¹, Mohammad Ghavamzadeh², Anima Anandkumar³, Yisong Yue³
¹Purdue University, ²Google Research, ³Caltech

Publication: AAAI-21 (also presented at NeurIPS Deep RL and Real-World RL Workshops 2020)
Project Website:

Bayesian Quadrature for Policy Gradient

License: MIT. Contributions welcome.

Bayesian quadrature is an approach from probabilistic numerics for approximating numerical integrals. When estimating the policy gradient integral, replacing standard Monte-Carlo estimation with Bayesian quadrature provides:

  1. more accurate gradient estimates with significantly lower variance,
  2. a consistent improvement in sample complexity and average return across several policy gradient algorithms, and
  3. a principled way to quantify the uncertainty in gradient estimation.

This repository contains a computationally efficient implementation of BQ for estimating both the policy gradient integral (gradient vector) and the estimation uncertainty (gradient covariance matrix). The source code is written in a modular fashion and currently supports three policy gradient estimators and three policy gradient algorithms (9 combinations overall):
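As background, the core Bayesian quadrature computation can be sketched in a few lines of NumPy. This is a hypothetical, simplified illustration (function names and hyperparameters are my own, not this codebase's API) for a 1-D integral under a standard normal measure with an RBF kernel, where the kernel integrals have a closed form; the repository implements the same idea at scale for the vector-valued policy gradient integral.

```python
import numpy as np

# Simplified BQ sketch: estimate E_{x ~ N(0,1)}[f(x)] by fitting a GP
# to samples of f and integrating the GP posterior in closed form.
# (Illustrative only; names/hyperparameters are not from this repo.)

def rbf(x1, x2, ell=1.0):
    """RBF kernel k(x, x') = exp(-(x - x')^2 / (2 ell^2))."""
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ell ** 2)

def bq_estimate(f, n=20, ell=1.0, noise=1e-6):
    x = np.random.randn(n)                 # samples from N(0, 1)
    K = rbf(x, x, ell) + noise * np.eye(n)
    # z_i = \int k(x, x_i) N(x; 0, 1) dx  (closed form for RBF kernel)
    z = ell / np.sqrt(ell**2 + 1) * np.exp(-0.5 * x**2 / (ell**2 + 1))
    mean = z @ np.linalg.solve(K, f(x))    # posterior mean of the integral
    # Posterior variance quantifies the uncertainty of the estimate;
    # the prior variance term \int\int k N N = ell / sqrt(ell^2 + 2).
    var = ell / np.sqrt(ell**2 + 2) - z @ np.linalg.solve(K, z)
    return mean, var
```

For example, `bq_estimate(lambda x: x ** 2)` approximates E[x²] = 1, and the returned variance shrinks as more samples are used, which is what DBQPG and UAPG exploit for the policy gradient.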

Policy Gradient Estimators:

  1. Monte-Carlo Estimation
  2. Deep Bayesian Quadrature Policy Gradient (DBQPG)
  3. Uncertainty Aware Policy Gradient (UAPG)

Policy Gradient Algorithms:

  1. Vanilla Policy Gradient
  2. Natural Policy Gradient (NPG)
  3. Trust-Region Policy Optimization (TRPO)
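For intuition on the second and third algorithms above: both NPG and TRPO precondition the gradient g by the inverse Fisher information matrix F, typically solving F v = g with conjugate gradients so that only Fisher-vector products are needed, never F itself. A minimal, hypothetical NumPy sketch (function names and the KL-based step-size rule are illustrative, not this repository's API):

```python
import numpy as np

def conjugate_gradient(fvp, g, iters=10, tol=1e-10):
    """Solve F x = g using only Fisher-vector products fvp(v) = F v."""
    x = np.zeros_like(g)
    r = g.copy()            # residual
    p = g.copy()            # search direction
    rs = r @ r
    for _ in range(iters):
        Fp = fvp(p)
        alpha = rs / (p @ Fp)
        x += alpha * p
        r -= alpha * Fp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def npg_step(theta, g, fvp, max_kl=0.01):
    """One natural policy gradient update with a KL-constrained step size."""
    v = conjugate_gradient(fvp, g)               # v ~= F^{-1} g
    step = np.sqrt(2 * max_kl / (v @ fvp(v)))    # so the quadratic KL ~ max_kl
    return theta + step * v
```

TRPO adds a backtracking line search on top of this step to enforce the KL constraint exactly; the vanilla policy gradient simply uses g without the Fisher preconditioning.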

Project Setup

This codebase requires Python 3.6 (or higher). We recommend using Anaconda or Miniconda to set up the virtual environment. Here is a walkthrough of the installation and project setup.

git clone
cd Deep-Bayesian-Quadrature-Policy-Optimization
conda create -n DBQPG python=3.6
conda activate DBQPG
pip install -r requirements.txt

Supported Environments

  1. Classic Control
  2. MuJoCo
  3. PyBullet
  4. Roboschool
  5. DeepMind Control Suite (via dm_control2gym)


Modular implementation:

python --env-name <gym_environment_name> --pg_algorithm <VanillaPG/NPG/TRPO> --pg_estimator <MC/BQ> --UAPG_flag

All experiments run for 1000 policy updates, and the logs are stored in the session_logs/ folder. To reproduce the results in the paper, refer to the following commands:

# Running Monte-Carlo baselines
python --env-name <gym_environment_name> --pg_algorithm <VanillaPG/NPG/TRPO> --pg_estimator MC
# DBQPG as the policy gradient estimator
python --env-name <gym_environment_name> --pg_algorithm <VanillaPG/NPG/TRPO> --pg_estimator BQ
# UAPG as the policy gradient estimator
python --env-name <gym_environment_name> --pg_algorithm <VanillaPG/NPG/TRPO> --pg_estimator BQ --UAPG_flag

For more customization options, kindly take a look at the


visualize.ipynb can be used to visualize the TensorBoard files stored in session_logs/ (requires jupyter and tensorboard to be installed).


Results (averaged over 10 runs; plots omitted here):

  1. Vanilla Policy Gradient
  2. Natural Policy Gradient
  3. Trust Region Policy Optimization

Implementation References


Contributions are very welcome. If you know how to make this code better, please open an issue. If you want to submit a pull request, please open an issue first. Also see the todo list below.


  • Implement policy network for discrete action space and test on Arcade Learning Environment (ALE).
  • Add other policy gradient algorithms.


If you find this work useful, please consider citing:

    @article{ravitej2020deep,
        title={Deep Bayesian Quadrature Policy Optimization},
        author={Akella Ravi Tej and Kamyar Azizzadenesheli and Mohammad Ghavamzadeh and Anima Anandkumar and Yisong Yue},
        journal={arXiv preprint arXiv:2006.15637},
        year={2020}
    }