Edward is a Python library for probabilistic modeling, inference, and criticism. It is a testbed for fast experimentation and research with probabilistic models, ranging from classical hierarchical models on small data sets to complex deep probabilistic models on large data sets. Edward fuses three fields: Bayesian statistics and machine learning, deep learning, and probabilistic programming.

It supports **modeling** with

- Directed graphical models
- Neural networks (via libraries such as `tf.layers` and Keras)
- Implicit generative models
- Bayesian nonparametrics and probabilistic programs

It supports **inference** with

- Variational inference
- Black box variational inference
- Stochastic variational inference
- Generative adversarial networks
- Maximum a posteriori estimation

- Monte Carlo
- Gibbs sampling
- Hamiltonian Monte Carlo
- Stochastic gradient Langevin dynamics

- Compositions of inference
- Expectation-Maximization
- Pseudo-marginal and ABC methods
- Message passing algorithms

It supports **criticism** of the model and inference with

- Point-based evaluations
- Posterior predictive checks

Edward is built on top of TensorFlow, so it inherits features such as computational graphs, distributed training, CPU/GPU integration, automatic differentiation, and visualization with TensorBoard.

- Edward website
- Edward Forum
- Edward Gitter channel
- Edward releases
- Edward papers, posters, and slides

See Getting Started for how to install Edward.
