This repo contains the experiment source code as well as the LaTeX source for a paper that mathematically relates two very different approaches to approximate integration.
The paper was accepted for oral presentation at the 2012 Uncertainty in Artificial Intelligence (UAI) conference.
Herding and kernel herding are deterministic methods for choosing samples that summarise a probability distribution. A related task is choosing samples for estimating integrals using Bayesian quadrature. We show that the criterion minimised when selecting samples in kernel herding is equivalent to the posterior variance in Bayesian quadrature. We then show that sequential Bayesian quadrature can be viewed as a weighted version of kernel herding which achieves performance superior to any other weighted herding method. We demonstrate empirically a rate of convergence faster than O(1/N). Our results also imply an upper bound on the empirical error of the Bayesian quadrature estimate.
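The connection can be sketched numerically. The toy below is an illustrative re-implementation in Python/NumPy, not the repo's MATLAB code; the squared-exponential kernel, the Gaussian target, and every parameter value are chosen here purely for demonstration. It runs greedy kernel herding (each step maximises the mean embedding z(x) minus the average kernel similarity to points already chosen, the same quantity that appears in the BQ posterior variance), then compares uniform herding weights against the optimal BQ weights w = K⁻¹ z on the same point set:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(0)
pool = rng.normal(size=2000)        # samples standing in for the target p(x) = N(0, 1)
cand = np.linspace(-4.0, 4.0, 401)  # candidate locations for greedy selection

# Empirical mean embedding z(x) = E_{x'~p} k(x, x'), estimated on the pool.
z_cand = rbf(cand, pool).mean(axis=1)

# Greedy kernel herding: each step maximises z(x) - (1/(n+1)) * sum_i k(x, x_i).
chosen = []
for n in range(20):
    penalty = (rbf(cand, np.array(chosen)).sum(axis=1) / (len(chosen) + 1)
               if chosen else 0.0)
    chosen.append(cand[np.argmax(z_cand - penalty)])
X = np.array(chosen)

# Bayesian-quadrature weights w = K^{-1} z(X); plain herding uses uniform 1/N.
K = rbf(X, X) + 1e-8 * np.eye(len(X))   # jitter for numerical stability
z_X = rbf(X, pool).mean(axis=1)
w_bq = np.linalg.solve(K, z_X)

def f(x):
    return x ** 2                   # integrand; E_p[f] = 1 for p = N(0, 1)

est_herding = f(X).mean()           # uniform herding weights
est_bq = w_bq @ f(X)                # optimal BQ weights on the same points
print(est_herding, est_bq)
```

On the same herded points, the BQ-weighted estimate of E[x²] = 1 is typically noticeably tighter than the uniformly weighted average, in line with the paper's result that sequential Bayesian quadrature is an optimally weighted form of herding.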
Running `code/demo.m` automatically reproduces most of the figures in the paper, with some settings turned down so that the demo runs quickly.
To reproduce the exact results reported in the paper, set:

```matlab
num_samples = 400; num_queries = 10000;
```
The code was optimized for legibility and simplicity, not speed. As a result, the herding implementation is O(N^3) instead of O(N^2), and the BQ implementation is O(N^4) instead of O(N^3).
Feel free to contact us if you have any questions.