| Project Name | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Stan | 2,415 | | | | 4 days ago | | | 166 | bsd-3-clause | C++ | Stan development repository. The master branch contains the current release; the develop branch contains the latest stable development. See the Developer Process Wiki for details. |
| Turing.jl | 1,820 | | | | 21 hours ago | | | 70 | mit | Julia | Bayesian inference with probabilistic programming. |
| Pymc Resources | 1,722 | | | | 3 months ago | | | 31 | mit | Jupyter Notebook | PyMC educational resources. |
| Pytorch Bayesiancnn | 1,181 | | | | 2 months ago | | | 15 | mit | Python | Bayesian Convolutional Neural Network with Variational Inference based on Bayes by Backprop in PyTorch. |
| Rstan | 952 | | | | 14 days ago | | | 325 | | C++ | RStan, the R interface to Stan. |
| Bayesian Stats Modelling Tutorial | 601 | | | | a year ago | | | 14 | mit | Jupyter Notebook | How to do Bayesian statistical modelling using numpy and PyMC3. |
| Bayesian Analysis Recipes | 510 | | | | a year ago | | | 2 | mit | Jupyter Notebook | A collection of Bayesian data analysis recipes using PyMC3. |
| Soss.jl | 401 | | | | a month ago | | | 106 | mit | Julia | Probabilistic programming via source rewriting. |
| Rstanarm | 341 | | | | 2 months ago | | | 148 | gpl-3.0 | R | rstanarm: R package for Bayesian applied regression modeling. |
| Statsexpressions | 295 | | 1 | 2 | 12 days ago | 26 | August 11, 2022 | 17 | other | R | Tidy data frames and expressions with statistical summaries 📜 |
Soss is a library for probabilistic programming.
Let's look at an example. First we'll load things:
```julia
using MeasureTheory
using Soss
```
MeasureTheory.jl is designed specifically with PPLs like Soss in mind, though you can also use Distributions.jl.
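For orientation (a small aside, not part of the original walkthrough), the primitives MeasureTheory exports behave like ordinary distributions, so you can draw from them directly; the same `Normal` and `Exponential` appear as priors in the model below:

```julia
using MeasureTheory   # already loaded above; repeated so this snippet is self-contained

rand(Normal(0.0, 1.0))   # one draw from a normal with mean 0 and standard deviation 1
rand(Exponential())      # one positive draw; used below as the prior for σ
```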
Now for a model. Here's a linear regression:
```julia
m = @model x begin
    α ~ Lebesgue(ℝ)
    β ~ Normal()
    σ ~ Exponential()
    y ~ For(x) do xj
        Normal(α + β * xj, σ)
    end
    return y
end
```
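As an aside (this sketch is not from the original text), `For` builds a product measure with one component per element of its argument, so `y` above has the same length as `x`. A minimal illustration, assuming `rand` works on the resulting product measure the way it does on other MeasureTheory measures:

```julia
using MeasureTheory

# Each element of the collection is mapped to its own component measure,
# so the result behaves like a joint distribution over a vector.
d = For(randn(3)) do xj
    Normal(2.0 + 0.5 * xj, 1.0)
end

rand(d)   # three draws, one per component (exact container type may vary)
```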
Next we'll generate some fake data to work with. For `x`-values, let's use
```julia
x = randn(20)
```
Now, loosely speaking, `Lebesgue(ℝ)` is uniform over the real numbers, so we can't really sample from it. Instead, let's transform the model and make `α` an argument:
```julia
julia> predα = predictive(m, :α)
@model (x, α) begin
    σ ~ Exponential()
    β ~ Normal()
    y ~ For(x) do xj
        Normal(α + β * xj, σ)
    end
    return y
end
```
Now we can do
```julia
julia> y = rand(predα(x=x, α=10.0))
20-element Vector{Float64}:
 10.554133456468438
  9.378065258831002
 12.873667041657287
  8.940799408080496
 10.737189595204965
  9.500536439014208
 11.327606120726893
 10.899892855024445
 10.18488773139243
 10.386969795947177
 10.382195272387214
  8.358407507910297
 10.727173015711768
 10.452311211064654
 11.076232496702387
 11.362009520020141
  9.539433052406448
 10.61851691333643
 11.586170856832645
  9.197496058151618
```
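As a quick sanity check (an addition, not part of the original walkthrough), the sample mean of `y` should land near the intercept we fixed, since `x` is roughly centered and `β` was drawn from a standard normal:

```julia
using Statistics

mean(y)   # ≈ 10.4 for the draw printed above, i.e. close to α = 10
```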
Now for inference! Let's use `DynamicHMC`, which we have wrapped in `SampleChainsDynamicHMC`.
```julia
julia> using SampleChainsDynamicHMC
[ Info: Precompiling SampleChainsDynamicHMC [6d9fd711-e8b2-4778-9c70-c1dfb499d4c4]

julia> post = sample(m(x=x) | (y=y,), dynamichmc())
4000-element MultiChain with 4 chains and schema (σ = Float64, β = Float64, α = Float64)
(σ = 1.0±0.15, β = 0.503±0.26, α = 10.2±0.25)
```
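To work with the result, you could summarize individual parameters along these lines; this is a sketch under the assumption that the returned `MultiChain` supports property access to per-parameter draws (if it doesn't, consult the SampleChains documentation):

```julia
using Statistics

# Assumed interface: `post.α` returns the pooled draws of α across chains.
mean(post.α), std(post.α)   # should agree with the summary line printed above
```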
First, a fine point about how Soss relates to Turing: when people say "the Turing PPL" they usually mean what's technically called "DynamicPPL".
Soss and DynamicPPL are both maturing and becoming more complete, so the above will change over time. It's also worth noting that we (the Turing team and I) hope to move toward a natural way of using these systems together to arrive at the best of both.
As for what Soss can do: lots of things. For more details, please see the documentation.