Bayesian inference is just a convenient name for conditional probability as defined by probability theory. In other words, Bayesian statistics is mathematically grounded: it rests on three axioms of probability (in addition to the axioms of mathematics itself) from which all of Bayesian statistics can be derived and proved.
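As a one-line preview of that derivation (standard probability theory, nothing specific to this page): the product rule factorizes a joint probability two ways, and equating the factorizations gives Bayes' formula directly.

```latex
% The product rule (the definition of conditional probability,
% consistent with the axioms) factorizes the joint two ways:
%   P(A, B) = P(A \mid B)\,P(B) = P(B \mid A)\,P(A)
% Equating the two factorizations and dividing by P(B) yields Bayes' formula:
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}
```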
Being mathematically coherent, Bayesian statistics is actually quite easy to understand: you just have to learn a few basic ideas and use logic to work out the rest. This is less the case with classical statistics, where the lack of coherence makes you prone to misinterpret p-values (a p-value is not the probability that the model is true), confidence intervals (a 95 % confidence interval is not an interval that contains the parameter with 95 % probability), etc. Bayesian statistics quantifies these things directly.
This page is about getting these fundamental intuitions right and, in doing so, making it easier to understand what BUGS, JAGS and other Gibbs samplers do.
This page is under construction, but have a look in the menu at the content that's already here. Coming content includes:
- Deriving Bayes' formula from the three axioms of probability theory. Chapter 13 of Russell & Norvig's “Artificial Intelligence – A Modern Approach” is highly recommended.
- Deriving Bayes' formula using Venn diagrams and extending this to Gibbs sampling. See this page for a really nice derivation.
- Ultra-simple Gibbs samplers coded in pure R. It’s really easy to do and really easy to understand! See a first draft here.
- A super-fleshed-out account of what the universe looks like to a Gibbs sampler. I plan to do an analogy where the Gibbs sampler is an incredibly confused god who has a completely fixed idea about what the universe should look like but only has a few controls (parameters) to tune the universe with. He keeps changing the universe based on the observations he makes after each tuning.
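To make the "one control at a time" idea concrete, here is a minimal Gibbs sampler sketch (in Python rather than the pure-R drafts mentioned above; the model and all names are just illustrative). It samples a bivariate standard normal with correlation ρ, where each full conditional is a univariate normal in closed form:

```python
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=500, seed=1):
    """Gibbs sampler for a bivariate standard normal with correlation rho.

    Each full conditional is univariate normal:
        x | y ~ N(rho * y, 1 - rho**2)
        y | x ~ N(rho * x, 1 - rho**2)
    so the sampler only ever turns one knob (parameter) at a time,
    conditioning on the current value of the other.
    """
    rng = random.Random(seed)
    sd = (1 - rho ** 2) ** 0.5  # conditional standard deviation
    x, y = 0.0, 0.0             # arbitrary starting point
    samples = []
    for i in range(burn_in + n_samples):
        x = rng.gauss(rho * y, sd)  # update x given the current y
        y = rng.gauss(rho * x, sd)  # update y given the current x
        if i >= burn_in:
            samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(rho=0.8, n_samples=20000)
mean_x = sum(x for x, _ in samples) / len(samples)
mean_xy = sum(x * y for x, y in samples) / len(samples)
print("mean of x (should be near 0):", mean_x)
print("mean of x*y (should be near rho = 0.8):", mean_xy)
```

The point of the sketch is the loop body: at no step does the sampler draw from the joint distribution directly; it only ever draws each coordinate from its full conditional, yet the collected pairs end up distributed according to the joint.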