Bayesian Model and Conjugate Priors
2021-01-30 by xiaoguang
When reading “Thompson Sampling for Dynamic Multi-Armed Bandits”, I learned a new concept called “conjugate priors”. For Bayesian models, we can choose a conjugate prior for a likelihood function, and then the posterior distribution will have the same form as the conjugate prior distribution. This greatly simplifies updating the Bayesian model: we just update the hyperparameters of the prior distribution.
In the following sections, I’ll first go through some concepts related to the Bayesian model, and then give an example of how a conjugate prior is used.
Bayesian Model
The Bayesian model is defined as the product of the likelihood and the prior probability divided by the probability of the observed data:

$$P(\theta \mid D) = \frac{P(D \mid \theta)\, P(\theta)}{P(D)}$$
Let’s use coin tossing as an example to explain it.
Likelihood Function
Most of the time, the likelihood function can be considered as fixed, since we can determine it from the description of the problem. For example, for the coin tossing model, we can use the Bernoulli trial as the likelihood function, and the data distribution can then be described using the Binomial distribution:

$$P(D \mid \theta) = \binom{n}{h}\, \theta^h (1-\theta)^{n-h}$$

($h$ stands for the number of heads after $n$ tosses.)
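For example, here’s a minimal sketch (with made-up numbers: 7 heads out of 10 tosses, which are assumptions for illustration only) of evaluating this likelihood for a few candidate biases, using scipy:

```python
from scipy.stats import binom

# Hypothetical data: 7 heads out of 10 tosses.
n, h = 10, 7

# Evaluate the Binomial likelihood P(D | theta) for a few candidate biases.
for theta in (0.3, 0.5, 0.7):
    likelihood = binom.pmf(h, n, theta)
    print(f"theta={theta}: P(D|theta)={likelihood:.4f}")
```

Unsurprisingly, the observed data is most likely under $\theta = 0.7$.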
Prior Distribution
The $P(\theta)$ is called the prior distribution because it’s our belief/assumption about $\theta$.
With more and more trials, we’ll collect more and more data; using that data we can compute the posterior distribution, and we can then use the posterior distribution to replace the prior distribution for future predictions.
This is the core process of Bayesian model updating.
However, choosing the prior belief can be complicated. For the coin tossing model, we may assume $\theta = 0.5$ at first; this is reasonable for most of the coins in the world. But what’s the distribution of the model parameter $\theta$?
Probability of the Observed Data
We can only get the probability of the observed data based on the assumed prior probability. With the prior distribution:

$$P(D) = \int P(D \mid \theta)\, P(\theta)\, d\theta$$
It’s an integral. How scary is that? What if we choose a wrong prior distribution function that makes this calculation really complicated? Actually, this is a serious problem; it will be answered in the Conjugate Prior section.
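To make the concern concrete, here’s a toy sketch (the data $n = 10$, $h = 7$ and the uniform prior are made-up assumptions for illustration) that computes $P(D)$ by brute-force numerical integration; with a less convenient prior this integral may have no closed form at all:

```python
from scipy.integrate import quad
from scipy.stats import binom

n, h = 10, 7  # hypothetical observations: 7 heads in 10 tosses

# P(D) = integral of P(D|theta) * P(theta) d(theta),
# with a uniform prior P(theta) = 1 on [0, 1].
evidence, _ = quad(lambda theta: binom.pmf(h, n, theta) * 1.0, 0.0, 1.0)
print(f"P(D) = {evidence:.4f}")  # 1/(n+1) ≈ 0.0909 for a uniform prior
```

With a uniform prior this integral happens to equal $1/(n+1)$, but in general we would be stuck with numerical integration at every update.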
Expand the Bayesian Model
According to the above descriptions, we can now expand the Bayesian model to:

$$P(\theta \mid D) = \frac{P(D \mid \theta)\, P(\theta)}{\int P(D \mid \theta)\, P(\theta)\, d\theta}$$
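Continuing the toy setup from above, here’s a minimal numpy sketch of evaluating this whole formula on a grid, where the integral in the denominator becomes a plain sum:

```python
import numpy as np
from scipy.stats import binom

n, h = 10, 7                          # hypothetical data: 7 heads in 10 tosses
thetas = np.linspace(0.0, 1.0, 1001)  # grid over the parameter space
dtheta = thetas[1] - thetas[0]

prior = np.ones_like(thetas)          # uniform prior P(theta)
likelihood = binom.pmf(h, n, thetas)  # P(D | theta) at every grid point

# The denominator integral becomes a Riemann sum over the grid.
evidence = np.sum(likelihood * prior) * dtheta
posterior = likelihood * prior / evidence

print(f"posterior mode ≈ {thetas[np.argmax(posterior)]:.2f}")  # ≈ 0.70
```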
Get Prediction from Bayesian Model
Let’s put aside the prior distribution selection problem, assume it’s been done, and focus on how to use a Bayesian model to make a prediction:

$$P(x \mid D) = \int P(x \mid \theta)\, P(\theta \mid D)\, d\theta$$
This is called the posterior prediction. Note that $P(\theta \mid D)$ is the posterior distribution, and $P(x \mid \theta)$ is the likelihood function. So we’re using the posterior and the likelihood function to predict future events.
It looks like both the posterior and the posterior prediction contain integrals. However, both of them can be simplified by carefully choosing a conjugate prior.
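For intuition, here’s the same toy grid extended to approximate this posterior prediction numerically; for a single coin toss the likelihood of heads is just $\theta$, so the prediction reduces to the posterior mean:

```python
import numpy as np
from scipy.stats import binom

n, h = 10, 7
thetas = np.linspace(0.0, 1.0, 1001)
dtheta = thetas[1] - thetas[0]

# Grid posterior under a uniform prior, as in the previous sketch.
posterior = binom.pmf(h, n, thetas)
posterior /= np.sum(posterior) * dtheta

# For a single coin toss, P(x = heads | theta) = theta, so the posterior
# prediction is the posterior mean of theta.
p_heads = np.sum(thetas * posterior) * dtheta
print(f"P(heads | D) ≈ {p_heads:.3f}")  # ≈ (h+1)/(n+2) = 0.667
```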
Conjugate Prior
In the Prior Distribution section, we described the process of updating the Bayesian model. To summarize it simply: during each model update iteration, the previous posterior is used as the next prior.
It’s reasonable to hope that the prior and the posterior are of the same form, i.e. that they belong to the same family of distributions. It would be even more wonderful if we could skip the integral calculation and get the posterior by just updating the hyperparameters of the prior distribution. Is that possible? Yes, the answer is the conjugate prior.
Even better, with a conjugate prior, the posterior prediction can also be simplified.
Beta Distribution as Conjugate Prior
The Beta distribution is the conjugate prior of the Bernoulli, Binomial, and Geometric likelihoods. Let’s simplify the problem in the Thompson Sampling paper and use only a single-arm bandit as an example.
Let’s say each time we pull the arm, it’ll give back either a success or a failure result, and the results conform to the Binomial distribution, such that:

$$P(D \mid \theta) = \binom{n}{s}\, \theta^s (1-\theta)^{n-s}$$

(it means there are $s$ successes after $n$ pulls)
and we choose the Beta distribution as the prior:

$$P(\theta) = \mathrm{Beta}(\alpha, \beta) = \frac{\theta^{\alpha-1} (1-\theta)^{\beta-1}}{B(\alpha, \beta)}$$
After working out the integral, the posterior would be:

$$P(\theta \mid D) = \mathrm{Beta}(\alpha + s,\ \beta + n - s)$$
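Why does this work? A quick derivation sketch: multiplying the Beta prior by the Binomial likelihood already produces the Beta shape, so the evidence integral only supplies a normalizing constant:

$$P(\theta \mid D) \propto \theta^s (1-\theta)^{n-s} \cdot \theta^{\alpha-1} (1-\theta)^{\beta-1} = \theta^{\alpha+s-1} (1-\theta)^{\beta+n-s-1}$$

which is exactly the kernel of $\mathrm{Beta}(\alpha+s,\ \beta+n-s)$.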
Can you see the beauty in it? We just need to update $\alpha$ and $\beta$ after each pull of the arm, and we can then get the posterior: another Beta distribution with different hyperparameters $\alpha + s$ and $\beta + n - s$!
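As an illustration, here’s a minimal sketch of this update loop for one simulated arm (the arm’s true success rate of 0.6 is a made-up value for this post):

```python
import numpy as np

rng = np.random.default_rng(42)
true_rate = 0.6         # hidden success rate of the simulated arm (made up)
alpha, beta = 1.0, 1.0  # Beta(1, 1) prior, i.e. uniform over [0, 1]

for _ in range(1000):
    success = rng.random() < true_rate  # one Bernoulli pull of the arm
    # Conjugate update: a success bumps alpha, a failure bumps beta.
    alpha += success
    beta += 1 - success

# The posterior mean alpha / (alpha + beta) should be near true_rate.
print(f"posterior mean ≈ {alpha / (alpha + beta):.3f}")
```

After 1000 pulls, the posterior mean lands close to the hidden success rate, and no integral was computed anywhere.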
More About Conjugate Prior
There are more pairs of likelihood functions and conjugate priors; there is a table of them on the Wikipedia page. Apart from the posterior, as I mentioned above, the posterior prediction can also be simplified when using a conjugate prior.
However, note that a conjugate prior is not the only option for a Bayesian model. This post also assumes that the model parameter is continuous, not discrete. There are just so many things to learn about the Bayesian model.
References
- Conjugate Prior from Wikipedia
- Thompson Sampling for Dynamic Multi-Armed Bandits (note: this paper has a concrete example of using a Bayesian model and a conjugate prior, but I’m afraid it’s a little bit complex for a beginner, and it has many extra things related to CB)
- The LaTeX syntax used in this post can be quickly found at LaTeX/Mathematics