I probably overuse the normal-inverse-gamma posterior. Every time I build a bandit system, every time I need uncertainty quantification for sequential decisions, I end up back at conjugate linear regression.
Suppose you’re choosing a continuous value x and observing a noisy reward y. The reward depends on x through some unknown function f(x), and you’re making decisions repeatedly—learning as you go. This post explores how to build scalable Bayesian models for this problem using principled approximations.
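To make the conjugate machinery concrete, here is a minimal sketch of Bayesian linear regression with a normal-inverse-gamma prior, plus a posterior draw for the Thompson-sampling decision step. The names (`NIGPosterior`, `update`, `sample_coefficients`) and the `[1, x]` feature map are mine for illustration, not from any particular library.

```python
import numpy as np

class NIGPosterior:
    """Normal-inverse-gamma posterior for y = x @ beta + noise, with
    beta | sigma^2 ~ N(mu, sigma^2 * V) and sigma^2 ~ InvGamma(a, b)."""

    def __init__(self, d, a0=1.0, b0=1.0):
        self.mu = np.zeros(d)      # mean of the coefficient posterior
        self.V_inv = np.eye(d)     # coefficient precision (inverse of V)
        self.a = a0                # inverse-gamma shape
        self.b = b0                # inverse-gamma scale

    def update(self, X, y):
        """Conjugate update after observing design matrix X and rewards y."""
        V_inv_new = self.V_inv + X.T @ X
        mu_new = np.linalg.solve(V_inv_new, self.V_inv @ self.mu + X.T @ y)
        # Sum-of-squares bookkeeping for the inverse-gamma scale.
        self.b += 0.5 * (y @ y + self.mu @ self.V_inv @ self.mu
                         - mu_new @ V_inv_new @ mu_new)
        self.a += 0.5 * len(y)
        self.mu, self.V_inv = mu_new, V_inv_new

    def sample_coefficients(self, rng):
        """One joint posterior draw of (beta, sigma^2)."""
        sigma2 = 1.0 / rng.gamma(self.a, 1.0 / self.b)  # InvGamma(a, b) draw
        beta = rng.multivariate_normal(self.mu,
                                       sigma2 * np.linalg.inv(self.V_inv))
        return beta, sigma2

# Thompson sampling over a candidate grid of one-dimensional actions,
# with features phi(x) = [1, x] (again, an illustrative choice).
rng = np.random.default_rng(0)
post = NIGPosterior(d=2)
candidates = np.linspace(0.0, 1.0, 11)
beta, _ = post.sample_coefficients(rng)
x_next = candidates[np.argmax([np.array([1.0, x]) @ beta for x in candidates])]
```

To choose the next `x`, you draw one `beta` from the posterior and take the candidate with the highest sampled reward; the posterior uncertainty in `beta` does the exploring for you, and each observed `(x, y)` pair feeds back in through `update`.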
Everyone Wants a Churn Model
Rarely do I get asked to make churn estimates by someone who needs to bring the full power of a proportional hazards model to bear. Besides, the person asking for churn estimates doesn’t actually want to know “what is the probability someone churns eventually?” (Spoiler: it’s 1.)