Suppose you’re choosing a continuous value x and observing a noisy reward y. The reward depends on x through some unknown function f(x), and you’re making decisions repeatedly—learning as you go. This post explores how to build scalable Bayesian models for this problem using principled approximations.
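To make the setup concrete, here is a minimal sketch of one standard approach: Thompson sampling with a conjugate Bayesian linear regression over a fixed feature basis. Everything here is illustrative (the quadratic basis, the hidden reward function, the noise level, and the candidate grid are all assumptions), not the specific model this post develops.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x):
    # Simple polynomial basis as a stand-in for a richer model (assumption).
    return np.array([1.0, x, x**2])

def true_reward(x):
    # Hypothetical unknown f(x); peaks at x = 0.6. The learner never sees this.
    return -4.0 * (x - 0.6) ** 2

d = 3
noise_var = 0.25
# Conjugate Gaussian prior over weights: N(0, I), stored in precision form.
A = np.eye(d)          # posterior precision
b = np.zeros(d)        # precision-weighted posterior mean

candidates = np.linspace(0.0, 1.0, 101)
for t in range(500):
    # Thompson sampling: draw one plausible f from the posterior,
    # then act greedily under that draw.
    cov = np.linalg.inv(A)
    mean = cov @ b
    w = rng.multivariate_normal(mean, cov)
    x = candidates[np.argmax([features(c) @ w for c in candidates])]
    y = true_reward(x) + rng.normal(0.0, np.sqrt(noise_var))
    # Exact conjugate Bayesian linear-regression update.
    phi = features(x)
    A += np.outer(phi, phi) / noise_var
    b += phi * y / noise_var

posterior_mean = np.linalg.inv(A) @ b
best_x = candidates[np.argmax([features(c) @ posterior_mean for c in candidates])]
```

After a few hundred rounds the posterior mean recovers the reward curve well enough that `best_x` sits near the true optimum. The exact-update-plus-fixed-basis combination is what larger models approximate; the scaling question is what happens when neither the conjugacy nor the small basis is available.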
Everyone Wants a Churn Model
Rarely am I asked to make churn estimates for someone who needs to bring the full power of a proportional hazards model to bear. Besides, the person asking for churn estimates doesn’t actually want to know “what is the probability someone churns eventually?” (Spoiler: it’s 1.)
A Motivating Example
We were studying how microglia affect neuronal networks using a standard imaging experiment: 3 mice, 3 coverslips per condition, about 20 neurons measured per coverslip. Our question: Does LPS activation significantly increase PNA signal?
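The design above is nested: neurons sit inside coverslips, which sit inside mice, so the ~180 neurons per condition are not 180 independent observations. A minimal simulation (all effect sizes and variance components here are made-up numbers, not our data) shows why the choice of experimental unit matters when testing for a condition effect:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulate(condition_effect, n_mice=3, n_cov=3, n_neurons=20):
    """Simulate PNA signal with nested variability (hypothetical parameters)."""
    per_coverslip = []
    for m in range(n_mice):
        mouse_fx = rng.normal(0.0, 0.5)        # mouse-level variation
        for c in range(n_cov):
            cov_fx = rng.normal(0.0, 0.5)      # coverslip-level variation
            y = condition_effect + mouse_fx + cov_fx + rng.normal(0.0, 1.0, n_neurons)
            per_coverslip.append(y)
    return np.concatenate(per_coverslip), per_coverslip

ctrl_all, ctrl_by_cov = simulate(0.0)   # control condition
lps_all, lps_by_cov = simulate(0.3)     # LPS, with a modest assumed effect

# Naive test treats every neuron as independent (n = 180 per group),
# ignoring the shared mouse/coverslip variation:
t_naive, p_naive = stats.ttest_ind(ctrl_all, lps_all)

# Aggregating to coverslip means respects the replication structure (n = 9):
ctrl_means = [y.mean() for y in ctrl_by_cov]
lps_means = [y.mean() for y in lps_by_cov]
t_agg, p_agg = stats.ttest_ind(ctrl_means, lps_means)
```

The neuron-level test inflates the effective sample size and can report significance driven by mouse-to-mouse or coverslip-to-coverslip variation rather than by LPS itself; aggregating (or, better, modeling the hierarchy explicitly) is the honest comparison.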