Towards optimal experimentation in online systems

The Unofficial Google Data Science Blog

If the relationship of $X$ to $Y$ can be approximated as quadratic (or, more generally, polynomial), and the objective and constraints are linear in $Y$, then the optimization can be expressed as a quadratically constrained quadratic program (QCQP). Crucially, this formulation takes into account the uncertainty inherent in our experiments.
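The idea in this snippet can be sketched numerically: fit quadratic response curves for each metric as a function of the treatment parameter $X$, then maximize one metric subject to a constraint on another. The metrics, coefficients, and bounds below are illustrative placeholders, not the post's actual model; a general-purpose solver stands in for a dedicated QCQP solver.

```python
# Sketch: quadratic response curves in x, linear objective/constraint in Y.
# Coefficients are made up for illustration.
from scipy.optimize import minimize

def clicks(x):   # metric to maximize: y = 1 + 0.8x - 0.5x^2
    return 1.0 + 0.8 * x - 0.5 * x**2

def latency(x):  # metric to constrain: y = 0.2 + 0.6x + 0.3x^2
    return 0.2 + 0.6 * x + 0.3 * x**2

res = minimize(
    lambda x: -clicks(x[0]),                  # maximize clicks
    x0=[0.0],                                 # feasible starting point
    constraints=[{"type": "ineq",
                  "fun": lambda x: 0.5 - latency(x[0])}],  # latency <= 0.5
    bounds=[(-1.0, 1.0)],
)
x_opt = res.x[0]   # treatment level where the latency constraint binds
```

Here the unconstrained optimum of `clicks` lies at $x = 0.8$, but the latency cap pulls the solution back to where the constraint binds ($x \approx 0.414$).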


Why model calibration matters and how to achieve it

The Unofficial Google Data Science Blog

$\bar{\pi}(1 - \bar{\pi})$: this is the irreducible loss due to uncertainty. That alone isn't good enough: it optimizes the calibration term, but pays the price in sharpness. In practice, we enforce this by optimizing over $\log(\beta_i)$. We can optimize the weights using a proper scoring rule, such as log loss, or MSE.
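The recalibration step in the snippet can be sketched as a shift-and-scale in logit space, with the scale parameter optimized over its logarithm so it stays positive, and log loss (a proper scoring rule) as the objective. The data, one-parameter $\beta$, and model form below are illustrative assumptions, not the post's exact setup.

```python
# Sketch: recalibrate scores p -> sigmoid(a + b * logit(p)), optimizing over
# log(b) so b > 0, minimizing log loss. Synthetic data for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
p_raw = rng.uniform(0.05, 0.95, size=2000)              # miscalibrated scores
true_p = p_raw**2 / (p_raw**2 + (1 - p_raw)**2)         # true event probability
y = rng.binomial(1, true_p)                             # observed outcomes

def log_loss(params):
    a, log_b = params
    b = np.exp(log_b)                                   # optimize over log(beta)
    z = a + b * np.log(p_raw / (1 - p_raw))             # shift/scale of logit(p)
    p = 1 / (1 + np.exp(-z))
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

res = minimize(log_loss, x0=[0.0, 0.0])
a_hat, b_hat = res.x[0], np.exp(res.x[1])               # b_hat is positive by construction
```

Because the synthetic data's true logit is twice the raw score's logit, the fitted scale should land near $\beta \approx 2$, improving log loss over the uncalibrated scores.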


Trending Sources

The Lean Analytics Cycle: Metrics > Hypothesis > Experiment > Act

Occam's Razor

Sometimes, we escape the clutches of this suboptimal existence and do pick good metrics or engage in simple A/B testing. You're choosing only one metric because you want to optimize it. Circle of Friends was a social community built atop Facebook that launched in 2007. But it is not routine. So, how do we fix this problem?

Changing assignment weights with time-based confounders

The Unofficial Google Data Science Blog

For this reason we don't report uncertainty measures or statistical significance in the results of the simulation. From a Bayesian perspective, one can combine joint posterior samples for $E[Y_i \mid T_i=t, E_i=j]$ and $P(E_i=j)$, which provides a measure of uncertainty around the estimate.
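The Bayesian combination described in the snippet can be sketched with Monte Carlo draws: sample the conditional means $E[Y_i \mid T_i=t, E_i=j]$ and the segment probabilities $P(E_i=j)$ jointly, combine each draw into a marginal mean, and read a credible interval off the resulting draws. All distributions and parameters below are illustrative placeholders.

```python
# Sketch: propagate posterior uncertainty from conditional means and segment
# probabilities into the marginal estimate E[Y | T=t]. Illustrative numbers.
import numpy as np

rng = np.random.default_rng(42)
n_draws, n_segments = 10_000, 3

# Posterior draws of conditional means E[Y | T=t, E=j], one column per segment j.
cond_means = rng.normal(loc=[1.0, 1.5, 0.7], scale=0.1,
                        size=(n_draws, n_segments))

# Posterior draws of segment probabilities P(E=j), e.g. from a Dirichlet posterior.
seg_probs = rng.dirichlet(alpha=[30, 50, 20], size=n_draws)

# Each joint draw yields one draw of the marginal mean: sum_j E[Y|T=t,E=j] P(E=j).
marginal = (cond_means * seg_probs).sum(axis=1)

lo, hi = np.percentile(marginal, [2.5, 97.5])   # 95% credible interval
```

The spread of `marginal` reflects both sources of posterior uncertainty at once, which is exactly what separate point estimates of the two quantities would miss.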