
Towards optimal experimentation in online systems

The Unofficial Google Data Science Blog

Here $x$ is a vector of system parameters (e.g., the weight given to Likes in our video recommendation algorithm), while $Y$ is a vector of outcome measures, such as different metrics of user experience. Crucially, the approach takes into account the uncertainty inherent in our experiments. Figure 2: spreading measurements out makes estimates of the model (slope of line) more accurate.
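The point about Figure 2 follows directly from ordinary least squares: the standard error of a fitted slope is $\sigma / \sqrt{\sum_i (x_i - \bar{x})^2}$, so spreading the parameter settings out shrinks the slope's uncertainty. A minimal sketch, with made-up designs (the specific values are not from the post):

```python
import numpy as np

sigma = 1.0  # assumed noise standard deviation of the outcome Y

x_clustered = np.array([0.45, 0.50, 0.50, 0.55])  # settings near one default
x_spread    = np.array([0.20, 0.40, 0.60, 0.80])  # settings spread out

def slope_se(x, sigma):
    """Analytic standard error of the OLS slope for design points x."""
    return sigma / np.sqrt(np.sum((x - x.mean()) ** 2))

print(slope_se(x_clustered, sigma))  # larger uncertainty
print(slope_se(x_spread, sigma))     # smaller uncertainty
```

Both designs have the same center and the same number of measurements; only the spread differs, and the spread-out design yields the tighter slope estimate.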


The Lean Analytics Cycle: Metrics > Hypothesis > Experiment > Act

Occam's Razor

Sometimes we escape the clutches of this suboptimal existence and do pick good metrics or engage in simple A/B testing. First, you figure out what you want to improve; then you create an experiment; then you run the experiment; finally, you measure the results and decide what to do.


Trending Sources


Changing assignment weights with time-based confounders

The Unofficial Google Data Science Blog

For this reason we don’t report uncertainty measures or statistical significance in the results of the simulation. From a Bayesian perspective, one can combine joint posterior samples for $E[Y_i \mid T_i = t, E_i = j]$ and $P(E_i = j)$, which provides a measure of uncertainty around the estimate.
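The combination works draw-by-draw: each joint posterior draw of the conditional means and the weights yields one draw of $\sum_j E[Y_i \mid T_i = t, E_i = j] \, P(E_i = j)$, and quantiles of those draws give an uncertainty interval. A sketch with entirely hypothetical posterior draws (the distributions and numbers below are assumptions for illustration, not the post's model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws, n_periods = 4000, 3  # j indexes periods; values are hypothetical

# Hypothetical posterior draws of E[Y | T=t, E=j] for one treatment arm t.
cond_means = rng.normal(loc=[1.0, 1.2, 0.9], scale=0.1,
                        size=(n_draws, n_periods))

# Hypothetical posterior draws of P(E=j), e.g., from a Dirichlet posterior.
weights = rng.dirichlet([30, 50, 20], size=n_draws)

# One draw of the overall estimate per joint posterior draw.
estimate_draws = np.sum(cond_means * weights, axis=1)
lo, hi = np.quantile(estimate_draws, [0.025, 0.975])
print(f"posterior mean {estimate_draws.mean():.3f}, "
      f"95% interval ({lo:.3f}, {hi:.3f})")
```

The interval width reflects both sources of uncertainty at once: noise in the per-period means and noise in the period weights.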


Why model calibration matters and how to achieve it

The Unofficial Google Data Science Blog

The numerical value of the signal became decoupled from the event it was measuring, even as the ordinal value remained unchanged. $\bar{\pi}(1 - \bar{\pi})$: this is the irreducible loss due to uncertainty. This shows why $\Pr(\mathrm{Spam}) = 0.1$ isn’t good enough: it optimizes the calibration term, but pays the price in sharpness.
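The trade-off is easy to see with Brier scores: always predicting the base rate is perfectly calibrated, and its expected loss is exactly the irreducible term $\bar{\pi}(1 - \bar{\pi})$, while a forecaster that is both calibrated and sharp scores strictly better. A sketch under an assumed setup (the two-group mixture below is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
pi_bar = 0.1  # assumed base rate of spam

# A sharp, calibrated forecaster: two hypothetical groups with predicted
# probabilities 0.02 and 0.5, mixed so the overall mean is about 0.1.
p = np.where(rng.random(n) < 0.833, 0.02, 0.5)
y = (rng.random(n) < p).astype(float)  # outcomes drawn at rate p: calibrated

brier_sharp = np.mean((p - y) ** 2)
brier_const = np.mean((pi_bar - y) ** 2)  # constant base-rate forecast
print(brier_sharp, brier_const, pi_bar * (1 - pi_bar))
```

The constant forecast's Brier score lands at roughly $\bar{\pi}(1 - \bar{\pi}) \approx 0.09$, the irreducible term; the sharper calibrated forecast scores lower because it also earns the resolution term.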
