
Towards optimal experimentation in online systems

The Unofficial Google Data Science Blog

If $Y$ at that candidate point is (statistically and practically) significantly better than at our current operating point, and the point is deemed acceptable, we update the system parameters to this better value. Crucially, the procedure takes into account the uncertainty inherent in our experiments. Why experiment with several parameters concurrently?
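A minimal sketch of the decision rule the excerpt describes, assuming hypothetical metric samples at the current and candidate operating points and a made-up practical-significance threshold:

```python
import numpy as np
from scipy import stats

# Hypothetical metric observations at the current and candidate operating points.
current = np.array([0.112, 0.108, 0.115, 0.110, 0.109])
candidate = np.array([0.121, 0.118, 0.125, 0.119, 0.123])

PRACTICAL_DELTA = 0.005  # smallest improvement worth acting on (assumed)

# Difference in means with a 95% confidence interval (Welch-style standard error).
diff = candidate.mean() - current.mean()
se = np.sqrt(candidate.var(ddof=1) / len(candidate) + current.var(ddof=1) / len(current))
dof = min(len(candidate), len(current)) - 1  # conservative degrees of freedom
ci_low, ci_high = diff + np.array([-1, 1]) * stats.t.ppf(0.975, dof) * se

# Update the operating point only if the improvement is both statistically
# and practically significant, i.e. the whole interval clears the threshold.
if ci_low > PRACTICAL_DELTA:
    print(f"Adopt candidate: diff={diff:.4f}, 95% CI=({ci_low:.4f}, {ci_high:.4f})")
else:
    print(f"Keep current point: 95% CI=({ci_low:.4f}, {ci_high:.4f})")
```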


Changing assignment weights with time-based confounders

The Unofficial Google Data Science Blog

One reason to do ramp-up is to mitigate the risk of never-before-seen arms. For example, imagine a fantasy football site is considering displaying advanced player statistics. A ramp-up strategy may mitigate the risk of upsetting the site's loyal users, who perhaps have strong preferences for the current statistics that are shown.
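A minimal sketch of what such a time-based ramp-up could look like; the schedule of assignment weights, arm names, and seeding scheme are all assumptions for illustration:

```python
import random

def rampup_weight(day, schedule=((0, 0.01), (3, 0.05), (7, 0.20), (14, 0.50))):
    """Return the treatment assignment probability for a given day of the ramp-up.

    The schedule is hypothetical: start tiny to limit exposure to a
    never-before-seen arm, then widen as confidence in the new arm grows.
    """
    weight = 0.0
    for start_day, w in schedule:
        if day >= start_day:
            weight = w
    return weight

def assign(user_id, day):
    """Assign a user to the new experience with the current ramp-up weight."""
    random.seed(f"{user_id}-ramp")  # stable per-user draw across days
    return "advanced_stats" if random.random() < rampup_weight(day) else "current_stats"

print(assign("user_42", day=1), assign("user_42", day=10))
```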


The trinity of errors in applying confidence intervals: An exploration using Statsmodels

O'Reilly on Data

Practitioners in all social sciences, especially financial economics, use confidence intervals to quantify the uncertainty in their estimates and predictions. Because of this trifecta of errors, we need dynamic models that quantify the uncertainty inherent in our financial estimates and predictions.
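A minimal Statsmodels sketch of the kind of interval the excerpt refers to, on simulated data; the regression setup, true slope, and noise level are assumptions for illustration:

```python
import numpy as np
import statsmodels.api as sm

# Simulated data standing in for a financial estimate (e.g., a market beta).
rng = np.random.default_rng(0)
x = rng.normal(size=250)
y = 0.8 * x + rng.normal(scale=0.5, size=250)

# Ordinary least squares fit with an intercept.
model = sm.OLS(y, sm.add_constant(x)).fit()

# 95% confidence intervals for the intercept and slope estimates.
print(model.conf_int(alpha=0.05))
```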