
Towards optimal experimentation in online systems

The Unofficial Google Data Science Blog

If $Y$ at that point is (statistically and practically) significantly better than our current operating point, and that point is deemed acceptable, we update the system parameters to this better value. Crucially, this approach takes into account the uncertainty inherent in our experiments.


Changing assignment weights with time-based confounders

The Unofficial Google Data Science Blog

For example, imagine a fantasy football site is considering displaying advanced player statistics. A ramp-up strategy may mitigate the risk of upsetting the site’s loyal users who perhaps have strong preferences for the current statistics that are shown. We offer two examples where this may be the case.


Trending Sources


Measuring Validity and Reliability of Human Ratings

The Unofficial Google Data Science Blog

Editor's note: The relationship between reliability and validity is somewhat analogous to that between the notions of statistical uncertainty and representational uncertainty introduced in an earlier post. But for more complicated metrics like xRR, our preference is to bootstrap when measuring uncertainty.
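The excerpt's suggestion to bootstrap the uncertainty of a rating metric can be sketched as follows. This is a generic percentile bootstrap, not the article's xRR implementation; the `ratings` data and the mean-rating metric are invented for illustration, and any metric function could be substituted.

```python
import random

def bootstrap_ci(data, metric, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for an arbitrary metric.

    Resamples the data with replacement n_boot times, recomputes the
    metric on each resample, and returns the (alpha/2, 1 - alpha/2)
    percentiles of the resulting distribution.
    """
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(
        metric([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical human ratings on a 1-5 scale (illustrative only).
ratings = [3, 4, 4, 5, 2, 4, 3, 5, 4, 4]
mean_rating = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(ratings, mean_rating)
```

The percentile bootstrap is attractive for metrics like xRR precisely because it requires no closed-form variance formula: only the ability to recompute the metric on resampled data.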


The trinity of errors in applying confidence intervals: An exploration using Statsmodels

O'Reilly on Data

Because of this trifecta of errors, we need dynamic models that quantify the uncertainty inherent in our financial estimates and predictions. Practitioners in all social sciences, especially financial economics, use confidence intervals to quantify the uncertainty in their estimates and predictions.