
Towards optimal experimentation in online systems

The Unofficial Google Data Science Blog

If $Y$ at that point is (statistically and practically) significantly better than our current operating point, and that point is deemed acceptable, we update the system parameters to this better value. This blog post discusses such a comprehensive approach, which is used at YouTube. Indeed, such an approach is tractable and often used.
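A minimal sketch of that decision rule (assuming a two-sample setup and a hypothetical practical-significance threshold `min_effect`; neither is specified in the post):

```python
import numpy as np
from scipy import stats

def should_update(y_current, y_candidate, alpha=0.05, min_effect=0.01):
    """Update system parameters only if the candidate operating point is
    both statistically and practically significantly better."""
    # One-sided Welch t-test: is the candidate's mean response higher?
    t, p = stats.ttest_ind(y_candidate, y_current, equal_var=False,
                           alternative="greater")
    statistically_better = p < alpha
    # Practical significance: the observed lift must also exceed a
    # minimum effect size that matters in practice.
    practically_better = (np.mean(y_candidate) - np.mean(y_current)) > min_effect
    return statistically_better and practically_better

rng = np.random.default_rng(0)
y_cur = rng.normal(0.10, 0.05, size=5000)   # current operating point
y_new = rng.normal(0.12, 0.05, size=5000)   # candidate operating point
print(should_update(y_cur, y_new))
```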


Our quest for robust time series forecasting at scale

The Unofficial Google Data Science Blog

Quantification of forecast uncertainty via simulation-based prediction intervals. Facebook in a recent blog post unveiled Prophet, which is also a regression-based forecasting tool (accessed on 20 March 2017). A statistical forecasting system should not lack uncertainty quantification.
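A minimal sketch of simulation-based prediction intervals, assuming a simple AR(1) model fitted elsewhere (the post does not specify the model; the coefficients below are illustrative):

```python
import numpy as np

def simulate_prediction_intervals(last_value, phi, sigma, horizon=12,
                                  n_sims=10_000, coverage=0.80, seed=0):
    """Propagate forecast uncertainty by simulating many future paths
    from a fitted AR(1) model and taking empirical quantiles."""
    rng = np.random.default_rng(seed)
    paths = np.empty((n_sims, horizon))
    y = np.full(n_sims, last_value, dtype=float)
    for h in range(horizon):
        y = phi * y + rng.normal(0.0, sigma, size=n_sims)
        paths[:, h] = y
    lo, hi = (1 - coverage) / 2, 1 - (1 - coverage) / 2
    return (np.quantile(paths, lo, axis=0),   # lower band
            np.quantile(paths, 0.5, axis=0),  # median forecast
            np.quantile(paths, hi, axis=0))   # upper band

lower, median, upper = simulate_prediction_intervals(100.0, phi=0.9, sigma=2.0)
```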



Changing assignment weights with time-based confounders

The Unofficial Google Data Science Blog

For example, imagine a fantasy football site is considering displaying advanced player statistics. A ramp-up strategy may mitigate the risk of upsetting the site’s loyal users, who perhaps have strong preferences for the current statistics that are shown. One reason to do ramp-up is to mitigate the risk of never-before-seen arms.
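A minimal sketch of such a ramp-up, assuming a hypothetical linear schedule from 1% to 50% treatment traffic (the post does not prescribe a schedule):

```python
import hashlib

def treatment_weight(day, ramp_days=14, start=0.01, end=0.50):
    """Linearly ramp the probability of assignment to the new arm."""
    frac = min(max(day / ramp_days, 0.0), 1.0)
    return start + frac * (end - start)

def assign(user_id, day):
    """Deterministic hash-based assignment: each user's bucket is stable,
    but as the weight grows more buckets cross the cutoff into treatment.
    That the weight changes over time is exactly the time-based
    confounding the post analyzes."""
    h = int(hashlib.md5(str(user_id).encode()).hexdigest(), 16)
    bucket = (h % 10_000) / 10_000  # uniform in [0, 1)
    return "treatment" if bucket < treatment_weight(day) else "control"

print(assign("user_42", day=3))
```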


Measuring Validity and Reliability of Human Ratings

The Unofficial Google Data Science Blog

That’s the focus of this blog post. Editor's note: The relationship between reliability and validity is somewhat analogous to that between the notions of statistical uncertainty and representational uncertainty introduced in an earlier post. To help ground these terms, imagine you have a bathroom scale.
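To make the scale analogy concrete, a small simulation (illustrative numbers, not from the post) contrasting a reliable-but-invalid scale with a valid-but-unreliable one:

```python
import numpy as np

rng = np.random.default_rng(1)
true_weight = 70.0  # kg

# Reliable but not valid: very consistent readings, constant bias.
biased_scale = true_weight + 5.0 + rng.normal(0.0, 0.2, size=1000)
# Valid but not reliable: unbiased on average, high variance.
noisy_scale = true_weight + rng.normal(0.0, 5.0, size=1000)

for name, readings in [("biased", biased_scale), ("noisy", noisy_scale)]:
    print(f"{name}: mean error = {readings.mean() - true_weight:+.2f} kg, "
          f"std = {readings.std():.2f} kg")
```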


Fitting Bayesian structural time series with the bsts R package

The Unofficial Google Data Science Blog

by STEVEN L. SCOTT. Time series data are everywhere, but time series modeling is a fairly specialized area within statistics and data science. The model matrices may contain parameters in the statistical sense, but often they simply contain strategically placed 0's and 1's indicating which bits of $\alpha_t$ are relevant for a particular computation.
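To illustrate with a hand-rolled numpy sketch (not the bsts API): in a local-linear-trend-plus-seasonal state-space model, the observation matrix $Z$ contains no estimated parameters, just 0's and 1's that pick components out of the state vector $\alpha_t$.

```python
import numpy as np

# State vector for local linear trend + weekly seasonal (7 seasons):
# alpha_t = [level, slope, s_1, ..., s_6]  -> 8 elements
n_state = 8

# Observation equation: y_t = Z @ alpha_t + noise.
# Z just selects the level and the current seasonal effect.
Z = np.zeros(n_state)
Z[0] = 1.0  # pick out the level
Z[2] = 1.0  # pick out the current seasonal component

alpha_t = np.array([10.0, 0.1, 1.5, -0.5, 0.2, -0.8, 0.3, -0.7])
y_t = Z @ alpha_t  # = level + seasonal = 11.5
print(y_t)
```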


The trinity of errors in applying confidence intervals: An exploration using Statsmodels

O'Reilly on Data

Recall from my previous blog post that all financial models are at the mercy of the Trinity of Errors, namely: errors in model specifications, errors in model parameter estimates, and errors resulting from the failure of a model to adapt to structural changes in its environment. The interval $[-a, a]$ is called a 90% confidence interval.
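A minimal sketch of computing such a 90% confidence interval with Statsmodels (simulated data; the post's financial example is not reproduced here):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
x = rng.normal(size=500)
y = 0.8 * x + rng.normal(scale=0.5, size=500)

X = sm.add_constant(x)        # add the intercept column
model = sm.OLS(y, X).fit()
# alpha=0.10 yields a 90% confidence interval for each coefficient.
print(model.conf_int(alpha=0.10))
```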


Fact-based Decision-making

Peter James Thomas

This piece was prompted by both Olaf’s question and a recent article by my friend Neil Raden on his Silicon Angle blog, “Performance management: Can you really manage what you measure?” However – as is often the case with issues I deal with on this blog – fact-based decision-making is easier to say than it is to achieve.
