Towards optimal experimentation in online systems

The Unofficial Google Data Science Blog

If $Y$ at that point is (statistically and practically) significantly better than our current operating point, and that point is deemed acceptable, we update the system parameters to this better value. Crucially, it takes into account the uncertainty inherent in our experiments. Why experiment with several parameters concurrently?
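The update rule described here can be sketched as a decision function: move to the new operating point only when the confidence interval for the improvement clears both zero (statistical significance) and a practical-significance threshold. A minimal illustrative sketch in Python; the function name, the fixed-z normal interval, and the threshold argument are assumptions for illustration, not the post's exact procedure:

```python
import math

def should_update(y_new, y_cur, min_improvement, z=1.96):
    """Hypothetical decision rule: update the system parameters only if
    the lower end of an approximate confidence interval for the mean
    difference exceeds a practical-significance threshold."""
    n1, n2 = len(y_new), len(y_cur)
    m1 = sum(y_new) / n1
    m2 = sum(y_cur) / n2
    v1 = sum((x - m1) ** 2 for x in y_new) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in y_cur) / (n2 - 1)
    se = math.sqrt(v1 / n1 + v2 / n2)
    lower = (m1 - m2) - z * se
    # Update only if even the pessimistic end of the interval beats the
    # practical threshold, so the decision accounts for uncertainty.
    return lower > min_improvement
```

Requiring the interval's lower bound, rather than the point estimate, to clear the threshold is one simple way to fold experimental uncertainty into the decision.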

Our quest for robust time series forecasting at scale

The Unofficial Google Data Science Blog

For us, demand for forecasts emerged from a determination to better understand business growth and health, more efficiently conduct day-to-day operations, and optimize longer-term resource planning and allocation decisions. Quantification of forecast uncertainty via simulation-based prediction intervals.
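Simulation-based prediction intervals can be sketched as: simulate many plausible future paths by resampling historical forecast residuals, then take empirical quantiles at each horizon step. A hypothetical Python sketch (the function and its arguments are illustrative, and it assumes random-walk-style error accumulation, not the blog's actual forecasting system):

```python
import numpy as np

def simulate_prediction_intervals(point_forecast, residuals, horizon,
                                  n_sims=1000, level=0.8, seed=0):
    """Bootstrap historical residuals to build simulated forecast paths,
    then return empirical lower/upper quantiles per horizon step."""
    rng = np.random.default_rng(seed)
    # Each simulated path = point forecast + accumulated resampled noise.
    noise = rng.choice(residuals, size=(n_sims, horizon), replace=True)
    paths = np.asarray(point_forecast)[:horizon] + np.cumsum(noise, axis=1)
    lo, hi = np.quantile(paths, [(1 - level) / 2, (1 + level) / 2], axis=0)
    return lo, hi
```

Because the intervals come from simulated paths rather than a closed-form formula, they widen naturally with horizon and adapt to the empirical error distribution.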

Data Science, Past & Future

Domino Data Lab

He was saying this doesn’t belong just in statistics. It involved a lot of work with applied math, some depth in statistics and visualization, and also a lot of communication skills. The problems down in the mature bucket, those are optimizations, they aren’t showstoppers. Tukey did this paper.

Fitting Bayesian structural time series with the bsts R package

The Unofficial Google Data Science Blog

Time series data are everywhere, but time series modeling is a fairly specialized area within statistics and data science. They may contain parameters in the statistical sense, but often they simply contain strategically placed 0's and 1's indicating which bits of $\alpha_t$ are relevant for a particular computation. by STEVEN L. SCOTT
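The "strategically placed 0's and 1's" refers to the structural matrices of a state space model. A small NumPy illustration (bsts itself is an R package; the dimensions and the dummy-seasonal layout here are an illustrative assumption) of how the observation vector picks out the relevant pieces of the state $\alpha_t$:

```python
import numpy as np

# Observation vector Z for a local-level + dummy seasonal (period 4) model.
# The state is alpha_t = [level, s_t, s_{t-1}, s_{t-2}]; the 1's simply
# select which components contribute to the observation y_t = Z @ alpha_t.
Z = np.array([1.0, 1.0, 0.0, 0.0])

# Transition matrix T: the level follows a random walk (top-left 1); the
# seasonal block's -1 row makes the new seasonal effect the negative sum
# of the previous ones, and the identity sub-block shifts past seasonal
# states down one slot.
T = np.array([
    [1.0,  0.0,  0.0,  0.0],
    [0.0, -1.0, -1.0, -1.0],
    [0.0,  1.0,  0.0,  0.0],
    [0.0,  0.0,  1.0,  0.0],
])

alpha = np.array([10.0, 2.0, -1.0, -1.5])  # example state
y_hat = Z @ alpha  # level plus current seasonal effect
```

Nothing in Z or the seasonal block of T is estimated; those entries exist only to route the right state components into each computation.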

Using random effects models in prediction problems

The Unofficial Google Data Science Blog

We often use statistical models to summarize the variation in our data, and random effects models are well suited for this — they are a form of ANOVA after all. In the context of prediction problems, another benefit is that the models produce an estimate of the uncertainty in their predictions: the predictive posterior distribution.
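The predictive posterior mentioned above can be sketched for the simplest normal-normal random effects model, where the group-level posterior is conjugate and predictive draws come by simulation. A hedged sketch with the variance components and grand mean treated as known, an assumption made here for brevity (in practice they would be estimated or given priors):

```python
import numpy as np

def group_posterior_predictive(y_group, sigma2, tau2, mu,
                               n_draws=5000, seed=0):
    """Posterior predictive draws for one group under a normal-normal
    random effects model:
        theta_g ~ N(mu, tau2)        (group random effect)
        y | theta_g ~ N(theta_g, sigma2)
    with sigma2, tau2, mu assumed known for this illustration."""
    rng = np.random.default_rng(seed)
    n = len(y_group)
    # Conjugate posterior for the group effect theta_g: precision-weighted
    # combination of the group mean and the prior mean mu.
    post_prec = n / sigma2 + 1 / tau2
    post_var = 1 / post_prec
    post_mean = post_var * (y_group.sum() / sigma2 + mu / tau2)
    theta = rng.normal(post_mean, np.sqrt(post_var), n_draws)
    # Predictive draws for a new observation in this group: the spread of
    # these draws quantifies the prediction uncertainty directly.
    return rng.normal(theta, np.sqrt(sigma2))
```

The returned draws combine both sources of uncertainty, the group effect's posterior and the observation noise, which is what makes the predictive posterior a richer summary than a point prediction.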