
Towards optimal experimentation in online systems

The Unofficial Google Data Science Blog

If the relationship of $X$ to $Y$ can be approximated as quadratic (or any polynomial), and the objective and constraints are linear in $Y$, then there is a way to express the optimization as a quadratically constrained quadratic program (QCQP). Crucially, it takes into account the uncertainty inherent in our experiments.
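As a rough illustration of that formulation (a hypothetical sketch, not the post's code): suppose each metric's response to the tunable parameters has been fit as a quadratic surface from experiment data. If those surfaces are concave, the resulting QCQP is convex and can be handed to an off-the-shelf solver such as CVXPY. All matrices and thresholds below are made-up placeholders.

```python
# Hypothetical sketch: optimizing fitted quadratic response surfaces as a QCQP.
# Assumes each metric y_i(x) ~ x' P_i x + q_i' x + r_i with P_i negative
# semidefinite (concave response), so the program is convex.
import numpy as np
import cvxpy as cp

d = 3  # number of system parameters we can tune

# Toy fitted surfaces (placeholders for estimates from experiments).
P_obj = -np.eye(d)             # concave objective surface
q_obj = np.array([1.0, 0.5, -0.2])
P_con = -2.0 * np.eye(d)       # concave constraint surface
q_con = np.array([0.2, 0.1, 0.3])
r_con = 1.0
floor = 0.5                    # the constrained metric must stay above this

x = cp.Variable(d)
objective = cp.Maximize(cp.quad_form(x, P_obj) + q_obj @ x)
constraints = [
    cp.quad_form(x, P_con) + q_con @ x + r_con >= floor,  # concave >= const: convex
    cp.norm(x, "inf") <= 1.0,  # keep parameters inside a trust region
]
prob = cp.Problem(objective, constraints)
prob.solve()
print("optimal parameters:", x.value)
```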


Using random effects models in prediction problems

The Unofficial Google Data Science Blog

In the context of prediction problems, another benefit is that the models produce an estimate of the uncertainty in their predictions: the predictive posterior distribution. Often our data can be stored or visualized as a table.
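To make the "predictive posterior" point concrete, here is a minimal assumed example (not taken from the post): a Gaussian one-way random effects model with known variance components, where both the shrunken group estimate and the uncertainty of a new prediction have closed forms.

```python
# Assumed illustrative sketch: y_ij ~ N(mu + b_j, sigma^2), b_j ~ N(0, tau^2).
# With known variances, the posterior for a group effect and the predictive
# posterior for a new observation from that group are conjugate Gaussians.
import numpy as np

sigma2, tau2, mu = 1.0, 0.5, 0.0      # assumed known variance components / grand mean
y_group = np.array([1.2, 0.8, 1.5])   # observations from one group
n = len(y_group)

# Posterior for the group effect b_j (conjugate Gaussian update):
post_var = 1.0 / (n / sigma2 + 1.0 / tau2)
post_mean = post_var * (n / sigma2) * (y_group.mean() - mu)

# Predictive posterior for a new observation from this group:
pred_mean = mu + post_mean
pred_var = post_var + sigma2          # posterior uncertainty plus observation noise
print(f"predictive posterior: N({pred_mean:.3f}, {pred_var:.3f})")
```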


Trending Sources


Data Science, Past & Future

Domino Data Lab

He also really informed a lot of the early thinking about data visualization. It involved a lot of work with applied math, some depth in statistics and visualization, and also a lot of communication skills. The problems down in the mature bucket, those are optimizations, they aren’t showstoppers. How could that make sense?


Fitting Bayesian structural time series with the bsts R package

The Unofficial Google Data Science Blog

If both variances are positive then the optimal estimator of $y_{t+1}$ winds up being "exponential smoothing," where past data are forgotten at an exponential rate determined by the ratio of the two variances. There are also plotting functions that you can use to visualize the regression coefficients.
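A small sketch of that result (assumed for illustration, not code from the post or the bsts package): for the local level model, the steady-state Kalman gain is a function of the ratio of the two variances, and iterating the one-step-ahead forecast is exactly exponential smoothing.

```python
# Assumed sketch of the local level model
#   y_t = mu_t + eps_t,        eps_t ~ N(0, sigma_eps^2)
#   mu_{t+1} = mu_t + eta_t,   eta_t ~ N(0, sigma_eta^2)
# The steady-state Kalman gain k depends only on q = sigma_eta^2 / sigma_eps^2,
# and the forecast update yhat_{t+1} = yhat_t + k*(y_t - yhat_t) is exponential
# smoothing with past data forgotten at rate (1 - k).
import numpy as np

def steady_state_gain(q: float) -> float:
    # Steady state of the Riccati recursion p = q + p / (p + 1), with k = p / (p + 1).
    p = (q + np.sqrt(q * q + 4.0 * q)) / 2.0
    return p / (p + 1.0)

def exp_smooth_forecast(y: np.ndarray, q: float) -> float:
    k = steady_state_gain(q)
    yhat = y[0]
    for obs in y[1:]:
        yhat = yhat + k * (obs - yhat)  # discount older observations exponentially
    return yhat  # one-step-ahead forecast of y_{t+1}

y = np.array([1.0, 1.2, 0.9, 1.4, 1.1])
print(exp_smooth_forecast(y, q=0.25))
```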