Experiment design and modeling for long-term studies in ads

The Unofficial Google Data Science Blog

by HENNING HOHNHOLD, DEIRDRE O'BRIEN, and DIANE TANG

In this post we discuss the challenges in measuring and modeling the long-term effect of ads on user behavior. A/B testing is used widely in information technology companies to guide product development and improvements.

Variance and significance in large-scale online services

The Unofficial Google Data Science Blog

And a large-scale online service (LSOS) is awash in data, right? Well, it turns out that, depending on what it cares to measure, an LSOS might not have enough data. The practical consequence is that we can't afford to be sloppy about measuring statistical significance and confidence intervals.
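To make the excerpt's point concrete, here is a minimal back-of-the-envelope sketch, assuming a two-sample test of proportions with equal arms; the baseline rate, lift, and power targets below are illustrative assumptions, not numbers from the post.

```python
# A minimal sketch: sample size needed to detect a tiny relative
# effect. All numbers here are illustrative assumptions.
from scipy.stats import norm

p = 0.05              # assumed baseline conversion rate
delta = p * 0.01      # absolute effect for a 1% relative lift
alpha, power = 0.05, 0.80

z_a = norm.ppf(1 - alpha / 2)   # two-sided critical value
z_b = norm.ppf(power)           # power requirement

# Normal-approximation sample size per arm.
n_per_arm = (z_a + z_b) ** 2 * 2 * p * (1 - p) / delta ** 2
print(f"users needed per arm: {n_per_arm:,.0f}")  # ~3 million
```

Even at roughly three million users per arm, this covers a single metric and a single comparison; slicing by country or platform multiplies the requirement, which is how a service "awash in data" can still be data-starved.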

Changing assignment weights with time-based confounders

The Unofficial Google Data Science Blog

Another reason to use ramp-up is to test whether a website's infrastructure can handle deploying a new arm to all of its users. The site owners want to be sure the infrastructure can support the feature while testing whether engagement increases enough to justify the investment. We offer two examples where this may be the case.
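As a rough illustration of ramp-up with time-varying assignment weights, here is a minimal sketch; the schedule, hashing scheme, and names are hypothetical, not from the post.

```python
# A minimal sketch of ramp-up with time-varying assignment weights.
# The three-day schedule and function names are hypothetical.
import hashlib

ramp_schedule = {1: 0.01, 2: 0.10, 3: 0.50}  # treatment fraction by day

def assign(user_id: str, day: int) -> str:
    """Deterministically bucket a user against the day's weight."""
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 10_000
    return "treatment" if bucket < ramp_schedule[day] * 10_000 else "control"

# Earlier treatment buckets stay below every later threshold, so users
# keep their assignment as the ramp grows. But because the weights
# change over time, pooling raw outcomes across days confounds arm with
# time (e.g., weekday effects); compare arms within each period, then
# combine the per-period estimates.
print(assign("user-42", day=1), assign("user-42", day=3))
```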

ML internals: Synthetic Minority Oversampling (SMOTE) Technique

Domino Data Lab

Working with highly imbalanced data can be problematic in several respects. Consider distorted performance metrics: in a highly imbalanced dataset, say a binary dataset with a class ratio of 98:2, an algorithm that always predicts the majority class and completely ignores the minority class will still be 98% accurate.
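As a hedged illustration of the technique named in the title, here is a minimal SMOTE sketch assuming the imbalanced-learn package; the synthetic 98:2 dataset mirrors the ratio quoted above and is not the post's data.

```python
# A minimal SMOTE sketch on an illustrative 98:2 synthetic dataset.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=10_000, weights=[0.98, 0.02],
                           random_state=0)
print(Counter(y))                 # roughly {0: 9800, 1: 200}

# SMOTE synthesizes minority samples by interpolating between a
# minority point and one of its k nearest minority-class neighbours.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y_res))             # both classes now equally represented
```

Note that resampling belongs on the training split only, so synthetic points never leak into evaluation.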

Using Empirical Bayes to approximate posteriors for large "black box" estimators

The Unofficial Google Data Science Blog

Posteriors are useful to understand the system, measure accuracy, and make better decisions. Methods like the Poisson bootstrap can help us measure the variability of $t$, but they don't give us posteriors either, particularly since good high-dimensional estimators aren't unbiased.
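For readers unfamiliar with the Poisson bootstrap mentioned above, here is a minimal sketch of estimating the variability of an estimator $t$; the data and the statistic are illustrative stand-ins, not the post's estimator.

```python
# A minimal Poisson bootstrap sketch for the variability of an
# estimator t; data and statistic are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
data = rng.lognormal(size=100_000)   # stand-in for per-unit metrics

def t(x, w):
    """Weighted version of the statistic of interest (here, a mean)."""
    return np.average(x, weights=w)

# Each replicate reweights every observation with an independent
# Poisson(1) draw, a streaming-friendly stand-in for resampling.
reps = np.array([t(data, rng.poisson(1.0, size=data.size))
                 for _ in range(200)])
print("bootstrap standard error of t:", reps.std(ddof=1))
```

Because the Poisson(1) weights are drawn independently per record, replicates can be computed in a single pass over sharded data, which is what makes the method attractive at scale.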

Explaining black-box models using attribute importance, PDPs, and LIME

Domino Data Lab

After forming the X and y variables, we split the data into training and test sets. Looking at the target vector in the training subset, we notice that our training data is highly imbalanced. Attribute importance generally relies on measuring the entropy in the change of predictions given a perturbation of a feature; see Wei et al.
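A minimal sketch of that split-then-inspect workflow, paired with a perturbation-based importance via scikit-learn's permutation_importance; the dataset, model, and seeds are illustrative, not the post's exact pipeline.

```python
# A minimal sketch: stratified split, imbalance check, then a
# perturbation-based feature importance. Illustrative data and model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

print(np.bincount(y_tr))   # target vector is heavily skewed to class 0

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much does shuffling (perturbing) one
# feature degrade held-out predictions?
imp = permutation_importance(model, X_te, y_te, n_repeats=10,
                             random_state=0)
print(imp.importances_mean.round(3))
```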
