
Experiment design and modeling for long-term studies in ads

The Unofficial Google Data Science Blog

by HENNING HOHNHOLD, DEIRDRE O'BRIEN, and DIANE TANG In this post we discuss the challenges in measuring and modeling the long-term effect of ads on user behavior. We describe experiment designs which have proven effective for us and discuss the subtleties of trying to generalize the results via modeling.


Variance and significance in large-scale online services

The Unofficial Google Data Science Blog

The fact that a service can have millions of users and billions of interactions gives rise to both big data and methods that are effective with big data. Of particular interest to LSOS data scientists are modeling and prediction techniques that keep improving with more data. We have $K(\alpha, \beta) = 21.01$.



Changing assignment weights with time-based confounders

The Unofficial Google Data Science Blog

Another reason to use ramp-up is to test whether a website's infrastructure can handle deploying a new arm to all of its users. The site owners want to confirm that they have the infrastructure to support the feature while testing whether engagement increases enough to justify it. We offer two examples where this may be the case.
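The hazard the article's title refers to can be shown in a toy simulation (all numbers here are hypothetical assumptions, not from the post): when the treatment's assignment weight grows over days while the baseline metric drifts, naively pooling all data biases the comparison, whereas comparing arms within each day and averaging does not.

```python
import random

random.seed(0)

# Hypothetical ramp-up: the treatment share grows day by day while the
# baseline metric drifts upward (a time-based confounder).
weights = [0.01, 0.05, 0.20, 0.50]      # treatment assignment weight per day
daily_baseline = [1.0, 1.1, 1.2, 1.3]   # metric drifts up over time
effect = 0.05                           # true additive treatment effect

control, treatment, by_day = [], [], []
for w, base in zip(weights, daily_baseline):
    day_c, day_t = [], []
    for _ in range(10000):
        if random.random() < w:
            day_t.append(base + effect + random.gauss(0, 0.1))
        else:
            day_c.append(base + random.gauss(0, 0.1))
    control += day_c
    treatment += day_t
    by_day.append((sum(day_t) / len(day_t), sum(day_c) / len(day_c)))

# Naive pooling over-weights treatment on later, higher-baseline days.
naive = sum(treatment) / len(treatment) - sum(control) / len(control)
# Stratifying by day removes the time-based confounding.
stratified = sum(t - c for t, c in by_day) / len(by_day)
print(f"naive pooled estimate: {naive:.3f}")
print(f"day-stratified estimate: {stratified:.3f}")
```

With these assumed numbers the pooled estimate lands far above the true 0.05 effect, while the day-stratified estimate recovers it.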


ML internals: Synthetic Minority Oversampling (SMOTE) Technique

Domino Data Lab

In this article we discuss why fitting models on imbalanced datasets is problematic, and how class imbalance is typically addressed. Their tests are performed using C4.5-generated decision trees, E-state data (Hall et al., 1998), data from "Data mining for direct marketing: Problems and solutions" (1998), and others.
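The core of SMOTE can be sketched in a few lines: each synthetic minority sample is a random point on the segment between a minority example and one of its k nearest minority neighbours (a minimal NumPy sketch, not the full algorithm from the article):

```python
import numpy as np

def smote(X_minority, n_samples, k=5, seed=0):
    """Generate synthetic minority samples by interpolating between a
    minority point and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_samples):
        i = rng.integers(len(X_minority))
        x = X_minority[i]
        # Euclidean distances to all minority points
        d = np.linalg.norm(X_minority - x, axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the point itself
        nn = X_minority[rng.choice(neighbours)]
        gap = rng.random()                    # random point on the segment
        synthetic.append(x + gap * (nn - x))
    return np.array(synthetic)

# Toy minority class in 2-D
X_min = np.array([[1.0, 1.0], [1.2, 0.9], [0.9, 1.1],
                  [1.1, 1.2], [1.3, 1.0], [0.8, 0.95]])
X_new = smote(X_min, n_samples=10, k=3)
print(X_new.shape)  # (10, 2)
```

Because each synthetic point lies on a segment between two real minority points, it stays within the componentwise range of the original minority data.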


Using Empirical Bayes to approximate posteriors for large "black box" estimators

The Unofficial Google Data Science Blog

But most common machine learning methods don’t give posteriors, and many don’t have explicit probability models. More precisely, our model is that $\theta$ is drawn from a prior that depends on $t$, then $y$ comes from some known parametric family $f_\theta$. Here, our items are query-ad pairs. Calculate posterior quantities of interest.
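The idea can be illustrated with a normal-normal toy version (my own simplified assumption, not the post's actual model): bin items by the black-box score $t$, estimate the prior in each bin by the method of moments, then shrink each observation toward its bin's prior mean.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: a black-box model emits a coarse score t per item,
# theta is drawn from a normal prior whose mean depends on t, and we
# observe y ~ Normal(theta, sigma^2) with known noise sigma.
sigma = 1.0
n = 50000
t = rng.integers(0, 5, size=n)               # coarse model score (5 bins)
theta = 0.5 * t + rng.normal(0, 0.4, n)      # latent parameter, prior depends on t
y = theta + rng.normal(0, sigma, n)          # noisy observation

# Empirical Bayes: estimate the prior within each score bin by moments,
# then compute normal-normal posterior means (shrinkage toward the bin mean).
post_mean = np.empty(n)
for b in range(5):
    idx = t == b
    m = y[idx].mean()
    tau2 = max(y[idx].var() - sigma**2, 1e-6)  # prior variance estimate
    shrink = tau2 / (tau2 + sigma**2)
    post_mean[idx] = m + shrink * (y[idx] - m)

mse_raw = np.mean((y - theta) ** 2)
mse_post = np.mean((post_mean - theta) ** 2)
print(mse_raw, mse_post)
```

The shrunk posterior means estimate $\theta$ with much lower mean squared error than the raw observations, which is the payoff of borrowing strength across items with similar scores.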


Explaining black-box models using attribute importance, PDPs, and LIME

Domino Data Lab

In this article we cover explainability for black-box models and show how to use different methods from the Skater framework to provide insights into the inner workings of a simple credit scoring neural network model. Interest in the interpretation of machine learning models has accelerated rapidly over the last decade. See Ribeiro et al.
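Among the methods mentioned, a partial dependence plot is the simplest to compute by hand: fix one feature at each grid value, average the model's predictions over the empirical distribution of the remaining features. A minimal sketch (the logistic "model" here is a hypothetical stand-in, not the article's neural network):

```python
import numpy as np

def model(X):
    """Stand-in for a black-box credit model returning a probability-like score."""
    return 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))

def partial_dependence(model, X, feature, grid):
    """PDP: fix one feature at each grid value and average predictions
    over the empirical distribution of the other features."""
    pd_vals = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v
        pd_vals.append(model(Xv).mean())
    return np.array(pd_vals)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
grid = np.linspace(-2, 2, 9)
pdp = partial_dependence(model, X, feature=0, grid=grid)
print(pdp)
```

Since the stand-in model is monotonically increasing in feature 0, the resulting PDP curve rises across the grid; plotting `grid` against `pdp` gives the familiar one-feature dependence plot.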
