Variance and significance in large-scale online services

The Unofficial Google Data Science Blog

by AMIR NAJMI. Running live experiments on large-scale online services (LSOS) is an important aspect of data science. In this post we explore how and why we can be “data-rich but information-poor”. There are many reasons for the recent explosion of data and the resulting rise of data science.
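The “data-rich but information-poor” tension is easy to see in a power calculation: when the effects an LSOS cares about are tiny relative changes in a rare metric, the sample needed per arm can dwarf even massive traffic. Below is a minimal sketch, not from the post; the baseline rate and lift are illustrative assumptions.

```python
# Minimal sketch (not from the post): even an LSOS with millions of
# observations per day can lack the sample needed to detect a tiny
# relative change in a rare metric. Baseline rate and lift are made up.
from scipy.stats import norm

def two_proportion_n(p0, rel_lift, alpha=0.05, power=0.8):
    """Approximate per-arm sample size for a two-sided two-proportion z-test."""
    p1 = p0 * (1 + rel_lift)
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    var = p0 * (1 - p0) + p1 * (1 - p1)
    return var * (z_a + z_b) ** 2 / (p1 - p0) ** 2

# Detecting a 1% relative lift on a 0.5% click rate:
n = two_proportion_n(p0=0.005, rel_lift=0.01)
print(f"~{n:,.0f} users needed per arm")  # roughly 1.3e8 per arm
```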

Changing assignment weights with time-based confounders

The Unofficial Google Data Science Blog

One reason to do ramp-up is to mitigate the risk of never-before-seen arms. For example, imagine a fantasy football site is considering displaying advanced player statistics. A ramp-up strategy may mitigate the risk of upsetting the site’s loyal users, who perhaps have strong preferences for the current statistics that are shown.
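As a rough illustration of what a ramp-up looks like mechanically, here is a sketch (not the post’s code) in which a new arm’s assignment probability grows by day; the schedule and names are made up. Because the weight varies with time, naively pooling outcomes across days mixes the arm effect with day effects, which is the time-based confounding the post addresses.

```python
# Illustrative ramp-up sketch: the new arm's assignment weight rises
# over the first days of the experiment. Since weights change by day,
# arms should be compared within each period, not in a raw pool.
import random

RAMP_SCHEDULE = {  # day -> probability of assigning the new arm (made up)
    1: 0.01,
    2: 0.05,
    3: 0.20,
    4: 0.50,
}

def assign_arm(day: int, rng: random.Random) -> str:
    p_new = RAMP_SCHEDULE.get(day, 0.50)
    return "new_stats" if rng.random() < p_new else "current_stats"

rng = random.Random(0)
for day in range(1, 5):
    arms = [assign_arm(day, rng) for _ in range(10_000)]
    print(day, arms.count("new_stats") / len(arms))
```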

ML internals: Synthetic Minority Oversampling (SMOTE) Technique

Domino Data Lab

Working with highly imbalanced data can be problematic in several respects, starting with distorted performance metrics: in a highly imbalanced dataset, say a binary dataset with a class ratio of 98:2, an algorithm that always predicts the majority class and completely ignores the minority class will still be 98% correct. SMOTE (Chawla et al.) tackles this by oversampling the minority class with synthetic examples rather than simple duplicates.
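For intuition on the internals, a minimal SMOTE sketch under the usual formulation: each synthetic point is a random interpolation between a minority sample and one of its k nearest minority neighbors. This is an illustrative reimplementation, not the article’s code; for real work, the imbalanced-learn package provides a standard SMOTE implementation.

```python
# Minimal SMOTE sketch in the spirit of Chawla et al.: each synthetic
# point lies on the segment between a minority sample and one of its
# k nearest minority-class neighbors.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote(X_min, n_synthetic, k=5, seed=0):
    """Generate n_synthetic samples from minority-class matrix X_min."""
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)  # +1 because the
    _, idx = nn.kneighbors(X_min)                        # nearest point is itself
    samples = []
    for _ in range(n_synthetic):
        i = rng.integers(len(X_min))
        j = rng.choice(idx[i][1:])          # a neighbor, not the point itself
        gap = rng.random()                  # interpolation factor in [0, 1)
        samples.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(samples)

X_min = np.random.default_rng(1).normal(size=(20, 3))   # toy minority class
print(smote(X_min, n_synthetic=40).shape)               # (40, 3)
```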

Using Empirical Bayes to approximate posteriors for large "black box" estimators

The Unofficial Google Data Science Blog

Posteriors are useful to understand the system, measure accuracy, and make better decisions. Methods like the Poisson bootstrap can help us measure the variability of the estimate $t$, but they don’t give us posteriors, particularly since good high-dimensional estimators aren’t unbiased.
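The Poisson bootstrap the excerpt mentions replaces resampling-with-replacement by independent Poisson(1) weights per observation, which is convenient in one-pass or sharded computation. A small sketch (the data and statistic below are illustrative):

```python
# Poisson bootstrap sketch: instead of resampling n items with
# replacement, give each observation an independent Poisson(1) count,
# which is cheap to compute in a single pass over streaming or sharded
# data. This estimates the sampling variability of a statistic t; as
# the post notes, it is not a posterior for the underlying parameter.
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(size=100_000)        # illustrative data
t = x.mean()                           # the statistic of interest

replicates = []
for _ in range(200):
    w = rng.poisson(1.0, size=x.size)  # Poisson(1) weight per observation
    replicates.append(np.average(x, weights=w))

print(f"t = {t:.4f}, bootstrap SE = {np.std(replicates):.4f}")
```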

Explaining black-box models using attribute importance, PDPs, and LIME

Domino Data Lab

For this demo we’ll use the freely available Statlog (German Credit Data) Data Set, which can be downloaded from Kaggle. This dataset classifies customers, based on a set of attributes, into two credit risk groups: good or bad.
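A generic sketch of two of the three techniques the article names, using scikit-learn’s model-agnostic tools; a synthetic dataset stands in for the German credit data, and the random-forest model is an assumption rather than the article’s exact setup. (The lime package’s LimeTabularExplainer would cover the third technique but is omitted here to keep the sketch dependency-light.)

```python
# Sketch of attribute importance and partial dependence with
# scikit-learn; a synthetic dataset stands in for German Credit Data.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay, permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Attribute importance via permutation: shuffle one feature at a time
# and measure the drop in held-out score.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
print(imp.importances_mean.round(3))

# Partial dependence: average prediction as one feature varies.
PartialDependenceDisplay.from_estimator(model, X_te, features=[0, 1])
plt.show()
```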
