Performing Non-Compartmental Analysis with Julia and Pumas AI

Domino Data Lab

This tutorial will show how easy it is to integrate and use Pumas in the Domino Data Science Platform, and we will carry out a simple non-compartmental analysis using a freely available dataset. The Domino Data Science Platform empowers data scientists to develop and deliver models with open access to the tools they love.
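
As a rough illustration of the core quantities a non-compartmental analysis reports (not the Pumas/Julia workflow itself), the sketch below computes Cmax, Tmax, and AUC from a single concentration-time profile with the linear trapezoidal rule; the time and concentration values are invented for illustration.

```python
# Minimal, illustrative NCA-style summary of one concentration-time profile.
# This is NOT Pumas code; the data values and units are made up.
import numpy as np

time = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])   # hours (hypothetical)
conc = np.array([0.0, 1.8, 2.9, 2.4, 1.3, 0.5, 0.2])    # mg/L (hypothetical)

cmax = conc.max()                                        # peak concentration
tmax = time[conc.argmax()]                               # time of peak
# AUC(0-t) via the linear trapezoidal rule.
auc = np.sum(np.diff(time) * (conc[:-1] + conc[1:]) / 2)

print(f"Cmax={cmax:.2f} mg/L, Tmax={tmax:.1f} h, AUC(0-12h)={auc:.2f} mg*h/L")
```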

Experiment design and modeling for long-term studies in ads

The Unofficial Google Data Science Blog

by HENNING HOHNHOLD, DEIRDRE O'BRIEN, and DIANE TANG. In this post we discuss the challenges in measuring and modeling the long-term effect of ads on user behavior. A/B testing, however, has challenges and blind spots, such as the difficulty of identifying suitable metrics that give "works well" a measurable meaning.

Variance and significance in large-scale online services

The Unofficial Google Data Science Blog

by AMIR NAJMI. Running live experiments on large-scale online services (LSOS) is an important aspect of data science. In this post we explore how and why we can be “data-rich but information-poor”. There are many reasons for the recent explosion of data and the resulting rise of data science.

Changing assignment weights with time-based confounders

The Unofficial Google Data Science Blog

For this reason we don’t report uncertainty measures or statistical significance in the results of the simulation. Ramp-up solution: measure epoch and condition on its effect. If one wants to do full traffic ramp-up and use data from all epochs, one must use an adjusted estimator to get an unbiased estimate of the average reward in each arm.
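
One standard form of such an adjusted estimator (our notation and choice, not necessarily the post's exact formula) stratifies by epoch: average the reward within each (arm, epoch) cell, then combine across epochs with weights that do not depend on the arm,

$$\hat{\mu}_a = \sum_{t} w_t\, \bar{Y}_{a,t}, \qquad \bar{Y}_{a,t} = \frac{1}{n_{a,t}} \sum_{i:\ \mathrm{arm}(i)=a,\ \mathrm{epoch}(i)=t} Y_i,$$

where $Y_i$ is the observed reward, $n_{a,t}$ counts observations in arm $a$ during epoch $t$, and $w_t$ is epoch $t$'s share of total traffic. Because the weights $w_t$ are the same for every arm, time-varying assignment probabilities cannot tilt the comparison between arms.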

ML internals: Synthetic Minority Oversampling (SMOTE) Technique

Domino Data Lab

Working with highly imbalanced data can be problematic in several respects. Distorted performance metrics: in a highly imbalanced dataset, say a binary dataset with a class ratio of 98:2, an algorithm that always predicts the majority class and completely ignores the minority class will still be 98% correct.
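
To make the 98:2 point concrete, here is an illustrative sketch (not the article's own code): a trivial classifier that always predicts the majority class scores roughly 98% accuracy, while SMOTE-style oversampling, shown here via the imbalanced-learn library (a library choice of ours, not necessarily the article's), rebalances the training data so the minority class is no longer ignored.

```python
# Illustrative sketch of the accuracy paradox on a 98:2 dataset and of
# SMOTE oversampling; dataset and library choices are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=5000, weights=[0.98, 0.02], random_state=0)

# A "classifier" that always predicts the majority class is ~98% accurate.
majority = DummyClassifier(strategy="most_frequent").fit(X, y)
print("majority-class accuracy:", accuracy_score(y, majority.predict(X)))

# SMOTE synthesizes new minority-class examples so the classes are balanced.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("class counts after SMOTE:", np.bincount(y_res))
```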

Using Empirical Bayes to approximate posteriors for large "black box" estimators

The Unofficial Google Data Science Blog

Posteriors are useful to understand the system, measure accuracy, and make better decisions. Methods like the Poisson bootstrap can help us measure the variability of the estimator $t$, but they don’t give us posteriors, particularly since good high-dimensional estimators aren’t unbiased.
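
For readers unfamiliar with it, the Poisson bootstrap mentioned above can be sketched generically as follows (an illustration under our own assumptions, not the post's code): each observation gets an independent Poisson(1) weight in every replicate, which approximates resampling with replacement but streams easily over sharded data, and the spread of the replicated statistic estimates the variability of $t$.

```python
# Generic Poisson-bootstrap sketch (illustrative only; made-up data).
# Each replicate reweights every observation with an independent Poisson(1)
# count instead of drawing an explicit resample.
import numpy as np

rng = np.random.default_rng(0)
y = rng.exponential(scale=2.0, size=10_000)     # hypothetical observations

def statistic(values, weights):
    # Weighted mean as the example statistic t; any weighted statistic works.
    return np.sum(weights * values) / np.sum(weights)

reps = np.array([
    statistic(y, rng.poisson(1.0, size=y.size))
    for _ in range(1000)
])
print("estimate:", y.mean(), " bootstrap SE:", reps.std(ddof=1))
```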

LSOS experiments: how I learned to stop worrying and love the variability

The Unofficial Google Data Science Blog

Since the metric average differs from one hour of the day to the next, this is a source of variation in measuring the experimental effect. Let’s go back to our example of measuring the fraction of user sessions with purchase. Let $Y_i$ be the response measured on the $i$th user session.
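
A small simulation (purely illustrative, with invented rates) shows the point: if the per-session purchase probability varies by hour of day, the hourly metric averages scatter around the overall rate, and that between-hour variation is part of the variability an experiment's effect estimate has to contend with.

```python
# Illustrative simulation (made-up numbers): per-session purchase indicator
# Y_i ~ Bernoulli(p_hour), where p_hour varies over the 24 hours of the day.
import numpy as np

rng = np.random.default_rng(1)
hours = rng.integers(0, 24, size=200_000)               # hour of each session
p_hour = 0.02 + 0.01 * np.sin(2 * np.pi * hours / 24)   # hypothetical rates
y = rng.binomial(1, p_hour)                             # Y_i for each session

overall = y.mean()
hourly = np.array([y[hours == h].mean() for h in range(24)])
print("overall purchase fraction:", round(overall, 4))
print("hourly means range:", round(hourly.min(), 4), "to", round(hourly.max(), 4))
```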