
What you need to know about product management for AI

O'Reilly on Data

Machine learning adds uncertainty, and underneath that uncertainty lies further uncertainty in the development process itself. There may even be someone on your team who built a personalized video recommender before and can help scope and estimate the project requirements using that past experience as a point of reference.


Towards optimal experimentation in online systems

The Unofficial Google Data Science Blog

Here $X$ is a vector of system parameters (e.g., the weight given to Likes in our video recommendation algorithm) while $Y$ is a vector of outcome measures such as different metrics of user experience. Crucially, the approach takes into account the uncertainty inherent in our experiments. Figure 2: Spreading measurements out makes estimates of the model (slope of line) more accurate.
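The Figure 2 claim follows from the variance of a least-squares slope estimate, which shrinks as the measured parameter values are spread further apart. Below is a minimal simulation sketch of that idea (our own illustration, not code from the post; the function name, noise level, and true slope are assumptions), comparing the variability of the fitted slope for clustered versus spread-out parameter settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def slope_std(x, n_sims=2000, noise_sd=1.0, true_slope=0.5):
    """Simulate repeated experiments and return the std. dev. of the fitted slope."""
    slopes = []
    for _ in range(n_sims):
        y = true_slope * x + rng.normal(0, noise_sd, size=x.size)
        slopes.append(np.polyfit(x, y, 1)[0])  # fitted slope
    return np.std(slopes)

narrow = np.linspace(-0.5, 0.5, 20)   # parameter values clustered near the center
wide = np.linspace(-2.0, 2.0, 20)     # same number of measurements, spread out

print(slope_std(narrow))  # larger uncertainty in the estimated slope
print(slope_std(wide))    # smaller uncertainty: spreading measurements out helps
```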


Trending Sources


Variance and significance in large-scale online services

The Unofficial Google Data Science Blog

Unlike experimentation in some other areas, experiments in large-scale online services (LSOS) present a surprising challenge to statisticians: even though we operate in the realm of “big data”, the statistical uncertainty in our experiments can be substantial. We must therefore maintain statistical rigor in quantifying experimental uncertainty.
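To make the point concrete: detecting a small relative change in a low base-rate binary metric requires a sample size that grows with the inverse square of the effect. A rough back-of-the-envelope sketch (our own illustration using a standard two-proportion sample-size approximation; the function name and the example rates are made up):

```python
from math import sqrt
from scipy.stats import norm

def users_per_arm(base_rate, relative_lift, alpha=0.05, power=0.8):
    """Approximate users needed per arm to detect a relative lift in a binary metric."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    pooled = (p1 + p2) / 2
    # Standard two-proportion sample-size approximation
    n = ((z_a * sqrt(2 * pooled * (1 - pooled))
          + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return int(n) + 1

# Detecting a 1% relative lift on a 1% base rate takes millions of users per arm
print(users_per_arm(0.01, 0.01))
```

Even with “big data” traffic, effects that matter commercially can sit near or below this detection threshold, which is why the uncertainty has to be quantified rather than assumed away.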


Getting ready for artificial general intelligence with examples

IBM Big Data Hub

AGI, sometimes referred to as strong AI, is the science-fiction version of artificial intelligence (AI), where artificial machine intelligence achieves human-level learning, perception and cognitive flexibility. An AGI would need to handle uncertainty and make decisions with incomplete information.


The Lean Analytics Cycle: Metrics > Hypothesis > Experiment > Act

Occam's Razor

First, you figure out what you want to improve; then you create an experiment, run it, measure the results, and decide what to do. For each experiment, write down the KPI you're measuring and the value that KPI should reach for you to consider your efforts a success; then measure and decide.
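A sketch of how that pre-registration might look in code (purely illustrative; the `Experiment` class, field names, and thresholds are our assumptions, not anything from the post):

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    kpi: str            # the one metric you're measuring
    target: float       # the value the KPI must reach to call the effort a success

def decide(exp: Experiment, measured: float) -> str:
    """Compare the measured KPI with the pre-registered target and decide what to do."""
    if measured >= exp.target:
        return f"{exp.name}: success ({exp.kpi} = {measured:.3f} >= {exp.target:.3f}), roll out"
    return f"{exp.name}: missed target ({exp.kpi} = {measured:.3f} < {exp.target:.3f}), revise hypothesis"

exp = Experiment("shorter-signup-form", kpi="signup conversion rate", target=0.12)
print(decide(exp, measured=0.135))
```

Writing the target down before running the experiment is the point: it keeps the decision step from being rationalized after the results are in.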


Changing assignment weights with time-based confounders

The Unofficial Google Data Science Blog

This post considers a common design for an online controlled experiment (OCE) in which a user may be randomly assigned to an arm on their first visit during the experiment, with assignment weights referring to the proportion of users randomly assigned to each arm. There are two common reasons assignment weights may change during an OCE.
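A hedged sketch of that design (illustrative only; the function and variable names are our own): each user is assigned to an arm once, on their first visit, according to whatever weights are in force at that moment, which is why changing weights mid-experiment ties arm composition to time of first visit.

```python
import random

assignments = {}  # user_id -> arm, fixed at the user's first visit

def assign_arm(user_id, weights):
    """Assign a user to an arm on first visit; weights = {arm: proportion}."""
    if user_id not in assignments:
        arms, probs = zip(*weights.items())
        assignments[user_id] = random.choices(arms, weights=probs, k=1)[0]
    return assignments[user_id]

# Week 1: a cautious ramp with 10% of new users in the treatment arm
week1 = {"control": 0.9, "treatment": 0.1}
# Week 2: weights change mid-experiment; users arriving now are assigned under the
# new weights, so arm composition becomes confounded with time of first visit
week2 = {"control": 0.5, "treatment": 0.5}

print(assign_arm("u1", week1))
print(assign_arm("u2", week2))
print(assign_arm("u1", week2))  # u1 keeps the arm from their first visit
```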


LSOS experiments: how I learned to stop worrying and love the variability

The Unofficial Google Data Science Blog

Despite a very large number of experimental units, the experiments conducted by an LSOS cannot presume statistical significance of all effects they deem practically significant. The result is that experimenters can’t afford to be sloppy about quantifying uncertainty. At Google, we tend to refer to such subsets of the data as slices.
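One way to keep that rigor is to report an interval, not just a point estimate, for every slice. A small sketch (our own illustration with made-up slice names and effect sizes, not the post's code) that computes a 95% confidence interval for the treatment-control difference within each slice:

```python
import numpy as np

def diff_ci(treat, control, z=1.96):
    """95% confidence interval for the difference in means, treatment minus control."""
    diff = np.mean(treat) - np.mean(control)
    se = np.sqrt(np.var(treat, ddof=1) / len(treat) + np.var(control, ddof=1) / len(control))
    return diff - z * se, diff + z * se

rng = np.random.default_rng(1)
# Illustrative per-slice true effects (e.g., one slice per country); real slices come from logs
slices = {"US": 0.002, "BR": 0.0}
for name, effect in slices.items():
    treat = rng.normal(1.0 + effect, 1.0, size=50_000)
    control = rng.normal(1.0, 1.0, size=50_000)
    lo, hi = diff_ci(treat, control)
    print(f"{name}: 95% CI for effect = ({lo:.4f}, {hi:.4f})")
```

With 50,000 units per arm and unit-variance outcomes, even a real effect of 0.002 is smaller than the interval's half-width, which is exactly the variability the post warns experimenters not to ignore.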