
Business Strategies for Deploying Disruptive Tech: Generative AI and ChatGPT

Rocket-Powered Data Science

Third, any commitment to a disruptive technology (including data-intensive and AI implementations) must start with a business strategy. Those F’s are: Fragility, Friction, and FUD (Fear, Uncertainty, Doubt). These changes may include requirements drift, data drift, model drift, or concept drift.
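As a minimal illustration of one of those failure modes, the sketch below checks for data drift by comparing a feature's training distribution against recent production data with a two-sample Kolmogorov–Smirnov test; the synthetic data and the 0.05 threshold are assumptions made purely for the example.

```python
# Minimal data-drift check: compare a feature's training-time distribution
# against recent production data using a two-sample KS test.
# Synthetic data and the 0.05 threshold are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # seen at training time
prod_feature = rng.normal(loc=0.4, scale=1.0, size=1_000)   # recent production data (shifted)

result = ks_2samp(train_feature, prod_feature)
if result.pvalue < 0.05:
    print(f"Possible data drift (KS={result.statistic:.3f}, p={result.pvalue:.4f})")
else:
    print(f"No drift detected (KS={result.statistic:.3f}, p={result.pvalue:.4f})")
```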


What you need to know about product management for AI

O'Reilly on Data

AI products are automated systems that collect and learn from data to make user-facing decisions. All you need to know for now is that machine learning uses statistical techniques to give computer systems the ability to “learn” by being trained on existing data. Machine learning adds uncertainty.
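A hedged sketch of that idea, using scikit-learn on a toy dataset (the dataset and model choice are assumptions for illustration): the system "learns" from existing labeled data, and its predicted probabilities make the remaining uncertainty explicit rather than hiding it.

```python
# Train a simple classifier on existing data and surface its uncertainty
# via predicted class probabilities. Dataset and model are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Each prediction comes with a probability, not a guarantee.
probs = model.predict_proba(X_test[:5])
for p in probs:
    print({int(cls): round(prob, 2) for cls, prob in zip(model.classes_, p)})
```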



Towards optimal experimentation in online systems

The Unofficial Google Data Science Blog

$Y$ is a vector of outcome measures, such as different metrics of user experience, observed while varying system parameters under experimentation (e.g., the weight given to Likes in our video recommendation algorithm). Crucially, the approach takes into account the uncertainty inherent in our experiments. Figure 2: spreading measurements out makes the estimate of the model (the slope of the line) more accurate.
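A small simulation makes the Figure 2 claim concrete (the numbers below are assumptions for illustration, not taken from the post): fitting the same linear model with clustered versus spread-out design points shows that the spread-out design yields a smaller standard error for the slope.

```python
# Simulate fitting y = a + b*x under two experiment designs:
# measurements clustered near the center vs. spread out over the range.
# The spread-out design gives a lower-variance estimate of the slope b.
# (Illustrative numbers, not from the original post.)
import numpy as np

rng = np.random.default_rng(42)
true_slope, noise_sd, n_reps = 2.0, 1.0, 2_000

def slope_sd(x):
    """Empirical std. dev. of the OLS slope estimate across simulations."""
    slopes = []
    for _ in range(n_reps):
        y = 1.0 + true_slope * x + rng.normal(0, noise_sd, size=x.size)
        slopes.append(np.polyfit(x, y, 1)[0])  # degree-1 fit: [slope, intercept]
    return np.std(slopes)

x_clustered = np.linspace(0.4, 0.6, 20)   # measurements bunched together
x_spread = np.linspace(0.0, 1.0, 20)      # measurements spread out

print("slope SE, clustered design: ", round(slope_sd(x_clustered), 3))
print("slope SE, spread-out design:", round(slope_sd(x_spread), 3))
```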


Uncertainties: Statistical, Representational, Interventional

The Unofficial Google Data Science Blog

By AMIR NAJMI & MUKUND SUNDARARAJAN

Data science is about decision making under uncertainty. Some of that uncertainty is the result of statistical inference, i.e., using a finite sample of observations for estimation. But there are other kinds of uncertainty, at least as important, that are not statistical in nature.
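As a hedged illustration of the first kind, statistical uncertainty from a finite sample, the sketch below bootstraps a confidence interval for a mean; the synthetic data and the 95% level are assumptions for the example.

```python
# Statistical uncertainty: estimating a mean from a finite sample and
# quantifying that uncertainty with a bootstrap confidence interval.
# (Synthetic data; purely illustrative.)
import numpy as np

rng = np.random.default_rng(7)
sample = rng.exponential(scale=3.0, size=200)  # a finite sample of observations

boot_means = [rng.choice(sample, size=sample.size, replace=True).mean()
              for _ in range(5_000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])

print(f"point estimate: {sample.mean():.2f}")
print(f"95% bootstrap CI: ({lo:.2f}, {hi:.2f})")
```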


The Lean Analytics Cycle: Metrics > Hypothesis > Experiment > Act

Occam's Razor

We are far too enamored with data collection and with reporting the standard metrics we love because others love them, because someone else said they were nice so many years ago. First, you figure out what you want to improve; then you create an experiment; then you run the experiment; then you measure the results and decide what to do.
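For the "measure the results and decide" step, here is a minimal sketch of evaluating a conversion-rate experiment with a two-proportion z-test; the visitor counts, conversion counts, and significance level are assumptions for illustration.

```python
# Evaluate an A/B experiment on conversion rate with a two-proportion z-test,
# then decide whether to act on the result. Counts are illustrative.
from statsmodels.stats.proportion import proportions_ztest

conversions = [460, 520]      # control, variant
visitors = [10_000, 10_000]   # traffic in each arm

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Result is significant: consider shipping the variant.")
else:
    print("No significant difference: iterate on the hypothesis.")
```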


AI Product Management After Deployment

O'Reilly on Data

From a technical perspective, it is entirely possible for ML systems to function on wildly different data. For example, you can ask an ML model to make an inference on data taken from a distribution very different from what it was trained on, but that, of course, results in unpredictable and often undesired performance. One guard against this is I/O validation.
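A hedged sketch of what such input validation might look like in practice, checking incoming features against ranges observed at training time before running inference; the feature names and bounds are assumptions for the example.

```python
# Simple input validation before inference: flag records whose feature
# values fall outside the range observed in the training data.
# Feature names and bounds are illustrative assumptions.
TRAINING_RANGES = {
    "age": (18, 90),
    "monthly_spend": (0.0, 5_000.0),
}

def validate_input(record: dict) -> list[str]:
    """Return a list of validation problems; empty means the record looks in-range."""
    problems = []
    for feature, (low, high) in TRAINING_RANGES.items():
        value = record.get(feature)
        if value is None:
            problems.append(f"missing feature: {feature}")
        elif not (low <= value <= high):
            problems.append(f"{feature}={value} outside training range [{low}, {high}]")
    return problems

record = {"age": 130, "monthly_spend": 250.0}
issues = validate_input(record)
if issues:
    print("Skipping inference, input looks out-of-distribution:", issues)
```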


Product Management for AI

Domino Data Lab

Skomoroch proposes that managing ML projects is challenging for organizations because shipping ML projects requires an experimental culture that fundamentally changes how many companies approach building and shipping software. Without large amounts of labeled training data, solving most AI problems is not possible.