
Why you should care about debugging machine learning models

O'Reilly on Data

This includes C-suite executives, front-line data scientists, and risk, legal, and compliance personnel. These recommendations are based on our experience, both as data scientists and as lawyers, focused on managing the risks of deploying ML. That’s where model debugging techniques, such as sensitivity analysis, come in.
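The sensitivity analysis the excerpt mentions can be sketched in a few lines: perturb one input feature and watch how the prediction responds. The `model` function below is a toy stand-in, not anything from the article; substitute any trained predictor.

```python
# Minimal sketch of sensitivity analysis for model debugging:
# perturb one feature at a time and measure the prediction change.

def model(x):
    # Toy stand-in "model": a fixed linear score over two features.
    return 0.8 * x[0] - 0.3 * x[1]

def sensitivity(model, x, feature, delta=0.01):
    """Finite-difference sensitivity of the prediction to one feature."""
    perturbed = list(x)
    perturbed[feature] += delta
    return (model(perturbed) - model(x)) / delta

baseline = [1.0, 2.0]
for i in range(len(baseline)):
    print(f"feature {i}: sensitivity {sensitivity(model, baseline, i):+.2f}")
```

Large or sign-flipping sensitivities on features that should barely matter are exactly the kind of bug this style of debugging surfaces.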


What you need to know about product management for AI

O'Reilly on Data

Pragmatically, machine learning is the part of AI that “works”: algorithms and techniques that you can implement now in real products. We won’t go into the mathematics or engineering of modern machine learning here. Machine learning adds uncertainty. “Managing Machine Learning Projects” (AWS).




5 key areas for tech leaders to watch in 2020

O'Reilly on Data

Growth is still strong for such a large topic, but usage slowed in 2018 (+13%) and cooled significantly in 2019, growing by just 7%. But sustained interest in cloud migrations—usage was up almost 10% in 2019, on top of 30% in 2018—gets at another important emerging trend. Still cloud-y, but with a possibility of migration.


AI adoption in the enterprise 2020

O'Reilly on Data

The new survey, which ran for a few weeks in December 2019, generated an enthusiastic 1,388 responses. Supervised learning is the most popular ML technique among mature AI adopters, while deep learning is the most popular technique among organizations that are still evaluating AI. But what kind?


Adding Common Sense to Machine Learning with TensorFlow Lattice

The Unofficial Google Data Science Blog

On the one hand, basic statistical models are easy to control but limited in their form. On the other hand, sophisticated machine learning models are flexible in their form but not easy to control. More knots make the learned feature transformation smoother and more capable of approximating any monotonic function.
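The knot-based monotonic calibration described here can be illustrated with a small NumPy sketch. This is not the TensorFlow Lattice API; it is a hypothetical implementation of the underlying idea: a piecewise-linear function over fixed input knots, kept monotonic by forcing each segment's output increment to be non-negative.

```python
import numpy as np

# Illustrative sketch (not the TensorFlow Lattice API): a piecewise-linear
# feature calibration defined by "knots". Monotonicity is guaranteed by
# clipping each per-segment increment to be non-negative.

def calibrate(x, knots, increments):
    """Map x through a monotonic piecewise-linear function.

    knots:      sorted input keypoints, shape (k,)
    increments: learned output step per knot interval, shape (k-1,)
    """
    heights = np.concatenate([[0.0], np.cumsum(np.maximum(increments, 0.0))])
    return np.interp(x, knots, heights)

knots = np.array([0.0, 1.0, 2.0, 3.0])
incs = np.array([0.5, 0.1, 0.9])   # all >= 0, so the output never decreases
print(calibrate(np.array([0.5, 1.5, 2.5]), knots, incs))
```

With more knots the function gains resolution between keypoints, which is why the excerpt notes that more knots let the transformation approximate any monotonic function.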


Proposals for model vulnerability and security

O'Reilly on Data

Apply fair and private models, white-hat and forensic model debugging, and common sense to protect machine learning models from malicious actors. Like many others, I’ve known for some time that machine learning models themselves could pose security risks. Watermark attacks. Newer types of fair and private models.


Themes and Conferences per Pacoid, Episode 9

Domino Data Lab

That’s a risk if, say, legislators – who don’t understand the nuances of machine learning – attempt to define a single legal meaning of the word “interpret”. For example, in the case of more recent deep learning work, a complete explanation might be possible, but it might also entail an incomprehensible number of parameters.