
Explaining black-box models using attribute importance, PDPs, and LIME

Domino Data Lab

In this article we cover explainability for black-box models and show how to use different methods from the Skater framework to provide insights into the inner workings of a simple credit-scoring neural network model. Interest in the interpretation of machine learning has accelerated rapidly over the last decade (see Ribeiro et al.).
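As a minimal illustration of the local-explanation idea in the title, here is a hedged sketch using the LIME package directly (the article itself works through the Skater framework); the synthetic data and logistic-regression "black box" below are placeholders, not objects from the article.

```python
# Minimal sketch: a local LIME explanation for a tabular classifier.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from lime.lime_tabular import LimeTabularExplainer

# Illustrative stand-in for a credit-scoring dataset and model.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
model = LogisticRegression().fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["default", "repaid"],
    mode="classification",
)

# Explain one "applicant": the top local feature weights behind its score.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(exp.as_list())  # [(feature condition, weight), ...]
```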


Unlocking the Power of Better Data Science Workflows

Smart Data Collective

Phase 4: Knowledge Discovery. Finally, models are developed to explain the data. Algorithms can also be tested to identify ideal outcomes and possibilities. When these two elements are in harmony, there are fewer delays and less risk of data corruption. Make the Workflow Obvious and Apparent to Others.



Changing assignment weights with time-based confounders

The Unofficial Google Data Science Blog

One reason to do ramp-up is to mitigate the risk of never-before-seen arms. For example, imagine a fantasy football site is considering displaying advanced player statistics. A ramp-up strategy may mitigate the risk of upsetting the site’s loyal users, who perhaps have strong preferences for the current statistics that are shown.


ML internals: Synthetic Minority Oversampling (SMOTE) Technique

Domino Data Lab

In this article we discuss why fitting models on imbalanced datasets is problematic, and how class imbalance is typically addressed. Their tests are performed using C4.5-generated decision trees. This carries the risk of the modification performing worse than simpler approaches like majority under-sampling (Chawla et al., 1998, and others).
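As a rough sketch of the oversampling step itself, here is SMOTE via the imbalanced-learn implementation (not the article's own code), applied to an illustrative synthetic dataset:

```python
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Illustrative imbalanced dataset: roughly 95% majority vs. 5% minority class.
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.95, 0.05], random_state=0)
print("before:", Counter(y))

# SMOTE synthesizes new minority samples by interpolating between a minority
# point and its nearest minority-class neighbours.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("after: ", Counter(y_res))  # classes are now balanced
```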


Using Empirical Bayes to approximate posteriors for large "black box" estimators

The Unofficial Google Data Science Blog

But most common machine learning methods don’t give posteriors, and many don’t have explicit probability models. More precisely, our model is that $\theta$ is drawn from a prior that depends on $t$, then $y$ comes from some known parametric family $f_\theta$. Here, our items are query-ad pairs. Calculate posterior quantities of interest.
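Written out in the excerpt's own notation, the hierarchical model it describes is roughly the following (a sketch; the exact prior family and how it is estimated are not specified in this snippet):

```latex
% Prior depending on t, known parametric likelihood f_theta, and the
% empirical-Bayes posterior built from a prior \hat{\pi} estimated across
% many items (here, query-ad pairs).
\[
  \theta \sim \pi(\theta \mid t), \qquad
  y \mid \theta \sim f_\theta(y),
\]
\[
  p(\theta \mid y, t) \;\propto\; f_\theta(y)\,\hat{\pi}(\theta \mid t).
\]
```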


Variance and significance in large-scale online services

The Unofficial Google Data Science Blog

Of particular interest to LSOS data scientists are modeling and prediction techniques that keep improving with more data. For this purpose, let’s assume we use a t-test for the difference between group means. To observe this, let $W$ be the sample average difference between groups (our test statistic).
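A minimal sketch of such a two-sample t-test, with scipy and simulated data standing in for the post's setting:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated LSOS-style experiment: a tiny true effect hidden in per-user noise.
control = rng.normal(loc=0.00, scale=1.0, size=100_000)
treatment = rng.normal(loc=0.01, scale=1.0, size=100_000)

# W, the sample average difference between groups, and Welch's t-test
# for whether that difference is statistically significant.
W = treatment.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"W = {W:.4f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```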


AI, the Power of Knowledge and the Future Ahead: An Interview with Head of Ontotext’s R&I Milena Yankova

Ontotext

Milena Yankova: Our work is focused on helping companies make sense of their own knowledge. Within a large enterprise there is a huge amount of data accumulated over the years: many decisions have been made and different methods have been tested. Some of this knowledge is locked away, and the company cannot access it.