
Measuring Bias in Machine Learning: The Statistical Bias Test

This tutorial will define statistical bias in a machine learning model and demonstrate how to perform the test on synthetic data.
Updated May 2020  · 6 min read

This article was written by Sarah Khatry and Haniyeh Mahmoudian, data scientists at DataRobot.

The question of bias in machine learning models has been the subject of a lot of attention in recent years. Stories of models going wrong make headlines, and humanitarian lawyers, politicians, and journalists have all contributed to the conversation about what ethics and values we want to be reflected in the models we build.

While human bias is a thorny issue and not always easily defined, bias in machine learning is, at the end of the day, mathematical. There are many tests you can perform on a model to identify different kinds of bias in its predictions. Which test to use depends mostly on what you care about and the context in which the model is used.

One of the most broadly applicable tests out there is statistical parity, which this hands-on tutorial will walk through. Bias is always assessed relative to different groups of people identified by a protected attribute in your data, e.g., race, gender, age, sexuality, or nationality.

With statistical parity, your goal is to measure if the different groups have equal probability of achieving a favorable outcome. A classic example is a hiring model, for which you would like to ensure that male and female applicants have an equal probability of being hired. In a biased model, you will instead identify that one group is privileged with a higher probability of being hired, while the other group is underprivileged.
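In symbols (a notation sketch we add here, not from the original article), statistical parity asks that the rate of favorable predictions be independent of the protected attribute A, and the parity ratio computed later in this tutorial compares each group's rate to the privileged group's:

P(\hat{Y} = 1 \mid A = a) = P(\hat{Y} = 1 \mid A = b) \quad \text{for all groups } a, b,
\qquad
\text{ratio}(a) = \frac{P(\hat{Y} = 1 \mid A = a)}{P(\hat{Y} = 1 \mid A = a_{\text{priv}})}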

To demonstrate how this works in practice, we’ll first construct synthetic data with bias we’ve predefined, then confirm via analysis that the data reflects the situation we intended, and finally apply the statistical parity test.

Generating Synthetic Data

In this tutorial, we'll be using the pandas package and the built-in random module in Python, but every step in this process can also be reproduced in R.

import random
import pandas as pd

To generate synthetic data with one protected attribute and model predictions, we first need to specify a few inputs: the total number of records, the protected attribute itself (here two generic values, A and B), and the model prediction that is associated with the favorable outcome, in this example the value 1.

num_row = 1000 # number of rows in the data
prot_att = 'prot_att' # column name for protected attribute
model_pred_bin = 'prediction' # column name for predictions
pos_label = 1 # prediction value associated with the positive/favorable outcome
prot_group = ['A','B'] # two groups in our protected attribute

As in real life, groups A and B may not be evenly distributed in our data. In the below code, we have decided that 60% of the population in our data will be from privileged group B, who have a 30% chance of receiving the favorable outcome. Unprivileged group A will make up the remaining 40% of the data and have only a 15% probability for the favorable outcome.

priv_g = 'B'  # privileged group
priv_g_rate = 60 # 60% of the population is from group B
priv_p_rate = 30 # 30% of predictions for group B are the favorable outcome 1
unpriv_p_rate = 15 # 15% of predictions for group A are the favorable outcome 1
biased_list_priv = [prot_group[0]] * (100 - priv_g_rate) + [prot_group[1]] * priv_g_rate
biased_list_pos_priv = [0] * (100 - priv_p_rate) + [1] * priv_p_rate
biased_list_pos_unpriv = [0] * (100 - unpriv_p_rate) + [1] * unpriv_p_rate
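As a quick sanity check (our addition, not part of the original walkthrough), you can confirm that each weighting list has 100 entries with the intended composition:

# each list has 100 entries, so random.choices draws with the intended odds
print(len(biased_list_priv), biased_list_priv.count('B'))        # 100 60
print(len(biased_list_pos_priv), sum(biased_list_pos_priv))      # 100 30
print(len(biased_list_pos_unpriv), sum(biased_list_pos_unpriv))  # 100 15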

For each record of the data, we randomly assign a protected group and a prediction, using the bias we specified before as weights, and then create a dataframe from the list of records.

list_df = []  # empty list to store the synthetic records
for _ in range(num_row):
    # randomly assign a protected group, weighted 60/40 towards B
    prot_g = random.choices(biased_list_priv)[0]
    if prot_g == priv_g:
        # privileged group: 30% chance of the favorable outcome 1
        pred = random.choices(biased_list_pos_priv)[0]
    else:
        # unprivileged group: 15% chance of the favorable outcome 1
        pred = random.choices(biased_list_pos_unpriv)[0]
    # add the new record to the list
    list_df.append([prot_g, pred])
# create a dataframe from the list
df = pd.DataFrame(list_df, columns=[prot_att, model_pred_bin])
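As an aside, the same dataframe can be generated without an explicit loop. The following is a minimal vectorized sketch, assuming NumPy is available; it is an equivalent alternative to the loop above, not the tutorial's approach:

import numpy as np

rng = np.random.default_rng()
# 40% group A, 60% group B, matching the weights above
groups = rng.choice(prot_group, size=num_row, p=[0.4, 0.6])
# favorable-outcome rate per record: 30% for B, 15% for A
rates = np.where(groups == priv_g, priv_p_rate / 100, unpriv_p_rate / 100)
preds = (rng.random(num_row) < rates).astype(int)
df = pd.DataFrame({prot_att: groups, model_pred_bin: preds})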

Interpreting the Data

Now that we have our synthetic data, let’s analyze what we’ve built. For each group, A versus B, what is their class probability of achieving the favorable or unfavorable outcome?

df_group = (df.groupby(prot_att)[model_pred_bin].value_counts()
            / df.groupby(prot_att)[model_pred_bin].count()).reset_index(name='probability')
print(df_group)
  prot_att  prediction  probability
0        A           0     0.849490
1        A           1     0.150510
2        B           0     0.713816
3        B           1     0.286184

Inspecting the table, it's easy to see that group B is almost twice as likely to achieve the favorable outcome, with a probability of 28.6%. Our synthetic data was designed with a probability of 30%, so we're close to the mark. Since the data is randomly generated, your run may produce slightly different numbers. Next, we save each group's probability of the favorable outcome in a dictionary.

prot_att_dic = {}
for att in prot_group:
    # select this group's probability of the favorable outcome
    mask = (df_group[prot_att] == att) & (df_group[model_pred_bin] == pos_label)
    prot_att_dic[att] = df_group.loc[mask, 'probability'].iloc[0]
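With the sample run above, the dictionary maps each group to its favorable-outcome rate:

print(prot_att_dic)  # {'A': 0.150510..., 'B': 0.286184...}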

The Statistical Parity Test

For each group, statistical parity outputs the ratio of that group's probability of achieving the favorable outcome to the privileged group's probability. We iterate over the dictionary mapping each protected group to its favorable-outcome probability to construct the ratios.

prot_test_res = {}
for group, prob in prot_att_dic.items():
    # ratio of this group's favorable-outcome rate to the privileged group's
    prot_test_res[group] = prob / prot_att_dic[priv_g]
for group, ratio in prot_test_res.items():
    print(group, ' : ', ratio)
A  :  0.5259207131128314
B  :  1.0

For the privileged group, B, the statistical parity score is 1, as it should be. For the other group, A, the score is 0.526, which indicates that they are roughly half as likely as group B to achieve the favorable outcome.

The statistical bias test provides a simple assessment of how different the predicted outcomes may be for select groups in your data. The goal of measuring bias is two-fold. On the one hand, the test produces a transparent, concrete metric that makes the issue easier to communicate. On the other, identifying bias is ideally the first step toward mitigating it in your model. This is a hot area of research in machine learning, with many techniques being developed to accommodate different kinds of bias and modeling approaches.

With the right combination of testing and mitigation techniques, it becomes possible to iteratively improve your model, reduce bias, and preserve accuracy. You can design machine learning systems not just to learn from historical outcomes, but to reflect your values in future decision-making.

Find out how to build AI you can trust with DataRobot.
