
How to Build and Govern Trusted AI Systems: Process

November 22, 2021
by Scott Reed · 2 min read

This is a three-part blog series, in partnership with Amazon Web Services, describing the essential components for building, governing, and trusting AI systems: People, Process, and Technology. All three are required for trusted AI: technology systems that align with our individual, corporate, and societal ideals. This second post focuses on building the organization-wide processes for AI you can trust.

Trusted AI as a culture and practice is difficult at any level, from an individual data scientist trying to understand data disparity in a vacuum to an organization trying to govern multiple models in production.

Difficult, however, does not mean unattainable. There is a path forward: a framework that revolves around people, process, and technology. In our first joint blog post, we learned about the different stakeholders in any AI system lifecycle and how their collaboration is crucial to implementing effective processes and building the technological guardrails that collectively stand up an ethical system. Our focus today is on the processes those stakeholders use to create structure, repeatability, and standardization.

Not all AI-supported decisions are equal. Using a risk assessment matrix, we can decide where to draw the boundary between what the model decides on its own and where a human intervenes. One solution is a decision system with ascending levels of risk, each with an assessed likelihood and a mitigation strategy. Once an AI-supported decision type is determined, we can then conduct an impact assessment that enables stakeholders to maintain control and provides a failsafe method for an override if necessary.
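As a rough illustration, the sketch below maps a likelihood and impact score to a level of human oversight. The scoring scale, thresholds, and oversight levels are hypothetical assumptions for the example, not a prescribed standard.

```python
# Illustrative sketch of a simple risk-assessment matrix for AI-supported
# decisions. Scoring scale, thresholds, and oversight levels are hypothetical.
from enum import Enum


class Oversight(Enum):
    AUTOMATE = "model decides, periodic audit"
    REVIEW = "model recommends, human reviews"
    HUMAN_ONLY = "human decides, model only informs"


def required_oversight(likelihood: int, impact: int) -> Oversight:
    """Map likelihood and impact (each scored 1-3) to a level of human oversight."""
    score = likelihood * impact
    if score <= 2:
        return Oversight.AUTOMATE
    if score <= 4:
        return Oversight.REVIEW
    return Oversight.HUMAN_ONLY


# Example: a moderately likely, high-impact decision is kept with a human.
print(required_oversight(likelihood=2, impact=3))  # Oversight.HUMAN_ONLY
```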

There are many steps to building an AI system. First, a business sponsor champions an idea. Then a data scientist might gather data and work with business analysts to understand the context. Next, if machine learning is a feasible solution, a model is built and validated. Finally, the model may be put into production, where it makes predictions on new data. Each step involves different stakeholders and perspectives. An impact assessment is an effective tool for unifying stakeholders' opinions and fully comprehending the risks at each step. This collaborative, diversity-centered approach yields a true impact analysis of the AI system, covering stakeholders' points of view, data provenance, model building, bias and fairness, and model deployment.
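What an impact assessment captures will vary by organization. The sketch below shows one hypothetical structure that follows the stages described above; the field names and example values are invented for illustration.

```python
# Hypothetical structure for a lightweight AI impact assessment.
# Fields mirror the stages discussed above; names and values are illustrative.
from dataclasses import dataclass, field


@dataclass
class ImpactAssessment:
    use_case: str
    business_sponsor: str
    affected_stakeholders: list[str]
    data_provenance: str             # where the data came from and how it was collected
    protected_attributes: list[str]  # features checked for bias and fairness
    fairness_metrics: list[str]      # e.g., proportional parity
    deployment_risks: list[str]      # e.g., data drift, latency under load
    human_override: str              # who can override the model, and how
    sign_offs: dict[str, bool] = field(default_factory=dict)


assessment = ImpactAssessment(
    use_case="Loan default prediction",
    business_sponsor="Head of Consumer Lending",
    affected_stakeholders=["applicants", "underwriters", "regulators"],
    data_provenance="Internal loan history, 2015-2020",
    protected_attributes=["age", "gender"],
    fairness_metrics=["proportional parity"],
    deployment_risks=["data drift", "prediction latency during high traffic"],
    human_override="Underwriter may override any automated decision",
)
```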

The key to ensuring that a model continues providing value in deployment is to support it with strong lifecycle management and governance. By continuously monitoring our models in production, we can quickly identify issues, such as data drift or prediction latency during high traffic, and take action. We can even instill humility by allowing users to set up triggers and actions that fire when criteria are met, such as a prediction falling near the decision threshold. These guardrails keep stakeholders confident in the AI system and establish trust.
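As a generic illustration of this kind of guardrail (a sketch, not the platform's built-in humility feature), the snippet below escalates any prediction that falls within an uncertainty band around the decision threshold to human review; the threshold and band width are assumed values.

```python
# Illustrative guardrail: route low-confidence predictions to human review.
# Generic sketch; threshold and uncertainty band are assumptions for the example.
def act_on_prediction(probability: float,
                      threshold: float = 0.5,
                      uncertainty_band: float = 0.05) -> str:
    """Decide what to do with a single predicted probability."""
    if abs(probability - threshold) <= uncertainty_band:
        return "flag_for_human_review"  # trigger fires: prediction near threshold
    return "approve" if probability >= threshold else "decline"


# Predictions at 0.52 fall inside the uncertainty band and are escalated.
assert act_on_prediction(0.52) == "flag_for_human_review"
assert act_on_prediction(0.80) == "approve"
assert act_on_prediction(0.10) == "decline"
```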

About the author
Scott Reed

Trusted AI Data Scientist

Scott Reed is a Trusted AI Data Scientist at DataRobot. On the Applied AI Ethics team, his focus is to help enable customers on trust features and sensitive use cases, contribute to product enhancements in the platform, and provide thought leadership on AI Ethics. Prior to DataRobot, he worked as a data scientist at Fannie Mae. He has an M.S. in Applied Information Technology from George Mason University and a B.A. in International Relations from Bucknell University.
