Risk, Roles, and Realities of AI Governance

Scaling AI | Joy Looney

In a recent Dataiku Product Days session, Krishna Vadakattu, Senior Product Manager at Dataiku, interviewed Sulabh Soral, Chief AI Officer at Deloitte Consulting, U.K. This blog summarizes Soral’s perspective from that session on the risk, roles, and realities of AI Governance.


On the Topic of AI Governance 

AI Governance is a popular topic of conversation across many organizations and fields, and it will remain one for years to come. Pinning AI Governance down to a single definition is difficult because the topic is so broad, but generally speaking it can be thought of as the practice of controlling automated systems and the way a business enables those systems.

Consider AI Governance from the point of view of a bank applying AI to one particular project. In that narrow sense, AI Governance works much as the bank's traditional model governance always has: it may adopt some new rules, but the process is essentially the same. For other organizations, something entirely different is at play. The full scope of AI Governance changes depending on who you’re talking to.

If you’re talking about using AI to empower every aspect of a business in a new way, which is what many organizations are aiming to do, governance is not just about implementing new tools and techniques into legacy processes. The definition expands when you view AI as a means to transform your entire organization instead of just specific projects.

How Does Organizational-Level AI Governance Differ From Traditional Governance? 

In the world of tech, many organizations already have mature processes in place to manage traditional software. However, AI is not the same as traditional software: it exhibits emergent behaviors that conventional software does not. The risks and implications of integration are correspondingly greater. An AI system is more complex because it depends not only on how you code but also on what data you consume and how you plan to consume that data.

Then there is the end of the process, where you must consider how the AI interface will interact with its users. With traditional software, organizations focus on making sure the software is not hackable, and a conventional program can generally only be compromised by someone who knows how to code. With AI, however, there are many types of adversarial attacks that do not require coding skills at all; it may only take someone feeding particular kinds of data into the system. The nature of the beast has entirely changed.

The New Environment & Its Challenges

We have talked about how data can impact your AI systems, and we have touched on the cyber side of things, but there is yet another component of AI integration to consider. As AI grows, models become more complex and are packaged by third parties for businesses to consume. These AI systems have emergent properties that are unknown, and the implications of those unmapped properties are unknown as well.

Up until now, a lot of focus has been on how AI software accentuates biases in data (especially legacy data), but we need to increase our focus on exactly how AI-enabled technology interacts with customers. That interaction carries risk and is itself a source of even more data to manage and control.

So, it’s all these dimensions that make AI Governance so different and perhaps significantly more challenging than traditional software governance. 

What Do We Do? 

As touched on above, there are many gaps in understanding when it comes to the implications of AI integrations: gaps with respect to how AI connects across organizational hierarchies as well as within the AI objectives themselves. AI failures are not a surprising outcome when an organization lacks clear roles and goals.

We will continue to see these cases of failure, but organizations should aim to learn very quickly from mistakes and revisit processes as the surrounding AI ecosystem evolves.

Realistic Steps for the Journey Ahead 

MLOps strategies sit at the start of the governance process and at the core of appropriate measurement. You can involve many new people in decision-making, but until roles, responsibilities, and quantitative reporting are all clearly defined, it is essentially a lost cause. A systemized, standardized approach to the whole data pipeline is the foundation that AI Governance rests upon.
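To make that foundation concrete, here is a minimal sketch in Python of what clearly defined roles and quantitative reporting might look like as a deployment gate. The record fields, metric names, and gating rule are illustrative assumptions, not practices prescribed in the interview:

```python
from dataclasses import dataclass, field

@dataclass
class GovernanceRecord:
    """Governance metadata a model carries through the pipeline."""
    model_name: str
    owner: str                # a named, accountable person (assumed convention)
    risk_tier: str            # e.g., "low", "medium", "high"
    metrics: dict = field(default_factory=dict)  # quantitative reporting

# Hypothetical minimum reporting set; a real policy would define its own.
REQUIRED_METRICS = {"accuracy", "drift_score", "fairness_gap"}

def ready_for_deployment(record: GovernanceRecord) -> bool:
    """Gate promotion on defined ownership and complete quantitative reporting."""
    has_owner = bool(record.owner.strip())
    has_metrics = REQUIRED_METRICS.issubset(record.metrics)
    return has_owner and has_metrics

record = GovernanceRecord(
    model_name="credit_scoring_v2",
    owner="jane.doe@example.com",
    risk_tier="high",
    metrics={"accuracy": 0.91, "drift_score": 0.03, "fairness_gap": 0.02},
)
print(ready_for_deployment(record))  # True: owner and all required metrics present
```

The point of the sketch is not the specific fields but the principle: a model with no named owner or incomplete reporting simply cannot move forward.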

Education among executives is also quite important. Senior execs need to look at the transformation they aim for and understand the specific role AI will play in it. An exec who understands the language of AI and can study the larger implications of the technology is extremely valuable. A defined strategy, and a contextualization of AI in terms of business processes that permeates the entire organization, is key. Everyone needs to be able to look at the measurements and determine how they should be stacked to support an AI project from beginning to end. This understanding and knowledgeable ambition needs to come from the top down.


Finding Balance in a State of AI Maturity 

For organizations that have progressed further down the line, how can they, in that more mature state, balance the red tape necessary to govern risk and mitigate liability against an equal interest in rapid innovation?

As always with acceleration and evolution, inherent risks and potentially unintended consequences lurk on the horizon. The key to finding balance is meticulous measurement and incremental development of the governance process. That means investigating and finding the sweet spot between automation and human-in-the-loop oversight.

As you gain more knowledge about certain areas of risk in your AI applications, you can begin automating more of those areas and use technology to lighten your workload, but remember that there is always the factor of the unknown. Fundamental machine learning (ML) architecture designs and principles must remain your foundation, preparing you for the unpredictable circumstances that could emerge. As a general rule: Automate where you have achieved stability, keep humans in the loop especially in new and rapidly changing areas, and make sure you have a steady foundation to fall back on.
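As a rough illustration of that rule, the sketch below routes decisions between automation and human review based on a stability signal. The threshold and the notion of a "novel case" are assumptions for illustration; in practice they would come from your own risk measurements:

```python
def route_decision(risk_score: float, is_novel_case: bool,
                   stability_threshold: float = 0.2) -> str:
    """Automate stable, well-understood cases; escalate everything else."""
    if is_novel_case:
        # New or rapidly changing territory: always keep a human in the loop.
        return "human_review"
    if risk_score <= stability_threshold:
        # An area where stability has been demonstrated: safe to automate.
        return "automated"
    # Elevated risk even in familiar territory falls back to a person.
    return "human_review"

print(route_decision(risk_score=0.05, is_novel_case=False))  # automated
print(route_decision(risk_score=0.05, is_novel_case=True))   # human_review
```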

Addressing Intangibility and Ethical Concerns 

Many find it hard to attribute real value to AI, or to identify its risks, because AI is often viewed as intangible. One way to address this concern is to recognize that even if the AI itself feels intangible, the KPIs you put in place to measure the explainability, trustworthiness, and effectiveness of your AI applications can and should be tangible proxies.
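One concrete example of such a proxy, sketched below under assumptions of our own (the metric choice and rule-of-thumb thresholds are not from the interview), is a drift score comparing production predictions against the distribution the model was validated on:

```python
import numpy as np

def prediction_drift(reference: np.ndarray, current: np.ndarray,
                     bins: int = 10) -> float:
    """Population stability index (PSI) between two score distributions:
    a tangible proxy for 'is the model still behaving as validated?'."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    # A small epsilon avoids division by zero in empty bins.
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference) + 1e-6
    cur_pct = np.histogram(current, bins=edges)[0] / len(current) + 1e-6
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

# Common rule of thumb (an assumption to tune per use case):
# PSI < 0.1 stable, 0.1-0.25 monitor closely, > 0.25 investigate.
```

A number like this will not settle every ethical question, but it turns a vague worry ("is the model still trustworthy?") into something a review board can track over time.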

Having tangible measurement and monitoring practices in place also introduces clarity and calms anxieties about the ethical ambiguity of certain aspects of AI integration. Your intentions can be ethical, but ML is equally dependent on the data it consumes. Additionally, something considered ethical in the eyes of today’s society might not be considered so years from now. As society evolves and we understand more, the way we clarify and communicate measurements of our data must be flexible enough to accommodate those changes.

The Dynamism of AI 

AI is not only trained on historical data that carries historical biases; many AI systems also learn actively, changing as they perform tasks and as their outputs interact with customers in real time. Organizations must pay keen attention to these systems in order to prevent the propagation of inherent human biases. Consistent, constant monitoring is necessary all the way from model production through to the consumer/user interaction level. The governance strategies you employ should cater specifically to the particular ML technology you are integrating, with careful attention to how the systems reach users. This may mean slowing down and taking extra time to develop those governance strategies: you need a complete understanding of what you are jumping into before you deploy your AI integrations. Many execs will struggle to make this call as they face the pressures of highly competitive, fast-paced markets.
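A minimal sketch of what that constant monitoring could look like in practice follows; the metric names and thresholds are hypothetical and would come from the governance policy for the specific system:

```python
def monitor_batch(batch_metrics: dict,
                  drift_limit: float = 0.25,
                  fairness_limit: float = 0.05) -> list:
    """Translate one batch of production metrics into governance actions."""
    actions = []
    if batch_metrics.get("drift_score", 0.0) > drift_limit:
        # The live data no longer resembles what the model was validated on.
        actions.append("trigger retraining review")
    if batch_metrics.get("fairness_gap", 0.0) > fairness_limit:
        # Outcome disparity between groups exceeds policy: pause and escalate.
        actions.append("escalate to human review before further rollout")
    if not actions:
        actions.append("continue serving; log metrics for the audit trail")
    return actions

print(monitor_batch({"drift_score": 0.31, "fairness_gap": 0.02}))
# ['trigger retraining review']
```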

Hope for the Future 

There is huge potential for what AI can do on a global scale. New opportunities, roles, and conversations that did not exist before are sparking fresh thinking across all industries. As we navigate the road ahead, we need to make an intentional effort to educate ourselves and apply what we learn to governing our AI integrations. At the end of the day, it is all about informed decision-making, striking a balance, and having the right tools and processes in place.
