Maria Korolov
Contributing writer

6 business risks of shortchanging AI ethics and governance

Feature
May 17, 2022 | 12 mins
Artificial Intelligence | Data Management | Data Science

Factors inherent to artificial intelligence and its implementation can have dire ramifications for your company if ethics and governance aren’t baked into your AI strategy.


Depending on which Terminator movies you watch, the evil artificial intelligence Skynet has either already taken over humanity or is about to do so. But it’s not just science fiction writers who are worried about the dangers of uncontrolled AI.

In a 2019 survey by Emerj, an AI research and advisory company, 14% of AI researchers said that AI was an “existential threat” to humanity. Even if the AI apocalypse doesn’t come to pass, shortchanging AI ethics poses big risks to society — and to the enterprises that deploy those AI systems.

Central to these risks are factors inherent to the technology — for example, how a particular AI system arrives at a given conclusion, known as its “explainability” — and those endemic to an enterprise’s use of AI, including reliance on biased data sets or deploying AI without adequate governance in place.

And while AI can give businesses a competitive advantage in a variety of ways, from uncovering overlooked business opportunities to streamlining costly processes, deploying AI without adequate attention to governance, ethics, and evolving regulations can be catastrophic.

The following real-world implementation issues highlight prominent risks every IT leader must account for in putting together their company’s AI deployment strategy.

Public relations disasters

Last month, a leaked Facebook document obtained by Motherboard showed that Facebook has no idea what’s happening with its users’ data.

“We do not have an adequate level of control and explainability over how our systems use data,” said the document, which was attributed to Facebook privacy engineers.

Now the company is facing a “tsunami of inbound regulations,” the document said, which it can’t address without multi-year investments in infrastructure. In particular, the company has low confidence in its ability to address fundamental problems with machine learning and AI applications, according to the document. “This is a new area for regulation and we are very likely to see novel requirements for several years to come. We have very low confidence that our solutions are sufficient.”

This incident, which provides insight into what can go wrong for any enterprise that has deployed AI without adequate data governance, is just the latest in a series of high-profile AI-related PR disasters to land on the front pages.

In 2014, Amazon built AI-powered recruiting software that overwhelmingly preferred male candidates.

In 2015, Google’s Photos app labeled pictures of black people as “gorillas.” Not learning from that mistake, Facebook had to apologize for a similar error last fall, when its users were asked whether they wanted to “keep seeing videos about primates” after watching a video featuring black men. 

Microsoft’s Tay chatbot, released on Twitter in 2016, quickly started spewing racist, misogynist, and anti-Semitic messages.

Bad publicity is one of the biggest fears companies have when it comes to AI projects, says Ken Adler, chair of the technology and sourcing practice at law firm Loeb & Loeb.

“They’re concerned about implementing a solution that, unbeknownst to them, has built-in bias,” he says. “It could be anything — racial, ethnic, gender.”

Negative social impact

Biased AI systems are already causing harm. A credit algorithm that discriminates against women or a human resources recommendation tool that fails to suggest leadership courses to certain groups of employees will put those individuals at a disadvantage.

In some cases, those recommendations can literally be a matter of life and death. That was the case at one community hospital that Carm Taglienti, a distinguished engineer at Insight, once worked with.

Patients who come to a hospital emergency room often have problems beyond the ones that they’re specifically there about, Taglienti says. “If you come to the hospital complaining of chest pains, there might also be a blood issue or other contributing problem,” he explains.

This particular hospital's data science team had built a system to identify such comorbidities. The work was crucial: if a patient comes into the hospital with a second, potentially fatal problem and the hospital doesn't catch it, the patient could be sent home and end up dying.

The question, however, was at what point the doctors should act on the AI system's recommendation, given health considerations and the limits of the hospital's resources. If a correlation uncovered by the algorithm is weak, acting on it might subject patients to unnecessary tests that waste time and money. But if the tests aren't conducted and an overlooked issue proves deadly, the hospital faces harder questions about the value of the care it provides its community, especially when its own algorithm flagged the possibility, however slight.

That’s where ethics comes in, he says. “If I’m trying to do the utilitarian approach, of the most good for the most people, I might treat you whether or not you need it.”

But that’s not a practical solution when resources are limited.
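To make that tradeoff concrete, here is a minimal sketch of the kind of cost-benefit arithmetic such a decision implies. The risk level, test cost, and cost of a missed condition are entirely hypothetical numbers, not figures from the hospital Taglienti describes.

```python
# Hypothetical arithmetic only: decide whether to order a follow-up test based
# on the model's predicted comorbidity risk, the cost of the test, and the
# expected cost of missing a serious condition. All numbers are illustrative.

def should_order_test(predicted_risk: float,
                      test_cost: float = 500.0,
                      cost_of_missed_condition: float = 50_000.0) -> bool:
    """Order the test when the expected cost of missing the condition
    exceeds the cost of running the test."""
    return predicted_risk * cost_of_missed_condition > test_cost

print(should_order_test(0.02))   # True: expected cost 1,000 > test cost 500
print(should_order_test(0.005))  # False: expected cost 250 < test cost 500
```

In practice the costs aren't purely financial and the predicted risks are themselves uncertain, which pushes the problem back onto the quality of the model and the data behind it.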

Another option is to gather better training data to improve the algorithms so that the recommendations are more precise. The hospital did this by investing more in data collection, Taglienti says.

But the hospital also found ways to rebalance the equation around resources, he adds. “If the data science is telling you that you’re missing comorbidities, does it always have to be a doctor seeing the patients? Can we use nurse practitioners instead? Can we automate?”

The hospital also created a patient scheduling mechanism, so that people who didn’t have primary care providers could visit an emergency room doctor at times when the ER was less busy, such as during the middle of a weekday.

“They were able to focus on the bottom line and still use the AI recommendation and improve outcomes,” he says.

Systems that don’t pass regulatory muster

Sanjay Srivastava, chief digital strategist at Genpact, worked with a large global financial services company that was looking to use AI to improve its lending decisions.

A bank isn't supposed to use certain criteria, such as age or gender, when making lending decisions. But simply removing age and gender data points from the AI training data isn't enough, says Srivastava, because the data might contain other fields that are correlated with age or gender.

“The training data set they used had a lot of correlations,” he says. “That exposed them to a larger footprint of risk than they had planned.”

The bank wound up having to go back to the training data set to track down and remove all those correlated data points, a process that set the project back several months.
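One way to surface that risk early is to screen the training data for proxy features before removing the obvious ones and calling it done. The sketch below assumes a pandas DataFrame with a gender column and a hand-picked correlation threshold, both illustrative details rather than anything from this engagement, and flags numeric features that correlate strongly with a protected attribute.

```python
import pandas as pd

def find_proxy_features(df: pd.DataFrame, protected_col: str,
                        threshold: float = 0.3) -> pd.Series:
    """Return numeric features whose absolute correlation with the
    protected attribute exceeds the threshold."""
    # Encode the protected attribute numerically (e.g., gender as 0/1).
    protected = pd.Series(pd.factorize(df[protected_col])[0], index=df.index)
    candidates = df.drop(columns=[protected_col]).select_dtypes("number")
    correlations = candidates.corrwith(protected).abs()
    return correlations[correlations > threshold].sort_values(ascending=False)

# Illustrative usage (column names are hypothetical):
# proxies = find_proxy_features(loan_training_df, protected_col="gender")
# print(proxies)  # e.g., years_of_service or job_title_code may show up
```

Correlation screening like this is only a first pass; deciding whether a flagged field is a legitimate signal or a proxy is exactly where subject matter experts come in.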

The lesson here was to make sure that the team building the system isn’t just data scientists, he says, but also includes a diverse set of subject matter experts. “Never do an AI project with data scientists alone,” he says.

Healthcare is another industry in which failing to meet regulatory requirements can send an entire project back to the starting gate. That’s what happened to a global pharmaceutical company working on a COVID vaccine.

“A lot of pharmaceutical companies used AI to find solutions faster,” says Mario Schlener, global financial services risk leader at Ernst & Young. One company made some good progress in building algorithms, he says. “But because of a lack of governance surrounding their algorithm development process, it made the development obsolete.”

And because the company couldn't explain to regulators how the algorithms worked, it wound up losing nine months of work during the peak of the pandemic.

GDPR fines

The EU General Data Protection Regulation is one of the world’s toughest data protection laws, with fines up to €20 million or 4% of worldwide revenue — whichever is higher. Since the law took effect in 2018, more than 1,100 fines have been issued, and the totals keep going up.

The GDPR and similar regulations emerging across the globe restrict how companies can use or share sensitive private data. And because AI systems require massive amounts of data for training, it's easy to run afoul of data privacy laws when implementing AI without proper governance practices in place.

“Unfortunately, it seems like many organizations have a ‘we’ll add it when we need it’ attitude toward AI governance,” says Mike Loukides, vice president of emerging tech content at O’Reilly Media. “Waiting until you need it is a good way to guarantee that you’re too late.”

The European Union is also working on an AI Act, which would create a new set of regulations specifically for artificial intelligence. The AI Act was first proposed in the spring of 2021 and could be approved as soon as 2023. Noncompliance would carry a range of penalties, including fines of up to 6% of global revenue, even higher than the GDPR's.

Unfixable systems

In April, a self-driving car operated by Cruise, an autonomous car company backed by General Motors, was stopped by police because it was driving without its headlights on. The video of a confused police officer approaching the car and finding that it had no driver quickly went viral.

The car subsequently drove off, then stopped again, allowing the police to catch up. Figuring out why the car did this can be tricky.

“We need to understand how decisions are made in self-driving cars,” says Dan Simion, vice president of AI and analytics at Capgemini. “The car maker needs to be transparent and explain what happened. Transparency and explainability are components of ethical AI.”

Too often, AI systems are inscrutable “black boxes” that provide little insight into how they draw conclusions. As such, finding the source of a problem can be highly difficult, casting doubt on whether it can even be fixed.

“Eventually, I think regulations are going to come, especially when we talk about self-driving cars, but also for autonomous decisions in other industries,” says Simion.

But companies shouldn't wait to build explainability into their AI systems, he says. It's easier and cheaper in the long run to build in explainability from the ground up than to try to tack it on at the end. Plus, there are immediate, practical business reasons to build explainable AI, says Simion.

Beyond the public relations benefits of being able to explain why the AI system did what it did, companies that embrace explainability will also be able to fix problems and streamline processes more easily.

Was the problem in the model, or in its implementation? Was it in the choice of algorithms, or a deficiency in the training data set?
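One modest starting point for answering those questions, sketched below with an assumed scikit-learn model and synthetic data rather than any system mentioned here, is feature attribution: permutation importance measures how much a model's score drops when each input is shuffled, revealing which features the model actually leans on.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real project would use its own features.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Techniques like this don't turn a black box into a fully explainable system, but they give teams concrete evidence for whether a troubling decision traces back to the model's behavior or to the data it was trained on.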

Enterprises that use third-party tools for some or all of their AI systems should also work with their vendors to require explainability from their products.

Employee sentiment risks

When enterprises build AI systems that violate users’ privacy, that are biased, or that do harm to society, it changes how their own employees see them.

Employees want to work at companies that share their values, says Steve Mills, chief AI ethics officer at Boston Consulting Group. “A high number of employees leave their jobs over ethical concerns,” he says. “If you want to attract technical talent, you have to worry about how you’re going to address these issues.”

According to a survey released by Gartner earlier this year, employee attitudes toward work have changed since the start of the pandemic. Nearly two-thirds of respondents have rethought the place work should have in their lives, and more than half said the pandemic has made them question the purpose of their day jobs and want to contribute more to society.

And, last fall, a study by Blue Beyond Consulting and Future Workplace demonstrated the importance of values. According to the survey, 52% of workers would quit their job — and only 1 in 4 would accept one — if company values were not consistent with their values. In addition, 76% said they expect their employer to be a force for good in society.

Companies might start AI ethics programs for regulatory reasons or to avoid bad publicity, but as these programs mature, the motivations change.

“What we’re starting to see is that maybe they don’t start this way, but they land on it being a purpose and values issue,” says Mills. “It becomes a social responsibility issue. A core value of the company.”