Feature

Why Enterprise AI Needs Human Intervention

David Roe
In the wake of the Pegasus spyware controversy, there were calls from many quarters for deeper human control of AI. Is it necessary?

In the aftermath of the Pegasus spyware revelations, there were calls from all sectors of society for vendors to better regulate and control their software development. The reactions were not really surprising: suspicion of technology and what it can do has been well documented for years.

The Problem With AI

The Chapman University Survey of American Fears Wave 7 for 2020-2021, for example, which was based on a random sample of 1,035 adults across the United States taken in January of this year, ranked cyber-terrorism 8th in a list of 95 fears, with government tracking of personal data in 16th place. Corrupt government officials scared people the most, ahead of COVID-19, according to a majority of those surveyed. While technology was not mentioned directly in the list of fears, nearly 28% of respondents cited computers replacing people in the workforce as a major fear. That said, how scared people really are of these things can perhaps best be judged by the fact that nearly 10% appear to be scared of zombies.

Even so, the fear that people will lose their jobs to AI has already been widely discussed, and the fact that UN human rights chief Michelle Bachelet recently called for moratoriums on the sale and use of artificial intelligence (AI) systems until adequate safeguards are put in place shows how serious concerns about AI are. This also extends, of course, to other technologies built using AI, such as deep learning platforms (DLP), robotic process automation (RPA), text analytics and natural language processing (NLP). Bachelet warned, “Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights.”

So how should enterprises respond, or is there even any need to respond given the limited environments in which they use AI? Recent research from New York City-based Bright Data on the use of bots in the enterprise indicates that, far from resisting calls for tight regulation of AI and its use, many technology and financial services decision-makers have shown an appetite for tighter external regulation of bot use. The research reveals that most US and UK organizations that use bots have developed clear guidelines to ensure they are used responsibly. In the US, 48% of those surveyed say they have guidelines in place to moderate all uses of bots, while another 48% say they have guidelines relating to some uses of bots. In the UK, these figures are 57% and 40% respectively.

The research evaluated respondents' attitudes towards bot regulation. Overall, a slim majority of respondents are satisfied with the current level of regulation related to bot use — 47% of those in US organizations and 60% of those in the UK. Meanwhile, 45% of US organizations and 33% of UK organizations say they actively want to see increased external regulation of bots.

Related Article: 4 Reasons Why Explainable AI Is the Future of AI

Where Are Bots Being Used

The research also revealed the most common uses of bots in corporate environments. Customer service topped the list, with 76% of organizations using bots to deal with customer queries and feedback.

Of organizations which use bots to retrieve data insights, 66% report occasionally using an external provider, whilst 8% of surveyed IT leaders report that their organization does not outsource operations carried out by bots to third parties.

The other common uses of bots revealed in the survey include cybersecurity (51%), the automation of backend tasks (35%), automated trading (23%) and social media engagement (22%).

Related Article: Make Responsible AI Part of Your Company's DNA

Why AI Needs Human Intervention

Chris Bergh, CEO of Cambridge, Mass.-based DataKitchen, a DataOps consultancy and platform provider, argues that AI needs human oversight because the humans who created AI need oversight from other humans, and those humans will also need oversight. The reality is that no matter how automated AI systems are, or what sources they derive data from, all aspects of AI were designed by humans. Engineers unleashed AI bias, and it will be engineers who design the solutions that eliminate it.

AI bias is probably one of the biggest problems enterprises face in terms of AI advancement. The industry can adopt a proactive, process-oriented approach to addressing AI bias: when the work processes for creating and monitoring analytics contain built-in controls against bias, data-analytics organizations will no longer be dependent on individual social awareness or heroism.

The present problem is that algorithms can absorb and perpetuate racial, gender, ethnic and other social inequalities. In one example, Amazon developers disclosed that an AI model designed to screen job candidates favored men over women. The algorithm had been trained on a database of the company's engineering hires over a 10-year period.

Since the training data contained a majority of male developers, the AI model taught itself that men were preferable and downgraded references such as “women's team captain” or mentions of an all-female educational institution on a resume. “If Amazon had not recognized the problem, the AI algorithm might have been deployed on a large scale, further perpetuating existing gender biases,” he said.
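
To make the mechanism concrete, below is a minimal, purely illustrative sketch of how a screening model trained on imbalanced historical hires can end up penalizing terms associated with women. The resume data, the word-frequency scoring and every name in it are hypothetical assumptions made for illustration only, not a reconstruction of Amazon's actual system.

```python
# Minimal, hypothetical sketch: how imbalanced training data produces biased scores.
from collections import Counter

# Hypothetical historical training data: resumes of past hires, overwhelmingly male.
historical_hires = [
    "software engineer men's chess club captain",
    "backend developer men's soccer team",
    "systems engineer robotics club",
    "software engineer distributed systems",
    "data engineer men's debate society",
    "software engineer women's team captain",  # the only hire with a female-associated term
]

# "Training": count how often each word appears among successful past hires.
word_counts = Counter(word for resume in historical_hires for word in resume.split())
total = sum(word_counts.values())

def score(resume: str) -> float:
    """Score a resume by the average frequency of its words among past hires."""
    words = resume.split()
    return sum(word_counts[w] / total for w in words) / len(words)

# Two otherwise identical candidates: the female-associated term is rarer in the
# training data, so it drags the score down. The bias comes from the data, not the code.
print(score("software engineer men's chess club captain"))
print(score("software engineer women's chess club captain"))
```

Run on this toy data, the resume containing "men's" scores higher simply because that token is more common among past hires, which is exactly the kind of learned pattern that human oversight needs to catch before a model is deployed.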

Related Article: What Is Explainable AI (XAI)?

Building on AI’s Positive Promise 

But there are many who believe that the positive aspects of AI should be the focus, rather than issues around control. Anthony Habayeb, founding CEO of Boston-based Monitaur, argues that we should not be talking about how to “rein in AI,” but rather passionately focusing on how we can accelerate AI’s positive promise and limit its detrimental effects through the intentional enablement of transparency and assurances.

Broadly, and across sectors, the types of systems actually making the impactful decisions Bachelet is referring to are machine learning applications that can be instrumented with clear means of auditability and understanding. “We can build evidence of thoughtful design, data use, and development as well as implement controls watching for biases or adverse events to mitigate the risks or harm from these systems,” he said.

AI is deceptively complex. Consumers, business leaders, and regulators can all get overwhelmed by the techno-speak that surrounds AI and choose to disengage, but we need people — not just technologists — to do the opposite. We need more people to engage in how these systems are making decisions, not fewer. “You don’t have to have a degree in data science to ask the right kinds of questions,” he adds. “The kind of questions business leaders should be asking include: Why is AI the right choice to solve this problem? What can it do that is unique to the circumstances? How do we know it’s making the best decisions? How will we know when it’s not acting in our best interests, both as individuals and as a society? How can we test it and know that it is?” 

Related Article: Ethics and Transparency: How We Can Reach Trusted AI

Create Meaningful AI Oversight

Enterprises and AI providers still have a long way to go before AI systems can be trained and applied without the meaningful human oversight required to mitigate bias and prevent AI harm, said Doug Gilbert, CIO and chief digital officer at Rochester, New York-based Sutherland. Companies may get a false sense of comfort from focusing on a single aspect of oversight, such as reviewing training data to remove sampling bias or implementing human-in-the-loop (HITL) models. However, applying human oversight to only a single dimension of the AI development cycle has proven inadequate for preventing undesired outcomes. Meaningful human oversight should therefore be applied across three critical areas:

  • Oversight of the underlying training data, as well as the training environment, to prevent bias from skewing training data.
  • Oversight of the rules governing the application of AI systems, to ensure they are applied to use cases that match the training data and environment they were trained on.
  • Oversight of the outcomes of the AI system through continuous regression testing, which may help expose faults in underlying training conditions and bias in the data (a minimal sketch of such a check follows this list).
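
To illustrate the third point, the sketch below shows one way a recurring regression test on model outcomes might look. The demographic-parity metric, the 0.10 tolerance, the logged decision records and all of the names are assumptions made for this example; a real oversight program would choose its own metrics and thresholds with domain experts.

```python
# Minimal, hypothetical sketch: a recurring regression test on deployed-model outcomes,
# checking that positive decisions stay comparably distributed across groups.

def approval_rate(outcomes: list[dict], group: str) -> float:
    """Share of positive decisions the deployed model produced for one group."""
    decisions = [o["approved"] for o in outcomes if o["group"] == group]
    return sum(decisions) / len(decisions)

def parity_gap(outcomes: list[dict], group_a: str, group_b: str) -> float:
    """Absolute difference in approval rates between two groups (demographic-parity gap)."""
    return abs(approval_rate(outcomes, group_a) - approval_rate(outcomes, group_b))

# Hypothetical batch of recent production decisions logged for review.
recent_outcomes = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

MAX_GAP = 0.10  # illustrative tolerance, not a recommended value

gap = parity_gap(recent_outcomes, "A", "B")
if gap > MAX_GAP:
    # In practice this would alert the people doing the oversight, not just print.
    print(f"Bias regression check FAILED: parity gap {gap:.2f} exceeds {MAX_GAP:.2f}")
else:
    print(f"Bias regression check passed: parity gap {gap:.2f}")
```

Run on the hypothetical batch above, the gap between the two groups is roughly 0.33, so the check fails and the model would be flagged for human review; scheduling a check like this against each new batch of outcomes is what turns oversight of outcomes into a continuous process rather than a one-off audit.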

“As proven in various cases, the impact that an AI system can have in a remarkably short period of time is staggering, both good and bad,” he said. “Therefore, companies should invest in human oversight to help prevent the bad from occurring as much as possible.”

About the Author

David Roe

David is a full-time journalist based in Paris, who spends his time working between Ireland, the UK and France. A partisan of ‘green’ living and conservation, he is particularly interested in information management and how enterprise content management, analytics, big data and cloud computing impact it.

Main image: Unsplash/Jose Aragones