Peter Sayer
Executive Editor, News

Generative AI’s change management challenge

News Analysis
Jun 15, 2023 | 6 mins

Workers are becoming more accepting of AI, but more work remains to be done to allay their concerns, a study by Boston Consulting Group has found.


Despite headlines warning that artificial intelligence poses a profound risk to society, workers are curious, optimistic, and confident about the arrival of AI in the enterprise, and becoming more so with time, according to a recent survey by Boston Consulting Group (BCG).

For many, their feelings are based on sound experience. Although ChatGPT, the poster child for generative AI applications, only launched in November 2022, already 26% of workers say they use generative AI several times a week, while 46% have experimented with it at least once, BCG found.

BCG asked 12,898 frontline employees, managers, and leaders in large organizations around the world how they felt about AI: 61% listed curiosity as one of their two strongest feelings, 52% listed optimism, 30% concern, and 26% confidence. A smaller BCG survey five years ago saw 60% listing curiosity, 35% optimism, 40% concern, and 17% confidence.

A lot has happened since that last survey on attitudes to AI in 2018. Back then, “AI was still emerging and was not something many people used or saw,” said Vinciane Beauchene, BCG’s global leader for talent and skills, on a recent conference call to discuss the survey findings. But after the pandemic shook up the world of work, generative AI has flourished. “It’s the new normal: 80% of leaders claim they are using it on a weekly basis,” she said. “What was striking for me was how ill-equipped companies still are to deal with that.”

Familiarity breeds content

The study further found that the more workers used AI tools, the less they were concerned and more optimistic about their impact. Just 22% of regular AI users and 27% of rare users said they were concerned, compared with 42% of non-users. On the other hand, 36% of non-users said they were optimistic about AI, compared with 55% of rare users and 62% of regular users.

Enterprises can help their employees make that transition from concern to optimism, according to Nicolas de Bellefonds, global leader of AI at BCG X, the company’s technology build and design unit.

“What we’ve seen—and we have been helping companies make the most of AI for the past eight years now—is that AI acceptance by workers is directly linked to their understanding of how it will improve and augment their job,” he said.

There’s a lot of work still to be done on that score. So far, just 14% of frontline employees and 44% of leaders say they’ve received training on AI—but 86% of survey respondents believe they’ll need it.

“This is a massive number,” Bellefonds said. “We really have to address this upskilling issue.”

Change management

Helping workers understand what AI can do for them will be tricky, though, as they want to feel that AI is augmenting their work, and not just replacing the enjoyable parts, BCG found.

“The hardest part of AI acceptance is creating a space where employees can still add value and not feel they are competing with AI to create value,” Bellefonds added. “A lot of the work we do when it comes to change management and coaching is to help employees work with AI and at the same time, change the way they add value, so that a part of their job is taken by AI but their part refocuses on higher value-adding tasks.”

Exactly how those processes are rewired and the working methods changed will vary from one enterprise to another, he said.

There are other ways in which employees’ concerns about AI are unevenly distributed, too. Leaders are more likely to be optimistic, and frontline workers concerned, BCG found. And while 68% of leaders believe their companies have implemented adequate measures to ensure responsible use of AI, only 29% of their frontline employees feel that way.

Despite BCG’s findings of optimism in the workforce, there’s a darker side. Over one-third of respondents think their job is likely to be eliminated by AI, and almost four-fifths want governments to step in and deliver AI-specific regulations to ensure it’s used responsibly. That proportion was highest in India (89%), Spain (88%), Italy (84%), Brazil and France (both 83%), and lowest in Japan (64%), Germany (73%), the US (74%), and the Middle East and the Netherlands (both 76%).

Five-step program

While we wait for regulators to figure out what we can and can’t do, BCG’s chief AI ethics officer Steven Mills has some suggestions for CIOs on how to introduce generative AI into the workplace safely.

Because familiarity with generative AI is a key factor in its successful adoption, employees must get a chance to test it for themselves.

“It’s important folks get a chance to interact with these technologies and use them; stopping experimentation is not the answer,” Mills said, noting that it’s also not practical. “AI is going to be developed across an organization by employees whether you know about it or not,” he added.

Instead, he suggested, “rather than trying to pretend it won’t happen, let’s put in place a quick set of guidelines that lets your employees know where the guardrails are, what they can and can’t do, and actually encourage responsible innovation and responsible experimentation.”

Investing in upskilling—continuous, not one-off—will also help, especially among frontline employees, where familiarity with the technology is lower. “That disparity across levels is going to be really important for companies to understand as they think about this continuous change journey they’re on … to give them the right skills to be successful as an organization,” he said.

Finally, he advised enterprises to build a responsible AI program to reassure employees that generative AI is being used ethically.

In the abstract, there are five key pieces to such a program, Mills said: the overall principles setting out the enterprise’s strategy and risk tolerance; the governance structure setting out the organization and escalation paths; the processes for integrating AI into product development; the tooling needed to do all this; and the drive for cultural change.

Concretely, there are some immediate actions enterprises can take, he said, including handing responsibility and accountability for the program to a senior executive; ensuring that person and his or her team have the funding and resources to build the program; and quickly putting in place an initial set of guardrails and making sure they’re followed.

That, he said, means “having an agile review process that teams experimenting with AI can work with and reach out to, to get their questions answered.”