Peter Sayer
Executive Editor, News

AI Safety Summit: What to expect as global leaders eye AI regulation

News Analysis
Oct 30, 2023 | 8 mins
Artificial Intelligence | Generative AI | Government

The UK government has grand ambitions for the event this week, but its place among other similar summits and the absence of key heads of state may undermine its main focus of addressing urgent AI risks.

The AI Safety Summit, convened by the UK government, is the latest in a series of regional and global political initiatives to shape the role AI will play in society.

Prime Minister Rishi Sunak sees the summit as an opportunity for the UK, sidelined since its departure from the European Union, to create a role for itself alongside the US, China, and the EU in defining the future of AI.

The summit, on November 1-2, is to consider the risks posed by AI, especially “frontier” AI models such as the more advanced examples of generative AI. Its goals are to convince people of the need to take action to reduce risks; to identify measures organizations should take to increase AI safety; and to agree on processes for international collaboration on AI safety, including on research and governance standards.

If Sunak’s ambitions are realized, then the summit could lead to requirements for enterprises to take more precautions in their deployment of advanced AI technologies, and limitations on the development of such tools by software vendors.

At the same time, there already exist many regulations — guaranteeing privacy, for example, or prohibiting discrimination — that implicitly impose limits on what enterprises can or should do with AI or any other technology.

What is frontier AI?

Frontier AI, as defined by the UK government, refers to highly capable general-purpose AI models that can perform a wide variety of tasks at a level that matches or exceeds the capabilities of the most advanced models available today.

Today’s frontier AI includes foundation models using transformer architectures such as GPT-4, its rivals and successors — although as the technology advances, views on what constitutes the frontier are likely to move, too.

Enterprises such as Unilever are already using GPT to deliver business value, although rarely in business-critical situations and almost always only to recommend courses of action for an employee to review and approve — the so-called “human in the loop” approach.
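In practice, that pattern can be as simple as gating any model-generated suggestion behind an explicit approval step. The minimal Python sketch below illustrates the idea; suggest_action and execute are hypothetical placeholders for a model call and a downstream business action, not any particular vendor's API.

```python
# Minimal "human in the loop" sketch: the model only recommends an action;
# a person must approve it before anything is carried out.
# suggest_action() and execute() are hypothetical placeholders.

def suggest_action(ticket: str) -> str:
    # In a real deployment this would call a large language model and
    # return its recommended course of action for the given case.
    return f"Proposed response to: {ticket!r}"

def execute(action: str) -> None:
    # Stand-in for whatever business system would act on the decision.
    print(f"Executing approved action: {action}")

def handle(ticket: str) -> None:
    action = suggest_action(ticket)
    print(f"AI recommendation: {action}")
    decision = input("Approve this action? [y/N] ").strip().lower()
    if decision == "y":
        execute(action)  # runs only with explicit human sign-off
    else:
        print("Rejected; no automated action taken.")

if __name__ == "__main__":
    handle("Customer reports delayed shipment on order 1234")
```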

Why is frontier AI considered unsafe?

Frontier AI models may take significant computing and financial resources to train, but once that’s done, they can be deployed to, or accessed from, almost anywhere for relatively little cost.

All new technologies come with a range of risks and benefits, but people are particularly concerned about the safety of frontier AI technologies because of the speed and scale at which their impact could be felt, especially if they’re left to function autonomously without human supervision or intervention.

The potential risks identified by the UK government include threats to biosecurity, cybersecurity, and election fairness, as well as the potential loss of control over the development and operation of the foundation AI models themselves. There’s also the possibility of “unknown unknowns” arising from unpredictable leaps in the capabilities of frontier AI models as they develop.

How might the AI Safety Summit change things?

Behind the references to biosecurity and cybersecurity on the summit agenda are fears that super-powered AI could facilitate or accelerate the development of lethal bioweapons or cyberattacks that bring down the global internet, posing an existential risk to humanity as a whole, or to modern civilization.

There’s also the alignment problem to contend with: whether an AI system will pursue its programmers’ intended goals, or follow its instructions to the letter, ignoring implicit moral considerations such as the need not to harm humans. A classic thought experiment illustrating this is to consider just how far an AI system might go if given the narrow goal of optimizing the output of a factory, making paper clips, for example, and pursuing it to the exclusion of all else.

The threat of something like this happening has prompted much letter-writing and hand-wringing, and even a few street protests around the world, such as those by Pause AI, which is calling for a global halt to the training of general AI systems more powerful than GPT-4 until the alignment problem is provably solved.

While creating such threats is probably not something AI developers intend, unconstrained enhancement of AI capabilities could make it possible for bad actors to misuse these systems or, if the alignment problem isn’t solved, for their use to have unintended side-effects. That’s why learning to better forecast unpredictable leaps in AI capability, and keeping AI under human control and oversight, are also on the summit agenda.

But there’s a danger, say some observers, that by focusing on the unlikely but existential risks to civilization that frontier AI may pose, longstanding concerns about algorithmic bias, fairness, transparency, and accountability will be pushed to the fringe.

What to do about those risks, both existential and everyday, is less clear.

The UK government’s first suggestion is “responsible capability scaling”: asking industry to set its own risk thresholds, assess the threats its models pose, choose less risky development paths, and specify in advance what it will do if something goes wrong.

At a national level, the UK government is suggesting it and other countries monitor what enterprises are up to, and perhaps require enterprises to obtain a license for some AI activities.

As for international collaboration and regulation, more research is needed, the UK government says. It’s inviting other countries to discuss how they can work together to identify the most urgent areas for research, and where promising ideas are already emerging.

Who is attending the AI Safety Summit?

When the UK government first announced the summit, its intention was to include “country leaders” from the world’s largest economies, alongside academics and representatives of tech companies leading AI development, with a view to setting a new global regulatory agenda.

A week or two before the summit, though, reports emerged that leaders of several countries with strong AI industries were unlikely to attend, raising doubts about how effective the summit will be.

French President Emmanuel Macron will not be there, and German Chancellor Olaf Scholz is unlikely to show up either, European political news site Politico.eu reported. US President Joe Biden will not attend either, although Vice President Kamala Harris may.

While some of the European Union’s biggest member states are disengaging from the summit, the bloc as a whole will be well-represented. European Commission President Ursula von der Leyen will be there and, according to her official engagement calendar, she plans to meet Secretary-General of the United Nations António Guterres at the event.

Meanwhile, European Commission Vice-President Věra Jourová’s calendar indicates she’ll meet South Korean Minister of Science and ICT Lee Jong-ho there.

Google DeepMind CEO Demis Hassabis is expected to be among the 100 or so attendees — a safe bet since the company was founded in London and maintains its headquarters there.

The UK government has been playing up the recent decisions of a number of other AI companies to open offices in London, including ChatGPT developer OpenAI and Anthropic, whose CEO Dario Amodei is reportedly also attending. Palantir Technologies, too, has announced plans to move its European headquarters to the UK, and is said to be sending a representative to the event. A Microsoft representative will also reportedly attend, although not its CEO.

Where else are AI directions being set?

The UK’s AI Safety Summit is far from the only place that governments and enterprises are attempting to influence AI policy and development.

One of the first big attempts at a commitment to ethical AI in the enterprise was the Rome Call. In 2020, Microsoft and IBM signed on to a non-denominational initiative of the Vatican to promote six principles of AI development: transparency, inclusion, responsibility, impartiality, reliability, and security/privacy.

Since then, legislative, regulatory, industry, and civil society initiatives have multiplied. The European Union’s all-encompassing Artificial Intelligence Act seemed ahead of its time and full of good intentions, but has drawn criticism and calls for stronger action from civil society groups, including Statewatch and service workers’ union Uni Europa.

Also, the White House has secured voluntary commitments to AI safety standards from seven of the largest AI developers, the Cyberspace Administration of China has issued regulations on generative AI training, and New York City has set rules on the use of AI in hiring.

Even the United Nations Security Council has been debating the issue.

Software developers are joining in, too. The Frontier Model Forum is the industry’s attempt to get ahead of state or international controls by demonstrating that its members — including Microsoft, Google, Anthropic, and OpenAI — can be good global citizens through self-regulation.

All this activity puts the UK AI Safety Summit in a highly competitive environment, with legislators trying on the one hand to create a safe environment for their citizens, free from the menace of opaque automated discrimination or even, if the most alarmist critics are to be believed, global extinction, and on the other hand to allow businesses to innovate and benefit from the productivity gains that AI may enable.

Who gets to set those regulations, and who will have to abide by them, is unlikely to be decided any time soon, much less this week.