The UK government is beginning to craft new legislation to regulate artificial intelligence, months after the prime minister vowed “not to rush” setting up rules for the fast-growing technology.

Such legislation would likely put limits on the production of large language models, the general-purpose technology that underlies AI products such as OpenAI’s ChatGPT, according to two people briefed on the plans. 

The people said neither the scope of the legislation nor its timing had been finalised, and noted that nothing would be introduced imminently. But one of them said it would probably seek to mandate that companies developing the most sophisticated models share their algorithms with the government and provide evidence that they have carried out safety testing.

The plans come as regulators, including the UK competition watchdog, have become increasingly concerned about potential harms. These range from the possibility that the technology could embed biases that affect certain demographics, to the potential use of general-purpose models to create harmful materials.

“Officials are exploring moving on regulation for the most powerful AI models,” said one of the people briefed on the situation, adding that the Department for Science, Innovation and Technology is “developing its thinking” on what AI legislation would look like. 

Another person said the rules would apply to the large language models that sit behind AI products such as ChatGPT, rather than the applications themselves.

Sarah Cardell, the chief executive of the UK’s Competition and Markets Authority, warned last week that she had “real concerns” that a small number of tech companies creating AI foundation models “may have both the ability and the incentive to shape these markets in their own interest”.

The regulator identified an “interconnected web” of more than 90 partnerships and strategic investments involving the same companies: Google, Apple, Microsoft, Meta, Amazon and Nvidia.

The UK has been reluctant to push for legal interventions in the development and rollout of AI models for fear that tough regulation might stymie industry growth. It has instead relied on voluntary agreements with governments and companies, ruling out legislation in the short term.

In November, Viscount Jonathan Camrose, minister for AI, said there would be no UK law on AI “in the short term”. Rishi Sunak, the prime minister, said a month earlier that “the UK’s answer is not to rush to regulate”.

“This is a point of principle; we believe in innovation,” Sunak said in October. “How can we write laws that make sense for something that we don’t yet fully understand?”

The EU has taken a tougher approach. The European parliament last month approved some of the first and strictest rules for regulating the technology through the AI Act.

AI start-ups have criticised the EU’s rules, which they see as overregulation that could hamper innovation. That tough legislation has prompted other countries, such as Canada and the United Arab Emirates, to swoop in and try to tempt some of Europe’s most promising tech companies to relocate.

Until now, the UK has delegated responsibility to existing regulators to clarify which current legislation applies to AI. These watchdogs have been asked to submit papers by the end of this month outlining how they intend to regulate AI in their fields.

Media regulator Ofcom, which has published its approach, is looking at how generative AI can be covered by the Online Safety Act, passed in October, to protect children and adults on the internet. 

Government officials singled out so-called “general purpose” AI models — those that are highly capable and adaptable for use on a wide range of tasks — as likely targets for further legal and regulatory intervention in a recent consultation response.

Tech companies have criticised this approach of targeting models by size. Labels such as “general purpose” or “frontier” models are often used to describe the large language models underpinning products such as ChatGPT or Google’s Gemini.

“At the moment, the regulators are working on a rather crude rule of thumb that, if future models are of a particular size or surpass a particular size . . . that therefore there should be greater disclosure,” Nick Clegg, president of global affairs at Meta, said last Tuesday.

“I don’t think anyone thinks that over time that is the most rational way of going about things,” he added, “because in the future you will have smaller fine-tuned models aimed at particular purposes that could arguably be more worthy of greater scrutiny than very large, hulking, great big all-purpose models that might be less worrisome.”

“As we’ve previously said, all countries will eventually need to introduce some form of AI legislation, but we will not rush to do so until there is a clear understanding of the risks,” a government spokesperson said.

“That’s because it would ultimately result in measures which would quickly become ineffective and outdated.”
