Inside the U.K.'s AI Safety Institute
It's the leading effort by any government to test AI systems for dangers. But its existence, for now at least, depends on the very companies it seeks to hold to account.
Talk to anyone in the world of AI safety, and they will probably agree: the most advanced government body for testing AI dangers is located not in Washington or Beijing, but in Westminster, London.
In the months following the release of ChatGPT, the U.K.’s then Prime Minister, Rishi Sunak, saw an opening. Governments were growing increasingly worried about the dangers posed by advanced AI. Yet scientific work on AI safety was being carried out primarily inside the very tech companies that also had a large financial incentive to release their AI systems quickly into the world. Sunak lobbed £100 million at the problem. The result: the U.K.’s AI Safety Institute, a new government body that, despite its youth, has already racked up a considerable list of achievements.
I spent a lot of last year talking to insiders at the U.K. AISI, as it’s known. What emerged was a story that we published today in TIME. The piece goes behind the scenes at 10 Downing Street to tell the story of the remarkable act of political will that resulted in the world’s first AI safety institute. It looks at the AISI’s impressive track record so far, which includes (the piece reveals) pre-deployment testing of Google’s frontier Gemini Ultra model, as well as OpenAI’s o1 and Anthropic’s Claude.
But my piece also examines a crucial tension at the heart of the endeavor. The U.K. may have successfully negotiated access to these AI models, allowing safety tests to proceed. But that access is provided voluntarily by the AI companies. So can the AISI really hold them to account? This is a question that the U.K.’s new Labour government will have to grapple with. Before the election, Labour promised to enact “binding regulation” on AI companies. But no government wants to drive away such a lucrative industry. Indeed, the new Prime Minister, Keir Starmer, tasked with reviving the U.K.’s sluggish economy, recently promised that AI will be “mainlined into the veins” of the nation.
As AI systems grow more powerful by the month, the stakes of this experiment in government oversight have never been higher. The AISI's success, or its failure, could set the template for how democracies handle what may be the most transformative technology in human history.
Here’s the story: