I just returned from Paris, where I attended the third global summit on artificial intelligence.
It was a farce.
Before the summit had even started, the French organizers dismissed talk of AI risks as “science fiction.” Emmanuel Macron used his speech at the Grand Palais to advertise French AI companies and claimed that France was “back in the AI race.” AI scientists working on safety research were relegated to speaking at unofficial side events. By the time JD Vance took to the stage on the final day, telling global leaders that AI safety was woke nonsense, the tone of the event had already been set:
Don’t look up.
Above our heads, AI capabilities have accelerated at a blistering rate in recent months. For example: OpenAI just launched an ‘agent’ tool called Deep Research, which (according to many people who’ve used it) can do research tasks at roughly the level of a smart grad student, for only $200 per month. And as these AI systems get more general and powerful, safety researchers are also making worrying new discoveries about the difficulty of “aligning” them — in other words, of making sure they’re controllable and safe. Recent research has shown that advanced new AI models can attempt to deceive their creators, or to escape entirely, if they believe they’re about to be switched off or modified. Nobody knows how to prevent those behaviors yet, or whether doing so for more powerful future models is even possible. These findings are experimental results, but they raise huge red flags for the next generation of even larger models, which are set to arrive in “weeks or months.”
It didn’t have to be this way. The first AI Safety Summit, at Bletchley Park in the UK in 2023, was (for all its flaws) a watershed moment for AI governance. It looked like world leaders — even if they couldn’t reach an agreement immediately — at least recognized that AI could go badly wrong, and that global cooperation would be needed to stop that from happening. At the second summit, in Seoul last year, leaders pledged to arrive in Paris with a set of ‘red lines’ which, if crossed, would demand further action. There was no attempt in Paris to follow through on that commitment. “It almost felt like they were trying to undo Bletchley,” Max Tegmark, an MIT professor and the president of the Future of Life Institute, an AI safety nonprofit, told me and my colleague Tharin on the sidelines of the summit.
Even the AI companies themselves, for all their incentives to race ahead, kept their side of the bargain made in Seoul: they came to Paris with the detailed risk mitigation frameworks they’d agreed to draw up. Anthropic’s policy, for example, commits the company to building sufficient safety mechanisms before training new models. I heard policy folks from several companies remark, at various side events, that they wished governments would step in to make these voluntary frameworks legally binding, to prevent companies from abandoning them in pursuit of profit. But in Paris, governments abdicated that responsibility.
This isn’t to give AI companies a pass. Most of them are following a deeply cynical playbook. They call for regulation in broad strokes, but at the same time they argue AGI will be so geopolitically transformative, and paint such detailed pictures of AI’s economic upside, that governments now have severe FOMO about any path except racing ahead. And when governments do propose specific regulations, like California’s SB 1047 or the EU’s AI Act, the tech companies lobby against them. AI companies may pay lip service to the risks, but they are racing ahead to build bigger systems anyway. Still, the Bletchley and Seoul summits had suggested that states might attempt to build some kind of international AI governance regime, one that could avert the worst kinds of races to the bottom. After Paris, that looks much less likely.
I covered the business of social media for many years before becoming a full-time AI reporter. The lesson I took from the upheaval of the late 2010s was this: it’s irresponsible to release technology into the world before we fully understand how it works. I was under the impression that there was a consensus, among governments and society at least, that the next time a new technology arrived, we should think more carefully about how to mitigate its harms before sanctioning its release.
Turns out I was wrong.
Don’t look up!
Here’s my news report from the event, for those interested.
Hardly anyone knows about all this. It should be debated everywhere, at every level.
Ordinary people are left in the dark, while political leaders make decisions with less knowledge and understanding of AI and its dangers than a 12-year-old.
This is grave.