In the summer of 2022, before the release of ChatGPT, a little-known startup called Anthropic had its own powerful chatbot: Claude.
Dario Amodei, Anthropic’s CEO, was in awe of Claude’s capabilities, but elected not to release the AI publicly, fearful of the consequences. The decision probably cost his company billions of dollars. But he doesn’t regret it.
So begins my latest story for TIME, a deeply reported profile of Anthropic: simultaneously the smallest of the companies building “frontier” AI systems, the youngest, and the most expressly committed to safety. For the story, I spent three days earlier this month at Anthropic’s headquarters in San Francisco, interviewing Amodei and other senior executives and researchers.
If you’ve never heard of Anthropic before, the story has your back. And if you’re a seasoned veteran of the AI industry, there should be some interesting tidbits for you as well.
Read it in TIME: Inside Anthropic, the AI Company Betting That Safety Can Be a Winning Strategy
Among other questions, I tried to grapple with this one: what happens if (or when) Anthropic’s money runs out? The cost to train the next frontier systems is estimated to be around $1 billion; the ones that come after that will likely be far costlier. Anthropic has raised $7 billion in the last year, mostly from Amazon and Google, part of a trend in which big tech companies bankroll smaller AI labs, which in turn spend most of that money on big tech’s cloud computing power. But those big tech companies are now training frontier models of their own, potentially reducing the long-term viability of smaller labs like Anthropic. Anthropic has talent, algorithmic secrets, and a reputation for safety that the big tech companies don’t have. But to remain at the frontier (and keep doing the safety research Amodei sees as essential), it will one day need to raise more money. That might come with difficult trade-offs for its safety-first mission.
We also published a companion piece alongside the cover story, which dives deep into Anthropic’s weird corporate structure. If you’re a nerd, you’ll remember mid-November of last year as I do: a disorienting swirl of events when OpenAI’s board fired CEO Sam Altman, then, after five days of chaos, rehired him. The firing was only possible thanks to OpenAI’s strange corporate structure, in which its directors have no fiduciary duty to increase profits for shareholders. Altman himself had helped design that structure so OpenAI could build powerful AI safe from what he (at least then) saw as dangerous market incentives. But Altman’s rehiring, after a pressure campaign by Microsoft and other powerful investors, seemed to show that this corporate structure had badly failed.
Read it in TIME: How Anthropic Designed Itself to Avoid OpenAI’s Mistakes
Anthropic has an unorthodox corporate structure too. But it’s very different from OpenAI’s. In fact, all of Anthropic’s seven co-founders are former OpenAI staffers, who left after a breakdown of trust with Altman (according to one source who spoke to me for my cover story; Amodei declined to comment on the claim). They purposely structured Anthropic differently, in a way they believe makes the company far more resistant to an OpenAI-style blowup.
But does the structure guarantee that AI will be developed safely and for the benefit of everyone? On that, the jury’s out. As Daniel Colson, head of the AI Policy Institute, told me: “It seems like Anthropic did a good job” on its structure. “But are these governance structures sufficient for the development of AGI? My strong sense is definitely no—they are extremely illegitimate.” Only government regulation, he says, can achieve that legitimacy.
Thanks for reading, and sorry for the long gap between this and my last newsletter. I warned you that I’d only send these when I had something interesting to share, and this project took a while!