The Orb Will See You Now
Sam Altman's audacious startup wants to prove you're human. What else does it want?
A couple of months ago I was in San Francisco, visiting the headquarters of a Sam Altman-backed startup called Tools for Humanity.
The company was gearing up for a big moment: the U.S. launch of a mysterious piece of hardware known as the Orb.
Weeks later, Altman appeared on stage to announce the news to a cheering crowd. After four years of testing abroad, 7,500 Orbs would be deployed across the U.S. over the next 12 months, the company said.
Their purpose? To “verify” your humanity.
Altman co-founded Tools for Humanity back in 2019 on a simple premise: one day, AI would become advanced enough to pass the Turing Test, and when that happened, you would no longer be able to tell whether what you read, heard, or watched online was created by a human or by an AI.
Now, that day is upon us. Last week, Google released Veo 3, a text-to-video model that can generate realistic videos from thin air. Similar tools have existed for images for a while now. And chatbots like ChatGPT are filling the internet with synthetic text — text which can be engineered to be engaging, enraging, or persuasive. (A recent study found that AI-generated comments on /r/ChangeMyView were up to 6x more successful at getting users to change their minds than human-written ones.) Soon, autonomous “AI Agents” will descend upon the internet too. Our information environment, in other words, is collapsing before our eyes.
Back in 2019, Altman imagined the internet would soon need a proof of humanity layer — almost like a more advanced captcha. You’d go to an Orb, prove to it that you’re a living human in the physical world, and then receive a digital ID that you could use to gain access to online spaces. Bots would thus be held at bay even as they became more advanced. However, such a system would only work if lots of us were on it; to incentivize signups, Altman reasoned, the company could distribute free crypto to new users. Maybe one day that financial/identity network could even be used to distribute a universal basic income — a stipend for workers displaced from their jobs by super-powerful AI.
I have spent the last few months reporting on this little-known Altman side project as it prepares to hit the mainstream. Unlike much other coverage of the company, I took its premise at face value: that the internet is in for a massive shock thanks to so-called “agentic” AI.
But I also wanted to know what kind of power Tools for Humanity might concentrate in the hands of Altman — the man perhaps most responsible for bringing the internet to this inflection point, who is now also selling the world its solution. My story, which is on the cover of TIME this week, makes the case that the internet may well soon need something like the Orb. But it also interrogates Tools for Humanity’s privacy practices, its crypto side, its promise of decentralization, and how Altman’s habit of moving the goalposts is already visible in this project, too.
I hope you’ll read it!
(Big shout out to the Tarbell Center for AI Journalism for supporting this story with a grant.)
What else I’ve written
Exclusive: New Claude Model Triggers Stricter Safeguards at Anthropic
On Thursday, Anthropic launched Claude Opus 4, a new model that, in internal testing, performed more effectively than prior models at advising novices on how to produce biological weapons, says Jared Kaplan, Anthropic’s chief scientist. “You could try to synthesize something like COVID or a more dangerous version of the flu—and basically, our modeling suggests that this might be possible,” Kaplan says. Accordingly, Claude Opus 4 is being released under stricter safety measures than any prior Anthropic model.
With Letter to Trump, Evangelical Leaders Join the AI Debate
The world’s biggest tech companies now all believe that it is possible to create so-called “artificial general intelligence”—a form of AI that can do any task better than a human expert. Some researchers have even invoked this technology in religious terms—for example, OpenAI’s former chief scientist Ilya Sutskever, a mystical figure who famously encouraged colleagues to chant “feel the AGI” at company gatherings. The emerging possibility of AGI presents, in one sense, a profound challenge to many theologies. If we are in a universe where a God-like machine is possible, what space does that leave for God himself?
What I’ve read
Everyone Is Cheating Their Way Through College
This story went viral a few weeks ago for very good reason — just a devastating portrait of how generative AI has totally upended education. I particularly liked that the story doesn’t just blame chatbots; plenty of blame also goes to the commodification of education that began long before AI became a viable cheating tool.
If Anthropic Succeeds, a Nation of Benevolent AI Geniuses Could Be Born
Steven Levy is the straight-up GOAT of tech journalism. I have written my own version of this profile, but I’m happy to concede that Steven’s blows it out of the water. Just a piece of narrative art from start to finish.
3 Teens Almost Got Away With Murder. Then Police Found Their Google Searches
Another incredible piece of writing. I love how this story balances compassion for the victims of a terrible crime with the bigger question of what the methods used to investigate that crime might mean for civil liberties as a whole.