I just got back from Dubai, where at a glitzy event on Sunday night, TIME bestowed “impact awards” recognizing the influence of four individuals on the world of AI.
It was a nice surprise to see two of my covers blown up and displayed in the lobby of Dubai’s Museum of the Future, where the event was held. But that’s not what I want to discuss today.

What I actually want to talk about is one of the night’s awardees: Meta’s chief AI scientist Yann LeCun, whom I interviewed ahead of the event. (More to come on that shortly.) Often referred to as one of the three “Godfathers of AI,” LeCun helped make some of the algorithmic breakthroughs in the 1980s and 1990s that laid the groundwork for today’s neural networks. And under his spiritual leadership, Meta has committed itself to an open-source approach to AI, setting itself radically apart from the other “frontier” AI companies.
Meta’s competitors (like OpenAI and Google) keep their LLMs behind online interfaces that let users interact with them but not meaningfully tweak their inner workings. Meta is different. By open-sourcing models like the capable Llama 2, it has won a legion of fans in the developer community. For CEO Mark Zuckerberg, it’s a strategy that has helped Meta’s stock reach all-time highs (following two years of definitely-not-Metaverse-related poor performance), attract AI talent in a competitive market, and apply unwelcome pressure to rivals’ AI businesses.
But Meta’s approach to AI — personified by LeCun, who takes no prisoners on Twitter — has plenty of critics. Among them: “AI risk”-oriented types, who tend to believe that open-sourcing AI tools could become dangerous as those tools grow more capable of causing harm. LeCun, who believes human-level AI is still years away, has called those fears “preposterous.”
While I’m personally agnostic on the question of existential risk from AI, it seems sensible to me to expect that, as AI companies begin to train systems using 10 or 100 times more computing power than today’s cutting edge, stuff is gonna get weird. AIs don’t have to be superintelligent to be dangerous; indeed, stupid AIs deployed carelessly can be dangerous enough. So I wanted to apply some scrutiny to LeCun’s position on this specifically. The full Q&A is available here, where we discuss AGI timelines and Meta’s pivot toward AI, among other things. But I want to draw attention to this illuminating exchange toward the end:
You've called the idea of AI posing an existential risk to humanity “preposterous.” Why?
There's a number of fallacies there. The first fallacy is that because a system is intelligent, it wants to take control. That's just completely false. It's even false within the human species. The smartest among us do not want to dominate the others. We have examples on the international political scene these days; it’s not the smartest among us who are the chiefs.
Sure. But it’s the people with the urge to dominate who do end up in power.
I'm sure you know a lot of incredibly smart humans who are really good at solving problems. They have no desire to be anyone's boss. I'm one of those. The desire to dominate is not correlated with intelligence at all.
But it is correlated with domination.
Okay, but the drive that some humans have for domination, or at least influence, has been hardwired into us by evolution, because we are a social species with a hierarchical organization. Look at orangutans. They are not social animals. They do not have this drive to dominate, because it's completely useless to them.
That's why humans are the dominant species, not orangutans.
The point is, AI systems, as smart as they might be, will be subservient to us. We set their goals, and they don't have any intrinsic goal that we would build into them to dominate. It would be really stupid to build that. It would also be useless. Nobody would buy it anyway.
What if a human, who has the urge to dominate, programs that goal into the AI?
Then, again, it's my good AI against your bad AI. If you have badly-behaved AI, either by bad design or deliberately, you’ll have smarter, good AIs taking them down. The same way we have police or armies.
But police and armies have a monopoly on the use of force, which in a world of open source AI, you wouldn't have.
What do you mean? In most of the U.S., you can buy a gun anywhere. Even there, the police have a legal monopoly on the use of force. But a lot of people have access to insanely powerful weapons.
And that’s going well?
I find that's a much bigger danger to the lives of residents of the North American landmass than AI. But no, I mean, we can imagine all kinds of catastrophe scenarios. There are millions of ways to build AI that would be bad, dangerous, useless. But the question is not whether there are ways it could go bad. The question is whether there is a way that it will go right.
I also reported from Dubai on the UN’s attempt to govern AI, and wrote about other TIME100 impact awardees including AI-enabled artist Sougwen Chung and AI CEO Karim Beguir. (Also some other stuff that I can’t tell you about just yet!)
What I’ve read
Sam Altman Wants $7 Trillion
By Scott Alexander on Substack
We want as much time as we can get to prepare for disruptive AI. Sam Altman previously endorsed this position! He said that OpenAI’s efforts were good for safety, because you want to avoid compute overhang. That is, you want AI progress to be as gradual as possible, not to progress in sudden jerks. And one way you can keep things gradual is to max out the level of AI you can build with your current chips, and then AI can grow (at worst) as fast as the chip supply, which naturally grows pretty slowly… unless you ask for $7 trillion to increase the chip supply in a giant leap as quickly as possible! People who trusted OpenAI’s good nature based on the compute overhang argument are feeling betrayed right now.
How Tech Giants Turned Ukraine Into an AI War Lab
War has always driven innovation, from the crossbow to the internet, and in the modern era private industry has made key contributions to breakthroughs like the atom bomb. But the collaboration between foreign tech companies and the Ukrainian armed forces, who say they have a software engineer deployed with each battalion, is driving a new kind of experimentation in military AI. The result is an acceleration of “the most significant fundamental change in the character of war ever recorded in history,” General Mark Milley, former Chairman of the Joint Chiefs of Staff, told reporters in Washington last year.
The UK is dismantling its legacy of municipal splendour
We have learnt nothing from the disastrous disposal of council housing from 1980 onwards. Of council homes sold off under Right to Buy, 40 per cent have been rented out by private landlords, many to social tenants with landlords’ profits subsidised by the state. It has been a huge transfer of wealth from public to private — a levelling down… When amenities in which citizens have pride are stripped away, a sense of alienation fills the void. Who are cities for? To sell off what would never be built today suggests they are run for the benefit of developers.
Thank you for reading! Here’s the link to my LeCun interview again: