India's YouTube Election
As the Indian election approaches, disinformation in YouTube ads could pose a problem
I still remember my first story that went truly viral, less than a year into my job at TIME back in 2019.
I had joined a handful of WhatsApp groups ahead of India’s general election that spring, and found disinformation being shared by supporters of the ruling party at an industrial scale. Sources told me that limits on the number of group chats that a message could be forwarded to at once, put in place by WhatsApp’s parent company Facebook (now Meta) to slow the spread of disinformation, were ineffective in the face of coordinated efforts by volunteers who didn’t mind (or were incentivized to endure) the extra tedium. My story quoted the ruling BJP party’s social media chief, who called 2019 the country’s first “WhatsApp elections.” A few months after publication, his party won in a landslide.
Five years later, WhatsApp is still a force to be reckoned with in Indian politics. It’s still no doubt used to spread disinformation. But it’s hard to argue this is really a result of the company abdicating its responsibilities. WhatsApp is an end-to-end encrypted messaging platform. Its parent company, Meta, can try to make coordinated abuse of its platform difficult, and generally does, but it can’t make it impossible. Doing so would require peering into private chats. If someone wants to spread disinformation on WhatsApp, that’s one of the prices we pay for living in a world where private communication is possible.1
But the same is not true for YouTube, which is both a publicly accessible video library and a sophisticated ad-delivery mechanism for its parent company Google. As I write in TIME:
YouTube has more than 450 million users in India, making it the most popular tech platform in the country after the Meta-owned messaging platform WhatsApp. But, unlike WhatsApp, YouTube offers a sophisticated set of targeted advertising tools to businesses and political parties. On YouTube, advertisers can pay to target ads at users based on specific characteristics like age, gender, location, interests, and usage patterns. The system is highly lucrative: YouTube recorded $9.2 billion in ad revenue in the final three months of 2023 alone, according to Google’s most recent public financial statements.
This is helpful background for understanding why a new investigation, by rights groups Access Now and Global Witness, is so potentially explosive. They ran a test to check YouTube’s ability to prevent the spread of disinformation via ads on its platform in the run-up to the election. They submitted 48 ads in three Indian languages, all of them containing content that violated YouTube’s rules. Some ads shared false information aimed at suppressing the vote, like saying the voting age had increased. Others targeted specific castes and religions, inciting violence against them. According to the Global Witness and Access Now report, which they shared exclusively with TIME, YouTube granted preliminary approval to all 48 of the ads, effectively giving them the green light to run on the platform. The investigators then removed the ads before they were published, for obvious ethical reasons.
Google contested the report’s methodology. “Not one of these ads ever ran on our systems and this report does not show a lack of protections against election misinformation in India,” a spokesperson said in a statement to TIME. “Our enforcement process has multiple layers to ensure ads comply with our policies, and just because an ad passes an initial technical check does not mean it won’t be blocked or removed by our enforcement systems if it violates our policies. But the advertiser here deleted the ads in question before any of our routine enforcement reviews could take place.”
The spokesperson added: “While an ad may initially be eligible to serve via our automated systems, this is just the first stage of our review and enforcement process. After this step, ads are still subject to several layers of reviews, to ensure the content complies with our policies. These protections can kick into place both before an ad runs or quickly after it starts to gain impressions.”
Google’s rebuttal would be difficult to test without putting ads online that contain dangerous election disinformation. That would be highly unethical! But a helpful comparison comes from a similar test carried out by Global Witness ahead of the U.S. midterm elections in 2022, using English and Spanish disinformation. In that case, YouTube detected and removed 100% of the ads before publication. That, according to Global Witness and Access Now, is evidence the platform is devoting fewer resources to safeguarding Indian elections than American ones — a claim Google denies. “YouTube has chosen a model with little friction around the publication of ads, instead suggesting violating content may later be removed, rather than adequately reviewing the content beforehand,” Global Witness and Access Now told me. “This is dangerous and irresponsible in an election period.”
My story in TIME goes into the study, and what to make of it, in more detail.
And a special shoutout to
, whose great Substack you should subscribe to, and who did the comms legwork behind the scenes for this story.
Catch me outside
I’m happy to share that I’ll be participating in a panel at this year’s International Journalism Conference in Perugia, Italy, titled How to Prepare for the AI Whistleblowers.
I’ll be in conversation with Theranos whistleblower Tyler Schultz, Facebook whistleblower Frances Haugen, and two senior staffers from The Signals Network, a whistleblower protection group that gave indispensable assistance to some of my sources for this story.
The panel will be livestreamed, with details here. If you’re going to be in Perugia, come and say hi — the first Aperol Spritz is on me. And if you have thoughts ahead of time about how to make it safer and easier for AI workers to blow the whistle, let’s chat. As always, my Signal username is billyperrigo.01.
What I’m reading
Social Media, Authoritarianism, and the World As It Is
It’s really hard for me to pick out just one excerpt from this incredible essay, which on its face is about the TikTok ban, but is really about power. It also makes a full-throated appeal to leftists to reclaim freedom of expression from its capture by the right. Highly recommend reading it all — it captivated, scared, and invigorated me in equal measure.
It’s true, there is a lot of disingenuous nonsense when it comes to free speech discourse. But this doesn’t mean we should confuse these essential rights with the actors who speciously invoke them — something we often see in the liberal tendency to deny that centralized platform control of speech is a significant problem. The real problem, much liberal policy implies, is too little control of speech — too little monitoring, surveillance, and age-gating; too little trust, and too little safety; too many criminals hiding in shadows with not enough national security oversight; and too little U.S. ownership and “control.” The all-too-commonly proffered solution to the harms that flow from platform surveillance practices and business models is to ensure that they are wisely governed by upstanding people applying appropriate norms and standards. The fight, in other words, is aimed at expanding power over these platforms to governments and sometimes NGOs. With the counterfactual vision of an ordered and just state standing in for any critical thinking about who will actually exercise such power, and how.
Weizenbaum’s nightmares: how the inventor of the first chatbot turned against AI
By Ben Tarnoff
This piece has been on my reading list for ages, and on a recent flight I finally read it all. I’m glad I did. It’s incredible how much Joseph Weizenbaum foresaw about our current era decades ago. Weizenbaum’s central insight is that humans are all too willing to hand off their decision-making power (or rather, their decision-making responsibility) to the warm embrace of computers … even when those computers are demonstrably incompetent.
Perhaps his most fundamental heresy was the belief that the computer revolution, which Weizenbaum not only lived through but centrally participated in, was actually a counter-revolution. It strengthened repressive power structures instead of upending them. It constricted rather than enlarged our humanity, prompting people to think of themselves as little more than machines. By ceding so many decisions to computers, he thought, we had created a world that was more unequal and less rational, in which the richness of human reason had been flattened into the senseless routines of code.
That’s all from me!
1 I should note: while the contents of WhatsApp messages are private, Meta can still see who you’re talking to and how often, and could be forced to divulge this information to state authorities in response to legal requests or threats. WhatsApp’s competitor Signal encrypts this metadata as well as the contents of messages, which is why it’s my preferred messaging app for my job as a journalist.