The AI That Could Help Heal a Divided Internet
New AI tools, from Google's Jigsaw unit, aim to boost compassionate and nuanced online discourse over the more toxic kind
I know what you’re thinking. There’s no way that AI can heal a divided internet. It churns out biased nonsense, it’s already leading to writers and artists losing their jobs, and it’s further entrenching the power of unaccountable tech giants. If anything, AI is making the web worse, polluting our information commons even as it extracts from it.
That’s all true. But sit with me for a second. The piece I’m about to share isn’t necessarily a reason to be hopeful, because it doesn’t solve any of those problems. Instead, it’s a look at how new forms of AI might begin to change the way content is amplified online, subtly reshaping the internet experience alongside the trends described above.
Before we get to that, some context. On social media today, content tends to be ranked by engagement. Posts that get the most clicks often rise to the top, and we unfortunately tend to click on content that provokes an emotional response. The better angels of our nature, like compassion and respect, are often forced to take a backseat to rage, anger, and personal attacks.
Social media companies rank by engagement largely because it’s profitable — it keeps people scrolling — but also because it’s easy. Likes, comments, and reposts are a perfect, instant signal that a post is resonating with people. Optimize your system to boost content based on that signal, and you’ve created an incentive for users to return day after day.
For years, other forms of ranking have not only been less profitable, they’ve also been difficult to build. How do you even begin to design a system that can read a post, understand it, and boost it based on whether it fits with a set of criteria you’ve determined to be desirable? Sounds tricky.
And it was, until now. Recent advances in large language models (LLMs) — the kind of AI that underpins tools like ChatGPT — have begun to make it possible to detect “fuzzy, contextual, know-it-when-I-see-it” kinds of attributes like nuance, reasoning, and compassion, in the words of Jonathan Stray, a Berkeley scientist. This week, Google’s Jigsaw unit released the first set of AI tools that propose to detect these attributes. If you can detect the likelihood of a post containing something like nuance, you can also rank by it. So it’s now becoming possible for social media sites to consider putting (for example) the most nuanced posts first, instead of ones that get the most likes.
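To make that ranking idea concrete, here is a toy sketch in Python. It is purely my own illustration, not Jigsaw's code or any platform's real system: the nuance_score field stands in for the output of a hypothetical LLM classifier, and the posts are invented. The point is only that once an attribute can be scored, ranking by it is as simple as sorting by it, just as engagement ranking sorts by likes.

```python
# Illustrative sketch only: not Jigsaw's API or any real ranking system.
# nuance_score is a hypothetical 0-1 output of an LLM-based classifier.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    nuance_score: float

posts = [
    Post("You're all idiots and here's why.", likes=900, nuance_score=0.05),
    Post("I see merit on both sides; here's the trade-off.", likes=40, nuance_score=0.85),
]

# Engagement-based ranking: whatever gets the most clicks rises to the top.
by_engagement = sorted(posts, key=lambda p: p.likes, reverse=True)

# Attribute-based ranking: once nuance can be scored, you can sort by it
# and put the most nuanced posts first instead.
by_nuance = sorted(posts, key=lambda p: p.nuance_score, reverse=True)

print(by_engagement[0].text)  # the provocative post wins
print(by_nuance[0].text)      # the nuanced post wins
```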
I can almost hear your protests through my screen. And you’re right. Nuance, compassion, reasoning — these are all socially-constructed concepts. The power to define them is considerable, and this innovation risks giving tech companies even more of an iron grip over the public square than they already have. LLMs, I think I also hear you saying, are often biased in favor of cultures and modes of speech that are already common online. How likely is it, really, that these AI tools will be able to detect and properly amplify a nuanced post in, say, the Tigrayan language, as effectively as one in English?
These criticisms are all true, and the faults they describe may even be insurmountable. But this tech, while imperfect, now exists where it previously didn’t. If social media companies begin to incorporate it (not guaranteed, since it would require switching off the money-firehose that is engagement-based ranking) then platforms will begin to subtly change. To understand that dynamic, it’s worth reading my latest story about Jigsaw’s new AI tools and what they mean. I promise it grapples with your concerns.
What else I’ve written
Last Monday, I emailed you with a story about protests inside Google over its relationship with Israel. On Friday, I published a scoopy follow-up. The story shows that Google provides cloud computing services to the Israeli Ministry of Defense, and has negotiated a deepening of that partnership during the ongoing war in Gaza, according to a company document viewed by TIME. The new evidence calls into question Google’s previous statements that its work with Israel under Project Nimbus, a controversial $1.2 billion deal, is largely for civilian purposes. You can read the full piece here:
Reminder
As I mentioned a couple of newsletters ago, this Friday I’ll be speaking on a panel at the International Journalism Festival in Perugia, Italy. The topic is “how to prepare for the AI whistleblowers.” If you’re in town, come along. Details here.