Trouble Inside DeepMind
Workers at Google DeepMind want their parent company to drop military contracts
Hello! It has been a long time since I’ve sent this newsletter, but I’m back with a scoop on a theme I’ve been writing about quite a bit this year: tensions inside Google over its relationship with Israel.
The news: Nearly 200 workers inside Google DeepMind, the tech giant’s AI division, sent a letter earlier this year calling on Google to drop its military contracts, arguing that those contracts are in violation of its own AI rules. The signatures represent some 5% of DeepMind’s overall headcount—a small percentage, yes, but still a significant one for the AI industry, where top talent is in high demand.
“Any involvement with military and weapon manufacturing impacts our position as leaders in ethical and responsible AI, and goes against our mission statement and stated AI Principles,” the letter states.
The letter doesn’t refer to any specific governments or militaries, but it links out to one of my stories from earlier this year, which revealed that Google has a direct contract to provide cloud computing services for the Israeli Ministry of Defense. The MoD controls the Israeli military, which is of course currently engaged in the war in Gaza that has claimed more than 40,000 lives, many of them civilians.
The dispute is a microcosm of a bigger cultural clash inside Google, between DeepMind and Google's Cloud business, which sells Google services, including AI developed inside DeepMind, to clients such as governments and militaries.
Workers point to a gradual erosion of DeepMind’s independence within Google. When Google acquired DeepMind back in 2014, the lab’s leaders won a promise from the search giant: that DeepMind’s AI would never be used for military or surveillance purposes. But several restructures and reshuffles later, DeepMind is now bound by Google’s “AI Principles,” which, although they forbid the company from building AI that could cause “overall harm,” do not explicitly prohibit Google from selling tools to military clients. “While DeepMind may have been unhappy to work on military AI or defense contracts in the past, I do think this isn’t really our decision any more,” one DeepMind employee told me in April.
Google needs DeepMind to pursue its grand AI ambitions. But DeepMind also needs Google: specifically its cash and infrastructure, without which the lab’s leaders could not pursue their goal of developing “artificial general intelligence” — a project that they believe could be the most important endeavour humanity has ever embarked upon.
A Google spokesperson told me: “When developing AI technologies and making them available to customers, we comply with our AI Principles, which outline our commitment to developing technology responsibly. We have been very clear that [Google’s contract with the Israeli government] is for workloads running on our commercial cloud by Israeli government ministries, who agree to comply with our Terms of Service and Acceptable Use Policy. This work is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.”
Read the full story in TIME here:
Google’s statement on Project Nimbus “is so specifically unspecific that we are all none the wiser on what it actually means,” one of the letter’s signatories told TIME.
This speaks volumes, if you ask me.