
Google Announces Willingness to Develop AI for Weapons

Tech Giant Backtracks on Its Own Artificial Intelligence Principles

New product announcements at Made By Google in Mountain View, California, August 13, 2024. © 2024 Juliana Yamada/AP Photo

Google, a company that once went by the motto “don’t be evil,” appears to be changing tack. On Tuesday, the tech giant announced significant changes to the artificial intelligence (AI) principles that had guided its work on AI from 2018 until very recently.

Google’s previous Responsible AI Principles stated the company would not develop AI “for use in weapons” or where the primary purpose is surveillance. Google had committed to “not design or deploy AI” that causes “overall harm” or “contravenes widely accepted principles of international law and human rights.” Those red lines are no longer applicable.

The company’s revised AI Principles state that Google’s AI products will “align with” human rights without explaining how. This move away from explicitly prohibiting certain uses of AI is deeply concerning. Sometimes, it’s simply too risky to use AI, a suite of complex, fast-developing technologies whose consequences we are discovering in real time.

That a global industry leader like Google can suddenly abandon self-proclaimed forbidden practices underscores why voluntary guidelines are not a substitute for regulation and enforceable law. Existing international human rights law and standards do apply in the use of AI, and regulation can be crucial in translating norms into practice.

It’s not clear to what extent Google was following its previous Responsible AI Principles, but Google workers could at least cite them when pushing back against allegedly irresponsible AI development.

Google’s pivot from refusing to build AI for weapons to stating an intent to create AI that supports national security ventures is stark. Militaries are increasingly using AI in war, where their reliance on incomplete or faulty data and flawed calculations increases the risk of civilian harm. Such digital tools complicate accountability for battlefield decisions that may have life-or-death consequences.

Google executives describe a “global competition … for AI leadership” and say they believe AI development should be “guided by core values like freedom, equality, and respect for human rights.” Yet the company is deprioritizing consideration for how powerful new technologies impact our rights. This appears destined to result in a race to the bottom.

Consistent with the United Nations Guiding Principles on Business and Human Rights, all companies need to meet their responsibility to respect human rights across their products and services. In the context of military use of AI, the stakes could not be higher.
