Google backtracks on AI ethics pledge, sparks outrage over weapons and surveillance use
By isabelle // 2025-02-09
 
  • Google removed its 2018 pledge to avoid using AI for weapons or surveillance, sparking backlash from employees and human rights groups.
  • The updated policy emphasizes "responsible" AI but lacks explicit bans on military or surveillance applications.
  • Critics warn the move could enable mass surveillance, warfare, and human rights violations.
  • Google has a history of military collaboration, including recent AI support for the Israeli military during the Israel-Gaza war.
  • Internal and external critics argue the decision prioritizes profit and government contracts over ethical AI development.
Google has quietly erased a key promise from its AI ethics guidelines, abandoning its 2018 pledge not to use artificial intelligence for weapons or surveillance. The move, announced Tuesday, has ignited fierce backlash from employees, human rights advocates, and tech watchdogs, who warn that the decision sets a dangerous precedent for the unchecked militarization of AI. Google’s pivot aligns with its growing collaboration with government agencies, raising concerns about the tech giant’s role in enabling mass surveillance and warfare.

The updated policy now emphasizes “responsible” AI development in line with “widely accepted principles of international law and human rights.” However, the removal of explicit prohibitions on weapons and surveillance has left many questioning Google’s commitment to ethical AI. Matt Mahmoudi, an AI and human rights adviser at Amnesty International, condemned the decision, stating, “AI-powered technologies could fuel surveillance and lethal killing systems at a vast scale, potentially leading to mass violations and infringing on the fundamental right to privacy.”

Google has a history of military collaboration

Google’s shift comes nearly seven years after its controversial involvement in Project Maven, a Pentagon initiative that used AI to analyze drone footage and identify potential military targets. In 2018, more than 3,000 Google employees protested the project, signing an open letter demanding the company cease its involvement. Google eventually withdrew from Project Maven and published its original AI principles, which explicitly barred the use of AI in weapons and surveillance.

Despite this, Google has continued to work with the U.S. military on other projects, including cloud computing and disaster response. Recent reports from The Washington Post reveal that Google has also provided AI technology to the Israeli military since the early weeks of the Israel-Gaza war, further fueling concerns about the company’s role in global conflicts.

Internal backlash and ethical concerns

The decision to scrap its AI ethics pledge has sparked outrage within Google itself. Parul Koul, a Google software engineer and president of the Alphabet Workers Union, told Wired, “It’s deeply concerning to see Google drop its commitment to the ethical use of AI technology without input from its employees or the broader public.” The move reflects a broader trend of tech companies prioritizing profit and government contracts over ethical considerations. Margaret Mitchell, former co-lead of Google’s ethical AI team, warned that removing the “harm” clause could signal a willingness to deploy “technology directly that can kill people.”

A global race for AI dominance

Google executives have defended the policy change, framing it as a response to the “increasingly complex geopolitical landscape.” In a blog post, senior vice president James Manyika and Google DeepMind CEO Demis Hassabis argued that democracies should lead in AI development, guided by “core values like freedom, equality, and respect for human rights.” They emphasized the need for collaboration between governments and private companies to ensure AI supports national security and global growth.

However, critics remain skeptical. Anna Bacciarelli, a senior AI researcher at Human Rights Watch, called the decision “incredibly concerning,” adding that it underscores the need for binding regulations rather than voluntary principles. “For a global industry leader to abandon red lines it set for itself signals a concerning shift, at a time when we need responsible leadership in AI more than ever,” she said.

Google’s decision to abandon its AI ethics pledge marks a troubling turning point in the tech industry’s relationship with military and surveillance applications. By prioritizing geopolitical competition and government contracts over ethical safeguards, Google risks enabling the very harms it once sought to prevent. As AI continues to evolve at a breakneck pace, many worry the absence of robust regulations and accountability mechanisms leaves humanity vulnerable to the misuse of this powerful technology. For a company once guided by the motto “Don’t be evil,” this latest move raises a critical question: Has Google lost its moral compass?

Sources for this article include:

DailyMail.co.uk

CNN.com

BBC.com

RT.com