
‘We call on you, Sundar’: Over 600 Google employees urge Pichai to reject classified Gemini AI deal with US military

Google employees urge Sundar Pichai to reject a classified Gemini AI military contract with the Pentagon over surveillance and weapons concerns

A fresh internal revolt is building inside Google, as more than 600 employees have reportedly signed an open letter urging Chief Executive Officer Sundar Pichai to reject a proposed classified artificial intelligence contract with the United States Department of Defense.

The letter reflects growing concern among employees that advanced AI systems such as Google’s Gemini models could be used in sensitive military programs involving surveillance, targeting systems, or other operations that raise ethical and human rights questions. According to reports, the message was sent on April 27 and includes signatures from employees across major divisions, including DeepMind and Google Cloud.

The development marks one of the most serious internal policy challenges for Google since the employee backlash that led the company to withdraw from Project Maven in 2018.

Employees warn against lethal weapons and surveillance use

In the letter, employees reportedly called on company leadership to refuse any classified workloads tied to Google’s AI systems. They argued that artificial intelligence should not be used for lethal autonomous weapons or systems enabling mass surveillance.

Signatories said that because AI can make mistakes and concentrate power, engineers and researchers working closest to the technology have a responsibility to speak up before harmful uses become normalised.

The group also warned that if Google enters classified military deployments, employees may have no visibility into how the systems are used or whether safeguards are being followed.

That concern has become increasingly common across the global AI industry, where developers are debating how much control companies should maintain once advanced models are supplied to governments, militaries, or contractors.

Reputation and trust at the centre of the dispute

The letter reportedly states that the wrong decision at this stage could cause lasting damage to Google’s reputation, business interests, and global standing.

That message is significant because Google has long positioned itself as a company focused on user trust, responsible innovation, and products designed to improve daily life. Critics inside the company fear that classified military partnerships could conflict with that public identity.

Employees also reportedly raised concerns about worker safety, civil liberties, and the broader risk of deploying powerful AI tools in environments where transparency is limited.

For many technology workers, the debate is no longer theoretical. As AI capabilities rapidly improve, internal staff at major firms are increasingly asking whether companies should set hard limits on military and surveillance applications.

Echoes of the 2018 Project Maven protest

This is not the first time Google employees have openly challenged leadership over defense work.

In 2018, thousands of workers protested Google’s participation in Project Maven, a Pentagon program that used AI to help analyse drone footage. After mounting pressure, the company chose not to renew the contract and later introduced AI principles restricting certain military uses.

That episode became a defining moment in Silicon Valley, showing that employee activism could influence strategic decisions at one of the world’s largest technology companies.

The latest letter suggests that many inside Google believe those earlier principles should still guide the company’s actions as generative AI becomes more powerful.

Why the Pentagon wants AI partners now

Governments around the world are accelerating efforts to integrate AI into defense, cybersecurity, logistics, intelligence analysis, and cloud operations. The United States has been particularly active in seeking partnerships with private technology firms.

For the Pentagon, access to frontier AI models could improve data processing, language analysis, simulation tools, and operational decision support. For companies, such deals can bring significant long-term revenue and strategic influence.

Google already supports some non-classified government AI workloads through programs such as genAI.mil, according to reports. The current controversy centres on expanding Gemini tools into classified environments.

That distinction matters because classified deployments typically involve greater secrecy, stricter controls, and limited public scrutiny.

Comparison with Anthropic’s stance

The employee appeal also follows recent debate involving Anthropic, whose Claude models were reportedly sought for military-related uses.

Reports indicate Anthropic resisted certain Pentagon requests linked to autonomous weapons and surveillance concerns. That confrontation reportedly later widened into a broader dispute over procurement and supply-chain treatment.

Google employees appear to be referencing that example to argue that major AI companies still have room to set boundaries, even when facing pressure from governments.

Google faces a difficult balancing act

For Sundar Pichai and senior leadership, the challenge is complex.

On one side is a competitive race for cloud and AI contracts where rivals such as Microsoft and Amazon Web Services have expanded public sector business. On the other is employee trust, brand reputation, and ethical accountability.

If Google rejects the deal, it may surrender strategic ground in a growing market. If it proceeds, it risks renewed internal unrest and public criticism.

That tension is becoming a defining issue for the AI era: whether the world’s most powerful models should remain commercial tools, public infrastructure, or instruments of state power.

What happens next

Google has not publicly announced any final decision on the reported classified Gemini discussions. But the letter places clear pressure on leadership to respond.

The company now faces questions that extend beyond one contract. Can a consumer technology brand maintain public trust while supplying advanced AI to military programs? Can internal ethics principles survive commercial competition? And how much influence should employees have over decisions with global consequences?

Those questions are likely to shape not only Google’s next move, but also the future relationship between governments and the private AI sector.

For now, one message from inside the company is unmistakable: many Google employees want limits placed on how Gemini is used, especially when secrecy and military power intersect.

Khogendra Rupini

Khogendra Rupini is a full-stack developer and independent news writer, and the founder and CEO of Levoric Learn. His journalism is grounded in verified information and factual accuracy, with reporting informed by reputable sources and careful analysis rather than live or speculative updates. He covers technology, artificial intelligence, cybersecurity, and global affairs, producing clear, well-contextualized articles that emphasize credibility, precision, and public relevance.

Founder & CEO, Levoric Learn Editorial and Technology Analysis
