
Google Signs Classified AI Deal With Pentagon For “Any Lawful Use”

The agreement would allow deployment of Google AI models across classified military systems, with limits on oversight authority.


According to The Information, Google signed a classified agreement with the U.S. Department of War to provide its AI models for government use.

The agreement allows the Pentagon to deploy Google’s AI systems within classified networks for “any lawful government purpose,” including covert and overt military applications. 

The contract reportedly includes language stating that the AI system will not be used for domestic mass surveillance or for the development of fully autonomous weapons. However, it also specifies that Google does not have the authority to control or veto lawful government operational decisions.

The agreement would also require Google to modify AI safety filters at the government’s request, raising questions about how model safeguards may be adjusted in classified environments. 

A Google spokesperson said the company supports government agencies on both classified and unclassified projects and remains committed to its limits on mass surveillance and on autonomous weapons that operate without human control. 

The deal has not been confirmed by any public government statement or official release. The Department of Defense declined to comment on the reported agreement. 


Backdrop: Expanding AI–Defense Ties Despite Ongoing Disputes

This is only the fourth contract between the DOW and a top-tier AI developer. Anthropic*, OpenAI, xAI, and now Google are the only large-scale AI companies currently working with the DOW. 

* While Anthropic is still under contract, the company is currently in a legal dispute with the administration. Anthropic refused to remove guardrails relating to mass surveillance and autonomous weapons, sparking a feud with the DOW. The Secretary of War then designated Anthropic a supply chain risk, a designation usually reserved for foreign entities classified as potential security risks. Anthropic is pursuing the matter in federal court. Despite the conflict, many government agencies, including the NSA, are actively using Anthropic’s models, including its latest release, Mythos, an LLM designed to detect cyber vulnerabilities.
