WASHINGTON, January 30, 2026, 08:43 EST
- The Pentagon and Anthropic remain at an impasse over the company's restrictions on military targeting and domestic-surveillance uses of its AI tools.
- A Jan. 9 strategy memo directed officials to add “any lawful use” language to AI procurement contracts.
- Perplexity inked a $750 million Azure contract with Microsoft amid growing AI investments by major tech players.
The Pentagon is clashing with San Francisco-based AI developer Anthropic over built-in safeguards—AI “guardrails” designed to restrict the U.S. government’s use of its technology in autonomous weapons targeting and domestic surveillance, sources told Reuters. Negotiations around a contract potentially worth $200 million have stalled, the sources added. Pentagon officials might still need Anthropic’s engineers to modify models that were trained specifically to avoid harmful actions. (Reuters)
The standoff comes as Washington pushes to integrate advanced AI into military and intelligence operations, relying on commercial systems instead of developing them entirely in-house. Silicon Valley wants both the revenue and influence but is also fighting to maintain control over how its technology is deployed.
A January 9 strategy memo for the Pentagon—now renamed the Department of War—ordered officials to insert standard “any lawful use” clauses into AI-services contracts within 180 days. The document also pushed for using models “free from usage policy constraints” that could restrict lawful military applications. (U.S. Department of War)
During talks with Anthropic, company reps voiced concerns that their tools might be used to surveil Americans or aid weapons targeting without sufficient human oversight, sources say. Pentagon officials pushed back, insisting they should deploy commercial AI as long as it complies with U.S. law, no matter the companies’ usage policies. The Defense Department declined to comment. Anthropic counters that its AI is already “extensively used for national security missions” and called the talks “productive,” all while gearing up for a public offering. It’s one of a few AI firms awarded Pentagon contracts last year, alongside Alphabet’s Google and OpenAI. CEO Dario Amodei wrote recently that AI should back national defense “in all ways except those which would make us more like our autocratic adversaries.” (CNA)
Guardrails are more than contract terms. They are built into a model's behavior during training, defining what it will and will not do. Loosening them typically means retraining or otherwise modifying the system itself, not toggling a setting.
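The distinction can be sketched in code. This is a purely illustrative Python example, not Anthropic's implementation; the topic list and function names are hypothetical.

```python
# Illustrative only: contrasts a guardrail enforced as an external,
# removable filter with one baked into a model's trained behavior.

BLOCKED_TOPICS = {"weapons targeting", "domestic surveillance"}  # hypothetical list


def policy_filter(prompt: str) -> bool:
    """A contract-style guardrail: an external check that a vendor
    could, in principle, switch off without touching the model."""
    return not any(topic in prompt.lower() for topic in BLOCKED_TOPICS)


def model_response(prompt: str) -> str:
    """A training-time guardrail: the refusal lives inside the model's
    weights (simulated here), so removing it means retraining the
    system, not flipping a configuration flag."""
    if "weapons targeting" in prompt.lower():
        return "I can't help with that."
    return "...model output..."
```

The point of the sketch: deleting `policy_filter` is a deployment decision, while changing `model_response` stands in for re-engineering the model itself, which is why the Pentagon may still need Anthropic's engineers even under an “any lawful use” contract.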
Another major AI deal unfolded in the cloud space. Perplexity, the AI search startup, has inked a $750 million contract with Microsoft for three years of Azure cloud services, Bloomberg reports. A Microsoft spokesperson told Reuters, “Perplexity has chosen Microsoft Foundry as its primary AI platform for model sourcing under a new multi-year agreement.” Meanwhile, Perplexity said the partnership grants access to “frontier models” — cutting-edge systems from OpenAI and Anthropic. Despite the deal, Perplexity confirmed to Bloomberg it hasn’t cut back spending on Amazon Web Services, its main cloud provider. Amazon sued Perplexity last year over its “agentic” shopping feature, accusing the startup of accessing customer accounts and masking automated actions as human browsing. (Reuters)
Microsoft, meanwhile, is footing a hefty bill for its AI expansion. The company reported $37.5 billion in capital expenditures last quarter, and shares dropped afterward as investors fretted over whether revenue can keep pace with rising costs. “One big obvious issue is that revenues are up 17% and the cost of revenues are up 19%,” noted Eric Clark, portfolio manager of the LOGO ETF. (Reuters)
These episodes highlight a persistent tension: customers want broad rights to deploy AI wherever they see fit, while vendors worry about the reputational and legal fallout when deployments go wrong.
How the Pentagon standoff resolves remains uncertain. Anthropic could ease its demands, or the government could shift more aggressively toward other vendors and internal tools without built-in refusal mechanisms. Perplexity, for its part, is deepening its cloud partnerships even as Amazon's lawsuit hangs over it, underscoring how quickly a partnership can turn into a legal battle.
Anthropic faces a key challenge: can it maintain its usage policies while dealing with one of the largest tech buyers on the planet? Meanwhile, the Pentagon is testing the limits of its “any lawful use” policy as commercial AI models roll out widely.