Palantir’s Pentagon AI Tool Hit by Anthropic Ban, Forcing Risky Rebuild

NEW YORK, March 5, 2026, 07:36 EST

  • Palantir is under orders to pull Anthropic’s Claude from its Maven Smart Systems, following a U.S. directive to cut ties with the AI startup.
  • The removal may stretch out for months, raising fresh operational and legal questions for contractors working on U.S. defense AI programs.
  • Lockheed and other top contractors face the same scrutiny, as Washington weighs tagging Anthropic with a “supply-chain risk” label.

Palantir Technologies Inc. is rushing to untangle its Pentagon-focused Maven Smart Systems from Anthropic’s AI after a Trump administration directive ordered contractors to cut ties with the startup, according to people familiar with the situation. That directive could force Palantir to ditch Claude, Anthropic’s AI model, and overhaul sections of a U.S. government platform tied to contracts worth more than $1 billion.

The dispute comes just as the U.S. military steps up its reliance on large language models—AI systems capable of writing and summarizing text—for both analysis and targeting, and contractors scramble to secure “approved” supplier status. At the heart of it all: Maven, the Pentagon’s flagship AI initiative.

The episode highlights a deeper issue: defense platforms now depend on AI components from fast-moving commercial suppliers, and a political flare-up can instantly put those components off-limits. Swapping them out isn’t always a straightforward technical job, either.

According to people familiar with Palantir’s work, Maven runs on prompts—essentially written instructions that guide the AI system—and workflows built with Claude Code. Replacing the model could take Palantir months, one person said. Reuters was unable to confirm a specific timeline.

Defense Secretary Pete Hegseth is pushing for rapid action. “Effective immediately, no contractor … may conduct any commercial activity” with Anthropic, he said, according to Reuters.

Palantir, the Pentagon, and Anthropic all declined to comment when contacted by Reuters. At a defense tech conference on Tuesday, Palantir CEO Alex Karp addressed the dispute—though he didn’t mention Anthropic by name. Karp, according to Reuters, warned that Silicon Valley companies that “screw the military” risk triggering “the nationalization of our technology.”

Defense contractors are moving to remove Anthropic’s technology from their operations, though legal teams are still debating whether Washington can enforce a wider separation beyond government contracts. Lockheed Martin, for its part, said it will comply with federal orders and expects “minimal impacts.” The company emphasized that it does not rely on any single AI supplier “for any portion of our work.”

Reuters quoted Jason Workmaster, a government-contracts lawyer at Miller & Chevalier, describing the push to ban Pentagon contractors from using Anthropic as a “highly aggressive position.” University of Minnesota law professor Alan Rozenshtein weighed in too: “Capitalism and free markets rely on the rule of law. This is the opposite of that,” he told Reuters.

Still, contractors may have moved on before any court narrows the policy. Swapping out an AI model in active military systems isn’t simple: testing, re-certification, and user retraining are all on the table. A botched or delayed replacement could open capability gaps.

Pushback is surfacing from corners of the tech sector. An industry association representing Amazon, Nvidia, Apple, and OpenAI flagged its “concern” in a letter, following reports that the Pentagon might apply a supply-chain risk label to Anthropic on the heels of a procurement dispute, according to Reuters.

Reuters quoted Connie LaRossa, who handles national security policy at OpenAI, as saying during a panel that OpenAI’s red lines are the same as Anthropic’s: “no domestic surveillance and no use of AI for autonomous weapons.” In the same report, Reuters noted Anthropic intends to fight any supply-chain risk label in court.

On Feb. 27, Trump announced a directive for federal agencies to stop using Anthropic’s technology, Reuters reported, giving the Defense Department and others six months to wind down. Anthropic responded by vowing to contest any risk designation, calling it “legally unsound.”

Palantir, whose market value Reuters puts at roughly $350 billion, faces a critical execution challenge here. The episode is also a snapshot of just how exposed major defense software programs are to a shifting supplier landscape in Silicon Valley. There is no shortage of rivals—OpenAI and Google are already embedded in government contracts—but the real sticking point is the difficulty of replacing models within Maven itself.