Federal judge blocks Trump’s Pentagon ban on Anthropic

A federal judge’s decision to block the Trump administration from banning AI firm Anthropic from use by the Department of Defense is fueling debate over whether the courts are being pushed into making national security decisions.
The ruling, handed down late Thursday by U.S. District Judge Rita Lin, a Biden appointee in the Northern District of California, halts the administration’s broader effort to block the company while the case continues, though it does not expressly require the Pentagon to use Anthropic. The judge also gave the government one week to appeal.
Deputy Secretary of War Emil Michael wrote on X that the decision contained “numerous factual errors,” would intrude on the “[president’s] role as Commander-in-Chief” and would interfere with the department’s ability to conduct military operations.
Michael said the department will continue to treat Anthropic as an identified supply chain risk pending an appeal, adding that officials dispute the scope and effect of the court’s order.
Lin wrote that the Pentagon’s decision to designate Anthropic a national security risk was likely “both illegal and reckless.”
“There is nothing in the regulatory law that supports the Orwellian idea that corporate America can be labeled as a potential enemy of the US for expressing disagreement with the government,” Lin said.
“Can a judge order the Department of Defense to use a vendor it considers a security risk? No, but also yes? Judge Lin (Biden, ND California) is trying to stop President Trump/Secretary Hegseth from banning Anthropic. But do you agree they could simply choose not to use it?” X user Eric Wess wrote.
Secretary of War Pete Hegseth was named in the case, along with other defendants. (Julia Demaree Nikhinson/AP)

Defense Secretary Pete Hegseth had warned Anthropic that it would face termination of its $200 million contract or be designated a supply chain risk if it did not allow its AI platform to be approved for all official uses. (Joe Raedle/Getty Images)
Others described the decision as pure judicial activism and accused the judge of interfering in a national security matter.
But supporters of the decision — including a group of nearly 150 retired judges and officials — say the administration has gone too far, warning that the Pentagon’s “supply chain risk” designation appears to be misused and could stifle free speech and legitimate business activity.
In a March 3 letter, the Pentagon notified Anthropic that it would be designated a national security supply chain risk. The designation mandated that no contractor, supplier or partner doing business with the U.S. military could conduct commercial activity with Anthropic.
The legal battle follows a wider dispute between the Pentagon and Anthropic over how the company’s AI system, Claude, could be used in military operations. Claude is the only commercial AI system approved for use in certain categories.
Defense Secretary Pete Hegseth had warned Anthropic that it would face termination of its $200 million contract, awarded in July 2025, or be designated a supply chain risk if it did not allow its AI platform to be approved for all legitimate uses.
Anthropic insisted that it would not allow Claude to be used for fully autonomous weapons or mass surveillance of Americans.
Pentagon officials say such uses have not been approved, insisting that humans remain in the loop on lethal decisions and that the military is not conducting domestic surveillance, but they maintain that private companies cannot dictate how their systems are used in official operations.
Lin pointed to a range of measures — including a government-wide ban and contractor restrictions — saying they don’t appear to be “consistent with national security concerns” and instead “seem like an attempt to cripple Anthropic.”

Hegseth described CEO Dario Amodei and Anthropic as “arrogantly professional” and “a textbook case of how not to do business with the United States Government.” (Samyukta Lakshmi/Bloomberg via Getty Images)
Anthropic welcomed the decision, saying in a statement: “We appreciate the court’s swift action, and we’re pleased that it agrees that Anthropic is likely to succeed on the merits.”
Hegseth described CEO Dario Amodei and Anthropic as “arrogantly professional” and “a textbook case of how not to do business with the United States Government” in a Feb. 27 post on X.
OpenAI has emerged as a major alternative, securing a Pentagon deal to apply its models to isolated systems as tensions with Anthropic increased.
However, Anthropic has not been completely removed — its Claude system remains in use in military operations, and replacing it will take time.