Artificial intelligence company Anthropic has filed a lawsuit against the U.S. Department of Defense after being designated a national security supply chain risk, a move that could block the firm from lucrative military contracts. Legal experts say the case could test the limits of a rarely used law and that the company may have a strong argument that the government overstepped its authority.
The dispute places the rapidly expanding artificial intelligence industry at the center of a broader debate over military use of advanced technologies and the power of governments to restrict private companies from defense procurement.
Pentagon’s Supply Chain Risk Designation
The controversy began when the Pentagon invoked a little-used statute known as Section 3252 to designate Anthropic as a potential supply chain risk. The law allows the defense secretary to exclude companies from certain government contracts if their technology could expose military information systems to sabotage or infiltration by foreign adversaries.
Officials used the authority to bar Anthropic from some defense contracting opportunities, arguing that restrictions embedded in the company’s AI software could undermine military operations.
Anthropic’s lawsuit argues that the decision was unjustified and violated its constitutional rights. The company says it is a U.S.-based firm with no links to foreign adversaries and therefore does not meet the legal criteria for being labeled a supply chain threat.
The designation could cost the company billions of dollars in potential revenue beginning in 2026, according to company executives.
Dispute Over Military Use of AI
The conflict between Anthropic and the Pentagon centers on the company’s artificial intelligence model known as Claude. Anthropic has placed strict usage limits on the technology, prohibiting applications such as autonomous weapons and domestic surveillance.
Defense officials have argued that such restrictions could limit the military’s operational flexibility in future conflicts.
Anthropic maintains that the safeguards are necessary because current artificial intelligence systems are not reliable enough to control lethal weapons or perform sensitive surveillance tasks without human oversight.
The dispute intensified after Defense Secretary Pete Hegseth formally labeled the company a supply chain risk in early March. The decision came shortly after Anthropic refused to remove the restrictions on the military’s use of its AI systems.
Anthropic’s lawsuit notes that the U.S. military had recently used its technology during operations linked to the conflict involving Iran, which the company argues contradicts claims that the system poses a national security risk.
Legal Arguments in the Case
Anthropic’s legal challenge rests on several constitutional and administrative law arguments. The company says the designation violates the First Amendment by punishing it for expressing views about AI safety in warfare.
The lawsuit also claims the government violated the Fifth Amendment by imposing severe penalties without offering the company a meaningful opportunity to challenge the designation or present evidence in its defense.
In addition, Anthropic argues that the decision breaches the Administrative Procedure Act, which allows courts to overturn government actions that are arbitrary, capricious or not supported by evidence.
Legal scholars reviewing the case have pointed to a potential inconsistency in the government’s position: the Pentagon has restricted Anthropic from certain contracts while continuing to use the company’s technology in military operations.
Government’s Likely Defense
Despite the legal arguments presented by Anthropic, courts traditionally give wide latitude to government agencies when national security is involved.
Lawyers for the government are likely to argue that the executive branch, led by Donald Trump, has broad authority to determine which companies can supply critical technologies to the military.
They may also contend that the armed forces cannot depend on a contractor whose internal policies limit how its products can be used in combat or intelligence operations.
Historically, courts have been reluctant to second-guess national security decisions made by the president or defense officials, a factor that could complicate Anthropic’s legal challenge.
Analysis
The lawsuit represents one of the first major legal battles over how artificial intelligence companies interact with national security institutions. As AI technology becomes increasingly important to military planning and operations, tensions between private sector safety standards and government demands are likely to intensify.
The case also raises broader questions about the boundaries of government power in regulating technology firms. If the Pentagon’s designation is upheld, it could establish a precedent allowing defense officials to exclude companies from federal contracts based on policy disagreements over how their technologies are used.
Conversely, if the courts side with Anthropic, the ruling could reinforce limits on government authority and protect technology firms that attempt to impose ethical restrictions on the deployment of artificial intelligence in warfare.
Either outcome could shape the evolving relationship between Silicon Valley and the national security establishment as governments race to integrate advanced AI systems into defense strategies.
With information from Reuters.