
The Pentagon is evaluating whether to terminate its AI partnership with San Francisco-based Anthropic after reports surfaced that the U.S. military used the company’s large language model Claude in the classified operation that resulted in the capture of Venezuela’s former president Nicolás Maduro, according to U.S. defense and industry sources.
The dispute arises from differences over usage restrictions on the AI and Pentagon demands for broader deployment rights.
In a significant development in U.S. defense technology policy, Pentagon officials are considering severing ties with Anthropic, a leading artificial intelligence developer, amid disagreements over usage policies governing the company’s AI models. According to senior administration officials cited by Axios and Reuters, the Department of Defense has been engaged in protracted talks with Anthropic over the terms under which the military may employ Claude, particularly in sensitive defense tasks that extend beyond standard applications.
The core of the dispute stems from the Pentagon’s request that major AI firms grant the department the ability to use their models for “all lawful purposes,” encompassing weapons development, intelligence gathering, and battlefield support. Anthropic has resisted these demands, maintaining that its ethics-driven policy framework prohibits certain applications, including mass surveillance and fully autonomous weapons, even when requested by the military. This stance, officials argue, creates operational ambiguity and complicates logistics for defense users who rely on consistent AI functionality.
Defense officials assert that a more flexible use policy is necessary for national security operations, especially given the rapid integration of AI into military planning and execution. Other AI developers, including OpenAI, Google, and Elon Musk’s xAI, have agreed to broader terms for Pentagon access, though negotiations continue over classified system integration. Sources familiar with the matter note that Anthropic’s reluctance to adopt the “all lawful purposes” standard has prompted mounting frustration within defense leadership.
Anthropic, for its part, has publicly reiterated its commitment to supporting national security while emphasizing that its usage policies are designed to prevent misuse. A company spokesperson stated that discussions with the government have centered on clarifying these policies, not on specific operational deployments, underscoring the company’s insistence on contractual and ethical safeguards.
The tensions between the Pentagon and Anthropic have been catalyzed by reports that Claude was deployed in the U.S. military’s January operation to capture Nicolás Maduro, the former Venezuelan president, in Caracas. Reporting by the Wall Street Journal and confirmation from Reuters indicate that Claude was used through Anthropic’s partnership with Palantir Technologies, which supplies data and AI platforms to the Defense Department, including on classified networks.
While the precise functions Claude performed during the operation have not been publicly detailed by U.S. authorities, the deployment on classified systems marked a high-visibility instance of a private firm’s AI model participating in an active military context. The development has drawn scrutiny because Anthropic’s usage policy explicitly bars its AI from being used to facilitate violence, develop weapons, or conduct surveillance.
Officials cited in media reports have indicated that an Anthropic executive’s inquiry to a Palantir counterpart about Claude’s use in the raid raised concerns within the Department of Defense, suggesting that the company might not approve of third-party use of its software in kinetic military operations. Pentagon sources described this as problematic for operational alignment.
Anthropic’s existing contract with the Pentagon, valued at up to $200 million, was awarded to bring advanced AI capabilities, including on classified networks, into defense use. However, the reported use of Claude in the Maduro operation has intensified debates within the Pentagon about whether Anthropic’s safeguards are compatible with military requirements and whether the department should pivot to other AI suppliers with fewer restrictions.
The dispute between the Pentagon and Anthropic highlights broader tensions as the U.S. military increasingly incorporates artificial intelligence into doctrine and operations. The Pentagon’s push for unrestricted deployment of commercial AI reflects strategic imperatives to maintain technological superiority, particularly against peer competitors. However, Anthropic’s resistance illustrates the ethical and policy challenges that arise when commercial AI firms set limits on how their models are employed, even for national security missions.
As officials review the future of the partnership, the Pentagon must balance operational flexibility with maintaining relationships with leading AI developers. Other companies in the defense tech landscape, such as OpenAI and Google, are actively negotiating terms to expand their models’ use in both classified and unclassified settings, pointing to a competitive environment in the AI-for-defense domain.

