
Trump orders US agencies to stop using Anthropic technology in clash over AI safety

FILE - Dario Amodei, CEO and co-founder of Anthropic, attends the annual meeting of the World Economic Forum in Davos, Switzerland, Jan. 23, 2025. (AP Photo/Markus Schreiber, File)

WASHINGTON (AP) — The Trump administration on Friday ordered all U.S. agencies to stop using Anthropic’s artificial intelligence technology and imposed other major penalties, escalating a clash between the government and the company over AI safety.

President Donald Trump, Defense Secretary Pete Hegseth and other officials took to social media to chastise Anthropic for failing to allow the military unrestricted use of its AI technology by a Friday deadline, accusing it of endangering national security after CEO Dario Amodei refused to back down over concerns the company’s products could be used in ways that would violate its safeguards.

“We don’t need it, we don’t want it, and will not do business with them again!” Trump said on social media.

Hegseth also deemed the company a “supply chain risk,” a designation typically stamped on foreign adversaries that could derail the company’s critical partnerships with other businesses.

In a statement issued Friday evening, Anthropic said it would challenge what it called an unprecedented and legally unsound action “never before publicly applied to an American company.”

Anthropic had said it sought narrow assurances from the Pentagon that its AI chatbot Claude would not be used for mass surveillance of Americans or in fully autonomous weapons. The Pentagon said it was not interested in such uses and would only deploy the technology in legal ways, but it also insisted on access without any limitations.

“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” the company said. “We will challenge any supply chain risk designation in court.”

The government’s effort to assert dominance over the internal decision-making of the company comes amid a wider clash over AI’s role in national security and concerns about how increasingly capable machines could be used in high-stakes situations involving lethal force, sensitive information or government surveillance.

OpenAI strikes deal with Pentagon hours after Anthropic was punished

Hours after its competitor was punished, OpenAI CEO Sam Altman announced on Friday night that his company struck a deal with the Pentagon to supply its AI to classified military networks, potentially filling a gap created by Anthropic’s ouster.

But Altman said that the same red lines that were the sticking point in Anthropic鈥檚 dispute with the Pentagon are now enshrined in OpenAI鈥檚 new partnership.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote, adding that the Defense Department “agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

Altman also said he hopes the Pentagon will “offer these same terms to all AI companies” as a way to “de-escalate away from legal and governmental actions and toward reasonable agreements.”

Trump and others lash out at Anthropic

Trump said Anthropic made a mistake trying to strong-arm the Pentagon. He wrote on Truth Social that most agencies must immediately stop using Anthropic’s AI but gave the Pentagon a six-month period to phase out the technology that is already embedded in military platforms.

“The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!” he wrote in all caps.

Months of private talks exploded into public debate this week and hit a stalemate when Amodei said his company would not accede to the demands.

Anthropic can afford to lose the contract. But the government’s actions posed broader risks at the peak of the company’s meteoric rise from a little-known computer science research lab in San Francisco to one of the world’s most valuable startups.

The president’s decision was preceded by hours of top Trump appointees from the Pentagon and the State Department taking to social media to criticize Anthropic, but their complaints contained contradictions.

Top Pentagon spokesman Sean Parnell said Anthropic’s unwillingness to go along with the military’s demands was “jeopardizing critical military operations and potentially putting our warfighters at risk.” Hegseth said the Pentagon “must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.”

Trump’s social media post said the company “better get their act together, and be helpful” during the phase-out period or there would be “major civil and criminal consequences to follow.”

However, Hegseth’s choice to designate Anthropic a supply chain risk relies on an administrative tool designed for companies owned by U.S. adversaries, meant to prevent them from selling products that are harmful to American interests.

Virginia Sen. Mark Warner, the top Democrat on the Senate Intelligence Committee, noted that this dynamic, “combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations.”

Dispute shakes up Silicon Valley

The dispute stunned AI developers in Silicon Valley, where venture capitalists, prominent AI scientists and a large number of workers from Anthropic’s top rivals — OpenAI and Google — voiced support for Amodei’s stand in open letters and other forums.

The moves could benefit OpenAI’s ChatGPT as well as Elon Musk’s competing chatbot, Grok, which the Pentagon also plans to give access to classified military networks. It could serve as a warning to Google, which has a still-evolving contract to supply its AI tools to the military.

Musk sided with Trump’s administration, saying on his social media platform X that “Anthropic hates Western Civilization.” Altman took a different approach, expressing solidarity with Anthropic’s safeguards and opposing the government’s “threatening” approach while also working to secure OpenAI’s deal with the Pentagon. It marked the latest twist in OpenAI’s longtime and sometimes acrimonious rivalry with Anthropic, which was founded by a group of ex-OpenAI leaders in 2021.

Retired Air Force Gen. Jack Shanahan, a former leader of the Pentagon’s AI initiatives, wrote on social media this week that the government “painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end.”

Shanahan said Claude is already being widely used across the government, including in classified settings, and Anthropic’s red lines were “reasonable.” He said the AI large language models that power chatbots like Claude, Grok and ChatGPT are also “not ready for prime time in national security settings,” particularly not for fully autonomous weapons.

Anthropic is “not trying to play cute here,” he wrote on LinkedIn. “You won’t find a system with wider & deeper reach across the military.”

___

O’Brien reported from Providence, Rhode Island.

Copyright © 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
