Exclusive: Pentagon threatens to cut off Anthropic in AI safeguards dispute

Axios

The Pentagon is considering severing its relationship with Anthropic over the AI firm's insistence on maintaining some limitations on how the military uses its models, a senior administration official told Axios.

Why it matters: The Pentagon is pushing four leading AI labs to let the military use their tools for "all lawful purposes," even in the most sensitive areas of weapons development, intelligence collection, and battlefield operations. Anthropic has not agreed to those terms, and the Pentagon is getting fed up after months of difficult negotiations.


  • Anthropic insists that two areas remain off limits: the mass surveillance of Americans and fully autonomous weaponry.

The big picture: The senior administration official argued there is considerable gray area around what would and wouldn't fall into those categories, and that it's unworkable for the Pentagon to have to negotiate individual use-cases with Anthropic — or have Claude unexpectedly block certain applications.

  • "Everything's on the table," including dialing back the partnership with Anthropic or severing it entirely, the official said. "But there'll have to be an orderly replacement [for] them, if we think that's the right answer."
  • An Anthropic spokesperson said the company remained "committed to using frontier AI in support of U.S. national security."

Zoom in: The tensions came to a head recently over the military's use of Claude in the operation to capture Venezuela's Nicolás Maduro, through Anthropic's partnership with AI software firm Palantir.

  • According to the senior official, an executive at Anthropic reached out to an executive at Palantir to ask whether Claude had been used in the raid.
  • "It was raised in such a way to imply that they might disapprove of their software being used, because obviously there was kinetic fire during that raid, people were shot," the official said.

The other side: The Anthropic spokesperson flatly denied that, saying the company had "not discussed the use of Claude for specific operations with the Department of War. We have also not discussed this with any industry partners outside of routine discussions on strictly technical matters."

  • "Claude is used for a wide variety of intelligence-related use cases across the government, including the DoW, in line with our Usage Policy."
  • "Anthropic's conversations with the DoW to date have focused on a specific set of Usage Policy questions — namely, our hard limits around fully autonomous weapons and mass domestic surveillance — none of which relate to current operations," the spokesperson said.

Friction point: Beyond the Maduro incident, the official described a broader culture clash, characterizing Anthropic as the most "ideological" of the AI labs when it comes to the potential dangers of the technology.

  • But the official conceded that it would be difficult for the military to quickly replace Claude, because "the other model companies are just behind" when it comes to specialized government applications.

Breaking it down: Anthropic signed a contract valued at up to $200 million with the Pentagon last summer. Claude was also the first model the Pentagon brought into its classified networks.

  • OpenAI's ChatGPT, Google's Gemini and xAI's Grok are all used in unclassified settings, and all three have agreed to lift the guardrails that apply to ordinary users for their work with the Pentagon.
  • The Pentagon is negotiating with them about moving into the classified space, and is insisting on the "all lawful purposes" standard for both classified and unclassified uses.
  • The official claimed one of the three has agreed to those terms, and the other two were showing more flexibility than Anthropic.

The intrigue: In addition to CEO Dario Amodei's well-documented concerns about AI-gone-wrong, Anthropic also has to navigate internal disquiet among its engineers about working with the Pentagon, according to a source familiar with that dynamic.

The bottom line: The Anthropic spokesperson said the company was still committed to the national security space.

  • "That's why we were the first frontier AI company to put our models on classified networks and the first to provide customized models for national security customers."
