What does the US military’s feud with Anthropic mean for AI used in war?

News Room · Published 7 March 2026 · Last updated 9:47 AM

Anthropic’s ongoing fight with the Department of Defense over what safety restrictions it can put on its artificial intelligence models has captivated the tech industry, acting as a test of how AI may be used in war and of the government’s power to coerce companies to meet its demands.

The negotiations have revolved around Anthropic’s refusal to allow the federal government to use its Claude AI for domestic mass surveillance or autonomous weapons systems, but the dispute also reflects the messy nature of what happens when tech companies have their products integrated into conflict. The Pentagon this week declared Anthropic a supply chain risk for its refusal to agree to the government’s terms, while Anthropic has vowed to challenge the designation in court.

We spoke with Sarah Kreps, a professor and director of the Tech Policy Institute at Cornell University who previously served in the United States Air Force, about how the feud has played out.

You’ve worked for a while on problems around “dual-use technology.” What happens when there’s a consumer technology that also gets used for classified or military purposes?

I’ve thought about this a lot because I was in the military and I was on the side of the military that was developing and acquiring new technologies. We were always getting criticism about why it was taking so long, and now watching what’s happening I realize why it takes so long.

What you would develop for classified and military contexts is very different from what Anthropic has developed for consumers, which is what I use when I open Claude. The challenge for the military is that these technologies are so useful it can’t wait until a military-grade version is available. It needs to act quickly because of how valuable these tools are, but it’s not surprising that it ran into cultural differences with not just any AI platform, but an AI platform that has tried to cultivate a reputation for being more safety conscious.

One element in this feud is that Anthropic has branded itself as a safety-forward company, but then it did sign onto a deal with the military.

Yes, there is a way in which it’s surprising that Anthropic would be surprised by where this ended up. Part of the challenge is that Anthropic seems to have made the decision a year or two ago that ChatGPT was going to be for individual users and Anthropic was going to try to corner the enterprise market. That means they’re trying to do business with organizations, rather than trying to sell individual plans.

The puzzle to me is that they were then doing business with the Pentagon and Palantir, which is in the business of using AI for what some people would say are questionable purposes. So that decision was surprising to me because it was very much at odds with the brand that Anthropic was trying to curate.

It seems like Anthropic was OK with a pretty wide use of its technology, but that it hit a red line with domestic mass surveillance and lethal autonomous weapons.

There are a couple of possibilities. One is that some of this had to do with relationships between the people at Anthropic and the Trump administration, which led to a downward spiral of distrust.

Second, there was the situation in Venezuela and then the politics around ICE activities. There is this question of what it actually means to use these technologies lawfully. One person’s definition of lawful might look very different from another’s.

The Pentagon’s argument was, in part, that if there’s a national defense issue we shouldn’t have to call up Dario Amodei to get approval. It does seem like there is an actual question here around what role private tech companies have in national security decision-making.

If you recall the case of the San Bernardino killer’s iPhone, authorities were worried that this was a ticking bomb situation and they needed Apple to get into the phone. (In 2016, the FBI demanded Apple create a backdoor to grant it access to a mass shooter’s phone. Apple refused on privacy grounds, and the FBI turned to an independent third party to hack into the device.)

The difference here with Anthropic’s AI is that once you hand this over to the military, you no longer need Anthropic’s approval to use it as you see fit. It’s the difference between hardware and software. You can repurpose this software and use it in ways that maybe weren’t part of the explicit agreement, but now you can justify it on the basis of national security. Then Anthropic has lost all its leverage because it’s in the hands of these national security professionals.

And Anthropic wouldn’t be able to tell what it’s even being used for, correct?

Yes, exactly right. It goes into not just a black box, but black ops and classified systems that are closed off.

I’ve found it interesting this week that a lot of really longstanding questions about AI use in the military seem to be coming to a head. You’ve been following these issues for a long time; what are you thinking as you watch this current fight?

When I would hear the CEO of Anthropic talk, he would focus on existential risks and the misappropriation of AI for bioterrorism. I always thought those were either too distant or too out of reach. I thought this sort of more mundane case was more of a risk.

There have also been people foreshadowing these questions about autonomous weapons for a long time. The challenge is: how do you ever know whether there’s actually a human in the loop? This was a concern that Anthropic had – how do we know if these systems are being used in a fully autonomous way? The US says it is not going to use AI in a fully autonomous capacity, but it’s not clear what the process looks like for ensuring that doesn’t happen. This was a long time coming, but I guess it was sort of inevitable that we would go in this direction, just because the technology has gotten more and more sophisticated. The fact of now being involved in a conflict just accelerates those timelines.
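
To see why that verification problem is so hard, consider a minimal sketch of what a human-in-the-loop gate can look like in software. Everything here is hypothetical, not any vendor’s or agency’s actual system: the point is that the “loop” is just ordinary code in the calling layer, and nothing in the model itself enforces it.

```python
# Hypothetical sketch of a human-in-the-loop gate. No real API or
# deployed system is being described; all names are invented.

from dataclasses import dataclass


@dataclass
class Recommendation:
    target_id: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0
    rationale: str


def human_approves(rec: Recommendation) -> bool:
    """Block until a human operator explicitly approves or rejects."""
    print(f"Proposed: {rec.target_id} ({rec.confidence:.0%}) - {rec.rationale}")
    return input("Approve? [y/N] ").strip().lower() == "y"


def act_on(rec: Recommendation) -> None:
    # The "human in the loop" is just this one call. Remove or stub it
    # and the system is fully autonomous -- and from outside a closed,
    # classified system there is no way to tell which version is running.
    if human_approves(rec):
        print(f"Action authorized for {rec.target_id}")
```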

We talk a lot about threats from AI and these red lines that people backed away from, but how is AI already being used in warfare?

You can see how it’s extremely useful in a military setting. I did some work on the intel side, and one of the challenges is not a lack of content, it’s the signal-to-noise ratio. You have a huge volume of information, but it can be really hard to connect the dots, and that’s something AI is so good at. You feed it large amounts of information and it generates outputs that help identify what the signal is.

If you’re looking for pattern recognition, AI is really good at pattern recognition. You can specify the correlates or characteristics you’re looking for, and then it can go out and identify things, say an Iranian naval vessel, based on what you’ve programmed it to identify. That’s not been super controversial in some ways, because those targets are fairly concrete.

Where people get more uncomfortable is in a setting where the US, for example, would do counter-terrorism strikes. You have an individual on the ground who doesn’t have a lot of identifiable characteristics, and so that is a much more precarious situation for AI, where you’d really want to make sure you’re triple-checking. He could be a combatant, he could be a civilian. It’s not a naval vessel or surface-to-air missile, where it’s harder to get that wrong.
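
To make that asymmetry concrete, here is a purely illustrative sketch of the kind of triage logic Kreps is describing. The class names and confidence threshold are invented for the example; no deployed targeting system works this way as written.

```python
# Illustrative only: route a model detection to a level of review that
# matches how checkable the target class is. All values are invented.

CONCRETE_CLASSES = {"naval_vessel", "surface_to_air_missile"}


def triage(label: str, confidence: float) -> str:
    """Return a review tier for one model detection."""
    if label == "person":
        # Combatant vs. civilian has no reliable visual signature, so a
        # person-level hit always escalates to mandatory human review.
        return "mandatory_human_review"
    if label in CONCRETE_CLASSES and confidence >= 0.95:
        # Hulls and launchers have distinctive, verifiable signatures;
        # even so, a high-confidence hit is flagged, not acted on.
        return "flag_for_analyst"
    return "discard_or_recollect"


# A concrete ship detection clears the automated bar; an ambiguous
# person never does, no matter how confident the model claims to be.
print(triage("naval_vessel", 0.97))  # flag_for_analyst
print(triage("person", 0.99))        # mandatory_human_review
```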
