World of Software
News

We don’t have to have unsupervised killer robots

News Room
Published 27 February 2026 · Last updated 12:20 PM

It’s the deadline day for the Pentagon’s ultimatum to Anthropic: allow the US military unchecked access to its technology, including for mass surveillance and fully autonomous lethal weapons, or be designated a “supply chain risk” and potentially lose hundreds of billions of dollars in contracts. Amid the intensifying public statements and threats, tech workers across the industry are looking at their own companies’ government and military contracts, wondering what kind of future they’re helping to build.

While the Department of Defense has spent weeks negotiating with Anthropic over removing its guardrails, including allowing the US military to use Anthropic’s AI to kill targets with no human oversight, OpenAI and xAI had reportedly already agreed to such terms, though OpenAI is now reportedly attempting to adopt the same red lines as Anthropic in its agreements. The situation has left employees at some companies with defense contracts feeling betrayed. “When I joined the tech industry, I thought tech was about making people’s lives easier,” an Amazon Web Services employee told The Verge, “but now it seems like it’s all about making it easier to surveil and deport and kill people.”

In conversations with The Verge, current and former employees from OpenAI, xAI, Amazon, Microsoft, and Google expressed similar feelings about the changing moral landscape of their companies. Organized groups representing 700,000 tech workers at Amazon, Google, Microsoft, and more have signed a letter demanding that the companies reject the Pentagon’s demands. But many saw little chance of their employers — whether they’re directly embroiled in this conflict or not — questioning the government or pushing back.

“From their perspective, they’d love to keep making money and not have to talk about it,” said a software engineer from Microsoft.

So far, Anthropic has stood its ground. Anthropic CEO Dario Amodei put out a statement on Thursday saying the Pentagon’s “threats do not change our position: we cannot in good conscience accede to their request.” But he has also said he is not opposed to lethal autonomous weapons at some point in the future, only that the technology is not reliable enough “today.” Amodei even offered to partner with the DoD on “R&D to improve the reliability of these systems, but they have not accepted this offer,” he wrote in the statement.

In the past few years, however, major tech companies have loosened their rules or changed their mission statements to expand into lucrative government or military contracts. In 2024, OpenAI removed a ban on “military and warfare” use cases from its terms of service; after that, it signed a deal with autonomous weapons maker Anduril and then its DoD contract, and just this week, Anthropic changed its oft-touted responsible scaling policy, dropping its longtime safety pledge in order to ensure it stayed competitive in the AI race. Big Tech players like Amazon, Google, and Microsoft have also allowed defense and intelligence agencies to use their AI products, including some agreeing to work with ICE despite growing outcry from the public and employees alike.

In past years, tech workers’ resistance to partnerships and deals they deemed harmful to society sometimes led to big changes. In 2018, for instance, thousands of Google employees successfully pressured the company to end its “Project Maven” partnership with the Pentagon, and Microsoft workers presented leadership with an anti-ICE petition signed by about 500 employees, though Microsoft still works with the agency. In 2020, after the murder of George Floyd, tech companies made public statements and financial commitments in support of the Black Lives Matter movement. But in recent months, the industry has seen a very different reality: a culture of fear and silence, especially amid cooperation with the Trump administration and ICE, tech workers recently told The Verge.

Companies have followed in the footsteps of longtime surveillance and military tech firms, which have only become more hawkish. That includes the Peter Thiel-cofounded Palantir, whose CEO Alex Karp recently told shareholders that “Palantir is here to disrupt and make the institutions we partner with the very best in the world, and, when it’s necessary, to scare enemies and on occasion kill them. And we hope you’re in favor of that.” (Protect Democracy, a nonprofit, recently put out an open letter calling for congressional oversight of the Department of Defense’s demands for unrestricted use of AI.)

OpenAI, Google, Microsoft, xAI, and Amazon did not immediately respond to requests for comment.

A former xAI employee told The Verge, “Everyone is actually working on killer robots at this point,” adding that he believes everyone will follow in the footsteps of Palantir, Anduril, and xAI, since the government sentiment is that if a company doesn’t acquiesce, it’s “against the benefits of the country, in a sense.” He said there’s a “big push for working with the military, and the trend is it’s cool to do it… You’re a patriot if you do it.”

A Google employee called the situation a “dominance display from Hegseth that is disgusting.” He added, “Over and over AI is presenting us with choices about who we want to be and what kind of society and future we want to have. And they’re coming at us fast and with, really, the least thoughtful and least principled leaders in power that we could imagine. I can only thank Anthropic for insisting on the decent path and using their leverage — that they are indispensable — to chart a course toward a humane world and a humane future.”

The AWS employee told The Verge that “boundaries have definitely eroded in terms of the customers big tech is willing to court” and that there’s “a deliberate whitewashing of the implications of new lucrative deals.” She recalled recently receiving an email from an AWS executive touting a more than $580 million contract with the US Air Force, among other partnerships, as a sign of Amazon’s AI successes, with no acknowledgment of the broader scope or harms involved.

“If the government is hell-bent on pursuing technologies like this, they should have to build them themselves, and be answerable for those decisions,” she said.

The erosion may have extended to internal culture as well — normalizing the idea that companies should always be watching. The AWS employee said that she and her colleagues are tracked on how much they’re using AI for their jobs, how often they’re working from the office, and more. “I can see myself and my coworkers getting more desensitized to surveillance on ourselves at work, and I’m worried that means we’re obeying, complying, and giving up too much in advance,” she said.

An OpenAI employee said the general feeling within the AI industry over the last few weeks “has reopened the door to more discussion… about the values and the future of the technology.” The employee said that the Pentagon-Anthropic situation, the recent ICE headlines, and the fast advancement of AI have been some of the main factors opening up those discussions internally.

Even so, people who are immigrants or in more vulnerable positions are more afraid to speak, the OpenAI employee said.

Anthropic, the former xAI employee said, seems like it’s in a position where it can say no and still stay afloat. Its focus on enterprise rather than consumer AI business may make it more sustainable even without government contracts, offering it some leverage. A software engineer at Microsoft said of Anthropic, speaking generally, “I was surprised to see them stand on some form of principle. I don’t know how long it’ll last.”

“Will it last?” seems to be the question on everyone’s lips. The Pentagon has already reportedly asked two major defense contractors, Boeing and Lockheed Martin, to provide information about their reliance on Anthropic’s Claude, as it moves to potentially designate Anthropic a “supply chain risk,” a classification usually reserved for threats to national security and rarely, if ever, assigned to a US company. It also reportedly may be considering invoking the Defense Production Act to attempt to force Anthropic to comply with its request.

If Anthropic folds, the Microsoft employee said, there’s little chance of it, or any other AI company, pulling back on killer robots and surveillance. “Once you’re in the door with the Department of Defense or whatever we’re calling it now… I think it’s probably hard for them to actually have the oversight they claim. It’s just going to be lucrative to basically give themselves permission to do the thing that makes the most money.”

In Microsoft’s own case, he said he doesn’t expect the company to adhere to any sort of ethical principles. The company has worked extensively with the Israel Defense Forces, including on mass surveillance of Palestinians and dissidents, despite employee protest. (It said it ended some parts of the partnership last year.)

Another Microsoft employee told The Verge that although “Microsoft holds a Responsible AI ‘commitment,’… they are currently attempting to play both sides for the sake of profit rather than meaningfully committing to Responsible AI.”

But this is nothing new, one AI startup employee said. In her eyes, the boundaries have often been “fuzzy, especially within AI,” about what kinds of things companies are willing to let their technology power. “A lot of it has been going on beneath the surface for as long as AI has been around.”

The AWS employee emphasized that “we need cross-tech solidarity and a coherent, worker-led vision for AI now more than ever.”

“The safeguards that Anthropic is trying to keep in place are no mass surveillance of Americans and no fully autonomous weapons, which just means that they want a human in the loop if the machine is going to kill somebody,” she added. “Even if this technology were perfect — which it isn’t — I think most Americans don’t want machines that kill people without human oversight running around in an America that’s become an AI-powered mass surveillance state.”
