The Soft Bigotry of AI Doom: Because Users Are Just Too Incompetent | HackerNoon

News Room | Published 1 March 2026 | Last updated 12:06 PM

When a new, disruptive technology comes along, fearmongering is never far behind. Writing was said to erode our memory, yet most of you still somehow managed to remember to put underwear on today. Movies and television were supposed to destroy our imaginations, yet the Star Wars and Harry Potter universes are bursting with human imagination, and the sheer volume of their wildly inappropriate fan fiction proves as much. Smartphones supposedly eradicated our attention spans, yet… wow, that’s really shiny! I’m sure I don’t need to tell you how disruptive AI is, so naturally, the fearmongering has followed.

The thing is, this particular brand of fearmongering around AI has escalated rather quickly, even in the face of both absurd and hyperbolic arguments, such as AI destroying our critical thinking, collapsing our institutions, and ending our world. Even better, there’s an unspoken thread in this fearmongering that implicates you, the user of AI, as part of this destruction because, apparently, you’re just too damned incompetent.

Destruction of Critical Thinking

One of the supposed negative side effects of AI use is the destruction of critical thinking. It seems we are going to be so enamored with AI that we’re going to use it to supplant much of our thinking. No longer will you have to sit and have a think because you can simply have your AI do it for you. Don’t know what’s for dinner? Ask AI! Unsure of the moral implications of capital punishment in a contemporary society? Just ask AI! Is there life after death? AI, my friend. AI.

The argument basically boils down to this: because AI is so ubiquitous, we are going to offload so much of our cognitive effort that our critical thinking skills will diminish. It’s the old “use it or lose it” idea. But if the logic is that inventions which reduce cognitive effort also reduce critical thinking, then why didn’t the invention of writing prevent the invention of books, which should have prevented the invention of adding machines, which should have prevented the invention of computers, which should have prevented the invention of AI, whose creation required some of the most arduous collective critical thinking ever undertaken by humanity? Hint: the logic doesn’t hold.

It seems as though the best evidence for this brand of fearmongering is that students who use AI to write their papers are not engaging their critical thinking. Unfortunately, even this claim can’t be supported. What can definitively be said about students using AI to generate their papers is not that their critical thinking skills are degrading, but that they are spending less time writing. Do we need a reminder that writing is not critical thinking? Because if it is, Socrates was an idiot, as he didn’t write. So while writing can certainly be tied to critical thinking, it is not critical thinking itself.

The problem is, many of these arguments use writing as a measure for critical thinking, demonstrating a deep lack of critical thinking. Not that it needs to be said, but two things can be true at once: you can be a good critical thinker and a crap writer. You also don’t have to think critically to write a paper. As a community college instructor, I’ve seen plenty of papers that are bereft of even a grain of critical thinking, but somehow, there were still a bunch of words printed on paper. So my students broke reality in addition to my hopes for them.

No, the best claim that can be made is that many students are spending less time writing, so it’s possible there might be a reduction in time spent on critical thinking. But even then, it would be a leap to assume none ever occurs. Am I to understand that students using AI to write papers are not even going to see what the AI wrote and use critical thinking to evaluate the essay? Are they completely unaware that AI hallucinates and makes errors? If all that is the case, it’s not the AI that is preventing critical thinking, now, is it? As surprising as it is, cheating predates AI.

Wait a minute… I must wholeheartedly apologize. I am absolutely unqualified to think about this critically, as I was in the top 1% of users who sent messages to ChatGPT in 2025. Therefore, I will start acting appropriately: Derrrrrr! Duh, AI good!

The Fall of Civic Institutions

Our beloved civic institutions will also fall due to AI. An infamous paper recently made its way around the AI doom circles on this exact topic, How AI Destroys Institutions, and it proposes that this is going to happen through three mechanisms: deteriorating expertise through cognitive offloading, interfering with our decision-making, and isolating humans from each other.

Much like the argument for critical thinking, our professional skills are going to be eroded because of skill offloading from AI. It’s not that we might get worse at that specific thing AI is doing for us; rather, it’s that our professional skills will degrade. This is why I can never use an LLM for classroom content as an English as a second language instructor—my English skills would degrade, I wouldn’t be able to speak English anymore, and I’d be out of a job. Thanks, AI!

So we are to believe that professionals, people who have invested time and money into education and building up careers, are simply going to let important skills get unknowingly chipped away at because of AI? Are lawyers just going to become people who bring Claude into court? So all these people who’ve been highly trained won’t notice they’re not as effective at their jobs as they used to be? Their bosses won’t? The clients who pay for competent services won’t? That’s an extremely dependent and extraordinarily unlikely inverted pyramid made from a lack of self-awareness.

So I guess I won’t notice the degradation of my accountant’s skills when I have to pay six times more in taxes because of their mistakes, and I guess the parents of students won’t notice their children’s grades slipping because the teacher used AI and therefore sucks at teaching. Apparently, AI functions as a global blindfold. It’s fun when you find unintended uses of products!

Apparently, we’re also just going to have AI make our difficult moral choices for us because we’re just so damned lazy. We’re going to outsource our morality and judgment, all hidden behind an unknowable algorithm. No one will ever hash it out and come to a better agreement because we’re just going to outsource all of that to AI. I know how eager the public is to outsource our ethics, judgment, and morality to AI. Thank goodness there’s never, ever, ever been any pushback on this idea. Like ever. I guess Catholics will ask for repentance through Grok rather than through priests.

In order to collapse our civic institutions, such as education and the justice system, AI will also erode human relationships. Now, it is true that AI will displace some relationships; there’s a good chance the relationship you had with your assistants will be a faint memory when AI replaces them. Honestly, I still haven’t replaced the relationship I had with my ice block delivery man or the horse he rode on. Gosh, I sure do miss the 80s… The 1880s, that is.

And of course, all of this destruction of human relationships will happen only because of AI. If you thought it started happening with the decline of the monoculture as digital technology ramped up, well, you’re just wrong.

Additionally, people will become isolated because, with AI being so agreeable, there’s less reason to hash stuff out with others in an uncomfortable manner. I get it; people are conflict-averse. That’s why when I turn on the news, I only see stories about rainbows and puppies rather than wars and protests. Humanity is so harmonious!

So yeah, our beloved civic institutions will crumble. Damn you, AI!

The End as We Know It

If destroying our society wasn’t bad enough, AI is also going to contribute to the end of the world. The Doomsday Clock by the Bulletin of the Atomic Scientists has been moved to 85 seconds to midnight, in part because of the threat from AI. AI-driven upgrades to biological, nuclear, and informational warfare are supposedly pushing us closer than we’ve ever been. And while these threats do actually have some merit, the hyperbolic conclusion still leaves this firmly in the fearmongering camp.

The fear of biological warfare is that AI will create a new, dangerous pathogen that people have no defense against. For nuclear warfare, AI’s implementation could mask the decision-making process, increasing the chances of error with a devastating weapon. And for informational warfare, it’s more about sowing chaos through deepfakes and the like.

I’ll be the first one to admit that these particular threats do seem a bit more compelling, though I’m still unsure that inching us toward doom is the appropriate conclusion. It is very conceivable that AI could design an extremely dangerous pathogen, but to be fair, we’ve safely stored plenty of dangerous pathogens for many years, so I’m just going to continue keeping my fingers crossed.

For nuclear warfare, technology in general typically reduces the need for human judgment, and it’s easily argued that it reduces errors from human judgment, so the doom argument seems like a wash at best. As for informational warfare with deepfakes, yeah, that one’s hard to refute. Society is just going to have to figure that one out as we did with other disruptive technologies, though again, contributing to the end of the world seems a bit of a hyperbolic conclusion in the meantime.

The Thread That Binds

While AI doom slop has always annoyed me, it wasn’t until I sat down and thought about what binds these stories together, somehow without the help of AI, that I found the thread: humans are too incompetent to use AI responsibly. See, the allure of AI is simply too great for humanity, so naturally, the result is a degradation of our critical thinking, the collapse of our society, and even the destruction of our world.

We must first acknowledge that for any of these doom scenarios to come to fruition, it requires an incredible amount of human failure stacked on human failure stacked on human failure. We’ve already been warned by the doomers who seem immune to AI’s negative effects, yet we’re just not going to do anything to mitigate these potential disasters? Is AI not going to improve in any marked way? The companies that create AI have no incentive not to let it destroy the world? I had no idea profits continued to percolate to the afterlife.

It seems parents and educators will simply shrug and accept that their children won’t be very good thinkers. The institutions that prop up our societies apparently can’t do anything in the face of AI to save themselves from ultimate destruction. And the great powers of the world would certainly never do anything to mitigate the risk of world destruction, even though the primary goal of conflict is to not be destroyed, but whatever. Remember, humanity is incompetent. They won’t say it outright, but it’s an implicit requirement in all of these predictions.

Plus, while we may be a bit late to the party, historically, we have always recognized the dangers of technology and done our best to minimize the risks. Cars now have seatbelts and airbags, houses now have circuit breakers and grounded outlets, guns have safeties, and even the Three Mile Island, Chornobyl, and Fukushima disasters produced increased nuclear safety. I wonder which magical property of AI makes it resistant to our inevitable improvements.

What can actually be stated with confidence is that it’s possible we might become too reliant on AI for problem-solving. It’s possible AI will collapse our institutions, but the sheer number of required failures to get there makes it virtually impossible. It’s also possible AI will be participating in the destruction of the planet, but if over 80 years of humanity having nuclear technology is any indication, we’re about 23.95 hours till midnight rather than 85 seconds.

To be clear, none of this means we shouldn’t be cautious; caution with a new disruptive technology should be a requirement. However, we all know the claims being made aren’t advising caution; that’s gone out the window, and they’re predicting disaster.

I suppose AI Will Erode Our Critical Thinking is a bit catchier than AI Will Erode Our Critical Thinking If We Simply Do Nothing But Twiddle All Our Thumbs As It Happens. How AI Destroys Institutions is a bold and head-turning title whose cup overflows with hyperbole; AI Has The Possibility To Generate Some Negative Effects On Our Institutions, So Let’s Prepare Ahead Of Time isn’t nearly so bold. AI Is Pushing Us Closer To Global Doom naturally gets many clicks; AI Is A New Tool, So Let’s Proceed Cautiously doesn’t, even if it’s more accurate.

The Takeaway

So, what can be learned from these AI doom stories? Well, it seems they think you’re incompetent and can’t use AI responsibly. They think you’re not going to do anything to mitigate any potential problems from AI. They also think you are simply going to use it in a manner that is ultimately destructive. So really, what we’ve learned is that the soft bigotry of low expectations has come to the world of AI.

Oh joy.
