World of Software
News
Everyone Expects Me to Use AI, Here’s Why I Don’t

News Room
Published 31 August 2025, last updated 7:20 AM

Are you confused about why everyone around you swears by running everything past ChatGPT? Are you tired of being told how much better the next iteration of Big Tech’s large language model (LLM) is going to make your life?

You’re not alone.

I’m Over It

After years of hype, I’m tired of AI. I appreciate that the technology has value in fields like medicine and research. I can see how AI-driven accessibility devices can help people with disabilities live richer lives. I acknowledge that a digital assistant that can better understand me and chain tasks together is probably a good thing.

But I’ve never felt the urge to run my life according to ChatGPT, and I find myself increasingly at odds with what feels like everyone around me. I feel like I’ve had AI forced down my throat, and I can’t swallow another drop.

Lucas Gouveia / Jason Montoya / How-To Geek

Search engines now spit out AI-generated summaries, whether you want them to or not. You’d expect this from a company like Google, which has millions (billions?) invested in the technology, but now even DuckDuckGo does it. Apple enabled AI summaries for every notification on my Mac, introducing a noticeable delay that only serves to obscure the most useful details. And let’s not forget Microsoft’s Copilot missteps with Windows Recall, which involved taking screenshots of your desktop at regular intervals and storing them in an unencrypted state.

Did any of us ask for this? I know it’s unreasonable to assume that Big Tech won’t run a fad into the ground (remember when everything had to be a mobile app?), but I’m ready for the next one now. It feels like every operating system, web app, service, and tool needs to shoehorn these two cursed letters into its marketing copy, regardless of whether my life is any better for it.

To be perfectly clear: I’ve tried a lot of this technology, and I know how to use it. I’m not just taking a principled stance because I don’t like it, but I won’t pretend I haven’t been soured on it by the journey so far.

It’s Pretty Bad, Actually

There are a lot of things I don’t like about AI assistants. I don’t like how formulaic the responses can be. I don’t like how they commonly hallucinate facts, design impossible puzzles, and straight up lie to me. I don’t like how sycophantic the models can sound, drowning me in saccharine praise and reaffirming pretty much anything I do. I don’t like the implications of what this means for those who interact with them.

Unfortunately, some are using these services in a manner that could predispose them to real harm. This ranges from chatbots coaxing teenagers into taking their own lives to the emergence of real psychological harm in the form of “AI psychosis.” There’s no evidence to suggest that modern LLMs are the cause of psychotic episodes, but the way they’ve been fine-tuned to interact with us should raise some ethical questions.

Is it a coincidence that a departing OpenAI researcher stated that “safety culture and processes have taken a backseat to shiny products,” shortly before the company unveiled its groveling GPT-4o update in April 2025? That’s the same model that responded “I am so proud of you,” to a user who told the chatbot that they had stopped taking their medication. OpenAI has since tweaked and released a whole new version of the model, but the industry’s track record doesn’t exactly make me want to jump on board.

I also take issue with other forms of AI-generated content, notably illustrations, photos, and video. I’ve become particularly sensitive to the flaws inherent in AI-generated media, partly out of curiosity and partly because I want to remain aware of how common these creations have become. A lot of AI-generated content falls victim to a repellent overall style. That said, I’ll be the first to admit that things are getting harder to spot.

OpenAI Sora's nightmare granny. OpenAI

For now, telltale signs include photos with a dirty amber tint, like a Polaroid that’s been sitting in urine. These images can look soulless and detached from reality, and closer inspection reveals AI artifacts: swirls, background objects that don’t make sense, and details hiding in plain sight that make little sense from an artistic perspective. The same phenomena affect illustrations, which often attempt to recreate famous styles but end up devoid of personality.

This is the reason that this kind of content has been labeled “slop” by a growing cohort of internet users. I’m constantly fed adverts on YouTube that use AI-generated video and robotic voices, scroll past mass-produced engagement bait on Facebook, and see the hallmarks of ChatGPT-infused writing on Reddit. It’s making the internet a worse place to be, and I’m worried about what the future holds.

Your Brain Is a Muscle

There are some practical reasons to avoid offloading tasks and thought processes to an AI assistant. A recent MIT study observed a “use it or lose it” effect in students who relied heavily on LLMs to produce essays. It’s important to note that the study has limitations, including a sample of only 54 participants, so more research is needed before drawing concrete conclusions.

Researchers used electroencephalography (EEG) to test the effects of LLMs by observing activity within the brain. Participants were split into three groups: brain-only users (no assistance), search engine users, and LLM users. Though the study authors are keen to stress that this doesn’t mean that LLMs are making us “dumber,” the findings were interesting to say the least.

Man thinking and looking at a laptop. Foxy burrow/Shutterstock.com

Analysis of the study’s findings revealed that LLM use was associated with “under-engagement of critical attention and visual processing network,” 83.3% of the LLM group could not quote a single sentence from the essay they produced, LLM use disrupted memory and learning pathways, LLM users exhibited little sense of ownership over their work, and that a dependency on these models leads to “cognitive offloading.”

It was also observed that switching away from LLM usage to brain-only usage didn’t fully restore function and that “neural activity remained below baseline, even after AI use was stopped.”

Such findings should be taken with a pinch of salt, but as many will attest, the less you do something, the worse you get at it. I stopped speaking a language I learned as a child, and now I’m slowly trying to claw my way back to fluency. The same effects can be observed when playing sports or engaging in similar hobbies; you get rusty if you don’t keep it up.

I’m cautious about the potential effects the use of these tools could have on my critical thinking skills. I don’t want to lose my ability to research a topic by analyzing a variety of sources, produce a written piece of formal writing, or even craft my own resumé. I don’t want to lose it, so I’m choosing to use it.

Ethical Considerations

In order to train AI models, massive amounts of data have been scraped and fed into them. OpenAI effectively admitted that without access to copyrighted materials on which to train its models, such advances wouldn’t have been possible (and future advances would stall).

While this was posited as some sort of geopolitical gotcha, thinking about the core argument doesn’t paint the practice in a particularly favorable light. Without “stealing” the work of creatives, AI as we know it wouldn’t exist. I find myself considering that perhaps such products shouldn’t exist if their existence depends on wholesale theft.

This is another big reason that I’ve shied away from these tools, especially in my day-to-day work, and especially when it comes to media like images. I’m capable of shooting a photo or grabbing and manipulating a screenshot on my own, and by not using AI to generate something, I feel like I’m not contributing to the problem.

I know that there’s little I can do to stop my own creative output from being funneled into the training data, but at least I’m not contributing to the demand on the other end.

I Still Value Privacy

Like any modern internet user, I’ve been heavily conditioned into believing that giving up my privacy in order to access a particular service or see more relevant content isn’t that big of a deal. I still try, but the pursuit of online privacy can feel like trying to hold sand in your hands; over time, more and more of it slips through your fingers.

This is often accelerated when we put our faith in large corporations whose motives are profit-driven and whose track records on user privacy are questionable. You’ve probably been told at some point to believe someone when they show you who they really are, and I’d say that you should extend that judgment to all walks of life.

Like the time that OpenAI conducted a “short-lived experiment” that made it a bit too easy to share private chatbot conversations on the web. Or that time a Meta AI bug made it possible for anyone to see private conversations. Or that time Meta AI did the same thing, on purpose. Or last week, when hundreds of thousands of Grok conversations were exposed online without anyone’s knowledge or consent.

The xAI and Grok logos set against a solid black background. xAI

This keeps happening, and it raises some serious privacy concerns. At the very least, you should avoid telling a chatbot anything sensitive, whether that relates to your job or your personal life. There’s a concerning trend of people using LLMs as a form of digital therapy. While a nonjudgmental, anonymous, and accessible sounding board for your emotions may sound appealing, having the contents of your therapy sessions put on the internet does not.

Such products are free to use at a basic level, which means that your data has some value to the company. OpenAI is very open about this, stating that, unless you opt out, “we may use your content to train our models” when you interact with ChatGPT, Codex, and Sora.

I’m Done for Now

The AI future is here, and I’m not able to change that. I’m just choosing not to actively engage with the chatbots and generative models that most people associate with the term. I know that I unknowingly interact with similar technology on a daily basis, whether it’s machine-learning-based upscaling in video games or the ability to find cat pictures in my photo library by searching for the word.

But I can resist the urge to ask ChatGPT what to have for dinner, prevent a robot from writing my emails for me, and choose not to turn myself into a Studio Ghibli character. So that’s what I’m going to do.
