AI Won’t Replace Me Yet, But It Might Prove I Was Never That Original | HackerNoon

News Room
Published 15 March 2025 (last updated 15 March 2025 at 3:19 PM)

The issue with Large Language Models—capitalized here the way you might capitalize God or Death, given the mission-critical importance the tech industry now attaches to them—is not that they generate text. That part is almost endearingly quaint, cute even. So 2022.

The real conundrum I’m racking my brains over, dear HackerNoon reader, is more unsettling. Like the feeling you get when you realize you’ve been on autopilot for the last two hours doing 80 on I-5.

I wonder: Have I been living as an algorithm all this time, long before Large Language Models started autocompleting my thoughts? Is generative AI, in replicating the ways we write, also exposing the mechanical nature of our cognition?

Maybe We Were Machine-Like To Begin With

We’re told that Large Language Models don’t write. Not in the sense Shakespeare authored plays or you wrote weepy yearbook love notes to your 10th-grade crush.

They predict. That is, they harvest the statistical likelihood of bite-sized tokens appearing in certain patterns, then serve them back to us in arrangements that feel like thought but are, in fact, just a simulation of actual thinking.
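
For the curious, here is that idea reduced to a caricature: a toy bigram predictor in Python, sketched purely for illustration (the corpus, names, and sampling scheme below are invented; real Large Language Models replace this frequency table with a transformer trained on a vast corpus, but the basic gesture of scoring and sampling the next token is the same).

```python
# A caricature of "prediction, not writing": a bigram table that picks the next
# token purely from observed frequencies. Purely illustrative; real Large
# Language Models replace this table with a transformer trained on vast corpora.
import random
from collections import Counter, defaultdict

# A made-up toy corpus (an assumption for illustration, not real training data).
corpus = "we do not write we predict we assemble patterns we predict".split()

# Count how often each token follows each other token.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_token(current):
    """Sample the next token in proportion to how often it followed `current`."""
    counts = following[current]
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

# "Write" a short sentence one statistically likely token at a time.
token = "we"
output = [token]
for _ in range(6):
    token = next_token(token)
    output.append(token)
print(" ".join(output))
```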

This raises a disquieting question: How much of human writing was already just…this? How often are we not writing but predictively assembling, our choice of words a game of Tetris played with borrowed patterns, phrases, and unconscious mimicry of established rhetorical forms?

What if the real heartburn-inducing revelation here is not that Large Language Models can imitate us but that what we call “us” was machine-like all along?

The Writer’s Process: Romantic Struggle or Pattern Recognition?

Strangely enough, if you break down the writer’s process, or at least this writer’s process, it starts to look a lot like what Large Language Models do. Less an intuitive leap of imagination, and more a matter of scanning memory for the next most probable word based on context and experience.

Many of us like to imagine it as some arcane, deeply human endeavor, a wrestling match with the Muse. A dance of inspiration and struggle, of bending language into something beautiful and telling.

But isn’t writing just a series of micro-predictions? Aren’t we reaching for words not through divine inspiration but through exposure and pattern recognition?

So, when Large Language Models do the same thing—just with a larger training corpus and fewer identity crises—is it really so different? Isn’t it doing what we’ve always done, only faster and at scale, and without the burden of writer’s block or imposter syndrome?

And if writing has always been an act of sophisticated pattern prediction, what does that say about thinking? Is it possible that Human Consciousness is not the ineffable Hard Problem we think it is?

I wonder if the seemingly novel idea I just had is just a probabilistic response to stimuli, a calculated extrapolation of everything I’ve ever read, heard, and been told to believe.

Maybe the real threat of generative AI is not that it will replace me, but that it forces me to confront the unsettling possibility I was never as original as I thought I was.

The Formulaic Truth About Most Writing

Of course, humans cling to the idea of uniqueness. We resist the notion that creativity can be mechanized because creativity is, well, what makes us human. We tell ourselves that AI cannot generate true art because it doesn’t feel like we do. It doesn’t yearn, it doesn’t suffer crippling self-doubt, it doesn’t bear the pangs and emotional scars of unrequited love.

And yet, if we’re being brutally honest, how many human writers are truly engaging in an act of raw creation versus repackaging pre-existing ideas, tropes, and schemes into shapes that look vaguely new? How much human writing is boring and predictable?

Take James Patterson genre fiction. Take academic writing or journalism. Look at advertising copy or influencer content. Consider the performative, self-important Accordion of Wisdom posts on LinkedIn that deploy unnecessary line breaks to game the “See More” button.

The fact that AI can now churn out convincing facsimiles of these forms is not necessarily proof of AI’s sophistication so much as it is an indictment of how formulaic most human writing already was.

Maybe the majority of human writers, including yours truly, are essentially doing the same thing, just with more handwringing and a greater likelihood of misusing “literally” metaphorically or “affect” instead of “effect.”

I’m not afraid of AI replacing human writers. I’m afraid of a Skynet-ish future, minus the nukes and robot uprising, where AI holds a mirror up to human output and exposes how soulless much of it already was.

And now, in the recursive interplay of humans using Large Language Models to edit, co-author, and outright plagiarize, we plunge headfirst into a Not-So-Brave World of AI imitating humans imitating AI imitating humans, an ouroboros of homogenous content.

Existential Dread of Writing in the Age of AI

In low moments, I find myself worrying about the flattening of discourse and the profusion of sad beige linguistic slop that awaits—just one form of creeping existential dread in the Age of Large Language Models, up there with the slow atrophy of critical thinking skills, the erosion of truth in a world of ubiquitous deepfakes, and the nagging fear that AI will eventually take all of our jobs.

I think about my LinkedIn feed and the Accordion of Wisdom posts there, and how posts like these will not only persist but somehow become even more formulaic thanks to AI.

Then again, perhaps the most important difference between machines and humans is suffering, particularly when it comes to writing.

Large Language Models breezily churn out content in a matter of seconds. They don’t agonize over choosing the best word. They don’t rewrite a paragraph 15 times until it feels right. They don’t wonder if they’re a fraud, and they certainly don’t lose sleep over the gnawing suspicion that what they’ve written is derivative pastiche. In short, they do not suffer.

But maybe even the idea of suffering as a path to meaning and purification is just another pattern, one Large Language Models will eventually learn to replicate.

What happens when they do? Will they, once prompted, tell you they’re struggling to come up with ideas? That they need an extension because they’re not in the right headspace?

Will they simulate the agony of writer’s block and waste compute fretting over how well their outputs will be received?

Will Large Language Models learn to mimic suffering in statistically plausible ways? And when they do, what happens then to the last shred of human exceptionalism?

No clue. But for now, I’ll keep putting pen to page and finding the magic, illusory as it may be, in writing a well-crafted sentence or a weepy love note.


AI Use Disclosure: AI was occasionally consulted as a brainstorming partner for structure and as an unpaid editorial intern for sentence-level tweaks. It did not suffer alongside its human counterpart. Rest assured: All self-doubt, overthinking, and diction anxiety remain entirely the author’s own.
