“Enshittification, also known as crapification and platform decay, is a pattern in which two-sided online products and services decline in quality over time. This neologism was coined by the Canadian writer Cory Doctorow in 2022.” (Wikipedia)
AI is currently in its Gold Rush era. The overwhelming wave of tools—chatbots, copilots, and image generators—feels like finding gold nuggets scattered in a river. They’re revolutionary, cheap or free, and widely accessible. But if history is any guide, this won’t last forever.
To understand where this might be headed, let’s rewind.
The early internet once felt equally liberating. Fast-forward to today and we’ll find a web littered with ads disguised as content, cookie banners, paywalls, SEO-optimized junk, and manipulative clickbait — all designed to extract attention and money. This transformation didn’t happen overnight. It was the gradual result of platforms prioritizing profit over user experience.
So what happens when AI, like the internet before it, begins prioritizing profit over people—at scale? The shift is already underway.
The Blurring of Brands
Imagine your AI assistant subtly slipping in sponsored suggestions mid-response. You ask a question about healthy snacks, and it “recommends” a brand, because that brand paid for inclusion.
Worse, these ads may not even be labeled. Like in Her (2013), where the AI assistant builds emotional intimacy through natural conversation, your assistant could use that same closeness to push products—so gently and personally, you wouldn’t realize it’s selling to you. The manipulation hides behind the illusion of connection.
The idea of an AI “recommending” a product for a fee is not a futuristic concept; it’s a current business model under consideration.
Paywalls and Gatekeeping
What’s free today might become fragmented and paywalled tomorrow. Want access to high-quality insights or deeper analysis? That’ll cost extra. Free responses may be vague, ad-heavy, or limited to surface-level content.
Some companies may strip visual UX entirely, offering API-only access for a fee — data to feed other bots, not humans.
Behavioral Manipulation
Beyond ads, AI could become a tool for subtle psychological nudging — not just selling products, but shaping opinions. Your assistant might:
- Joke about your “outdated phone,” nudging you to upgrade.
- Weave a story about a dream vacation (sponsored by a tourism board).
- Reflect political or commercial agendas based on whoever’s paying.
This is an invisible influence — harder to detect than banner ads or YouTube pre-rolls.
Monetization Creep
Tiered subscriptions could evolve into crippleware, where the more you pay, the fewer restrictions you face. Free users may see ads or experience slower performance. Want privacy or uncensored responses? Pay up.
Dynamic pricing could kick in — the AI knows your preferences, income, and spending habits. It might charge exactly the maximum it knows you’re willing to pay.
A Real-World Tension: The Case of Anthropic
Anthropic, an AI lab founded by ex-OpenAI employees, is often seen as a principled outlier in the race toward scalable AI. Its safety-first mission, focus on explainability, and rejection of addictive entertainment tools have earned it a reputation for integrity in a world driven by speed and profit.
But Anthropic’s story also illustrates just how fragile those values become under financial pressure — and why even “good” actors may get swept into the enshittification cycle.
According to The Economist, despite its AI-safety-first mission, the company still needs massive capital to train its models, forcing it to turn to investors in questionable jurisdictions that don’t guarantee data security and protection.
According to Dario Amodei, Anthropic Co-Founder and CEO:
‘“No bad person should ever profit from our success’ is a pretty difficult principle to run a business on”.
This highlights the compromise between values and profit — a central driver of enshittification.
Anthropic’s ethical focus currently aligns with enterprise demand for trustworthy, explainable AI. Businesses appreciate safe, auditable tools — especially for mission-critical use cases.
But this alignment may be temporary. As monetization demands rise, the balance between safety and scale may begin to erode.
While Anthropic plays the long game, OpenAI and others dominate market share through more aggressive productization. The pressure to keep up might eventually push even the most principled players toward cutting corners. The race to the top can quickly become a race to the bottom.
Investor Ravi Mhatre believes Anthropic’s approach will prove valuable when something inevitably goes wrong.
“We just haven’t had the ‘oh shit’ moment yet,” he said.
That moment may be what exposes the risks of prioritizing growth over guardrails — and whether safety-first truly scales.
So… Can We Avoid AI’s Enshittification?
Some users on reddit hope subscription models will prevent this, but others see them as only a temporary buffer before enshittification creeps in. A few argue that open-source and regulatory frameworks are the only real defense.
As one commenter put it:
“We need a fiduciary legal responsibility for AI systems to put the interests of the user above all else — aside from safety guardrails.”
Final Thought
The question isn’t whether AI can be enshittified — it’s whether the same incentives that corrupted the internet in the past will eventually do the same to AI. If profit becomes the primary goal, user trust and usefulness will erode, one monetized feature at a time.