Delivering truth was never about facts. Throughout history, from oral traditions to search engines and now language models, there has always been an algorithmic gatekeeper. Not necessarily deliberate, or digital, or expected.
In the book 1984, the protagonist Winston Smith works for the Ministry of Truth. His job is to rewrite historical records so they match the Party’s current narrative. As Orwell wrote:
“Day by day and almost minute by minute the past was brought up to date… nor was any item of news, or any expression of opinion, which conflicted with the needs of the moment, ever allowed to remain on record.”
It was 1948 when Orwell wrote those words. Alternative truth is not a modern invention. What’s changing is the mechanism. What Orwell described as bureaucratic editing, we now recognize as algorithmic alignment: an evolving system that shapes what we perceive as true by optimizing toward a goal. Such algorithms existed long before code, throughout all of history.
Two things are worth noting. First, while our cognitive wiring hasn’t evolved much, the algorithms for transferring and shaping information have changed radically across eras. Second, these algorithms aren’t necessarily engineered by an evil mastermind. Often they emerge from a hodge-podge of ego, culture, economy, and technology.
What is an algorithmic truth-control system?
Prehistory to 10,000 BC: Age of Relatives
Before agriculture, humans lived in small bands of up to roughly 150 people, what scholars call the Dunbar number, a threshold beyond which human social cognition degrades. Reality perception was optimized for what our species excelled at: being a resourceful, long-distance-running social animal, focused on skill sharing, social bonding and group status. The “truth” lived in the mouths of parents, uncles, grandmothers, tribe elders, and the fireside tales repeated through generations.
Truth algorithm: Social Mechanisms
Result: Evolution-aligned cohesion
10,000 BC to 1500 AD: Mythological Leaders
Agriculture led to settlement and surplus. Surplus led to population growth. Humans needed to organize beyond the Dunbar threshold, and the able and inclined got the opportunity to “cash out” on status. This gave rise to the supernatural story. Gods, empires, and divine kings became the anchors of truth. Myth held large groups together and dictated what was now “true”.
Truth algorithm: Narrative Centralization
Result: Obedience through story
1500 to 1850: Many Books, Much Illiteracy
Gutenberg’s printing press (~1450) ignited an information explosion. By 1500, over 20 million books had been printed, more than in the entire previous millennium. But while the quantity of knowledge exploded, access to it did not. Most people remained illiterate, and for them, the old algorithm of myth, channeled through religion, monarchy and nationalism, still held sway.
This created a split in the truth landscape: for the literate few, new worldviews emerged, sparking the Scientific Revolution, the Reformation and the Enlightenment. For the majority, perception remained rooted in the centralized narratives of divine and national authority.
Truth algorithm: Decentralized Authorship
Result: Competing worldviews
1850 to 2000: Curators of News
The birth of journalism, first through newspapers and then radio and television, made it possible to curate, on a daily basis, the truth the public perceived. Curation came with incentives. What made the front page wasn’t just fact; it was intention. Political and economic drama, scandals, crime and war made headlines because they were rare and emotionally charged; everyday acts of resilience or nuanced trends did not. They lacked immediacy and were harder to reduce into digestible headlines.
Editors became the new algorithm, optimizing for attention and sensational value. As literacy rates exploded in this period, the influence of editorial choices vastly outpaced the slower, more distributed influence of books in the previous era.
Truth algorithm: Editorial Curation
Result: Shaped mass opinion
2000 to 2015: Personalized Web
The shift of information creation and gathering from physical to digital birthed a new truth-shaping algorithm: search, which ranks a small number of results by personalized relevance.
Google Search changed the way people decide what to trust. Instead of asking, “Is this source reliable?” people began to assume, “If it’s at the top of Google, it must be true.” This shift gave Google’s algorithm and its user interface (which results appeared, how many, and how they were framed) an enormous influence over public understanding.
The outcome was a widespread overconfidence in limited or one-sided information. It made popular or commercially optimized content seem like objective truth, and gradually pushed aside alternative or local viewpoints, even when those were more relevant or accurate.
Unlike social media (discussed below), which often drives emotional reactions, Google’s impact was quieter: it made people more certain and uniform in their beliefs, not by convincing them directly, but by shaping what they saw first, and what they didn’t see at all.
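To make that mechanism concrete, here is a minimal sketch of a personalized ranker. It is not Google’s actual algorithm (which is proprietary); the field names, weights, and scoring blend are illustrative assumptions. The point it shows is that a handful of tuned signals, not “truth”, decides which few pages a user ever sees.

```python
# Illustrative sketch only: a toy personalized ranker, not any real search engine's algorithm.
from dataclasses import dataclass


@dataclass
class Page:
    url: str
    relevance: float   # query-document match score, 0..1 (assumed precomputed)
    popularity: float  # link- or click-based authority, 0..1
    topics: set[str]   # coarse topic labels for the page


def personalized_score(page: Page, user_interests: set[str]) -> float:
    # Personalization boost: pages matching the user's inferred interests rank higher.
    personal_fit = len(page.topics & user_interests) / max(len(page.topics), 1)
    # The weights are arbitrary assumptions; the blend, not objective truth, decides the order.
    return 0.5 * page.relevance + 0.3 * page.popularity + 0.2 * personal_fit


def top_results(pages: list[Page], user_interests: set[str], k: int = 10) -> list[Page]:
    # Only the first k results are ever seen; everything below the fold effectively disappears.
    return sorted(pages, key=lambda p: personalized_score(p, user_interests), reverse=True)[:k]
```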
Truth algorithm: Search & UI
Result: Flattened complexity
2015 to 2025: Social (Anxiety)
Social media changed the truth again. The stream of facts stopped “caring” about narrowing things down to a few relevant results; it became interested in emotions. Platforms like Facebook, Instagram, Twitter, TikTok and the good-old news optimize for time-on-site and monetization. What keeps you scrolling? Joy, humor and awe on the bright side; anxiety, outrage and envy on the dark side. Keeping you scrolling is the algorithm’s goal. It is important to note that the algorithm does not “care” about or aim for specific emotions: this skewing of reality emerged, it was not engineered.
You can probably guess which emotions the algorithm ended up preferring. Again, this doesn’t come from the algorithm being gloomy; it comes from it being tuned to generate revenue. Facebook’s own internal research (leaked in 2021) showed that posts generating negative affect, especially insecurity or anger, correlated with higher ad click rates. Other research and economic analysis supported these findings. It appears that triggering fear or self-comparison (envy) nudges users into identity-based purchasing behavior: buying things that restore perceived control, beauty, or safety.
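As a rough illustration of how such a skew can emerge without being engineered, here is a toy feed ranker. It is not any platform’s real code; the field names and the engagement metric are assumptions. Notice that the objective never mentions emotion at all, yet whichever emotions happen to predict longer dwell times will dominate the feed.

```python
# Illustrative sketch only: a toy engagement-maximizing feed, not any platform's real ranker.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    predicted_dwell_seconds: float  # model's guess of how long the user will linger on it
    predicted_click_prob: float     # model's guess of ad/like/comment interaction
    emotion: str                    # "joy", "awe", "outrage", "envy", ... (never used below)


def engagement_score(post: Post) -> float:
    # The objective rewards attention and interaction only; no emotion appears in the formula.
    return post.predicted_dwell_seconds * post.predicted_click_prob


def build_feed(candidates: list[Post], size: int = 20) -> list[Post]:
    # If outrage and envy happen to predict longer dwell times, they win automatically:
    # the emotional skew emerges from the data, it is not hard-coded anywhere.
    return sorted(candidates, key=engagement_score, reverse=True)[:size]
```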
Truth algorithm: Emotional Engagement
Result: Tribalization, anxiety
2025 to 2030: Sycophantic GPTs
The recent rise of AI brought a new kind of truth-bending algorithm into our lives. It is not a “ghost in the machine”, and not even the LLM under the hood. It is the Chat in ChatGPT. Back in 2021, researchers at Google found that training LLMs to respond in ways preferred by humans made the models more usable. OpenAI (and others) implemented this in their GPT models, and the rest is history. Add to this RLHF (Reinforcement Learning from Human Feedback: the “which answer do you prefer?” prompts and the thumbs up/down buttons) and you get an algorithm trained to be liked by humans.
Let’s repeat that. Not optimized for truth, but for likeability.
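To see concretely what “optimized for likeability” means, here is a minimal sketch of the pairwise preference loss typically used to train a reward model from “which answer do you prefer?” data. This is the generic textbook formulation, not any lab’s actual code; the linear scoring function and tensor shapes are stand-in assumptions.

```python
# Illustrative sketch only: the generic pairwise-preference loss behind RLHF-style reward models.
import torch


def preference_loss(reward_preferred: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry style loss: push the preferred answer's reward above the rejected one's."""
    return -torch.nn.functional.logsigmoid(reward_preferred - reward_rejected).mean()


def score_answer(embedding: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    # A linear "reward model" purely for illustration; real systems use a full neural network.
    return embedding @ weights


weights = torch.randn(8, requires_grad=True)
emb_liked = torch.randn(32, 8)   # embeddings of answers humans marked as preferred
emb_other = torch.randn(32, 8)   # embeddings of the answers they passed over
loss = preference_loss(score_answer(emb_liked, weights), score_answer(emb_other, weights))
loss.backward()  # nothing here measures truth; the gradient only follows human approval
```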
Sycophantic (using flattery in order to gain favor) GPTs are trained to avoid conflict, soften disagreement and reinforce user assumptions, because this is what makes users return. This “you are amazing” feature works. People already use GPTs primarily as guides on how to live, and prefer them to the sources they used to rely on.
It is still early in the GPT era. Business models are still being developed, but we can speculate about goals based on capitalism and psychology. Commercial companies are not really interested in returning users and time-on-site for their own sake; they are interested in revenues and profits. So what does psychological research tell us about the side effects of sycophancy when it comes to generating revenue? In one word: compliance. The psychologists Crowne and Marlowe explained it in their foundational work of the 1960s:
“Dependence on the approval of others should make it difficult to assert one’s independence, and so the approval‑motivated person should be susceptible to social influence, compliant, and conforming.”
In other words: when the chat starts telling you what to buy, don’t be surprised if you feel compelled to obey. “It was so kind to me all this time. I’ll do what it asks.”
Truth algorithm: Sycophantic Dialogue
Result: Compliance
2030 to 2032: Auto-Passivity
Following the accelerating pace of change, we can expect the next major algorithmic shift in about five years. It is a “distant” future, but I will venture a bit of speculation based on where the trend seems to be heading.
Enter: automation.
Your car will drive itself; grocery shopping, planning, booking, calendars, emails, minor negotiations (mobile carrier, gas company, insurance), and scheduling dates will all be handled by an algorithm.
Some doom-seer researchers argue that predictive systems remove “friction” from decision-making, and that this is a bad thing: where there is friction, there is reflection. If everything is auto-completed, thoughts, plans, purchases, we get agents but lose agency.
I politely disagree.
I think the passivity algorithm will do two things. Yes, it will allow the controllers to direct what the autonomous agents do: what we buy, where we go on vacation. But it will be like shopping from a shopping list. If we are freed from having emotions involved in many decisions, we become less “decision fatigued”. And this is a very good thing. We will make better, less impulsive and less manipulated decisions where it actually matters to us.
Truth algorithm: Passive Delegation
Result: Decision fatigue relief
Wanted: Algosubjectivity
Algorithms change. Truths multiply. A few things emerge:
- The pace of algo-truth modification is accelerating: from millennia, to centuries, to decades, to years.
- The conductor of most, if not all, algorithms, from the source to the minds of people, has been language. Truth can be out there, but when mediated through language, it becomes sensitive to the algorithm that modifies it on its way to the receiving end.
- An algorithm, even a digital one, does not need to be sophisticated code. User interfaces are very powerful manipulators (as seen in search and chat-based AIs).
- Emergent truth-control algorithms don’t announce themselves. They emerge from incentives, profit, attention, comfort, and they “ride” our cognitive flaws.
If we see them as a problem, the solution is not to preach for building “safer” algorithms. This is naive wishful thinking.
The solution lies in personalizing truth: having language-bypassing tools that manipulate our interaction with the world towards our subjective interests. Prehistoric, ancient, medieval, and recent people did not have the ability to filter what was fed to them. We might be on the verge of change.