Thirty years ago, Ted Kaczynski—better known as the Unabomber—wrote Industrial Society and Its Future, a 35,000-word manifesto on the dangers of technology.
He submitted it to The Washington Post and New York Times after an almost two-decade-long mail-bombing terror spree that left three people dead and 23 more injured. The FBI recommended its publication in the hopes that someone would recognize the writing. Someone did: his brother. Kaczynski was arrested at a remote Montana cabin six months later.
Ted Kaczynski mugshot (Photo by Bureau of Prisons/Getty Images)
Kaczynski wasn’t a crackpot, at least not at first. He went to Harvard at 16. He earned his master’s and doctoral degrees in mathematics from the University of Michigan. At 25, he was the youngest assistant professor at the University of California, Berkeley. His crimes are indefensible, but his once-abstract warnings about a society enslaved to systems it doesn’t control or even understand now read like a forecast of our relationship with algorithms and AI.

Ted Kaczynski at the University of California, Berkeley in 1968 (Photo by Sygma/Sygma via Getty Images)
Here’s some of what Kaczynski said three decades ago, and how that’s playing out in 2025.
Black Boxes We Depend On
“The real issue is not whether society provides well or poorly for people’s security; the trouble is that people are dependent on the system for their security rather than having it in their own hands.”
Humans today live inside black boxes. We trust Google’s search engine to deliver truth without really knowing how it works, even though an entire industry is devoted to gaming its answers for profit.
We ask OpenAI, Anthropic, or Google’s Gemini questions, and they respond with sentences no one—not even their creators—can predict. These are probability machines that behave less like tools and more like oracles. The systems hold the power, and we are dependent on them. They make things up, something their creators openly acknowledge; they just call the fabrications “hallucinations.”
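What “probability machine” means can be shown in a few lines. This is a toy sketch, not how any real model works: the vocabulary and probabilities below are invented for illustration. At each step, a language model assigns probabilities to possible next words and samples one, which is why the same prompt can yield different answers on different runs.

```python
import random

# Invented toy distribution: the "model's" probabilities for the next
# words after a given prompt. Real models score tens of thousands of
# tokens at every step; the principle is the same.
NEXT_WORD_PROBS = {
    "the sky is": {"blue": 0.6, "falling": 0.25, "a limit": 0.15},
}

def sample_next_word(prompt: str, rng: random.Random) -> str:
    """Sample one continuation according to the toy probabilities."""
    dist = NEXT_WORD_PROBS[prompt]
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Ask the same question 50 times and collect the distinct answers.
rng = random.Random()
answers = {sample_next_word("the sky is", rng) for _ in range(50)}
print(answers)  # over 50 runs, more than one continuation is likely
```

Nothing in that loop guarantees the most plausible answer, only a probable one, which is why even the people who build these systems can’t say in advance what a given prompt will produce.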
Technology That Builds Itself
“It is not possible to make a LASTING compromise between technology and freedom, because technology is by far the more powerful social force and continually encroaches on freedom through REPEATED compromises.”
Kaczynski feared that technology, once unleashed, would perpetuate itself. That’s where we stand with AI. In Silicon Valley, there’s a concept, “vibe coding,” where an AI generates software from a simple prompt. So far, it works only with guardrails and handholding, but the direction is clear: AIs will soon begin making meaningful improvements to themselves. Progress will accelerate not in decades, but in years.
Already, Microsoft says up to 30% of its code is written by AI tools, while Meta estimates that up to half of its development will be done by AI in the next year. Salesforce just cut 4,000 jobs “because I need less heads” thanks to AI, CEO Marc Benioff admitted recently.
The Erosion of Identity
“Oversocialization can lead to low self-esteem, a sense of powerlessness, defeatism, guilt, etc. The concept of ‘mental health’ in our society is defined largely by the extent to which an individual behaves in accord with the needs of the system.”
Identity itself has become fragile. Banks want credit to be easy, so they tolerate fraud. If a credit card is opened in your name, it’s not their crisis—it’s your headache.
AI is on the same path. People grow attached to it like a friend, even though it’s really a set of power-hungry servers in a data center.
We’ve lost bodily autonomy. With a 10-second audio sample, anyone can replicate your voice. Deepfake tools can make you appear naked from a single photo. Just as with financial fraud, the cost of this erosion will be borne not by the platforms that enable it but by the individuals left to prove that the fake wasn’t real.
The Arc of Technology Bends Toward Monetization
In an August blog post, OpenAI said its goal “isn’t to hold your attention, but to help you use it well. Instead of measuring success by time spent or clicks, we care more about whether you leave the product having done what you came for.”
OpenAI is taking a swipe at Google and Facebook’s business models. The claim is accurate, for now. ChatGPT and other AIs are funded by subscriptions and enterprise contracts, although Sam Altman recently acknowledged that AI slop from tools like Sora 2 is one way to help pay for GPUs.
In an early paper, Google founders Sergey Brin and Larry Page warned that advertising would corrupt search results. Today, Google is the largest ad platform in history. However, billions of dollars have been invested in developing and hosting these models. At some point, investors will demand repayment. If the past is a guide, they will turn to advertising.
Plenty of Upside
Although Kaczynski took an entirely negative view of technology, I see many upsides. In high school, for example, I took a chemistry class taught in the “mastery learning” style. A workbook guided me through the curriculum, and the teachers were there primarily to help students who got stuck. I completed the curriculum in two-thirds of the time, then sat around twiddling my thumbs. The teacher likely knew only enough to cover the curriculum; with AI, I could have kept learning.
A few years ago, meanwhile, I needed to see a retina specialist. That was easy to do in New York City, but imagine that you’re a villager in India, where diabetic retinopathy is a big issue and can cause blindness. A $50 smartphone paired with AI could help you get treated early; just point the camera at your eye. It wouldn’t produce as clear an image as a $50,000 Nikon machine used by an ophthalmologist in the US, but it’s good enough to detect the disease.
Anticipating Risks
In 2007, when Facebook was still a fledgling company, I interviewed for a job with cofounder Dustin Moskovitz and told him that the platform would become the primary way people consumed news. He thought the idea was absurd and didn’t hire me. I’ve never been so sad to be right.
I have a travel guide from the 1900s. In it, there are tables of deaths from railroad accidents, which numbered in the thousands annually. We didn’t abandon trains; we established guardrails and enhanced safety technologies.
The lesson is not to valorize Kaczynski but to recognize the truth embedded in his warnings. Technological systems do not wait for permission. They expand, they entrench, they shape society long before we understand the consequences.
The black boxes are already here. The systems are already building themselves. Our identities are already dissolving into algorithms. We can’t stop the momentum. But we can build guardrails while we’re building the steam engine.
About Our Expert
Rakesh Agrawal
Rakesh is a San Francisco-based entrepreneur and analyst exploring how technology reshapes society. He focuses on the human side of technology, especially AI and autonomous vehicles. Once, he nearly interviewed Richard Branson in a Vegas wedding chapel before being redirected to Branson’s penthouse suite. Read more on his blog.
