As AI oozes into daily life, some people are building walls to keep it out for a host of compelling reasons. There’s the concern about a technology that requires an immense amount of energy to train and contributes to runaway carbon emissions. There are the myriad privacy concerns: At one point, some ChatGPT conversations were openly available on Google, and for months OpenAI was obliged to retain user chat history amid a lawsuit with The New York Times. There’s the latent ickiness of its manufacturing process, given that the task of sorting and labeling training data has been outsourced and underappreciated. Lest we forget, there’s also the risk of an AI oopsie, including all those accidental acts of plagiarism and hallucinated citations. Relying on these platforms seems to inch toward NPC status—and that’s, to put it lightly, a bad vibe.
Then there’s the matter of our own dignity. Without our consent, the internet was mined and our collective online lives were transformed into the inputs for a gargantuan machine. Then the companies that did it told us to pay them for the output: a talking information bank spring-loaded with accrued human knowledge but devoid of human specificity. The social media age warped our self-perception, and now the AI era stands to subsume it.
Amanda Hanna-McLeer is working on a documentary about young people who eschew digital platforms. She says her greatest fear about the technology is cognitive offloading through, say, apps like Google Maps, which, she argues, erode our sense of place. “People don’t know how to get to work on their own,” she says. “That’s knowledge deferred and eventually lost.” As we give ourselves over to large language models, we’ll relinquish even more of our intelligence.
Exposure avoidance
The movement to avoid AI might be a necessary form of cognitive self-preservation. Indeed, these models threaten to neuter our neurons (or at least how we currently use them) at a rapid pace. A recent study from the Massachusetts Institute of Technology found that active users of LLM tech “consistently underperformed at neural, linguistic, and behavioral levels.”
People are taking steps to avoid exposure. There’s the return of dumbphones, high school Luddite clubs, even a TextEdit renaissance. A friend who is single reports that antipathy toward AI is now a common feature on dating app profiles—not using the tech is a “green flag.” A small group of people claims to avoid using the technology entirely.
But as people unplug from AI, we risk whittling the overwhelming challenge of the tech industry’s influence on how we think down to a question of consumer choice. Companies are even carving out a market niche aimed at the people who hate the tech.
Even less effective might be cultural signifiers, or showy—perhaps unintentional—declarations of individual purity from AI. We know the false promise of abstinence-only approaches. There’s real value in prioritizing logging off and cutting down on individual consumption, but it won’t be enough to trigger structural change, Hanna-McLeer tells me.
Of course, the concern that new technologies will make us stupid isn’t new. Similar objections arrived, and persist, with social media, television, radio—even writing itself. Socrates worried that the written tradition might degrade our intelligence and recall, as Plato recorded his mentor arguing: “Trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom.”
But the biggest challenge is that, at least on its current trajectory, most people will not be able to opt out of AI. For many, the decision to use or not use the technology will be made by their bosses or the companies they buy stuff from or the platforms that provide them with basic services. Going offline is already a luxury.
As with other harmful things, consumers will know the downsides of deputizing LLMs but will use them all the same. Some people will use them because they are genuinely, extremely useful, and even entertaining. I hope the applications I’ve found for these tools take the best of the technology while skirting some of its risks: I try to use the service like a digital bloodhound, deploying the LLMs to automatically flag updates and content that interest me, then reviewing whatever they find myself. A few argue that eventually AI will liberate us from screens, that other digital toxin.
Misaligned with the business model—and the threat
A consumer-choice model for dealing with AI’s most noxious consequences is misaligned with the business model—and the threat. Many integrations of artificial intelligence won’t be immediately legible to nonusers or everyday users: LLM companies are highly interested in enterprise and business-to-business sectors, and they’re even selling their tools to the government.
There’s already a movement to make AI not just a consumer product but one laced into our digital and physical infrastructure. The technology is most noticeable in app form, but it’s also embedded in our search engines: Google, once a link indexer, has transformed into a tool for answering questions with AI. OpenAI, meanwhile, has built a search engine from its chatbot. Apple wants to integrate AI directly into our phones, rendering large language models an outgrowth of our operating systems.
The movement to curb AI’s abuses cannot survive merely on the hope that people will simply choose not to use the technology. Not eating meat, avoiding products laden with conflict minerals, and flipping off the light switch to save energy certainly do something, but not enough. AI asceticism alone does not meet the moment.
The reason to do it anyway is the logic of the Sabbath: We need to remember what it’s like to occupy, and live in, our own brains.
