Why would you ever engage directly with a text when a robot can do the reading on your behalf?
Google’s NotebookLM promises exactly that efficiency: You can upload dozens of files that the AI system will read for you, then query that content for summaries, answers, and linked passages. It is a “democratized CliffsNotes” that can be applied to any text ever created.
Because “hallucination” – false or misleading information generated by AI – is an ongoing problem that threatens the technology’s reliability and acceptance, I tested NotebookLM on a source I know well: 100,000 words from my latest book, which is about America’s obsession with authenticity in media, culture and politics.
I asked questions like, “How does social media create the desire for authenticity while promoting inauthenticity?” “How does the influencer industry idealize the mundane?” and “Explain the strategies campaign consultants use to make politicians come across as ‘real.’”
The system’s output was flawless. It captured the major themes of my book and distilled them into crisp outlines.
And if even NotebookLM’s bullet points are too heavy a lift, the system can also generate an eight-minute podcast distillation of the content, with slick chatbot voices bantering in NPR-style cadences about what you haven’t read.
As far as I could tell from my first encounter with NotebookLM, there was no reason to read my book. Or any book. Ever again.
Automated summary shortcuts are on the rise. Otter and Zoom apply them to meeting transcripts; Facebook to user comment sections; and Amazon to buyer review highlights. Newspaper chain Gannett has even tested AI-generated “key points” atop articles, sparing readers the need to read its reporters’ work in full.
Such tools portend an information landscape in which everything is reduced to TL;DR – short for “too long; didn’t read.” Perhaps this is an inevitable development in a world flooded with exponentially growing data. Part of the challenge of modern life is not only figuring out what to pay attention to, but also cultivating habits of mind to screen out what not to pay attention to.
And not all source texts are equal. We lose considerably less intellectually by stuffing the 60,000 customer opinions about Amazon’s smart plug into the insatiable maw of AI than, say, by feeding it the 120,000 lines that make up Shakespeare’s canon. Yet both examples follow the same logic: To save time and “know” more, one should read less.
This cult of efficiency is nothing new. As the Industrial Revolution unfolded centuries ago, it redefined not only work but also virtue. Machines dictated what could be done and therefore should be done – the same sales pitch that AI advertising makes today.
Sociologist Max Weber diagnosed this as the principle of rationalization, an ethos in which the most efficient, instrumental means of achieving an end should govern all human behavior and nothing is left to chance. This drove capitalism: If factory assembly lines could produce widgets in half the time, the owners would make twice as much profit.
The invisible algorithmic bureaucracies that shape today’s information landscape serve an analogous ideal. If you can process a text in half the time (or less), you can consume twice as many (or more). But does that text, in its emaciated form, still leave its mark on you? What is lost with these gains?
Reading – like its intellectual cousins, learning and writing – is not always an efficient process. Unlike an assembly line, it cannot be Taylorized – subjected to the management theory, named for one of its leading proponents, Frederick Taylor, that analyzes and synthesizes workflows for maximum productivity. Reading is full of blind alleys and dead ends, and we discover which is which only in retrospect – a process I consider one of the many messy, sometimes inefficient pleasures of learning.
AI wants to strip out all that inefficiency. It presupposes we know what we are looking for, even when we may not yet. It enables power browsing. It tries to eliminate the “waste” of dwelling on details. Google’s product manager for NotebookLM may assure us that “there is no substitute for reading actual text,” even though the system aims to substitute for exactly that: replacing a slower approach to answering a question with a mechanized one that is faster and more predictable.
Destroy reading and you might destroy writing too. The news industry stands to lose more than 25 percent of web traffic and $2 billion in advertising revenue if readers settle for Google’s AI summaries dominating search results instead of clicking through to publisher links. After two decades of slow-motion implosion – declining revenues, dwindling subscribers and hollowed-out newsrooms – the newspaper industry seems ill-equipped to survive yet another digital disruption.
Who will feed the AI summary machines if content creators cannot make a living?
Fifteen years ago, technology writer Nicholas Carr presciently observed that the Internet was turning us into “pancake people,” our brains flattened by the habit of superficially skimming the surface of information, like digital Jet Skiers zooming over hyperlinks. Whither the scuba divers who descend into deep focus, immersed in a printed text?
Certainly, a world without any efficiency would be a maddening place. Efficiency has propelled human progress over millennia and underwritten countless facets of growth. But it is not the most important value in a culture, much less the only one. And higher education, where knowledge is valued for its own sake, should be a special space of resistance against the tyranny of efficiency.
Of course, I could lecture to a classroom of 1,000 students, but it wouldn’t be as good as when my classes max out at 30 – for them or for me. Perhaps most importantly, we would lose human connection. Reading remains an important spark for that connection.
What we gain in AI meta-reading quantity, we lose in quality. Algorithmic culture idealizes a world where you get only the information you want, because anything more would be a waste of time. As tech companies try to convert us to their products, don’t forget to read between the lines of what they’re making obsolete.
Michael Serazio is a professor of communications at Boston College and author of “The Authenticity Industries: Keeping It ‘Real’ in Media, Culture, and Politics.”