It’s hard to tell AI news from AI hype at the best of times, but the most recent surge around agents, triggered by many developers embracing Claude Code a couple of months ago, feels like something different. With the viral freakout over Moltbook, the agent social network, and the Super Bowl ad slap fight between OpenAI and Anthropic, AI has escalated to a new level of mainstream attention.
Everyone’s forgotten about the AI bubble and is instead dancing around the AI “inflection point,” when AI in general and agents in particular begin to take over huge swaths of knowledge work, with massive consequences for the economy and the workforce. The recent sell-off of SaaS stocks is an indication of how seriously the industry takes this.
For journalists, all this mainstream AI noise, coupled with the steady drumbeat of layoffs in the media industry, quickly turns into a familiar feeling: pressure to do more. As newsrooms shrink and AI tools get framed as productivity machines, it’s easy to assume the right response is higher output. But AI isn’t just changing how stories get made. It’s changing how stories get found. So the temptation to use AI to do “more with less,” which in many cases means telling the same kinds of stories, just more quickly and more often, is misguided.
This is because of a contradiction in how AI systems surface information: they look for sameness to reinforce the patterns they’re seeing, but they don’t reward it. That’s the difference between being cited in an AI summary and blending into the background. An AI engine only needs one competent version of the commodity story, and it goes looking for the version that looks authoritative and adds something new.
More isn’t more
In practice, yes, you could use AI to accelerate news production, letting you cover more stories than you could before, and a few newsrooms are doing that. And on an individual level, that might even signal your value to your employer in the short term. But if it’s effectively the same story already reported elsewhere, an AI engine has no reason to prioritize your version over anyone else’s.
Instead, the more logical path is to invest in the parts of journalism that only humans can do: finding genuinely new information through sourcing, research, interviews, and analysis. In other words, while the instinct to do more isn’t wrong, it should be aimed at going deeper, not wider.
AI can still be an accelerant here, speeding up ideation, research, and even tasks like reaching out to sources. Digital media researcher Nick Hagar recently showed what this looks like in practice, using coding agents to recreate the deep analysis behind a human-reported investigation into Virginia police decertifications. The interesting thing about his case study is that when he paired the agents with very specific tools (such as Claude Code “skills,” which essentially turn certain research tasks into reusable templates), he could quickly replicate the work, but his own judgment was still required throughout. “Even with skills enforcing a structured workflow, I made dozens of judgment calls…. Skills make the workflow more systematic; they don’t eliminate the need for human attention,” he wrote.
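To make the “skills” idea concrete: in Claude Code, a skill is a folder containing a SKILL.md file, with YAML frontmatter that names and describes the task, followed by instructions the agent loads whenever a request matches. Below is a minimal sketch of what a research skill in the spirit of Hagar’s workflow might look like; the skill name, steps, and wording are illustrative assumptions, not his actual setup.

```markdown
---
name: records-analysis
description: Validate a public-records dataset, compute summary statistics, and flag outliers for a human reporter to verify. (Hypothetical example skill.)
---

# Records analysis workflow

1. Load the dataset the user points to and report row counts, date ranges, and any missing or malformed fields before doing anything else.
2. Compute the summary statistics the user asks for, and show the code used so the work is auditable.
3. Flag outliers and surprising patterns, but label them as leads to verify, never as findings.
4. Stop and ask the reporter before drawing any conclusion from the data.
```

Note how the template ends by handing control back to the reporter: consistent with Hagar’s point, it makes the workflow systematic without removing the need for human attention.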
