In early March, a job advert was doing the rounds among sports journalists. It was for an “AI-assisted sports reporter” at USA Today’s publisher, Gannett. It was billed as a role at the “forefront of a new era in journalism”, but came with a caveat: “This is not a beat-reporting position and does not require travel or face-to-face interviews.” The dark humour was summed up by the football commentator Gary Taphouse: “It was fun while it lasted.”
As the relentless march of artificial intelligence continues, newsrooms are wrestling with the threats and opportunities the technology creates. Just in the past few weeks, one media outlet’s AI project was accused of softening the image of the Ku Klux Klan, while AI has played a part in some British journalists recording more than 100 bylines in a single day. Amid the angst, however, a broad consensus is beginning to emerge about what AI can currently do accurately.
Yet media companies are already aware of an elephant in the room. Their calculations could be upended should users simply turn to AI assistants to get their content fix. “I think good quality information can rise in an age of AI,” said one UK media executive. “But we need to set the terms in the right way in the next couple of years, or we are all screwed.”
The speed at which the technology has arrived has brought some early case studies in journalistic misadventure. In early March, the LA Times launched an AI tool giving alternative perspectives on opinion pieces. It caused alarm by saying some local historians regarded the Ku Klux Klan as a “‘white Protestant culture’ responding to societal changes rather than an explicitly hate-driven movement, minimising its ideological threat”. The pitfall, said one media executive looking at AI, was obvious: “It was given a task of making judgments it can’t possibly be expected to make.”
Even a tech giant like Apple had to suspend a feature after it produced inaccurate summaries of BBC News headlines, a measure of just how hard it can be to ensure the accuracy of generative AI.
In reality, teams of journalists and tool developers have been working for years to find the best uses of AI. For public-facing content, publishers are clustering around using it to suggest small chunks of text based on original journalism: headline suggestions and story summaries that can be easily checked by human editors. This week the Independent became the latest to announce that it would publish condensed AI versions of its own stories; many other publishers are trialling or have already deployed similar tools.
Some big organisations have also been experimenting with their own AI chatbots, allowing readers to ask questions that are answered using content from their archives. The problem is that editors cannot possibly vet every answer being spat out. Attached to the Washington Post’s chatbot feature is the note: “This is an experiment … Because AI can make mistakes, please verify the response by consulting these articles.”
How much AI-assisted text can be safely overseen by human editors is a live issue. Reach, publisher of the Daily Mirror and a string of local news sites, has been using its Guten tool to repackage its own journalism for different audiences. It has contributed to some eye-wateringly high byline counts: on one day in January, a regional Reach reporter recorded 150 bylines or joint bylines across the group’s titles. While he did not use Guten himself, the technology was used to repurpose his work for other sites.
Some Reach journalists have privately expressed concern. A Reach spokesperson said Guten was only a tool and “needs to be used thoughtfully” by journalists. “We’re encouraged by the progress we’ve made in reducing errors and supporting our everyday work,” they said. “This has enabled us to free up journalists to spend more time on journalism which would otherwise go unreported.”
USA Today Network made the same point about its AI-assisted sports reporter post. “By leveraging AI, we are able to expand coverage and enable our journalists to focus on more in-depth sports reporting,” said a spokesperson.
Others doubt whether the time saved will go into original journalism. The former Independent editor Chris Blackhurst said recently he was “very cynical” about the idea, fearing it was more likely to be “freeing people up to work elsewhere”.
While publicly visible AI-assisted journalism has created the most debate, it is inside newsrooms that the technology is delivering the clearest gains, interrogating huge datasets. The FT, the New York Times and the Guardian are among the groups exploring the technique, which has already helped uncover severe cases of neglect buried in more than 1,000 pages of hospital documents in Norway. Transcription and translation are other, more everyday uses.
Others are using it for “social listening”. The News Movement, which aims content at a younger audience, has built a tool that monitors what its audience are talking about on social media and feeds those conversations back to journalists. “It helps us understand what conversations and topics people are currently having,” said Dion Bailey, its chief product and technology officer. Despite the angst about AI errors, some companies, such as Der Spiegel, are even trying to use AI to factcheck content.
What is coming next? According to academic research, it is “audience-facing format transformations”. In other words, taking a story and turning it into the kind of content that a user wants – be it condensed, audio or even video. About a third of media leaders surveyed by the Reuters Institute for the Study of Journalism said they wanted to experiment with turning text stories into video. Tools can already turn long footage into short, shareable content.
Yet hanging over all this newsroom innovation is the fear that it could all be for nought if users’ AI assistants take the place of media companies in delivering content. “What keeps me up at night is AI simply inserting itself between us and the user,” said one media figure. Google’s launch this month of a new “AI Mode”, which draws information from multiple sources and presents it as a single chatbot-style answer, has spooked the industry. Some believe government intervention is the only solution.
Some bigger media groups have been signing licensing deals with the developers of the main AI models, allowing the models to be trained on their original material with attribution. The Guardian has such a deal with OpenAI, the owner of ChatGPT. Meanwhile, the New York Times is leading a lawsuit against OpenAI over the use of its work.
Bailey shares the concerns, but retains hope that the media world can adapt. “If the power goes to two or three big tech companies, then we have some real, significant issues,” he said. “We need to adapt in terms of how people are able to get to us. That’s just a fact.”