A UK academic has criticised the suspected use of chatbots in peer review after he was given lengthy instructions on improving his statistical analysis – despite not including any statistics in his rejected paper.
Keith Baker, a researcher on energy policy, submitted a review paper to the open-access journal Heliyon almost two years ago with the aim of highlighting how his proposals for a state-owned energy company in Scotland, suggested in 2014, had eventually been adopted in Wales.
The paper, co-authored by academics from several Scottish universities, set out to explain the “unexpected success story” of the proposals from the thinktank Common Weal, with which he is affiliated. The ideas share some similarities with current UK government plans to invest £8 billion into the state-owned power company GB Energy.
Almost a year after submitting the paper to Heliyon, Baker finally received a lengthy list of recommendations from reviewers, including 14 suggestions on how to improve the paper’s statistical methods and reporting.
The authors were asked, for instance, to describe the algorithms used in the statistical analysis, show 95 per cent confidence intervals to guard against p-hacking and include “forest plots and funnel plots” to help visualise their results.
“The only statistics we mentioned in the paper were some figures from energy company accounts – the comments just didn’t make sense,” Baker told Times Higher Education.
“At best we were dealing with a reviewer who was incapable, but we strongly suspect this was an AI-generated review,” he continued, suggesting that a human reviewer was unlikely to make dozens of detailed recommendations, most of which were “nonsensical” or unreasonable because they would require years of further work.
“Many of the comments urged us to improve the quality of English in the paper, which was frankly insulting given the authors include a professional journalist and several journal editors,” said Baker.
He said he decided to speak out after months of receiving automated responses to his attempts to contact the Shanghai-based section editor at Heliyon dealing with his paper.
After Baker and his co-authors responded robustly to the comments, the paper was rejected – again, nearly a year later.
The journal, which was founded by Cell Press in 2015 but is now owned by Elsevier, charges an article processing fee of $2,270 (£1,670), which the authors’ institutions had agreed to cover because of the paper’s relevance to UK domestic energy policy.
“We just wanted to get the damn thing out there,” said Baker on why he approached Heliyon, a mega-journal which, according to its website, “considers research from all areas of the physical, applied, life, social and medical sciences”.
In October 2024, Clarivate’s Web of Science paused its indexing of new content from Heliyon, apparently because of concerns over the quality of its articles, Retraction Watch reported at the time. According to an Elsevier statement, the Clarivate investigation is still ongoing.
“We’d tried Energy Policy [another journal] but they batted it back – which is their right – but we wanted the results out there because we have done a lot of work on this,” said Baker.
“We’ve spent months trying to get in touch but the best we’ve had is an email from customer services which refuses to engage with the complaint that the review was AI-generated.”
In a statement, Elsevier said it “will investigate and if needed determine how we can better prevent the use of AI going forward as well as training our editors to identify this during the peer review process”.
“Reviewers are not permitted to upload any submitted papers into a large language model (LLM),” it added, referencing its GenAI policy which prohibits the use of LLMs in peer review.