The world’s largest machine-learning and AI conference has been flooded with examples of academics using AI-generated content for their peer reviews, while many papers submitted were also partially—or even fully—AI-generated, Nature reports.
In academia, work published in reputable journals or presented at major conferences is typically “peer reviewed,” meaning scholars in the same field assess the paper for quality and rigor. But in the run-up to the 2026 International Conference on Learning Representations (ICLR), which is set to host around 11,000 AI researchers in Brazil, 21% of ICLR peer reviews were allegedly fully AI-generated, and more than half showed signs of AI use.
The analysis comes from Panagram, a US startup that provides tools for detecting AI-generated writing. Panagram screened all 19,490 studies and 75,800 peer reviews submitted for ICLR 2026. The issue was less severe for the papers themselves, but still significant: 199 manuscripts (1%) were fully AI-generated, while 9% contained more than 50% AI-generated text. These figures are solely from Panagram and haven’t yet been independently verified as of writing.
Graham Neubig, an AI researcher at Carnegie Mellon University, grew suspicious after receiving what appeared to be an AI-generated peer review. He reached out on social media, which prompted the startup to investigate.
Desmond Elliott, a computer scientist at the University of Copenhagen, told Nature that one peer review of his student’s work had “missed the point of the paper.” The student suspected it was generated by an LLM, and Panagram’s analysis flagged the review as fully AI-generated, confirming that “gut feeling.”
The glut of AI-generated reviews at this year’s conference is already having a real impact. Nature reports that many authors have withdrawn their ICLR submissions after receiving peer reviews containing false claims.
“In AI and machine learning right now, we have a crisis in terms of reviewing, because the field has expanded exponentially for the past five years,” Neubig told Nature.
AI-generated cheating has already become a major issue in education since tools like ChatGPT went mainstream—some teachers across the US have even reverted to old-school options like in-class essays and “blue books” to curb cheating.
But AI cheating is clearly no longer confined to high school and college students; it is becoming endemic among skilled professionals. The New York Times recently reported on AI-generated work showing up in serious court cases across the US, including LLMs hallucinating citations to nonexistent prior cases. Meanwhile, AI “workslop” has become hard to avoid in fields like IT and consulting.