Information Either Misleading or Totally Made Up
ChatGPT generated a total of 176 citations across the study. Nearly a fifth (19.9%) of these were found to be completely fabricated. Of the remaining 141 real citations, a significant portion (45.4%) contained inaccuracies, including incorrect publication dates, page numbers, or invalid digital object identifiers (DOIs).
Shockingly, only 77 citations, roughly 43.8% of the total, were both real and accurate. To put it another way, 56.2% of all citations were either made up or contained errors.
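For readers who want to check the arithmetic, here is a minimal sketch in Python that recomputes the breakdown from the headline figures reported above (it works from the rounded percentages in this article, not the study's raw data):

```python
# Recompute the citation breakdown from the article's reported figures.
total_citations = 176

fabricated = round(total_citations * 0.199)   # ~19.9% fully fabricated -> 35
real = total_citations - fabricated           # 141 real citations
inaccurate = round(real * 0.454)              # ~45.4% of real citations had errors -> 64
accurate = real - inaccurate                  # 77 citations both real and accurate

print(f"Fabricated: {fabricated}")
print(f"Real but inaccurate: {inaccurate}")
print(f"Real and accurate: {accurate}")
print(f"Accurate share of all citations: {accurate / total_citations:.1%}")  # ~43.8%
```

Running this reproduces the 77 accurate citations and the roughly 43.8% accuracy rate quoted in the study.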
The errors in question weren’t always obvious. For instance, when ChatGPT provided a DOI for a fabricated citation (as it did on more than 94% of occasions), 64% of those DOIs linked to real research papers on totally unrelated topics. In other words, readers would only catch the error if they clicked through to the linked paper. The remaining 36% of fake DOIs, meanwhile, were completely invalid.
