The academic publishing community is being rocked by stories of authors using generative artificial intelligence (GenAI) to write articles and create images for publications. Just over 1 per cent of articles published in 2023 were written with the help of large language models (LLMs), according to one study. In my opinion, this is probably an underestimate.
Published articles, since retracted or removed because they contained GenAI content, were found to include verbatim generic responses from LLMs:
“Certainly, here is a possible introduction for your topic: Lithium metal batteries are promising candidates for…”
“In summary, the management of bilateral iatrogenic I’m very sorry, but I don’t have access to real-time information or patient-specific data, as I am an AI language model.”
These cases, and others such as the Business Insider headline “An AI-generated rat with a giant penis highlights a growing crisis of fake science that is plaguing the publishing industry”, highlight the challenges academic publishers face in adapting to the analytical and generative capabilities of AI tools.
It is clear that GenAI tools have the potential to worsen the crisis of trust in scientific publishing, as readers are left unsure whether what they are reading was written by people, machines or both. At the same time, academic publishing in many contexts is gripped by a ‘publish-or-perish’ culture, in which academics have a strong incentive to outsource their writing to GenAI to increase their productivity.
We need to change our mindset towards GenAI tools to reclaim the narrative and restore trust in academic publishing. Here I offer suggestions on how we can move forward.
What are the guidelines for using GenAI?
In response to the development of GenAI, publishers and journals, following the advice of the Committee on Publication Ethics, now require contributing authors to declare their use of GenAI in the writing process. Times Higher Education also requires authors to declare their use of GenAI. Similar requirements are a common feature of university assessment guidance. However, the voluntary declaration approach does not seem to be having the desired effect. Journal editors I have spoken with say they rarely receive GenAI declarations on submitted articles. A study of university students’ AI-use declarations found similar levels of non-compliance. Another study suggests that the academic community fears that disclosing GenAI use will give editors, reviewers and potential readers a negative impression of the authors.
Within universities, students fear being judged unfairly or punished if they accurately report their AI use in assessments. These concerns may be justified: according to one study, people view texts they believe to be machine translated more negatively than texts they believe to be human translated. Deception can seem preferable to transparency when transparency is believed to pose the greater risk to the author.
How we really use AI tools
The practice of declaring AI use may not reflect the way we use GenAI tools today. GenAI functionalities are increasingly embedded in our everyday work tools. Microsoft Word has Copilot, Grammarly offers writing assistance, and these are just two of the many tools we can use as part of our research and writing process. We can use an LLM to ‘discuss’ our initial research ideas; we can use an academic AI search engine (such as Consensus AI, Scite or Elicit) to collect and summarize literature; we can ask another LLM for feedback on the design of a research instrument; and we can get help from MAXQDA AI Assist during data coding. We can enable Grammarly or Microsoft Copilot as we write, accepting or rejecting suggestions about style, tone and language as we go. This blurs the lines of which uses are acceptable, which should be declared, and whether we have received help from AI at all.
How we normalize GenAI in academic writing
In essence, the use of GenAI has become normalized in our academic writing processes. Many authors and students are turning to GenAI tools to augment their intelligence and capabilities while retaining human oversight and ownership of the content they create. We need to think about how to reform academic publishing in a world where GenAI is normalized.
First, we need to change our way of thinking: we should see the use of these tools not as a sign of deficiency but as a means of enhancement. The responsible and ethical use of GenAI, under the oversight and accountability of the author, may be no different from outsourcing tasks to research assistants or professional proofreaders. The author’s responsibility to verify information and check the accuracy of the completed tasks is the same.
Second, we need to develop methodological models and frameworks that show how to use GenAI ethically, legally and transparently to support authors’ knowledge-production activities. For example, using GenAI to verify human coding procedures could improve the quality of data analysis. Models that incorporate GenAI into the research process, and protocols for reporting these processes, can be established and tested.
Third, authors and students should be educated about the ethical issues associated with using GenAI tools. Once they have developed greater AI literacy, authors can take steps to address concerns about bias and intellectual property rights.
Once the normalization of GenAI in academic publishing is accepted, it will become easier to discuss the ethical and responsible use of these tools. Users can come out of the shadows, and we can be more open, honest and reflective about our use of GenAI. In this way, trust in academic publishing can be restored.
I used Grammarly to help me write this article.
Benjamin Luke Moorhouse is an assistant professor at Hong Kong Baptist University.