The rapid progress of artificial intelligence (AI) has undeniably impacted academic publishing in recent years. AI has become a common tool that authors use at various stages of preparing scientific papers, including data analysis, statistics, literature review, and academic writing, and it is no longer rare to encounter papers written entirely by AI (1). An even more concerning practice, however, has recently emerged: using AI tools to review manuscripts and then submitting the generated reports to editors as if they were the reviewers' own evaluations. This practice raises serious ethical issues that threaten the integrity and the fundamental principles of academic publishing.
The growing number of scientific journals, along with the rise in the number of manuscripts submitted to them, has increased the workload of the scientists who review these manuscripts, forcing them to spend more time on the task and putting peer review under strain. Admittedly, AI applications can evaluate a manuscript efficiently, within a short time, and in fluent language. They can quickly assess the appropriateness of the methods, the consistency of the findings and conclusions, and whether the statistical methods are correctly selected. They can also detect plagiarism, if present, and make appropriate grammatical corrections in the text (2, 3). All of this helps shorten the review time.
So, where is the problem in evaluating manuscripts with AI? First, who is responsible for the evaluation report generated by the AI? Just as AI cannot be the author of a scientific article today, it cannot be accepted as the author of, or the party responsible for, a review report. When an evaluation is made, someone must ultimately be accountable for the accuracy of its content and for any errors or misinterpretations (4).
No matter how advanced or frequently updated AI applications are, they must work with existing data, and this is one of their significant limitations. In groundbreaking studies that introduce new concepts, an AI reasoning from existing data may fail to perceive subtle details, the originality of a new perspective, or a new theory, resulting in insufficient critical analysis. A reviewer with expertise in the field is more likely to be open-minded about game-changing ideas and manuscripts (4).
Moreover, for an article to be evaluated by AI, it must first be uploaded to an AI application. This constitutes a significant ethical violation because it compromises the confidentiality of a manuscript submitted to a journal; the confidentiality of text uploaded to an AI application cannot be guaranteed (4).
For these reasons, on June 23, 2023, the National Institutes of Health prohibited the use of AI tools, such as ChatGPT, in the peer review process. The Australian Research Council has likewise banned AI tools from peer review, and many leading academic journals do not allow their use in reviewing manuscripts (5).
The policy of the Turkish Archives of Otorhinolaryngology on this matter is based on the current approaches of international publishing ethics organizations such as COPE, WAME, EASE, and ICMJE. AI tools may be used to a limited extent to improve the grammar and linguistic expression of the review report; however, delegating the peer review task to AI, uploading manuscripts to internet-connected AI systems, or presenting AI-generated evaluations as personal opinions is unacceptable.
In conclusion, using AI in the review process is not an appropriate approach. Nevertheless, in a world where technology and AI are advancing exponentially, it remains difficult to predict what the relationship between peer review and AI will look like in the near future.