
Artificial intelligence is the new ghost author in urological research

“Although the future of AI within science is uncertain, disclosure of the use of AI is an excellent first step to balance academic integrity with disruptive innovation,” the authors write.

Artificial intelligence (AI) is disrupting the medical field before our eyes. Its open accessibility, power, and speed are causing both amazement and controversy in the scientific community. Like it or not, AI technology is here to stay, so understanding its strengths and weaknesses, and how to use it responsibly, is paramount for academic urologists.

“AI presents the opportunity to create a virtuous cycle where researchers who choose to forgo its use will be left in the dust, trying to keep up with the research output of their peers who use AI,” write authors Tyler Bergeron, BS, Alexander Small, MD, and Nitya Abraham, MD.

One such AI application, ChatGPT, is a chatbot quickly gaining popularity for its ability to synthesize information and create natural text outputs in seconds. Unlike previous iterations of chatbots, which produced robotic and formulaic responses to questions, ChatGPT can create bodies of text that read as though they were written by a human. Within the scientific community, ChatGPT has generated scientific abstracts that even experts could not reliably identify as being written by AI.1 This raises the question: If ChatGPT can write a manuscript, should it be credited as an author? The current stance of journals such as Nature and Science is that AI cannot be listed as an author.2 Although AI cannot be credited, the rules surrounding its use as a tool for a human author to produce a manuscript are ambiguous and in flux. Implementing disruptive technology seems inevitable, but the rules and guidelines for using ChatGPT are still up for debate. The best path forward for deciding the fate of AI in academic writing should be the same as for evaluating any new procedure or drug in medicine: an assessment of benefits and risks.

The most significant benefit that AI offers to researchers is its time-saving potential in the writing process. Outlining, writing, rewriting, and editing can be a time-consuming yet educational process. Could the time saved otherwise be spent proposing new research questions, designing new experiments, or collecting data? AI can generate a manuscript draft in seconds, drastically reducing the time to submission and perhaps even publication. The ability to publish more papers could lead to more funding, which could then lead to more discovery. AI presents the opportunity to create a virtuous cycle where researchers who choose to forgo its use will be left in the dust, trying to keep up with the research output of their peers who use AI.

Even if ChatGPT is banned from generating text for journal submissions, there remains the possibility of using AI for tasks deemed less dependent on direct human involvement. ChatGPT can clean and format data in charts and graphs, proofread bodies of text, or act as a substitute for a dictionary or thesaurus. Proofreading software has long been accepted within academia, and at the very least, limiting AI to these efficiency-oriented supporting tasks could be helpful.
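As a concrete illustration of this kind of limited use, a short script could run a grammar-only proofreading pass over a draft paragraph. The following is a minimal sketch, not a recommended workflow: it assumes the openai Python package (v1.x) and an API key in the OPENAI_API_KEY environment variable, the model name and sample sentence are purely illustrative, and nothing resembling patient data should ever be sent this way, for the reasons discussed next.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Illustrative draft sentence with deliberate errors
draft = "The cohort were followed for 12 month and there symptoms improved."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model would do
    messages=[
        {"role": "system", "content": "Proofread the user's text. Fix grammar and spelling only; do not change the meaning."},
        {"role": "user", "content": draft},
    ],
)
print(response.choices[0].message.content)  # prints the corrected sentence

Restricting the prompt to grammar and spelling, as here, keeps the tool in the role long played by conventional proofreading software rather than delegating the writing itself.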

On the other hand, the use of ChatGPT poses many potential risks to both patients and health care professionals. Currently, patient health information would not be safe and secure if used in conjunction with AI. ChatGPT has no safety measures or precautions in place to guarantee that data entered into the software cannot be accessed or leaked. Using patient health information with AI software would therefore constitute a substantial ethical breach and a violation of the Health Insurance Portability and Accountability Act of 1996. Until ChatGPT has safeguards in place to protect patient information, its use in producing manuscripts for research containing patient data will be severely limited. Furthermore, ChatGPT can cite literature and integrate it into a manuscript succinctly, but the evidence cited may not be accurately portrayed or may even be fabricated. Although papers cited by AI are usually authentic, the information extracted from those papers may not be properly interpreted, put into context, or verified as legitimate.3 How ChatGPT generates these sources is unknown to the user, who sees only the result of their prompt; for this reason, ChatGPT has been dubbed a black box technology.4 Authors who use AI should proceed with caution and check every citation.
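Part of that citation check can be automated as a first pass. The sketch below is offered under stated assumptions rather than as a definitive tool: it uses Crossref’s public REST API (api.crossref.org) to confirm that each DOI in a hypothetical list extracted from a draft resolves to a real bibliographic record. A DOI that exists, of course, says nothing about whether the manuscript characterizes the source accurately, so human review of each cited claim is still required.

import urllib.request
import urllib.error

def doi_exists(doi: str) -> bool:
    """Return True if Crossref holds a record for this DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # a 404 means Crossref has no record for the DOI

# Hypothetical list of DOIs pulled from an AI-drafted reference section
dois = ["10.1038/d41586-023-00056-7", "10.1234/not.a.real.doi"]
for doi in dois:
    status = "found" if doi_exists(doi) else "NOT FOUND - verify manually"
    print(f"{doi}: {status}")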

Moreover, is there something intrinsically valuable in researching a topic that will be lost to AI implementation? This is not an ethical limitation of the technology but a possible trade-off whose pros and cons have yet to be weighed appropriately. There is literature to support the idea that research and manuscript writing offer valuable experience to students and young physicians.5 The experience gained through this process comes at the cost of efficiency in producing a paper, but there seems to be an intangible benefit that may be more difficult to quantify. Firsthand experience in researching background literature, synthesizing data, and producing a manuscript may provide tacit knowledge, leading to novel research questions that would not otherwise have emerged.

Currently, the debate within the Department of Urology at Montefiore Medical Center in Bronx, New York, is ongoing. Some department members wish to forbid the use of AI entirely from the scientific writing process, arguing that any body of work for which an author takes ownership but did not produce themselves would be a form of plagiarism. Others argue for a more nuanced stance, acknowledging AI’s potential benefits and possible inevitability. This stance would discourage authors from using AI for paper writing but would mandate complete transparency on the specifics of using AI. Honesty and transparency are pillars of sound scientific research and advancement. Although the future of AI within science is uncertain, disclosure of the use of AI is an excellent first step to balance academic integrity with disruptive innovation.

Bergeron is a medical student at Albert Einstein College of Medicine in Bronx, New York. Small is an attending physician and assistant professor at Montefiore Medical Center in Bronx, New York, and Albert Einstein College of Medicine. Abraham is an associate professor of urology, female pelvic medicine, and reconstructive surgery at Montefiore Medical Center and Albert Einstein College of Medicine.

References

1. Else H. Abstracts written by ChatGPT fool scientists. Nature. 2023;613(7944):423. doi:10.1038/d41586-023-00056-7

2. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature. 2023;613(7945):612. doi:10.1038/d41586-023-00191-1

3. Sallam M. ChatGPT utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. Healthcare (Basel). 2023;11(6):887. doi:10.3390/healthcare11060887

4. Rao A, Kim J, Kamineni M, Pang M, Lie W, Succi MD. Evaluating ChatGPT as an adjunct for radiologic decision-making. medRxiv. Preprint posted online February 7, 2023. doi:10.1101/2023.02.02.23285399

5. Tomaska L. Teaching how to prepare a manuscript by means of rewriting published scientific papers. Genetics. 2007;175(1):17-20. doi:10.1534/genetics.106.066217
