CDS PhD student Yiqiu Shen publishes editorial on ChatGPT in the medical field


The paper explores the “double-edged sword” of unintended consequences and potential benefits the technology has in clinical settings

CDS PhD student Yiqiu (Artie) Shen

In the editorial “ChatGPT and Other Large Language Models Are Double-edged Swords,” published in the journal Radiology, CDS PhD student Yiqiu (Artie) Shen, advised by CDS Affiliated Professor Krzysztof J. Geras and CDS Professor Kyunghyun Cho, collaborated with a team of medical experts to describe the complicated role of artificial intelligence technologies in healthcare. The piece urges caution, pointing to several cases that illustrate the fine line the AI system ChatGPT (Chat Generative Pre-trained Transformer) walks between beneficial and detrimental use.

Since its launch last November by the AI development company OpenAI, ChatGPT has generated everything from class assignments to song lyrics. The technology lets users ask questions and enter prompts that produce human-like text. ChatGPT was trained on a vast collection of text (about 570GB), including books, articles, and websites spanning a diverse range of topics. Unlike earlier automated chatbots, the system is conversational and more adept at responding to users’ inquiries.

Despite its technological advances, ChatGPT has limitations, many of which it shares with other deep learning models that generate natural language. It can produce inaccurate responses that appear to be true, and it tends to assume the user’s intent rather than asking clarifying questions. The authors warn that these limitations can lead to the fabrication of scientific writing and the misrepresentation of research results through text, figures, or images.

While acknowledging the risks, the editorial lays out the potential benefits of ChatGPT, such as producing medical content like reports and patient-facing materials for post-procedure care, providing translations for multilingual communication, and summarizing electronic health records, sparing medical professionals the time-consuming task of identifying critical information in patient charts. The authors also outline areas where the technology could work in collaboration with human intelligence, such as clinical decision support. While ChatGPT can answer questions about specific clinical scenarios, it was designed not to provide medical advice but to defer decision-making to a healthcare professional.

As medical and academic communities navigate the use of ChatGPT, understanding the limitations of AI technologies will be essential. “Despite these challenges, harnessing the power of this technology for clinical decision support and even imaging appropriateness holds great potential,” write the authors. “It is exciting to consider the possibilities of what ChatGPT and similar AI-based technologies have in store for the future.”

Co-authors on the editorial include Drs. Laura Heacock, Beatriu Reig, and Linda Moy of the Department of Radiology at NYU Grossman School of Medicine, along with Drs. Jonathan Elias, Keith Hentel, and George Shih of Weill Cornell Medicine in New York City.

By Meryl Phair


