ChatGPT AI Shines in Challenging Medical Cases

Summary: A novel study put the diagnostic prowess of generative AI, specifically the chatbot GPT-4, to the test, yielding promising results.

The study evaluated the AI’s diagnostic accuracy on complex medical cases: GPT-4 named the correct diagnosis as its top diagnosis nearly 40% of the time and included the correct diagnosis in its list of potential diagnoses in 64% of these challenging cases.

The success of AI in this study could provide new insights into its potential applications in clinical settings. However, more research is needed to address the benefits, optimal use, and limitations of such technology.

Key Facts:

  1. In a study involving 70 complex clinical cases, GPT-4 correctly matched the final diagnosis 39% of the time.
  2. GPT-4 included the correct diagnosis in its differential list (a list of potential conditions based on patients’ symptoms, medical history, and clinical findings) in 64% of the cases.
  3. Despite the promising results, researchers stress the importance of further investigation to understand the optimal use, benefits, and limitations of AI in a clinical setting.

Source: BIDMC

In a recent experiment published in JAMA, physician-researchers at Beth Israel Deaconess Medical Center (BIDMC) tested one well-known publicly available chatbot’s ability to make accurate diagnoses in challenging medical cases.

The team found that the generative AI, GPT-4, selected the correct diagnosis as its top diagnosis nearly 40 percent of the time and provided the correct diagnosis in its list of potential diagnoses in roughly two-thirds (64 percent) of challenging cases.

Generative AI refers to a type of artificial intelligence that uses patterns and information it has been trained on to create new content, rather than simply processing and analyzing existing data.


Some of the most well-known examples of generative AI are so-called chatbots, which use a branch of artificial intelligence called natural language processing (NLP) that allows computers to understand, interpret and generate human-like language. Generative AI chatbots are powerful tools poised to revolutionize creative industries, education, customer service and more.

However, little is known about their potential performance in the clinical setting, such as complex diagnostic reasoning.

“Recent advances in artificial intelligence have led to generative AI models that are capable of detailed text-based responses that score highly in standardized medical examinations,” said Adam Rodman, MD, MPH, co-director of the Innovations in Media and Education Delivery (iMED) Initiative at BIDMC and an instructor in medicine at Harvard Medical School.

“We wanted to know if such a generative model could ‘think’ like a doctor, so we asked one to solve standardized complex diagnostic cases used for educational purposes. It did really, really well.”

To assess the chatbot’s diagnostic skills, Rodman and colleagues used clinicopathological case conferences (CPCs), a series of complex and challenging patient cases, including relevant clinical and laboratory data, imaging studies, and histopathological findings, that are published in the New England Journal of Medicine for educational purposes.

Evaluating 70 CPC cases, the artificial intelligence exactly matched the final CPC diagnosis in 27 (39 percent) of cases. In 64 percent of the cases, the final CPC diagnosis was included in the AI’s differential – a list of possible conditions that could account for a patient’s symptoms, medical history, clinical findings and laboratory or imaging results.
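For context, these percentages follow directly from the case count: 27 of 70 is about 38.6 percent, which rounds to the reported 39 percent, and 64 percent of 70 works out to roughly 45 cases in which the differential contained the final CPC diagnosis.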

“While chatbots cannot replace the expertise and knowledge of a trained medical professional, generative AI is a promising potential adjunct to human cognition in diagnosis,” said first author Zahir Kanjee, MD, MPH, a hospitalist at BIDMC and assistant professor of medicine at Harvard Medical School.

“It has the potential to help physicians make sense of complex medical data and broaden or refine our diagnostic thinking. We need more research on the optimal uses, benefits and limits of this technology, and a lot of privacy issues need sorting out, but these are exciting findings for the future of diagnosis and patient care.”

“Our study adds to a growing body of literature demonstrating the promising capabilities of AI technology,” said co-author Byron Crowe, MD, an internal medicine physician at BIDMC and an instructor in medicine at Harvard Medical School.

“Further investigation will help us better understand how these new AI models might transform health care delivery.”

This work did not receive separate funding or sponsorship. Kanjee reports royalties for books edited and membership of a paid advisory board for medical education products not related to artificial intelligence from Wolters Kluwer, as well as honoraria for CME delivered from Oakstone Publishing. Crowe reports employment by Solera Health outside the submitted work. Rodman reports no conflicts of interest.  

About this ChatGPT and AI research news

Author: Chloe Meck
Source: BIDMC
Contact: Chloe Meck – BIDMC
Image: The image is credited to Neuroscience News

Original Research: Closed access.
“Accuracy of a Generative Artificial Intelligence Model in a Complex Diagnostic Challenge” by Adam Rodman et al. JAMA


Abstract

Accuracy of a Generative Artificial Intelligence Model in a Complex Diagnostic Challenge

Recent advances in artificial intelligence (AI) have led to generative models capable of accurate and detailed text-based responses to written prompts (“chats”). These models score highly on standardized medical examinations. Less is known about their performance in clinical applications like complex diagnostic reasoning. We assessed the accuracy of one such model (Generative Pre-trained Transformer 4 [GPT-4]) in a series of diagnostically difficult cases.
