When it comes to answering medical questions, can ChatGPT do a better job than human doctors? It appears to be possible, according to the results of a new study published in JAMA Internal Medicine, led by researchers from the University of California San Diego.
The researchers compiled a random sample of nearly 200 medical questions patients posted on Reddit, a popular social discussion website, for doctors to answer. Next, they entered the questions into ChatGPT (OpenAI’s artificial intelligence chatbot) and recorded its responses.
A panel of healthcare professionals then evaluated both sets of responses for quality and empathy. For nearly 80% of the answers, the chatbot won out over the real doctors. “Our panel of health care professionals preferred ChatGPT four to one over physicians,” said lead researcher John W. Ayers, PhD, vice chief of innovation in the Division of Infectious Diseases and Global Public Health at the University of California San Diego.
One of the biggest problems facing today’s healthcare providers is that they’re overburdened with messages from patients, Ayers said. The influx of messages could lead to higher levels of provider burnout, Ayers believes. Yet millions of patients are either getting no answers or unsatisfactory ones, he added.
Thinking of how artificial intelligence might help, Ayers and his team turned to Reddit to demonstrate how ChatGPT could present a possible solution to the backlog of patients’ questions. Reddit has a “medical questions” community (a “subreddit” called r/AskDocs) with nearly 500,000 members. People post questions — and vetted healthcare professionals provide public responses.
The questions are wide-ranging, with people asking for opinions on cancer scans, dog bites, miscarriages, vaccines, and other medical topics. After randomly selecting the questions and answers, the researchers presented them to real healthcare professionals who were actively seeing patients. The evaluators were not told which responses were provided by ChatGPT and which were provided by doctors.
ChatGPT was three times more likely than physicians to give a response rated good or very good, Ayers told Fox News Digital. The chatbot was 10 times more likely than physicians to give a response rated empathetic or very empathetic.