November 2, 2024

AI could help physicians better communicate with patients

AI-generated image.

For many physicians, patient communication has become a daunting task. The rise of patient portals and app-based messaging, intended to improve healthcare access, has significantly increased the number of messages that doctors must manage daily. This can add hours of work to their already busy schedules, especially for primary care physicians, who often bear the brunt of the workload.

It’s one of the many reasons why physicians in many countries are experiencing record levels of burnout. In fact, electronic health records and similar administrative burdens have been identified as major drivers of the burnout crisis.

Physicians need to carefully review medical histories, test results, and previous conversations, all while considering the appropriate medical advice for each individual. In this environment, it’s easy to see why AI is viewed as a potential solution.

“We are very interested in using AI to help solve health system challenges, including the increase in patient messages that are contributing to physician burnout,” said study senior author Christopher Longhurst, MD, executive director of the Joan and Irwin Jacobs Center for Health Innovation, chief medical officer and chief digital officer at UC San Diego Health. “The evidence that the messages are longer suggests that they are higher quality, and the data is clear that physicians appreciated the help, which lowered cognitive burden.”

AI in healthcare: a new application

Generative AI, specifically designed to draft replies to patient messages, has been integrated into electronic health records (EHR) systems at the University of California San Diego. The AI’s purpose is to assist doctors by generating draft responses for four common types of messages: requests for prescription refills, test results, paperwork inquiries, and general medical questions. The draft replies are meant to save time by providing a starting point that physicians can review, edit, and send.

In theory, this should alleviate some of the cognitive burden by reducing the time spent crafting responses from scratch. The AI drafts are generated in under a minute, ensuring that physicians are not kept waiting. But does the reality live up to the promise?
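The study does not describe the underlying integration in detail, but the workflow it evaluates is conceptually simple: a draft is generated from the incoming patient message and queued for the physician to review, edit, or discard before anything is sent. The minimal sketch below illustrates that idea only; all names (PatientMessage, draft_reply, send_to_physician_inbox) are hypothetical placeholders, not the actual EHR implementation used at UC San Diego Health.

```python
# Conceptual sketch of a draft-then-review messaging workflow.
# All names here are hypothetical illustrations, not the real EHR integration.

from dataclasses import dataclass


@dataclass
class PatientMessage:
    patient_id: str
    category: str  # e.g. "refill", "test_result", "paperwork", "medical_question"
    text: str


def draft_reply(message: PatientMessage) -> str:
    """Placeholder for a call to a generative model that returns a draft reply.
    A real system would pass the message (and relevant chart context) to an LLM."""
    return f"[AI draft responding to a {message.category} request]"


def send_to_physician_inbox(message: PatientMessage, draft: str) -> None:
    # The draft is never sent automatically: it sits alongside the original
    # message so the physician can review and edit it before replying.
    print(f"Patient {message.patient_id}: {message.text}")
    print(f"Suggested reply (edit before sending): {draft}")


def handle_incoming_message(message: PatientMessage) -> None:
    draft = draft_reply(message)  # generated in seconds, before the physician opens the message
    send_to_physician_inbox(message, draft)


if __name__ == "__main__":
    handle_incoming_message(
        PatientMessage("12345", "refill", "Could I get a refill of my blood pressure medication?")
    )
```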

According to a new study, the answer is not so clear.

A mixed picture

The study, which took place between June and August 2023, used a randomized waiting-list design to assess the impact of generative AI on physicians’ time management. Fifty-two primary care physicians participated in the trial: some were granted immediate access to the AI tool, while others received delayed access. An additional 70 physicians who did not use the tool served as a control group.


The results, analyzed over a three-month period, offered a mixed picture. Physicians who used the AI-generated drafts spent 21.8% more time reading messages, and there was no significant reduction in the time spent replying. In fact, the length of replies increased by nearly 18%. This suggests that while the AI drafts provided a starting point, physicians may have spent additional time reviewing them to make sure they were accurate.

In one sense, that’s good news: it means physicians aren’t simply pasting impersonal text, which is what you’d hope for. But did the AI actually reduce the workload? That’s where things get complicated.

Physicians acknowledged the benefits of having a draft reply to work with, but many noted that the drafts still required significant editing. Some felt the AI responses were too polite, formal, or impersonal, and had to be revised to match the tone and specifics of each patient conversation. This editing, coupled with the need to review both the patient’s message and the AI draft, explains the increased time spent reading messages.

Balancing efficiency with personalization

The biggest benefit was simply that the AI gave physicians a draft to work from rather than a blank page.

“This study shows that generative AI can be a collaborative tool,” said study lead author Ming Tai-Seale, PhD, MPH, professor of family medicine at UC San Diego School of Medicine. “Our physicians receive about 200 messages a week. AI could help break ‘writer’s block’ by providing physicians an empathy-infused draft upon which to craft thoughtful responses to patients.”

However, one of the critical challenges identified in the study was the balance between efficiency and the need for personalized, compassionate communication. AI, by its nature, can provide a standardized response, but healthcare interactions often require a human touch—especially when patients are concerned about their health. The study found that while the AI drafts often included empathetic language, physicians still needed to tailor these responses to better address individual patient concerns.

Balancing time savings against personalized, accurate messages has far-reaching implications for healthcare. As AI becomes more integrated into health systems, everyone from senior doctors to physician assistants in training will likely interact with these tools.

So, can machines ever fully replicate the nuanced, empathetic communication that patients expect from their doctors?

Room for optimism

Physicians who gave the AI higher ratings highlighted its ability to save time by quickly generating a draft they could then refine. This let them focus on the content of the message rather than its structure, freeing up mental space for other tasks. We may or may not ever build machines that sound fully human, but producing usable drafts that can be customized with ease is well within reach.

Still, some physicians felt that the AI required too much oversight, diminishing any potential time savings. Others were concerned about the accuracy of the information provided in the drafts. For example, while AI can be programmed to recognize common patterns in patient messages, it can also suggest inappropriate or unnecessary actions, like recommending an X-ray when a more careful examination might be required first.

If these problems can be sorted out, we may have a workable system. As the models improve, the drafts should need less editing, making them easier for doctors to adapt.

Ultimately, the integration of generative AI into healthcare communication is still in its early stages, and this study offers important insights into its strengths and limitations. While AI-generated replies may not drastically reduce the time physicians spend on patient communication, they offer a starting point that can alleviate some of the cognitive burden.

Journal Reference: Ming Tai-Seale, Sally L. Baxter, Florin Vaida, Amanda Walker, Amy M. Sitapati, Chad Osborne, Joseph Diaz, Nimit Desai, Sophie Webb, Gregory Polston, Teresa Helsten, Erin Gross, Jessica Thackaberry, Ammar Mandvi, Dustin Lillie, Steve Li, Geneen Gin, Suraj Achar, Heather Hofflich, Christopher Sharp, Marlene Millen, Christopher A. Longhurst. AI-Generated Draft Replies Integrated Into Health Records and Physicians’ Electronic Communication. JAMA Network Open, 2024; 7 (4): e246565 DOI: 10.1001/jamanetworkopen.2024.6565