May 5, 2024

Cancer and AI – Can ChatGPT Be Trusted?

A recent study found that the AI chatbot ChatGPT gave accurate responses to 97% of common cancer myths and misconceptions, but raised concerns because of its potentially confusing and indirect language, highlighting the need for caution when advising patients to use chatbots for cancer information.
A study published in the Journal of the National Cancer Institute Cancer Spectrum examined the growing use of chatbots and artificial intelligence (AI) in providing cancer-related information. The researchers found that these digital resources accurately debunk common cancer myths and misconceptions. The study was led by Skyler Johnson, MD, a physician-scientist at Huntsman Cancer Institute and an assistant professor in the Department of Radiation Oncology at the University of Utah. His aim was to evaluate the reliability and accuracy of cancer information provided by ChatGPT.
Skyler Johnson, MD. Credit: Huntsman Cancer Institute
Johnson and his team used the National Cancer Institute's (NCI) list of common cancer myths and misconceptions as a testing ground. They found that 97% of the answers provided by ChatGPT were accurate. However, this result comes with important caveats. One significant concern raised by the team was the potential for some of ChatGPT's answers to be misinterpreted or misunderstood.
"This could lead to some bad decisions by cancer patients. The team suggested caution when advising patients about whether they should use chatbots for information about cancer," says Johnson.
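For readers curious how such a comparison could be set up in practice, below is a minimal, illustrative sketch of posing myth-style questions to a chatbot through the OpenAI Python client. The model name, the example questions, and the printed output are assumptions for demonstration only and do not reproduce the study's actual protocol, which used ChatGPT's interface and blinded expert reviewers.

```python
# Illustrative sketch only: sending cancer-myth questions to a chatbot
# via the OpenAI Python client. Model name and questions are assumed
# for demonstration and are not the study's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Example questions adapted from the kinds of myths the NCI addresses (illustrative).
questions = [
    "Does sugar feed cancer and make it grow faster?",
    "Is cancer contagious?",
    "Do cell phones cause cancer?",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model for this sketch
        messages=[{"role": "user", "content": question}],
    )
    answer = response.choices[0].message.content
    print(f"Q: {question}\nA: {answer}\n")
    # In the study, blinded reviewers compared answers like these against
    # the NCI's own responses and rated their accuracy.
```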

In the study, reviewers were blinded, meaning they did not know whether the answers came from the chatbot or the NCI. Although the answers were accurate, reviewers found ChatGPT's language was indirect, vague, and in some cases, unclear.
"I understand how difficult it can feel for cancer patients and caregivers to access accurate information," says Johnson. "These sources need to be studied so that we can help cancer patients navigate the murky waters that exist in the online information environment as they try to seek answers about their diagnoses."
Inaccurate information can harm cancer patients. In a previous study published in the Journal of the National Cancer Institute, Johnson and his team found that misinformation was common on social media and had the potential to harm cancer patients.
The next steps are to evaluate how often patients use chatbots to seek information about cancer, what questions they are asking, and whether AI chatbots provide accurate answers to uncommon or unusual questions about cancer.
Reference: "Using ChatGPT to evaluate cancer myths and misconceptions: artificial intelligence and cancer information" by Skyler B. Johnson, Andy J. King, Echo L. Warner, Sanjay Aneja, Benjamin H. Kann, and Carma L. Bylund, 17 March 2023, Journal of the National Cancer Institute Cancer Spectrum. DOI: 10.1093/jncics/pkad015
The study was funded by the National Cancer Institute and the Huntsman Cancer Foundation.