December 23, 2024

AI’s Breast Cancer Blind Spots Exposed by New Study

Example mammogram assigned a false-positive case score of 96 in a 59-year-old Black patient with scattered fibroglandular breast density. (A) Left craniocaudal and (B) mediolateral oblique views show vascular calcifications in the upper outer quadrant at middle depth (box) that were identified by the artificial intelligence (AI) algorithm as a suspicious finding and assigned an individual lesion score of 90, which resulted in an overall case score of 96 for the mammogram. Credit: Radiological Society of North America (RSNA)

Research reveals that AI in mammography may produce false positives influenced by patient age and race, highlighting the importance of diverse training data. A recent study, which analyzed nearly 5,000 screening mammograms interpreted by an FDA-approved AI algorithm, found that patient characteristics such as race and age affected the rate of false positives. The findings were published May 21, 2024, in Radiology, a journal of the Radiological Society of North America (RSNA).

"AI has become a resource for radiologists to improve their efficiency and accuracy in reading screening mammograms while reducing reader burnout," said Derek L. Nguyen, M.D., assistant professor at Duke University in Durham, North Carolina. "However, the impact of patient characteristics on AI performance has not been well studied."

Challenges in AI Application

Dr. Nguyen said that while preliminary data suggest AI algorithms applied to screening mammography may improve radiologists' diagnostic performance for breast cancer detection and reduce interpretation time, there are aspects of AI to be mindful of.

"There are few demographically diverse databases for AI algorithm training, and the FDA does not require diverse datasets for validation," he said. "Because of the differences among patient populations, it's important to investigate whether AI software can accommodate and perform at the same level for patients of different ages, races, and ethnicities."

Example mammogram assigned a false-positive risk score of 1.0 in a 59-year-old Hispanic patient with heterogeneously dense breasts. Bilateral reconstructed two-dimensional (A, B) craniocaudal and (C, D) mediolateral oblique views are shown. The algorithm predicted cancer within 1 year, but the patient did not develop cancer or atypia within 2 years of the mammogram. Credit: Radiological Society of North America (RSNA)

Study Design and Demographics

In the retrospective study, researchers identified patients with negative (no evidence of cancer) digital breast tomosynthesis screening examinations performed at Duke University Medical Center between 2016 and 2019. All patients were followed for a two-year period after the screening mammograms, and no patients were diagnosed with a breast malignancy.

The researchers randomly selected a subset of this group consisting of 4,855 patients (mean age 54 years) broadly distributed across four ethnic/racial groups. The subset included 1,316 (27%) white, 1,261 (26%) Black, 1,351 (28%) Asian, and 927 (19%) Hispanic patients.
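As a concrete illustration of that selection step, the minimal sketch below shows one way a demographically balanced random subset could be drawn. The pandas table, column names, and helper function are hypothetical assumptions for illustration, not details from the study.

    import pandas as pd

    # exams: one row per negative screening exam, with a hypothetical
    # "race_ethnicity" column ("white", "Black", "Asian", "Hispanic").
    def sample_balanced_subset(exams: pd.DataFrame,
                               per_group: dict,
                               seed: int = 0) -> pd.DataFrame:
        # Draw a fixed number of patients at random from each group.
        parts = []
        for group, n in per_group.items():
            pool = exams[exams["race_ethnicity"] == group]
            parts.append(pool.sample(n=n, random_state=seed))
        return pd.concat(parts, ignore_index=True)

    # Roughly the composition reported in the study (4,855 patients total):
    # subset = sample_balanced_subset(
    #     exams,
    #     {"white": 1316, "Black": 1261, "Asian": 1351, "Hispanic": 927})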
A commercially available AI algorithm interpreted each exam in the subset of mammograms, producing both a case score (the algorithm's certainty of malignancy) and a risk score (the estimated risk of malignancy within the subsequent year).

AI Performance Across Demographics

"Our goal was to evaluate whether an AI algorithm's performance was uniform across age, breast density types, and different patient races/ethnicities," Dr. Nguyen said.

Given that all mammograms in the study were negative for the presence of cancer, anything flagged as suspicious by the algorithm was considered a false-positive result. False-positive case scores were significantly more likely in Black patients and older patients (71-80 years) and less likely in Asian patients and younger patients (41-50 years), compared with white patients and women between the ages of 51 and 60.
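That stratified comparison can be made concrete with a short sketch: because every exam in the cohort was cancer-negative, the false-positive rate of a stratum is simply the fraction of its exams the algorithm flags. The input table, column names, and the case-score threshold below are hypothetical assumptions, not details from the study.

    import pandas as pd

    THRESHOLD = 90  # illustrative case-score cutoff for "flagged as suspicious"

    def false_positive_rates(scored: pd.DataFrame) -> pd.DataFrame:
        # scored: one row per cancer-negative exam, with hypothetical columns
        # "case_score" (0-100), "race_ethnicity", and "age_group".
        flagged = scored.assign(flagged=scored["case_score"] >= THRESHOLD)
        # Every flag is a false positive here, so the mean of the boolean
        # column within each stratum is that stratum's false-positive rate.
        return (flagged.groupby(["race_ethnicity", "age_group"])["flagged"]
                       .mean()
                       .rename("false_positive_rate")
                       .reset_index())

Each stratum's rate can then be compared against a reference group (the study used white patients aged 51-60) to judge whether performance is uniform.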

"This study is important because it highlights that any AI software purchased by a healthcare institution may not perform equally across all patient ages, races/ethnicities, and breast densities," Dr. Nguyen said. "Moving forward, I believe AI software upgrades should focus on ensuring demographic diversity."

Considerations for Healthcare Providers

Dr. Nguyen said healthcare institutions should understand the patient population they serve before purchasing an AI algorithm for screening mammogram interpretation, and should ask vendors about their algorithm training.

"Having a baseline knowledge of your institution's demographics and asking the vendor about the ethnic and age diversity of their training data will help you understand the limitations you'll face in clinical practice," he said.

Reference: "Patient Characteristics Impact Performance of AI Algorithm in Interpreting Negative Screening Digital Breast Tomosynthesis Studies" by Derek L. Nguyen, M.D., Yinhao Ren, Ph.D., Tyler M. Jones, B.S., Samantha M. Thomas, M.S., Joseph Y. Lo, Ph.D., and Lars J. Grimm, M.D., M.S., 21 May 2024, Radiology.