Evaluation of Combined Artificial Intelligence and Radiologist Assessment to Interpret Screening Mammograms

Publication Type:

Journal Article

Source:

JAMA Netw Open, Volume 3, Number 3, p.e200265 (2020)

ISSN:

2574-3805 (Electronic)
2574-3805 (Linking)

Accession Number:

32119094

URL:

https://web.stanford.edu/group/rubinlab/pubs/Schaffter-2020-EvaluationCombinedArtifici.pdf

Keywords:

*Deep Learning, *Radiologists, Adult, Aged, Algorithms, Artificial Intelligence, Breast Neoplasms/*diagnostic imaging, Early Detection of Cancer, Female, Humans, Image Interpretation, Computer-Assisted/*methods, Mammography/*methods, Middle Aged, Radiology, Sensitivity and Specificity, Sweden, United States

Abstract:

Importance: Mammography screening currently relies on subjective human interpretation. Artificial intelligence (AI) advances could be used to increase mammography screening accuracy by reducing missed cancers and false positives.

Objective: To evaluate whether AI can overcome human mammography interpretation limitations with a rigorous, unbiased evaluation of machine learning algorithms.

Design, Setting, and Participants: In this diagnostic accuracy study conducted between September 2016 and November 2017, an international, crowdsourced challenge was hosted to foster AI algorithm development focused on interpreting screening mammography. More than 1,100 participants comprising 126 teams from 44 countries participated. Analysis began November 18, 2016.

Main Outcomes and Measures: Algorithms used images alone (challenge 1) or combined images, previous examinations (if available), and clinical and demographic risk factor data (challenge 2) and output a score that translated to cancer yes or no within 12 months. Algorithm accuracy for breast cancer detection was evaluated using area under the curve, and algorithm specificity was compared with radiologists' specificity with radiologists' sensitivity set at 85.9% (United States) and 83.9% (Sweden). An ensemble method aggregating top-performing AI algorithms and radiologists' recall assessment was developed and evaluated.

Results: Overall, 144,231 screening mammograms from 85,580 US women (952 cancer positive ≤12 months from screening) were used for algorithm training and validation. A second independent validation cohort included 166,578 examinations from 68,008 Swedish women (780 cancer positive). The top-performing algorithm achieved an area under the curve of 0.858 (United States) and 0.903 (Sweden), and 66.2% (United States) and 81.2% (Sweden) specificity at the radiologists' sensitivity, lower than community-practice radiologists' specificity of 90.5% (United States) and 98.5% (Sweden). Combining top-performing algorithms and US radiologist assessments resulted in a higher area under the curve of 0.942 and achieved a significantly improved specificity (92.0%) at the same sensitivity.

Conclusions and Relevance: While no single AI algorithm outperformed radiologists, an ensemble of AI algorithms combined with radiologist assessment in a single-reader screening environment improved overall accuracy. This study underscores the potential of using machine learning methods for enhancing mammography screening interpretation.
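The evaluation described in the abstract rests on two operations: measuring an algorithm's specificity at a fixed radiologist sensitivity operating point, and combining continuous algorithm scores with radiologists' binary recall decisions. The sketch below illustrates both under stated assumptions; it is not the study's actual ensemble method, and the synthetic data, the 0.5 blending weight, and the helper names (specificity_at_sensitivity, ensemble_scores) are hypothetical.

    # Minimal sketch (not the authors' code): compute AUC, specificity at a fixed
    # radiologist sensitivity, and a simple averaged AI + radiologist ensemble.
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    def specificity_at_sensitivity(y_true, scores, target_sensitivity=0.859):
        """Specificity at the first threshold where sensitivity reaches the target."""
        fpr, tpr, _ = roc_curve(y_true, scores)
        idx = np.searchsorted(tpr, target_sensitivity)  # tpr is non-decreasing
        return 1.0 - fpr[min(idx, len(fpr) - 1)]

    def ensemble_scores(algorithm_scores, radiologist_recall, weight=0.5):
        """Blend a continuous AI score with a 0/1 radiologist recall decision.
        The equal weighting is an assumption for illustration only."""
        return (weight * np.asarray(algorithm_scores)
                + (1 - weight) * np.asarray(radiologist_recall))

    # Toy example with synthetic labels, AI scores, and recall decisions.
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=1000)
    ai = np.clip(y * 0.6 + rng.normal(0.3, 0.2, size=1000), 0, 1)
    recall = (ai + rng.normal(0, 0.3, size=1000) > 0.6).astype(int)

    print("AI-only AUC:", round(roc_auc_score(y, ai), 3))
    print("Specificity @ 85.9% sensitivity:",
          round(specificity_at_sensitivity(y, ai), 3))
    print("Ensemble AUC:", round(roc_auc_score(y, ensemble_scores(ai, recall)), 3))

On synthetic data the numbers are meaningless; the point is the shape of the comparison: hold sensitivity fixed at the radiologists' operating point, then ask whether the combined score buys additional specificity.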