Explainable Deep Learning Improves Physician Interpretation of Myocardial Perfusion Imaging

Robert J.H. Miller, Keiichiro Kuronuma, Ananya Singh, Yuka Otaki, Sean Hayes, Panithaya Chareonthaitawee, Paul Kavanagh, Tejas Parekh, Balaji K. Tamarappoo, Tali Sharir, Andrew J. Einstein, Mathews B. Fish, Terrence D. Ruddy, Philipp A. Kaufmann, Albert J. Sinusas, Edward J. Miller, Timothy M. Bateman, Sharmila Dorbala, Marcelo Di Carli, Sebastien Cadet, Joanna X. Liang, Damini Dey, Daniel S. Berman, Piotr J. Slomka

Research output: Contribution to journal › Article › peer-review

19 Citations (Scopus)

Abstract

Artificial intelligence may improve the accuracy of myocardial perfusion imaging (MPI) but will likely be implemented as an aid to physician interpretation rather than as an autonomous tool. Deep learning (DL) has high standalone diagnostic accuracy for obstructive coronary artery disease (CAD), but its influence on physician interpretation is unknown. We assessed whether access to explainable DL predictions improves physician interpretation of MPI.
Methods: We selected a representative cohort of patients who underwent MPI with reference invasive coronary angiography. Obstructive CAD, defined as stenosis ≥50% in the left main artery or ≥70% in other coronary segments, was present in half of the patients. We used an explainable DL model (CAD-DL), which was previously developed in a separate population from different sites. Three physicians interpreted studies first with clinical history, stress, and quantitative perfusion data, then with all those data plus the DL results. Diagnostic accuracy was assessed using the area under the receiver-operating-characteristic curve (AUC).
Results: In total, 240 patients with a median age of 65 y (interquartile range 58-73) were included. The diagnostic accuracy of physician interpretation with CAD-DL (AUC 0.779) was significantly higher than that of physician interpretation without CAD-DL (AUC 0.747, P = 0.003) and stress total perfusion deficit (AUC 0.718, P < 0.001). With matched specificity, CAD-DL operating autonomously had higher sensitivity than readers without DL results (P < 0.001), but not than readers interpreting with DL results (P = 0.122). All readers had numerically higher accuracy with CAD-DL, with AUC improvements of 0.02-0.05, and interpretation with DL resulted in an overall net reclassification improvement of 17.2% (95% CI 9.2%-24.4%, P < 0.001).
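The AUC values reported above can be understood through the Mann-Whitney interpretation: the AUC is the probability that a randomly chosen patient with obstructive CAD receives a higher score than a randomly chosen patient without it. A minimal sketch (not the authors' implementation; the function name and inputs are illustrative) of this rank-based computation:

```python
def auc(y_true, scores):
    """AUC as the probability that a random positive case outranks a
    random negative case (Mann-Whitney U / (n_pos * n_neg)); ties count half.
    y_true: 1 = obstructive CAD present, 0 = absent; scores: model or reader output.
    """
    pos = [s for y, s in zip(y_true, scores) if y]
    neg = [s for y, s in zip(y_true, scores) if not y]
    # Count pairwise "wins" of positives over negatives, half-credit for ties.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 2 CAD patients, 2 without; 3 of 4 pairs correctly ordered.
print(auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # → 0.75
```

Comparing two correlated AUCs on the same patients, as in the study, additionally requires a paired test such as the DeLong method; the sketch above only shows the point estimate.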
Conclusion: Explainable DL predictions lead to meaningful improvements in physician interpretation; however, the improvement varied across readers, reflecting differing acceptance of this new technology. This technique could be implemented as an aid to physician diagnosis, improving the diagnostic accuracy of MPI.
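The 17.2% net reclassification improvement reported in the Results quantifies how often the DL-assisted read moved a patient's classification in the correct direction. A minimal sketch of the categorical (binary) NRI, assuming paired reads without and with DL for the same patients (not the authors' code; names are illustrative):

```python
def net_reclassification_improvement(y_true, pred_before, pred_after):
    """Categorical NRI for binary calls (1 = CAD called present).

    NRI = (up - down) / n_events + (down - up) / n_nonevents,
    where "up" means the second (DL-assisted) read switched a patient
    from a negative to a positive call, and "down" the reverse.
    """
    events = [i for i, y in enumerate(y_true) if y]
    nonevents = [i for i, y in enumerate(y_true) if not y]
    up = lambda i: pred_after[i] and not pred_before[i]      # neg -> pos
    down = lambda i: pred_before[i] and not pred_after[i]    # pos -> neg

    # Correct moves: "up" among events, "down" among non-events.
    nri_events = (sum(up(i) for i in events)
                  - sum(down(i) for i in events)) / len(events)
    nri_nonevents = (sum(down(i) for i in nonevents)
                     - sum(up(i) for i in nonevents)) / len(nonevents)
    return nri_events + nri_nonevents

# Toy example: one CAD patient correctly reclassified up, one
# non-CAD patient correctly reclassified down.
nri = net_reclassification_improvement(
    [1, 1, 1, 0, 0, 0],   # ground truth from angiography
    [0, 1, 0, 1, 0, 0],   # reads without DL
    [1, 1, 0, 0, 0, 0],   # reads with DL
)
print(round(nri, 3))  # → 0.667
```

A confidence interval for the NRI, as reported in the abstract, is typically obtained by bootstrap resampling of patients.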

Original language: English
Pages (from-to): 1768-1774
Number of pages: 7
Journal: Journal of nuclear medicine : official publication, Society of Nuclear Medicine
Volume: 63
Issue number: 11
DOIs
Publication status: Published - 1 Nov 2022

Keywords

  • artificial intelligence
  • deep learning
  • implementation

