Explainable AI (XAI) for Clinical Decision Support: Assessing Trust and Performance in Diagnostic Imaging

Authors

  • Henry Wallace, Department of Data Science, University of Oxford (UK)

Keywords:

explainable AI, interpretability, diagnostic imaging, clinical decision support, saliency maps, counterfactual explanations, trust, evaluation, regulation

Abstract

Explainable artificial intelligence (XAI) is rapidly becoming central to the safe and trustworthy deployment of deep learning systems in diagnostic imaging. While deep models have matched or exceeded human-level performance on many imaging tasks, their opaque decision processes undermine clinician trust and complicate regulatory approval and clinical integration. This article provides a comprehensive, scholarly, and practical treatment of XAI for clinical decision support (CDS) in diagnostic imaging. We synthesize theoretical foundations (interpretability vs. explainability), a categorization of XAI methods (saliency, perturbation, surrogate, concept-based, and counterfactual explanations), evaluation frameworks (fidelity, plausibility, stability, and utility), human factors and trust calibration, algorithmic and dataset biases, robustness and safety, and regulatory and ethical considerations. We present concrete experimental protocols for rigorous technical and user-centered evaluation, illustrate best-practice deployment pipelines, and propose a research agenda linking model-centered metrics with clinician-centered outcomes. Throughout, we ground claims in peer-reviewed evidence and policy documents, and we include extended methodological appendices and recommended evaluation checklists.

Published

2024-12-30
