Explainable AI (XAI) for Clinical Decision Support: Assessing Trust and Performance in Diagnostic Imaging
Keywords:
explainable AI, interpretability, diagnostic imaging, clinical decision support, saliency maps, counterfactual explanations, trust, evaluation, regulation

Abstract
Explainable artificial intelligence (XAI) is rapidly becoming central to the safe and trustworthy deployment of deep learning systems in diagnostic imaging. While deep models have matched or exceeded human-level performance on many imaging tasks, their opaque decision processes undermine clinician trust and complicate regulatory approval and clinical integration. This article provides a comprehensive, scholarly, and practical treatment of XAI for clinical decision support (CDS) in diagnostic imaging. We synthesize theoretical foundations (interpretability vs. explainability), a categorization of XAI methods (saliency, perturbation, surrogate, concept-based, and counterfactual explanations), evaluation frameworks (fidelity, plausibility, stability, and utility), human factors and trust calibration, algorithmic and dataset biases, robustness and safety, and regulatory and ethical considerations. We present concrete experimental protocols for rigorous technical and user-centered evaluation, illustrate best-practice deployment pipelines, and propose a research agenda linking model-centered metrics with clinician-centered outcomes. Throughout, we ground claims in peer-reviewed evidence and policy documents. The manuscript includes extended methodological appendices and recommended evaluation checklists.
License
Copyright (c) 2025 Global Journal of Intelligent Technologies

This work is licensed under a Creative Commons Attribution 4.0 International License.