Explainable Artificial Intelligence in Critical Decision-Making Systems

Authors

  • Bini P B, CCSIT Dr John Matthai Center, Thrissur, India.

Keywords:

Explainable AI, Interpretability, SHAP, LIME, Grad-CAM, Healthcare AI, Trustworthy AI, Algorithmic Transparency, Critical Systems, Machine Learning

Abstract

The deployment of artificial intelligence (AI) in critical decision-making domains—including healthcare diagnostics, financial risk assessment, and autonomous vehicle navigation—has intensified the demand for transparency and interpretability in algorithmic reasoning. Explainable Artificial Intelligence (XAI) has emerged as a pivotal research paradigm aimed at rendering complex machine learning models comprehensible to human stakeholders without substantially compromising predictive performance. This paper presents a comprehensive survey of XAI methodologies, categorizing them into model-agnostic approaches (LIME, SHAP, Anchors), gradient-based techniques (Grad-CAM, Integrated Gradients), and inherently interpretable architectures (decision trees, CORELS, Explainable Boosting Machines). We systematically evaluate these methods across three critical application domains, comparing their explanation fidelity, computational overhead, and alignment with regulatory requirements such as the EU AI Act and GDPR's right to explanation. Our analysis reveals that SHAP achieves the highest average fidelity score (0.88) across domains, while inherently interpretable models offer superior transparency at the cost of reduced capacity for modeling complex non-linear relationships. We further identify key research gaps, including the absence of standardized evaluation benchmarks and the challenge of balancing faithfulness with human comprehensibility. The findings inform practical guidelines for selecting XAI techniques appropriate to specific deployment contexts and regulatory constraints.
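The model-agnostic attribution methods surveyed above (LIME, SHAP) share a common foundation: each feature's contribution is measured by how the model's output changes when that feature is present versus absent. For SHAP in particular, the attribution is the feature's Shapley value, averaged over all feature subsets. The sketch below is not the paper's implementation — it is a minimal, illustrative exact Shapley computation for a tiny toy model (exact enumeration is only tractable for a handful of features; the SHAP library uses efficient approximations):

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values: each feature's average marginal contribution
    over all subsets, with absent features replaced by baseline values."""
    n = len(x)

    def v(subset):
        # Evaluate the model with features in `subset` taken from x,
        # and all other features set to the baseline.
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return model(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                # Shapley kernel weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(set(s) | {i}) - v(set(s)))
        phi.append(total)
    return phi

# Toy linear scorer whose exact attributions are known in closed form:
# for a linear model, phi_i = w_i * (x_i - baseline_i).
model = lambda z: 2.0 * z[0] + 1.0 * z[1] + 0.5
print(shapley_values(model, [3.0, 1.0], [0.0, 0.0]))  # -> [6.0, 1.0]
```

Note the efficiency property: the attributions sum to the difference between the model's output at `x` and at the baseline (here 7.5 − 0.5 = 7.0), which is one of the fidelity guarantees that distinguishes SHAP from heuristic attribution methods.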

Author Biography

  • Bini P B, CCSIT Dr John Matthai Center, Thrissur, India.

    Assistant Professor, Department of Computer Science

Published

2026-03-09

Section

Articles

How to Cite

Explainable Artificial Intelligence in Critical Decision-Making Systems. (2026). Peer-Reviewed Journal of Computer Science (PRJCS), 1(3), 27-33. https://peerreviewjournal.in/index.php/prjcs/article/view/32