The Role of Explainability in Human–AI Co-Decision Making

Authors

DOI:

https://doi.org/10.47392/IRJAEM.2026.0105

Keywords:

Human–AI

Abstract

The rapid deployment of artificial intelligence (AI) in decision-support systems has transformed the way humans interact with computational models. While modern AI systems often achieve high predictive accuracy, their lack of transparency can undermine user trust and limit effective collaboration. Human–AI co-decision making, where both human judgment and AI recommendations jointly influence outcomes, requires explainability as a foundational capability rather than an optional feature. This paper investigates the role of explainable artificial intelligence (XAI) in improving co-decision quality, trust calibration, and accountability. A comprehensive literature review is presented, followed by identification of key research gaps. We propose an Explainable Co-Decision Framework (ECDF) that integrates predictive modeling, explanation generation, and adaptive human feedback. Using a simulated risk-assessment dataset comprising 5,000 instances, the framework is evaluated across multiple conditions. Experimental results demonstrate that structured explanations improve joint decision accuracy by up to 10%, reduce trust calibration error by more than 60%, and enhance human engagement with AI outputs. The findings highlight that explanation quality—not merely availability—plays a decisive role in human–AI teaming. The paper concludes with design recommendations and future research directions for robust explainable co-decision systems.
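To make the three ECDF stages named in the abstract (predictive modeling, explanation generation, and trust calibration against human feedback) concrete, the sketch below is a minimal, hypothetical illustration rather than the paper's implementation: the synthetic risk-assessment data generator, the logistic-regression predictor, the linear feature-contribution explanations, and the binned calibration metric are all assumptions introduced here for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated risk-assessment data at the scale the abstract describes
# (5,000 instances); the six features and their weights are invented.
X = rng.normal(size=(5000, 6))
true_w = np.array([1.5, -2.0, 0.8, 0.0, 0.5, -1.0])
y = (X @ true_w + rng.normal(scale=0.5, size=5000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Stage 1 -- predictive modeling.
model = LogisticRegression().fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]

# Stage 2 -- explanation generation: per-instance feature contributions
# (coefficient * feature value), a simple stand-in for richer XAI methods
# such as SHAP or LIME.
contributions = X_te * model.coef_[0]
print("most influential feature, first test case:",
      int(np.abs(contributions[0]).argmax()))

# Stage 3 -- trust calibration: an expected-calibration-error-style gap
# between the model's stated confidence and its empirical accuracy,
# one plausible way to quantify the "trust calibration error" metric.
conf = np.maximum(proba, 1.0 - proba)
correct = (proba > 0.5).astype(int) == y_te
bin_idx = np.minimum(((conf - 0.5) * 10).astype(int), 4)  # 5 bins on [0.5, 1.0]
ece = 0.0
for b in range(5):
    in_bin = bin_idx == b
    if in_bin.any():
        ece += in_bin.mean() * abs(conf[in_bin].mean() - correct[in_bin].mean())
print(f"trust calibration error (ECE-style): {ece:.3f}")
```

Under this reading, "reducing trust calibration error" corresponds to shrinking the gap between stated model confidence and observed accuracy, so that human collaborators can decide when to defer to the AI and when to override it.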


Published

2026-04-06