Unified Explainability Score (UES): A Comprehensive Framework for Evaluating Trustworthy AI Models

Authors

  • Kailash C. Kandpal, Research Scholar, Department of Computer Science, Harcourt Butler Technical University, Uttar Pradesh, India
  • Dr. Prabhat Verma, Professor, Department of Computer Science, Harcourt Butler Technical University, Uttar Pradesh, India

DOI:

https://doi.org/10.47392/IRJAEM.2025.0032

Keywords:

Trustworthy Artificial Intelligence, Evaluation metrics, Explainable Artificial Intelligence, Artificial Intelligence

Abstract

Artificial intelligence systems are increasingly used in critical decision-making processes, and the need for effective, reliable explanations of their outputs has never been greater. While various metrics exist to evaluate explainability, they often focus on isolated aspects such as trustworthiness, clarity, or fidelity, which can lead to incomplete assessments. In this paper, we introduce a novel Composite Explainability Metric (CEM) designed to evaluate the quality of explanations produced by XAI methods across different domains and contexts. By integrating key dimensions of explainability (faithfulness, interpretability, robustness, actionability, and timeliness), CEM provides a unified framework for assessing the effectiveness of explanations. We present a systematic approach to assigning relative weights to each dimension, enabling context-specific adjustment that reflects the unique demands of domains such as healthcare and finance. The proposed framework also includes a normalization process that ensures comparability between metrics and allows the individual scores to be aggregated into a comprehensive explainability assessment. We validate our metric through simulations and real-world applications, demonstrating how the framework provides meaningful insights into XAI. Our findings highlight the importance of standardized evaluation metrics for fostering trust and transparency, a further step towards the development of responsible AI in high-stakes environments. This work addresses the gap in the evaluation of XAI methods and contributes to the ongoing discourse on trustworthiness and accountability in AI technologies.
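The aggregation described above (normalized per-dimension scores combined under context-specific relative weights) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name, the example dimension scores, and the healthcare-style weighting below are all assumptions chosen for demonstration.

```python
def composite_explainability(scores, weights):
    """Aggregate per-dimension explainability scores into one composite value.

    scores  : dict mapping dimension name -> normalized score in [0, 1]
    weights : dict with the same keys mapping to relative (non-negative) weights

    Weights are renormalized to sum to 1, so the composite score also
    lies in [0, 1] and remains comparable across weighting schemes.
    """
    total_w = sum(weights.values())
    if total_w <= 0:
        raise ValueError("weights must sum to a positive value")
    return sum(scores[k] * (weights[k] / total_w) for k in scores)


# Hypothetical normalized scores for the five dimensions named in the paper.
dims = {"faithfulness": 0.9, "interpretability": 0.7, "robustness": 0.8,
        "actionability": 0.6, "timeliness": 0.95}

# Illustrative domain weighting (e.g. a healthcare setting might emphasize
# faithfulness and robustness over timeliness).
w = {"faithfulness": 0.35, "interpretability": 0.20, "robustness": 0.25,
     "actionability": 0.10, "timeliness": 0.10}

score = composite_explainability(dims, w)  # 0.81 for these inputs
```

Because the weights are renormalized inside the function, only their relative magnitudes matter, which makes it easy to re-tune the same dimension scores for a different domain.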

Published

2025-02-14