AI-Driven Neuro-Causal Hybrid Framework for Transparent Decision Making in Autonomous Scientific Systems

Authors

  • Arun Kumar Seeni, Research Scholar, Department of Computer Science and Engineering, Saveetha School of Engineering, Chennai, India.
  • Rajasekar Murugesan, Professor-Guide, Department of Computer Science and Engineering, Saveetha School of Engineering, Chennai, India.
  • Anitha Gopalan, Professor-Guide, Department of Computer Science and Engineering, Saveetha School of Engineering, Chennai, India.

DOI:

https://doi.org/10.47392/IRJAEM.2026.0240

Keywords:

predictive accuracy, interpretability, causal reasoning, scientific systems, neural networks, explainable AI, transparent decision-making, causal inference, Neuro-Causal Intelligence, autonomous systems

Abstract

The advent of autonomous systems in scientific research has marked a significant evolution in data processing, decision-making, and analysis. While machine learning (ML) and deep learning (DL) algorithms have demonstrated remarkable success in scientific applications, these systems often operate as black boxes, providing minimal transparency into their decision-making processes. This lack of interpretability erodes trust and limits the applicability of autonomous systems in high-stakes scientific domains such as healthcare, environmental monitoring, and complex simulations. In this context, we propose Neuro-Causal Intelligence, a hybrid framework designed to integrate the strengths of causal reasoning with advanced neural architectures, ensuring transparent, interpretable, and reliable decision-making in autonomous scientific systems. The core principle behind Neuro-Causal Intelligence lies in merging causal inference with neural network models. Causal inference provides a rigorous approach to understanding the relationships between variables, making it possible to trace the causes of observed outcomes, whereas neural networks excel at identifying patterns and correlations in large datasets. By combining these two methodologies, our framework allows the system not only to predict outcomes but also to explain the underlying causes and mechanisms responsible for them. This hybrid approach is essential for scientific systems that require not only accurate predictions but also understandable reasoning for validation and further analysis. The framework operates in three key stages.
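The abstract does not include an implementation, but the core contrast it draws (pattern-matching models recover correlations, while causal inference recovers the mechanisms behind outcomes) can be illustrated with a small toy sketch. The scenario below is entirely hypothetical and is not from the paper: it assumes a single observed confounder `Z` that drives both a treatment `T` and an outcome `Y`, and compares a naive correlational estimate of the treatment effect with a confounder-adjusted (causal) estimate obtained by regression adjustment.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data-generating process (an assumption for illustration):
# a confounder Z influences both the treatment T and the outcome Y.
Z = rng.normal(size=n)
T = (Z + rng.normal(size=n) > 0).astype(float)
Y = 2.0 * T + 3.0 * Z + rng.normal(size=n)  # true causal effect of T is 2.0

# Purely correlational estimate -- what a pattern-matching predictor
# alone would report as the "effect" of T on Y. Biased upward, because
# treated units also tend to have high Z.
naive = Y[T == 1].mean() - Y[T == 0].mean()

# Causal adjustment: include the confounder Z in the model, so the
# coefficient on T isolates its direct contribution to Y.
X = np.column_stack([np.ones(n), T, Z])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
adjusted = beta[1]

print(f"naive (correlational) estimate: {naive:.2f}")
print(f"adjusted (causal) estimate:     {adjusted:.2f}")
```

In this toy setting the naive estimate is far from the true effect of 2.0, while the adjusted estimate recovers it closely; the hybrid framework described in the abstract pairs this kind of causal adjustment with neural predictors rather than the linear model used here for simplicity.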

Published

2026-05-08