Explainable AI (XAI) for Interpretable Cyber Threat Prediction

Authors

  • Mohammed Sadath P, Research Scholar, Yenepoya (Deemed to be University), Bangalore, Karnataka, India
  • R. Kaviyarasi, Associate Professor, Yenepoya (Deemed to be University), Bangalore, Karnataka, India

DOI:

https://doi.org/10.47392/IRJAEM.2026.0209

Keywords:

Cybersecurity, Explainable Artificial Intelligence (XAI), Intrusion Detection Systems (IDS), Malware Detection, Hybrid Ensemble Models, Deep Learning, Machine Learning, SHAP, LIME, Threat Prediction

Abstract

The rapid evolution of cyber threats such as intrusions, botnets, DDoS attacks, insider threats, malware, and advanced persistent threats (APTs) means that detection and mitigation now depend on machine learning (ML) and deep learning (DL) models. However, the "black-box" nature of these models hinders trust, interpretability, and uptake in high-stakes settings, where stakeholders must understand how decisions are made. Explainable Artificial Intelligence (XAI) fills this gap by introducing transparency into ML/DL frameworks, allowing humans to understand model behaviour and diagnose performance issues. This review synthesizes 15 studies on XAI applications in cybersecurity, spanning intrusion detection systems (IDS), malware analysis, cyber risk assessment, and threat prediction in IoT, finance, and decentralized smart grids. Common XAI techniques include SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), applied alongside inherently interpretable models such as decision trees and rule-based systems, or hybrid ensembles built from XGBoost, Random Forest, and Convolutional Neural Networks (CNNs). On benchmark datasets such as NSL-KDD, UNSW-NB15, and CICIDS2017, these techniques achieve strong results, with accuracy, precision, and AUC reported as high as 99%. They also provide both local and global interpretability, revealing feature importance (e.g., network traffic patterns, behavioural logs) and supporting causal reasoning. The reviewed results show that XAI reduces false positives and false negatives, improves analyst accuracy by up to 31%, and increases stakeholder trust. Dataset imbalance, scalability, standardization, and the interpretability-accuracy trade-off remain open issues. The surveyed studies stress the need for benchmarking frameworks, real-time explainability, privacy-preserving AI, and hybrid models. This study aims to make cybersecurity stronger and more transparent through federated learning, human-in-the-loop decision-making, and adherence to ethical guidelines. In short, XAI makes the behaviour of AI-powered defences understandable, enabling more responsible and effective cyber threat management.
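
The reviewed studies do not share a single pipeline, but the pairing of SHAP with a tree ensemble is common enough that a minimal sketch is useful. The sketch below assumes synthetic tabular data as a stand-in for a benchmark such as NSL-KDD, and the feature count and hyperparameters are illustrative placeholders, not values taken from the surveyed papers.

    # A minimal sketch, assuming synthetic data in place of NSL-KDD and
    # hypothetical hyperparameters. Requires: shap, xgboost, scikit-learn.
    import shap
    import xgboost
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a network-flow dataset (0 = benign, 1 = attack).
    X, y = make_classification(n_samples=2000, n_features=20,
                               n_informative=8, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42)

    # A gradient-boosted ensemble of the kind the surveyed studies report.
    model = xgboost.XGBClassifier(n_estimators=200, max_depth=4,
                                  eval_metric="logloss")
    model.fit(X_train, y_train)

    # TreeExplainer computes exact SHAP values for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)

    # Global interpretability: mean |SHAP| ranks features across all flows.
    shap.summary_plot(shap_values, X_test, show=False)

    # Local interpretability: per-feature contributions for a single alert.
    print("Contributions for the first test flow:", shap_values[0])

LIME plays the complementary local role described in the abstract: its LimeTabularExplainer wraps a model's predict_proba function to explain one prediction at a time, without needing access to the model's internals.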

Published

2026-05-07