A Comprehensive Review of Federated Learning and Explainable AI Approaches for Privacy-Preserving Breast Cancer Detection: Advancements in Multi-Modal Data Fusion, Interpretability, and Clinical Trust

Authors

  • Sandhya H, PG M.Tech. Student (USN 1RR23SCS01), Second Year, Dept. of Computer Science & Engineering, Rajarajeswari College of Engineering, Bangalore, Karnataka, India.
  • Dr. Kirubha D, Project Guide, Professor and HoD, Dept. of Computer Science & Engineering, Rajarajeswari College of Engineering, Bangalore, Karnataka, India.
  • Dr. T. C. Manjunath, Dean Research (R&D), Professor, Dept. of Computer Science & Engineering, IoT Cyber Security & Blockchain Technology, Rajarajeswari College of Engineering, Bangalore, Karnataka, India.

DOI:

https://doi.org/10.47392/IRJAEM.2025.0471

Keywords:

Federated learning, Explainable AI, Privacy-preserving healthcare, Breast cancer detection, Multi-modal data fusion, Interpretability, Clinical trust, Medical AI, Healthcare ethics, Secure machine learning

Abstract

This paper presents a comprehensive review of federated learning and explainable AI approaches for privacy-preserving breast cancer detection, with a focus on advancements in multi-modal data fusion, interpretability, and clinical trust. Breast cancer remains one of the leading causes of mortality among women worldwide, demanding diagnostic solutions that are accurate, secure, and clinically interpretable. With the rapid growth of artificial intelligence in healthcare, federated learning (FL) and explainable AI (XAI) have emerged as complementary paradigms that address two critical aspects: preserving patient privacy while ensuring transparency in decision-making. This review explores how FL enables collaborative model training across distributed clinical institutions without centralizing sensitive medical data, while XAI frameworks enhance trust by making predictions understandable for clinicians. A special emphasis is placed on multi-modal data fusion—integrating mammography, histopathology, genomics, and electronic health records—to improve diagnostic robustness and reliability. The paper synthesizes current advancements, challenges, and future directions, highlighting how privacy-preserving FL techniques, combined with interpretable models, can foster clinical trust and adoption in real-world healthcare systems. By comparing state-of-the-art approaches, the review provides insights into algorithmic performance, interpretability trade-offs, and ethical considerations in medical AI. Ultimately, this work underscores the importance of balancing technological innovation with clinical usability, positioning federated learning and explainable AI as pivotal enablers for the next generation of breast cancer detection systems.
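The collaborative training scheme described above is most commonly realized through federated averaging (FedAvg): each institution trains on its own data and shares only model parameters, which a central server aggregates weighted by local dataset size. As a minimal illustrative sketch (not an implementation from the reviewed works, and using hypothetical parameter vectors in place of real model weights):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (the FedAvg aggregation step).

    client_weights: list of 1-D parameter arrays, one per institution.
    client_sizes:   number of local training samples at each institution.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)   # shape: (n_clients, n_params)
    coeffs = sizes / sizes.sum()         # weight proportional to local data size
    return coeffs @ stacked              # weighted sum of parameter vectors

# Three hypothetical hospitals with different amounts of local data; the raw
# images never leave each institution -- only locally trained parameters do.
w1, w2, w3 = np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])
global_w = fedavg([w1, w2, w3], client_sizes=[100, 100, 200])
print(global_w)  # -> [3.5 4.5]
```

In a full FL round, the aggregated `global_w` would be broadcast back to the institutions for the next round of local training; privacy-preserving variants add secure aggregation or differential privacy on top of this step.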


Published

2025-09-22