Mastoi, Qurat-Ul-Ain and Latif, Shahid and Brohi, Sarfraz and Ahmad, Jawad and Alqhatani, Abdulmajeed and Alshehri, Mohammed S and Al Mazroa, Alanoud and Ullah, Rahmat (2025) Explainable AI in medical imaging: an interpretable and collaborative federated learning model for brain tumor classification. Frontiers in Oncology, 15: 1535478. DOI: https://doi.org/10.3389/fonc.2025.1535478
Abstract
Introduction: A brain tumor is a collection of abnormal cells in the brain that can become life-threatening because of its ability to spread. Prompt and accurate classification of brain tumors is therefore essential in healthcare. Magnetic resonance imaging (MRI) produces high-quality images of soft tissue and is considered the principal technology for diagnosing brain tumors. Computer vision techniques such as deep learning (DL) have recently played an important role in brain tumor classification, but most rely on traditional centralized models, which face significant challenges from the limited availability of diverse, representative datasets and which make it harder to obtain a transparent model. This study proposes a collaborative federated learning model (CFLM) with explainable artificial intelligence (XAI) to mitigate these problems using state-of-the-art methods.

Methods: The proposed method addresses a four-class classification problem: glioma, meningioma, no tumor, and pituitary tumor. We integrate GoogLeNet with a federated learning (FL) framework so that multiple devices learn collaboratively while sensitive data remain local. The study also addresses interpretability, making the model transparent through gradient-weighted class activation mapping (Grad-CAM) and saliency map visualizations.

Results: In total, 10 clients, each training on a decentralized local dataset, participated in the proposed model over 50 communication rounds. The proposed approach achieves 94% classification accuracy. Grad-CAM heat maps and saliency maps provide interpretable, meaningful graphical explanations for healthcare specialists.

Conclusion: This study outlines an efficient and interpretable model for brain tumor classification that integrates FL with the GoogLeNet architecture. The proposed framework has great potential to make brain tumor classification more reliable and transparent for clinical use.
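The abstract describes 10 clients training local copies of the model and exchanging only model updates over 50 communication rounds. It does not state the aggregation rule, so the sketch below assumes a FedAvg-style data-size-weighted average of client weights; the function names (`fedavg`, `run_rounds`) and the plain NumPy weight dictionaries are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server-side aggregation: average client weight dicts,
    weighting each client by its local dataset size (FedAvg)."""
    total = sum(client_sizes)
    return {
        k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in client_weights[0]
    }

def run_rounds(global_w, client_datasets, rounds, local_update):
    """Simulate communication rounds: each client trains on its own
    data (which never leaves the client), then the server aggregates."""
    for _ in range(rounds):
        updates = [local_update(dict(global_w), d) for d in client_datasets]
        sizes = [len(d) for d in client_datasets]
        global_w = fedavg(updates, sizes)
    return global_w
```

In the paper's setting, `local_update` would be several epochs of GoogLeNet training on a client's local MRI scans; only the resulting weights are sent to the server, which is what preserves data privacy.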
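The Grad-CAM heat maps mentioned in the results weight each feature map of the last convolutional layer by the spatial mean of the class score's gradient, sum the weighted maps, and keep only positive evidence. A minimal NumPy sketch of that computation (assuming activations and gradients have already been extracted from the network; `grad_cam` is an illustrative name, not the paper's code):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM localization map from last-conv-layer tensors.

    activations, gradients: arrays of shape (K, H, W), where K is the
    number of feature maps, taken for the predicted tumor class.
    """
    weights = gradients.mean(axis=(1, 2))             # alpha_k: importance of map k
    cam = np.tensordot(weights, activations, axes=1)  # sum_k alpha_k * A_k
    cam = np.maximum(cam, 0.0)                        # ReLU: positive evidence only
    if cam.max() > 0:
        cam /= cam.max()                              # normalize to [0, 1]
    return cam
```

The resulting (H, W) map is upsampled to the MRI slice's resolution and overlaid as a heat map, highlighting the regions that drove the classification for clinical review.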
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | explainable AI, federated learning, brain tumors, GoogLeNet, medical diagnosis |
| Subjects: | Z Bibliography. Library Science. Information Resources > ZZ OA Fund (articles) |
| Divisions: | Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| SWORD Depositor: | Unnamed user with email elements@essex.ac.uk |
| Depositing User: | Unnamed user with email elements@essex.ac.uk |
| Date Deposited: | 13 May 2025 14:24 |
| Last Modified: | 13 May 2025 15:18 |
| URI: | http://repository.essex.ac.uk/id/eprint/40440 |
Available files
Filename: fonc-1-1535478.pdf
Licence: Creative Commons: Attribution 4.0