
nations of tumor-specific properties, such as heterogeneity and border irregularity, enabling personalized therapeutic interventions (Isensee et al., 2021).
Brain tumors affect an estimated 40,000 to 50,000 adults annually in India, with children constituting 20% of these cases. This prevalence, combined with the unique challenges posed by brain tumors, underscores the urgent need for improved diagnostic tools. Many existing algorithms remain opaque, operating as “black boxes” that deliver predictions without explaining the rationale behind them. This lack of transparency hampers medical professionals, who rely on clear and precise information to make informed decisions. Addressing this gap is crucial to advancing patient care and fostering confidence in AI-powered solutions.
The rest of the paper is organized as follows: Section 2 reviews the relevant literature and highlights existing gaps in the domain. Section 3 details the methodology, including the ResNet architecture, LRP integration, and report generation. Section 4 presents experimental results and analysis, showcasing the model’s effectiveness in addressing diagnostic challenges. Section 5 discusses the clinical applicability of the proposed approach, its potential impact, and future research directions.
2 BACKGROUND STUDY
2.1 Related Work and Prior Studies
Recent advancements in medical imaging have focused on multi-class classification of brain MRI images, with deep learning models achieving significant breakthroughs in accuracy and efficiency. Traditional machine learning techniques, such as Support Vector Machines (SVMs) and K-Nearest Neighbors (KNNs), relied heavily on handcrafted features like Gray-Level Co-occurrence Matrices (GLCM) and Principal Component Analysis (PCA) (Bach et al., 2015b). These methods often struggled with the complexity and variability inherent in medical imaging datasets. In contrast, Convolutional Neural Networks (CNNs) and transfer learning frameworks, including ResNet, AlexNet, and GoogLeNet, have demonstrated superior robustness and scalability in classifying MRI images into multiple classes (Vankdothu and Hameed, 2022). The incorporation of preprocessing techniques such as data augmentation, skull stripping, and morphological operations further enhances the effectiveness of these models, showcasing their potential for clinical applications (Kulkarni and Sundari, 2020).
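To make this transfer-learning setup concrete, the sketch below adapts an ImageNet-pretrained ResNet to a multi-class brain-MRI task in PyTorch. It is a minimal illustration rather than the exact pipeline of any cited study: the ResNet-18 depth, the four-class label set, the input size, and the hyperparameters are assumptions chosen for brevity.

    # Hypothetical sketch: ResNet transfer learning for multi-class MRI.
    # Class count, backbone depth, and hyperparameters are illustrative.
    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    NUM_CLASSES = 4  # assumed, e.g. glioma / meningioma / pituitary / no tumor

    # Load an ImageNet-pretrained backbone and replace its classifier head.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

    # Light augmentation, echoing the preprocessing steps discussed above.
    train_tf = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(10),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

Fine-tuning all layers, rather than only the new head, is a common choice when the target domain differs as strongly from ImageNet as MRI does.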
Despite their success in achieving high classification accuracy, deep learning models often suffer from a black-box nature, which hinders their interpretability and transparency in medical imaging. Explainability is critical in multi-class classification, as understanding the reasoning behind predictions fosters reliability and trust among clinicians. Visualization methods like Grad-CAM have been employed to highlight tumor regions in MRI images, adding a layer of interpretability to these models (Pang et al., 2023). Additionally, techniques such as Layer-wise Relevance Propagation (LRP) have emerged as powerful tools for explaining classifier decisions by providing a pixel-wise decomposition of predictions. LRP generates heatmaps that highlight the regions most relevant to a given class prediction, enhancing transparency and interpretability (Bach et al., 2015b). Studies have validated LRP’s utility in multi-class medical imaging tasks by confirming predictions and identifying biologically meaningful features, reinforcing its value in AI-driven diagnostic systems (Babu Vimala et al.).
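For concreteness, the core of LRP is a backward redistribution rule; the widely used LRP-ε variant (cf. Bach et al., 2015b) is reproduced here, with notation introduced for illustration. The relevance R_k assigned to a neuron k is redistributed to the neurons j of the layer below in proportion to their contributions a_j w_{jk} to its pre-activation:

\[
R_j \;=\; \sum_{k} \frac{a_j\, w_{jk}}{\epsilon + \sum_{j'} a_{j'}\, w_{j'k}}\; R_k ,
\]

where a_j are activations, w_{jk} are weights, and ε is a small stabilizer. As ε → 0 the rule is approximately conservative, Σ_j R_j ≈ Σ_k R_k, so the relevance arriving at the input layer sums to roughly the class score being explained; this is what makes the resulting pixel-wise heatmaps directly attributable to a single class prediction.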
2.2 Gaps in Current Research and How the Proposed Work Addresses Them
While existing deep learning models have achieved remarkable performance in classifying brain MRI images, they often lack adequate interpretability. Methods like Grad-CAM, though widely used, focus primarily on high-level feature activations and lack the precision required for fine-grained analysis. Additionally, they may fail to distinguish subtle differences among multiple classes, a critical need in medical imaging (Pang et al., 2023). Furthermore, techniques like SHAP (SHapley Additive exPlanations), which emphasize feature importance, are computationally expensive and do not provide the spatial visualizations necessary for medical diagnostics.
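To make the resolution argument explicit: in the standard Grad-CAM formulation (notation introduced here for illustration), a class-specific map is built from the final convolutional feature maps A^k, weighted by gradient-derived importances

\[
\alpha_k^{c} \;=\; \frac{1}{Z} \sum_{i}\sum_{j} \frac{\partial y^{c}}{\partial A_{ij}^{k}},
\qquad
L^{c} \;=\; \mathrm{ReLU}\!\Big( \sum_{k} \alpha_k^{c}\, A^{k} \Big).
\]

Because L^c has the spatial size of the last convolutional layer (for example, 7 × 7 for a 224 × 224 input to a ResNet) and must be upsampled to image resolution, fine boundary detail is necessarily blurred; attributions computed per input pixel do not face this bottleneck.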
These limitations highlight the need for more advanced explainability methods, such as Layer-wise Relevance Propagation (LRP), which combines computational efficiency with detailed interpretability. LRP addresses the black-box challenge by generating pixel-wise heatmaps that pinpoint the regions contributing most to model predictions, providing a fine-grained understanding of decision-making processes (Bach et al., 2015b). Unlike Grad-CAM, LRP ensures granularity in analyzing multi-class predictions, making it suitable for distinguishing subtle differences in tumor characteristics. Additionally, the proposed integration of LRP with advanced architectures like ResNet leverages the strengths of deep learning for robust multi-class classification while enhancing transparency. By addressing these gaps, the cur-