
2 RELATED WORK
GNNs have witnessed rapid development in addressing the unique challenges posed by graph-structured data, where traditional deep learning approaches often fail to provide significant insight. The comprehensive survey on GNNs in (Khemani et al., 2024) offers an in-depth analysis of critical aspects, including the basics of GNNs, their relationship with convolutional neural networks, message-passing mechanisms, various GNN models, and suitable applications. In the message-passing mechanism, every node stores its message in the form of a feature vector, and these vectors are aggregated over a node's neighbourhood to create a new message (Khemani et al., 2024). Graphs can be classified as directed or undirected, static or dynamic, homogeneous or heterogeneous, and transductive or inductive. In (Yuan et al., 2023), GCN and GAT/GAN are compared with respect to the processes involved. GCN entails initialization, the convolution operation, weighted aggregation, an activation function, and stacking. GAT/GAN consists of initialization, a self-attention mechanism, attention computation, weighted aggregation, multi-head attention, output combination, learning weights, and stacking of layers. These models have applications in graph construction, social networks, and citation networks.
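To make the message-passing step concrete, the following is a minimal sketch of one GCN-style layer in plain PyTorch: neighbour feature vectors are mean-aggregated and then passed through a learned weight matrix and activation. The layer name, toy graph, and mean aggregation are illustrative assumptions, not the exact formulations compared in (Yuan et al., 2023).

```python
import torch
import torch.nn as nn


class SimpleGCNLayer(nn.Module):
    """One message-passing step: aggregate neighbour messages, then transform."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_nodes, in_dim) node feature vectors (the per-node "messages")
        # adj: (num_nodes, num_nodes) adjacency matrix with self-loops
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)  # node degrees
        agg = (adj @ x) / deg                            # mean over neighbours
        return torch.relu(self.linear(agg))              # weighted update + activation


# Toy usage: a 4-node chain graph with 3-dimensional node features.
x = torch.randn(4, 3)
adj = torch.tensor([[1., 1., 0., 0.],
                    [1., 1., 1., 0.],
                    [0., 1., 1., 1.],
                    [0., 0., 1., 1.]])
layer = SimpleGCNLayer(3, 8)
print(layer(x, adj).shape)  # torch.Size([4, 8])
```

A GAT-style layer would replace the uniform mean with learned attention coefficients over neighbours, optionally computed by multiple attention heads whose outputs are then combined.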
Document preprocessing is always an important step in document classification. (Kavitha et al., 2023) use mutual information for feature extraction based on word sense disambiguation; the method is claimed to improve text classification by correctly distinguishing the senses of polysemous words.
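As a hedged illustration of mutual-information-based feature selection (not the specific disambiguation pipeline of (Kavitha et al., 2023)), the sketch below scores bag-of-words features against document labels with scikit-learn and keeps the most informative ones; the tiny corpus and the choice of k are placeholders.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Placeholder corpus: two classes ("finance" vs. "river") sharing the
# polysemous word "bank", which mutual information helps tell apart.
docs = ["the bank approved the loan",
        "interest rates at the bank rose",
        "we camped on the river bank",
        "the bank of the river flooded"]
labels = [0, 0, 1, 1]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

# Keep the k features with the highest mutual information with the labels.
selector = SelectKBest(mutual_info_classif, k=5)
X_selected = selector.fit_transform(X, labels)

kept = [vectorizer.get_feature_names_out()[i]
        for i in selector.get_support(indices=True)]
print(kept)
```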
Sparse graph auto-encoders have made a remarkable contribution to improving the performance of document recommendation systems, as demonstrated by (Menon et al., 2023). Explainability is one of the most vital topics of current interest. In (Li et al., 2022),
a comprehensive assessment of contemporary GNN
explainability strategies is presented, including evalu-
ations of quantitative metrics and datasets. Furthermore, the paper introduces a novel evaluation metric for comparing various GNN explainability techniques across real-world datasets and GNN architectures, and outlines future directions for GNN explainability. In explainability, the two primary modern methods are function visualization and behavior approximation (Li et al., 2022). Function visualization encompasses techniques such as saliency maps for images and heatmaps for text, which highlight key regions or words contributing to predictions. However,
these methods encounter challenges when applied to
non-Euclidean data structures, such as graphs, and
can involve subjective evaluation. Behavior approx-
imation, on the other hand, relies on interpretable
models designed to replicate the behavior of black-
box systems. The evaluation of modern explanation
methods revolves around two main criteria: plausi-
bility and correctness. Plausibility refers to how con-
vincing the explanations are to humans, often rely-
ing on subjective human judgment. Correctness, on
the other hand, assesses whether an explanation accu-
rately reflects the reasoning process of the underlying
model, with various metrics proposed for this evalua-
tion (Li et al., 2022).
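One common way to probe correctness is a fidelity-style test: occlude the features an explanation marks as important and measure how much the model's prediction changes. The sketch below is a generic illustration of that idea for any callable node classifier; the function name, stand-in model, and top-k threshold are assumptions rather than a metric prescribed in (Li et al., 2022).

```python
import torch


def fidelity_drop(model, x, adj, node_idx, feat_mask, top_k=2):
    """Prediction drop for one node after zeroing its top-k 'important' features.

    A large drop suggests the explanation highlights features the model truly
    relies on (better correctness); a small drop suggests it does not.
    """
    with torch.no_grad():
        full_prob = torch.softmax(model(x, adj), dim=-1)[node_idx]
        target = full_prob.argmax()

        important = feat_mask.topk(top_k).indices  # features flagged by the explainer
        x_masked = x.clone()
        x_masked[node_idx, important] = 0.0

        masked_prob = torch.softmax(model(x_masked, adj), dim=-1)[node_idx]
        return (full_prob[target] - masked_prob[target]).item()


# Toy usage with a stand-in "model": a random linear scorer over aggregated features.
W = torch.randn(3, 4)                    # 3 input features, 4 classes


def toy_model(x, adj):
    return adj @ x @ W


x, adj = torch.randn(6, 3), torch.eye(6)
fake_importance = torch.rand(3)          # pretend explainer output
print(fidelity_drop(toy_model, x, adj, node_idx=2, feat_mask=fake_importance))
```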
Explainability methods are generally divided into
two categories: those that originate outside of GNNs
and those specifically developed for GNNs. GNN-
specific strategies often adapt gradient-based and
decomposition-based methods to explain graph neu-
ral networks. Examples of such techniques include
GNNExplainer and PGExplainer (Parameterized Explainer), which aim to generate explanations by identifying important sub-graphs, as well as DeepLIFT, GNN-LRP, Grad-CAM, SubgraphX, and XGNN; several of these are provided in PyTorch Geometric. Other methods, like GraphMask and SubgraphX, provide both instance-level and global explanations by effectively discarding unnecessary edges or exploring diverse sub-graphs. XGNN offers model-level explanations by generating graph patterns for class predictions (Yuan et al., 2023).
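As an illustration, the sketch below runs GNNExplainer through the Explainer interface available in recent PyTorch Geometric releases on a toy node-classification graph; the tiny model, graph, and hyperparameters are placeholders, not a setup taken from the cited papers.

```python
import torch
from torch_geometric.data import Data
from torch_geometric.explain import Explainer, GNNExplainer
from torch_geometric.nn import GCNConv


class TinyGCN(torch.nn.Module):
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, 16)
        self.conv2 = GCNConv(16, num_classes)

    def forward(self, x, edge_index):
        h = self.conv1(x, edge_index).relu()
        return self.conv2(h, edge_index)


# Toy graph: 5 nodes in a chain, 3 input features, 2 classes.
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3, 3, 4],
                           [1, 0, 2, 1, 3, 2, 4, 3]])
data = Data(x=torch.randn(5, 3), edge_index=edge_index)

model = TinyGCN(in_dim=3, num_classes=2)
explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=100),
    explanation_type='model',
    node_mask_type='attributes',
    edge_mask_type='object',
    model_config=dict(mode='multiclass_classification',
                      task_level='node',
                      return_type='raw'),
)

# Edge and feature importance masks for the prediction on node 2.
explanation = explainer(data.x, data.edge_index, index=2)
print(explanation.edge_mask.shape, explanation.node_mask.shape)
```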
Explainable AI tools are used in (Reghu et al., 2024) to interpret the output produced by retrieval systems. The system includes, as a sub-task, a classifier that predicts the relevance of a document for a given query. The results of the evaluation metrics for this system are explained using various tools such as LIME, SHAP, Partial Dependence Plots, DALEX, Anchors, and saliency maps.
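For instance, a LIME explanation of such a relevance classifier might look like the hedged sketch below, where a placeholder TF-IDF plus logistic-regression pipeline stands in for the actual system of (Reghu et al., 2024).

```python
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: 1 = relevant to the query, 0 = not relevant.
docs = ["graph neural network survey",
        "explainability of graph models",
        "recipe for chocolate cake",
        "holiday travel tips"]
labels = [1, 1, 0, 0]

# Simple stand-in relevance classifier: TF-IDF features + logistic regression.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(docs, labels)

# LIME perturbs the document's words and fits a local surrogate model,
# returning the words that most influenced the relevance prediction.
explainer = LimeTextExplainer(class_names=["not relevant", "relevant"])
explanation = explainer.explain_instance(
    "survey of graph neural network explainability",
    clf.predict_proba,
    num_features=5,
)
print(explanation.as_list())
```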
The critical significance of evaluating the quality and reliability of explanations generated for graph neural networks (GNNs) in diverse high-stakes applications is discussed in (Agarwal et al., 2022). It emphasizes the need for standardized evaluation techniques and reliable data resources to assess GNN explanations correctly. The authors introduce ShapeGGen, a synthetic graph data generator, and GraphXAI, a graph explainability library, as tools to aid the benchmarking of modern GNN explainers. These resources enhance explainability research in GNNs by providing a broader environment for evaluating post-hoc explanations across numerous real-world applications.
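Benchmarking against synthetic data of this kind usually reduces to comparing an explainer's importance mask with a known ground-truth subgraph. The sketch below expresses one such comparison as a Jaccard score over edges; it is a generic illustration and does not use the GraphXAI API.

```python
import numpy as np


def edge_explanation_jaccard(pred_mask, true_mask, threshold=0.5):
    """Jaccard overlap between predicted-important edges and ground truth.

    pred_mask: per-edge importance scores from an explainer, in [0, 1].
    true_mask: binary ground-truth edge mask from a synthetic generator.
    """
    pred = np.asarray(pred_mask) >= threshold
    true = np.asarray(true_mask).astype(bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0  # nothing marked important anywhere: treat as a perfect match
    return np.logical_and(pred, true).sum() / union


# Toy usage: 6 edges; the explainer recovers both ground-truth edges plus one spurious edge.
print(edge_explanation_jaccard([0.9, 0.1, 0.7, 0.2, 0.4, 0.8],
                               [1,   0,   1,   0,   0,   0]))
```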
A modern approach to text summarization using Graph Neural Networks (GNNs) and Named Entity Recognition (NER) models is presented in (Khan et al., 2024). The paper highlights the