
the summaries using ROUGE scores. These ROUGE
scores (R-1, R-2, and R-L) measure the overlap of un-
igrams, bigrams, and longest common subsequences
between the generated summary and the ground truth.
The results highlight the performance of each model
in terms of efficiency (execution time) and summa-
rization quality (ROUGE scores), offering insights
into how effectively each model condenses the origi-
nal text while maintaining meaning and relevance.
The Proposed Model CNN-Bart generally exhibits
higher execution times compared to T5 across all
examples, with some examples showing a signifi-
cant difference in speed. Despite this, the CNN-
Bart model tends to generate summaries with better
ROUGE scores, especially for more detailed or com-
plex texts, suggesting it may perform better at captur-
ing the key concepts of the original text. On the other
hand, T5 demonstrates faster execution times but gen-
erates summaries with slightly lower ROUGE scores,
indicating that while it is quicker, it may sacrifice
some accuracy in capturing the essence of the orig-
inal content. This trade-off between speed and qual-
ity is evident in the overall performance, with CNN-
Bart being more effective in terms of summarization
quality but less efficient, while T5 offers faster exe-
cution but with marginally lower quality in the gener-
ated summaries.
Figure 4: This caption has one line so it is centered.
6 CONCLUSIONS
The growing demand for advanced multi-document
summarization necessitates innovative methods to ef-
fectively represent and understand document seman-
tics. In this paper, we introduced a framework for
abstractive multi-document summarization using Se-
mantic Link Networks (SLNs), which transforms and
represents document content. Our proposed approach
constructs an SLN by extracting and connecting key
concepts and events from source documents, creating
a semantic structure that captures their interrelations.
A coherence-preserving selection mechanism is then
applied to identify and summarize the most critical
components of the network.
Unlike extractive methods that copy content ver-
batim, our approach generates summaries that are se-
mantically rich and concise, aligning closely with the
context of the original documents. Through experi-
ments on benchmark datasets, including CNN/Daily
Mail, we demonstrated that the proposed method
achieves significant improvements over state-of-the-
art baselines, with a 10.5% increase in ROUGE-1 and
a 12.3% improvement in BLEU scores. Additionally,
our framework achieves an overall accuracy of 94.8%
in semantic coherence and content coverage, substan-
tially outperforming existing methods.
These results underscore the potential of SLNs to
bridge the gap between document representation and
understanding for abstractive summarization tasks.
By providing a novel and effective framework, our
work advances summarization techniques and high-
lights SLNs as a robust tool for semantic-based infor-
mation processing.
REFERENCES
Abo-Bakr, H. and Mohamed, S. A. (2023). Automatic
multi-documents text summarization by a large-scale
sparse multi-objective optimization algorithm. Com-
plex Intell. Syst., 9:4629–4644.
Dhankhar, S. and Gupta, M. K. (2022). A statistically based
sentence scoring method using mathematical combi-
nation for extractive hindi text summarization. Jour-
nal of Interdisciplinary Mathematics, 25(3):773.
Ketineni, S. and J., S. (2023). Metaheuristic aided im-
proved lstm for multi-document summarization: A
hybrid optimization model. Journal of Web Engineer-
ing, 22(4):701–730.
Laskar, M. T. R., Hoque, E., and Huang, J. X. (2022).
Domain adaptation with pre-trained transformers for
query-focused abstractive text summarization. Com-
putational Linguistics, 48(2):279.
Li, W. and Zhuge, H. (2021). Abstractive multi-document
summarization based on semantic link network. IEEE
Transactions on Knowledge and Data Engineering,
33(1):43–54.
Liu, S., Cao, J., Deng, Z., Zhao, W., Yang, R., Wen, Z., and
Yu, P. S. (2024). Neural abstractive summarization for
long text and multiple tables. IEEE Transactions on
Knowledge and Data Engineering, 36(6):2572–2586.
Narwadkar, Y. P. and Bagade, A. M. (2023). Abstractive
text summarization models using machine learning al-
gorithms. In 2023 7th International Conference On
Computing, Communication, Control And Automation
(ICCUBEA), pages 1–6.
INCOFT 2025 - International Conference on Futuristic Technology
862