extraction module based on topic perception and title orientation, which enriches semantic features by mining latent topics and uses the highly condensed, high-value information in the title to jointly guide keyword generation. Secondly, we added the ERNIE pre-trained language model to strengthen the representation of Chinese syntactic structure and entity phrases. Finally, we added a keyword-information pointer to the original single-pointer generation network, forming a dual-pointer network that fully exploits the extracted keyword information. Results on the LCSTS dataset show that, compared with other current methods, our method generates summaries with less redundancy, covers more of the critical information in the source text, is more readable, and achieves improvements on the ROUGE metrics.
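To make the dual-pointer idea concrete, the following is a minimal PyTorch sketch of a final output distribution that mixes a generation distribution with copy distributions over the source text and over the extracted keywords. It assumes the attention weights are already computed; the three-way gate design, the tensor shapes, and the name dual_pointer_distribution are illustrative assumptions, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def dual_pointer_distribution(vocab_logits, src_attn, src_ids,
                                  kw_attn, kw_ids, gate_logits):
        # Gates for [generate, copy-from-source, copy-from-keywords];
        # softmax keeps the three mixture weights non-negative and
        # summing to one (a hypothetical gate design for illustration).
        p_gen, p_src, p_kw = F.softmax(gate_logits, dim=-1).unbind(-1)
        p_vocab = F.softmax(vocab_logits, dim=-1)   # (batch, vocab)
        final = p_gen.unsqueeze(-1) * p_vocab
        # Scatter each copy probability onto the vocabulary id of the
        # source or keyword token it points at.
        final = final.scatter_add(-1, src_ids, p_src.unsqueeze(-1) * src_attn)
        final = final.scatter_add(-1, kw_ids, p_kw.unsqueeze(-1) * kw_attn)
        return final                                # each row sums to 1

    # Toy check: batch of 2, vocabulary of 50, 7 source tokens, 3 keywords.
    B, V, S, K = 2, 50, 7, 3
    dist = dual_pointer_distribution(
        torch.randn(B, V),
        F.softmax(torch.randn(B, S), -1), torch.randint(0, V, (B, S)),
        F.softmax(torch.randn(B, K), -1), torch.randint(0, V, (B, K)),
        torch.randn(B, 3))
    print(dist.sum(-1))  # tensor([1., 1.])

Because the gate is a softmax over three options, the mixture remains a valid probability distribution, and the keyword pointer can raise the probability of extracted keyword tokens even when the source-text attention does not favor them.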
REFERENCES
Sutskever I, Vinyals O, Le Q V 2014 Sequence to
Sequence Learning with Neural Networks Preprint
arXiv:1409.3215.
Rush A M, Chopra S, Weston J 2015 A Neural Attention Model for Abstractive Sentence Summarization Preprint arXiv:1509.00685.
Over P, Dang H, Harman D 2007 DUC in context Information Processing & Management, vol. 43, no. 6, pp. 1506-1520 DOI:10.1016/j.ipm.2007.01.019.
Napoles C, Gormley M, Durme B V 2012 Annotated
Gigaword Proceedings of the Joint Workshop on
Automatic Knowledge Base Construction and Web-
scale Knowledge Extraction.
Nallapati R, Zhou B, Santos C N D, et al. 2016 Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond Preprint arXiv:1602.06023.
Hu B, Chen Q, Zhu F 2015 LCSTS: A Large Scale Chinese Short Text Summarization Dataset Preprint arXiv:1506.05865.
See A, Liu P J, Manning C D 2017 Get To The Point: Summarization with Pointer-Generator Networks Preprint arXiv:1704.04368.
Wan X, Yang J, Xiao J 2007 Towards an Iterative Reinforcement Approach for Simultaneous Document Summarization and Keyword Extraction Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL 2007).
Xu W, Li C, Lee M, et al. 2020 Multi-task learning for abstractive text summarization with key information guide network EURASIP Journal on Advances in Signal Processing, vol. 2020, no. 1 DOI:10.1186/s13634-020-00674-7.
Li H, Zhu J, Zhang J, et al. 2020 Keywords-Guided Abstractive Sentence Summarization Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 5, pp. 8196-8203 DOI:10.1609/aaai.v34i05.6333.
Akuma S, Lubem T, Adom I T 2022 Comparing Bag of Words and TF-IDF with different models for hate speech detection from live tweets International Journal of Information Technology, vol. 14, no. 7, pp. 3629-3635 DOI:10.1007/s41870-022-01096-4.
Sun Y, Wang S, Li Y, et al. 2019 ERNIE: Enhanced
Representation through Knowledge Integration
Preprint arXiv:1904.09223.
Mikolov T, Karafiát M, Burget L, et al. 2010 Recurrent neural network based language model Interspeech 2010, Conference of the International Speech Communication Association, Makuhari, Chiba, Japan, September 2010.
Gu J, Lu Z, Li H, et al. 2016 Incorporating Copying Mechanism in Sequence-to-Sequence Learning Preprint arXiv:1603.06393.
Yin X, May J, Zhou L, et al. 2020 Question Generation for
Supporting Informational Query Intents Preprint
arXiv:2010.09692.
Huang J, Pan L, Xu K, et al. 2020 Generating Pertinent
and Diversified Comments with Topic-aware Pointer-
Generator Networks Preprint arXiv:2005.04396.