Dagdelen, J., Dunn, A., Lee, S., Walker, N., Rosen, A. S.,
Ceder, G., ... & Jain, A. (2024). Structured Information
Extraction from Scientific Text with Large Language
Models. Nature Communications, 15(1), 1418.
Datta, S., & Roberts, K. (2022). Fine-Grained Spatial
Information Extraction in Radiology as Two-Turn
Question Answering with BERT. International Journal
of Medical Informatics, 162, 104754.
Gupta, T., Zaki, M., Krishnan, N. M. A., & Mausam. (2022).
MatSciBERT: A Materials Domain Language Model for
Text Mining and Information Extraction. npj
Computational Materials, 8, 141.
Hu, Z., Wang, M., & Han, Y. (2024). Multidimensional
Indicator Identification and Evolution Analysis of
Emerging Technology Topics Based on LDA2Vec-BERT:
A Case Study of Blockchain Technology in
the Field of Disruptive Technology. Journal of Modern
Information, 44(9), 42-58. (in Chinese)
Liu, Y. (2025). Discovering Topics and Trends in
Biosecurity Law Research: A Machine Learning
Approach. One Health, 20, 100964.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., … &
Stoyanov, V. (2019). RoBERTa: A Robustly Optimized
BERT Pretraining Approach. arXiv preprint
arXiv:1907.11692. https://arxiv.org/abs/1907.11692.
Liu, Z., Yang, J., & Ma, Y. (2023). Research on Keyword
Survival Model from the Perspective of Term Function.
Journal of Intelligence, 42(8), 150-156+176. (in
Chinese)
Mohtasham, F., Yazdizadeh, B., & Mobinizadeh, M.
(2023). Research Gaps Identified in Iran’s Health
Technology Assessment Reports. Health Research
Policy and Systems, 21(1), 132.
Morris, S. A., Yen, G., Wu, Z., & Asnake, B. (2003). Time
Line Visualization of Research Fronts. Journal of the
American Society for Information Science and
Technology, 54(5), 413-422.
Onishi, T., Kadohira, T., & Watanabe, I. (2018). Relation
Extraction with Weakly Supervised Learning Based on
Process-Structure-Property-Performance
Reciprocity. Science and Technology of Advanced
Materials, 19(1), 649-659.
Qiao, X., Gu, S., Cheng, J., Peng, C., Xiong, Z., Shen, H.,
& Jiang, G. (2025). Fine-Grained Entity Recognition
via Large Language Models. IEEE Transactions on
Neural Networks and Learning Systems.
Rodríguez, A. J. C., Castro, D. C., & García, S. H. (2022).
Noun-Based Attention Mechanism for Fine-Grained
Named Entity Recognition. Expert Systems with
Applications, 193, 116406.
Tan, C., & Xiong, M. (2021). Contrastive Analysis at Home
and Abroad on the Evolution of Hot Topics in the Field
of Data Mining Based on LDA Model. Information
Science, 39(4), 174-185. (in Chinese)
Thakuria, A., & Deka, D. (2024). A Decadal Study on
Identifying Latent Topics and Research Trends in Open
Access LIS Journals Using Topic Modeling
Approach. Scientometrics, 129(7), 3841-3869.
Wang, G., Shi, R., Cheng, W., Gao, L., & Huang, X. (2023).
Bibliometric Analysis for Carbon Neutrality with
Hotspots, Frontiers, and Emerging Trends Between
1991 and 2022. International Journal of Environmental
Research and Public Health, 20(2), 926.
Wei, X., Cui, X., Cheng, N., Wang, X., Zhang, X., Huang,
S., ... & Han, W. (2023). ChatIE: Zero-Shot Information
Extraction via Chatting with ChatGPT. arXiv preprint
arXiv:2302.10205.
Westgate, M. J., Barton, P. S., Pierson, J. C., &
Lindenmayer, D. B. (2015). Text Analysis Tools for
Identification of Emerging Topics and Research Gaps
in Conservation Science. Conservation Biology, 29(6),
1606-1614.
Wu, R., Zong, H., Wu, E., Li, J., Zhou, Y., Zhang, C., ... &
Shen, B. (2025). Improving Large Language Models
for miRNA Information Extraction via Prompt
Engineering. Computer Methods and Programs in
Biomedicine, 109033.
Xu, X., Zheng, Y., & Liu, Z. (2016). Study on the Method
of Identifying Research Fronts Based on Scientific
Papers and Patents. Library and Information Service, 60
(24), 97-106. (in Chinese)
Yin, X., Zheng, S., & Wang, Q. (2021). Fine-Grained
Chinese Named Entity Recognition Based on RoBERTa-
WWM-BiLSTM-CRF Model. 6th International
Conference on Intelligent Computing and Signal
Processing.
Yu, L., Qian, L., Fu, C., & Zhao, H. (2019). Extracting
Fine-Grained Knowledge Units from Texts with Deep
Learning. Data Analysis and Knowledge Discovery,
3(1), 38-45. (in Chinese)
Zhang, X., Li, P., & Li, H. (2020). AMBERT: A Pre-Trained
Language Model with Multi-Grained Tokenization.
arXiv preprint arXiv:2008.11869.