framework is complex, which makes model training and optimization equally complex and demands substantial time and computational resources. Second, although DMATS considers task effectiveness in task scheduling, it still assumes some correlation between different tasks. This assumption may not hold in practical applications, especially in scenarios where user preferences are highly diverse, which indicates a lack of scenario generalization ability.
Knowledge graph methods are also widely used in cold-start recommendation; KEGL is a representative example. KEGL proposes a collaborative-enhanced guaranteed embedding generator to ensure embedding quality and constructs a knowledge-enhanced gated attention aggregator to adaptively control the weights of the guaranteed embedding and the neighbor embeddings, which effectively improves recommendation quality and alleviates the item CSP. Even so, the method cannot escape the problems common to knowledge graph approaches. First, its effectiveness depends heavily on the quality and completeness of the knowledge graph; when the graph is incomplete or of low quality, recommendation performance degrades. Second, the two additional modules add to an already considerable computational complexity and raise the time cost of model training. Finally, although KEGL considers cold-start neighbors, the lack of interaction information in a completely cold-start scenario still reduces recommendation effectiveness.
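To make the gating idea concrete, the following minimal sketch (in PyTorch) shows how a generated "guaranteed" embedding and an aggregate of knowledge-graph neighbor embeddings can be combined through a learned gate. The class name, dimensions, and mean aggregation are illustrative assumptions and do not reproduce KEGL's actual architecture.

# Minimal sketch (not KEGL's implementation) of gated aggregation that
# adaptively weighs a generated "guaranteed" embedding against an
# aggregate of knowledge-graph neighbor embeddings.
import torch
import torch.nn as nn


class GatedAggregator(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # The gate sees both embeddings and outputs per-dimension weights in (0, 1).
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, guaranteed_emb: torch.Tensor, neighbor_embs: torch.Tensor) -> torch.Tensor:
        # guaranteed_emb: (batch, dim) embedding produced by the generator.
        # neighbor_embs:  (batch, n_neighbors, dim) embeddings of KG neighbors.
        neighbor_agg = neighbor_embs.mean(dim=1)  # simple mean aggregation (assumption)
        g = self.gate(torch.cat([guaranteed_emb, neighbor_agg], dim=-1))
        # Convex combination: the gate decides, per dimension, how much to
        # trust the generated embedding versus the neighborhood signal.
        return g * guaranteed_emb + (1.0 - g) * neighbor_agg


# Usage example with random tensors.
agg = GatedAggregator(dim=64)
item_emb = agg(torch.randn(8, 64), torch.randn(8, 5, 64))
print(item_emb.shape)  # torch.Size([8, 64])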
LLM-based methods are discussed last. On the one hand, compared with the other approaches, LLM-based recommendation is still nascent: taking AutoDisenSeq-LLM as an example, its performance on the CSP is not yet competitive with the other methods. On the other hand, precisely because the approach is young, it leaves more room for exploration, and LLMs are likely to become a major trend in CSP research. The advantage of LLMs is obvious: AutoDisenSeq-LLM exploits their strong text comprehension to re-rank and refine the recommendation list, which clearly improves recommendation accuracy. The drawback is equally obvious: the model is highly complex and demands substantial computational resources and time, a problem prevalent in current methods.
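As an illustration of the re-ranking idea only, the sketch below shows how a candidate list might be re-ordered with an LLM. The call_llm function is a hypothetical placeholder for whatever LLM interface is available, and the prompt format and parsing are assumptions rather than the actual AutoDisenSeq-LLM pipeline.

# Hedged sketch of LLM-based re-ranking; `call_llm` is a hypothetical
# placeholder, not AutoDisenSeq-LLM's real interface.
from typing import Callable, List


def rerank_with_llm(user_profile: str,
                    candidates: List[str],
                    call_llm: Callable[[str], str]) -> List[str]:
    # Expose the user's textual profile and the candidate titles, and ask
    # the LLM to return a ranked order of candidate indices.
    numbered = "\n".join(f"{i}. {title}" for i, title in enumerate(candidates))
    prompt = (
        "User profile:\n" + user_profile + "\n\n"
        "Candidate items:\n" + numbered + "\n\n"
        "Return the candidate indices, best first, as a comma-separated list."
    )
    reply = call_llm(prompt)
    # Parse the reply defensively; fall back to the original order for
    # anything the model omitted or garbled.
    order, seen = [], set()
    for token in reply.replace(",", " ").split():
        if token.isdigit() and int(token) < len(candidates) and int(token) not in seen:
            order.append(int(token))
            seen.add(int(token))
    order += [i for i in range(len(candidates)) if i not in seen]
    return [candidates[i] for i in order]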
Current evaluation metrics for cold-start
problems, such as MAE, RMSE, Precision, Recall,
and NDCG, provide a multidimensional assessment
of method performance. However, the lack of standardized metrics makes comprehensive comparison across methods difficult. A more
standardized and comprehensive assessment
framework is needed to address this problem. In terms
of datasets, while public datasets such as MovieLens,
Amazon, and Yelp support cold-start research, they
tend to suffer from low timeliness, slow updates, and
single-domain limitations. These issues hinder the
assessment of method generalizability and cross-
domain capabilities. In addition, some datasets
contain sensitive user information, raising privacy
concerns in an era when data security is increasingly
important. Future datasets should prioritize timeliness,
multi-domain coverage, and privacy protection to
better support the development and evaluation of
cold-start recommendation methods.
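For reference, the top-K metrics mentioned above follow their standard definitions. The sketch below computes Recall@K and binary-relevance NDCG@K for a single user; the function names and inputs are illustrative and not tied to any particular benchmark's tooling.

# Standard top-K metric definitions used in cold-start evaluation;
# names and inputs here are illustrative.
import math
from typing import List, Set


def recall_at_k(ranked: List[int], relevant: Set[int], k: int) -> float:
    # Fraction of the user's relevant items that appear in the top-K list.
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / len(relevant) if relevant else 0.0


def ndcg_at_k(ranked: List[int], relevant: Set[int], k: int) -> float:
    # Binary-relevance NDCG: DCG of the ranked list normalised by the DCG
    # of an ideal ranking that places all relevant items first.
    dcg = sum(1.0 / math.log2(i + 2) for i, item in enumerate(ranked[:k]) if item in relevant)
    ideal_hits = min(len(relevant), k)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0


# Example: items 3 and 7 are relevant; the model ranked 7 first.
print(recall_at_k([7, 1, 3, 9], {3, 7}, k=3))  # 1.0
print(ndcg_at_k([7, 1, 3, 9], {3, 7}, k=3))    # ~0.92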
4 CONCLUSION
This paper focuses on the current state of the CSP in
RS and analyzes the principles, performance, and
limitations of three advanced approaches: meta-
learning, knowledge graphs, and LLMs. Specifically,
DMATS improves embedding accuracy and
automation, KEGL enhances recommendation
quality through high-quality embeddings, and
AutoDisenSeq-LLM leverages the text
comprehension ability of LLMs to optimize
recommendation lists, thereby improving accuracy.
Experiments with datasets such as Amazon,
MovieLens, and Yelp demonstrate good performance;
however, challenges such as weak scenario generalization, low computational efficiency, and issues related to dataset timeliness and privacy still
persist. Future research will focus on integrating
LLMs with other advanced methods to leverage their
strengths, improving algorithm efficiency while
reducing computational costs. Additionally, efforts
will be made to develop privacy-preserving datasets
to enhance cross-domain generalization and real-
world applicability. These efforts aim to further improve the performance and generalizability of RS in dynamic contexts.
REFERENCES
Finn, C., Abbeel, P., & Levine, S. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, 1126-1135.
He, D., Cui, J., Wang, X., Song, G., Huang, Y., & Wu, L.
2025. Dual Enhanced Meta-learning with Adaptive