Study on Copyright Infringement Liability of Short Video Platforms from the Perspective of Intelligent Algorithm Recommendation

Xiaopin Lyu¹ and Yaqing Yang²,*

¹Faculty of Law, Shandong Agricultural University, Tai'an, Shandong, 271018, China
²Faculty of Law, Zhejiang University of Technology, Hangzhou, Zhejiang, 310014, China
*Corresponding author

Keywords: Intelligent Algorithm Recommendation Technology, Short Video Platform, Copyright, Principle of Technology Neutrality.
Abstract: In the Internet era, as the short video industry continues to develop, intelligent algorithmic recommendation technology, which provides users with personalized content, has been widely adopted by short video platforms. However, the same technology has facilitated infringement on a large scale, damaging the rights and interests of copyright owners. China's existing legislation lacks specific, detailed provisions on the liability scenarios of "algorithmic recommendation". In judicial practice there is no uniform standard for determining whether, and how, a platform should be held liable, and academic opinion is likewise divided. To address this issue, this paper analyzes the problem from the perspective of the traditional safe harbor principle in conjunction with theories such as the principle of technological neutrality, examines the divergence in judicial practice on platforms' duty of care in light of classic cases, and on this basis proposes specific paths for reconfiguring the boundaries of liability.
1 INTRODUCTION
As the market scale of the short video industry grows explosively and the community of short video creators keeps expanding, intelligent algorithm recommendation technology has become the core means by which platforms enhance user stickiness and commercial revenue. Such technology can efficiently collect, mine and parse the massive user behavior data accumulated on a platform and thereby accurately identify each user's interests and preferences. Through information matching, content highly suited to a user's personalized needs is then automatically pushed into that user's field of vision, realizing a precise, automated information push service (Le et al., 2025).
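To make this mechanism concrete, the following minimal Python sketch illustrates the logic just described: aggregating user behavior into an interest profile and ranking candidate content by how well it matches. All names and data structures are hypothetical simplifications for illustration, not any platform's actual system.

```python
from collections import Counter

def infer_preferences(watch_history):
    """Aggregate tag frequencies across the videos a user has watched."""
    prefs = Counter()
    for video in watch_history:
        prefs.update(video["tags"])
    return prefs

def recommend(candidates, prefs, k=3):
    """Rank candidate videos by overlap between their tags and the
    user's inferred interest profile; return the top k matches."""
    scored = sorted(candidates,
                    key=lambda v: sum(prefs[t] for t in v["tags"]),
                    reverse=True)
    return [v for v in scored[:k] if sum(prefs[t] for t in v["tags"]) > 0]

history = [{"id": 1, "tags": ["drama", "costume"]},
           {"id": 2, "tags": ["drama", "romance"]}]
pool = [{"id": 3, "tags": ["drama", "costume"]},
        {"id": 4, "tags": ["sports"]}]
print(recommend(pool, infer_preferences(history)))  # video 3 ranks first
```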
However, this technology has also contributed to the large-scale dissemination of infringing behavior such as the clipping and re-posting of film and television works, resulting in frequent damage to the interests of
right holders and a sharp increase in the social cost of enforcing rights. In this context, the traditional "safe harbor principle", which relies on the "notice-and-delete" rule, lags behind, and platforms often invoke "technology neutrality" to avoid responsibility. At the same time, the principal technical difficulties are the high cost of identifying infringement within massive volumes of short video content and the still-limited accuracy and efficiency of the algorithms. Existing law lacks detailed provisions on the specific liability scenarios of "algorithmic recommendation", and in judicial practice courts are divided over the determination of a platform's duty of care.
Against this background of legislative gaps, divergent judicial practice and theoretical disputes, this paper summarizes the existing focus of controversy in specific judicial practice: whether a platform is liable for the results of algorithmic recommendation and, if so, how. Drawing on the relevant theories, it then proposes a specific path for reconfiguring the boundaries of liability.
2 RESEARCH STATUS
2.1 Status of Chinese Legislation
China's current legal framework has gradually improved the regulation of platform liability for algorithmic recommendation, but gaps remain in how the rules fit together. The Copyright Law amendment that took effect in 2021 expanded the scope of copyright protection by introducing the concept of "audiovisual works", but failed to clearly define platform liability in the context of algorithmic recommendation. Articles 1194-1197 of the Civil Code, the basic norms governing network infringement, establish platforms' joint and several liability when they "know or should know" of infringement; although they stipulate a "notice-and-delete" obligation, its application remains limited to a passive response mechanism that fails to cover the new form of dissemination created by active algorithmic recommendation. The State Council's Regulations on the Protection of the Right to Network Dissemination of Information continue the safe harbor principle, requiring platforms to respond passively to deletion requests and strengthening their obligation to deal with infringement after the fact, but they impose no mandatory requirement that platforms review and filter video content. The Regulations on the Administration of Algorithmic Recommendation of Internet Information Services came into force in 2022 and added important new rules: algorithmic recommendation service providers must not use technical means to spread infringing content; they must establish a system for handling user complaints; they should continuously improve their algorithms to reduce the risk of infringement; and they must take active steps, such as deletion and visibility controls, to limit the spread of illegal content.
However, China's existing laws lack detailed provisions on the specific liability scenarios of "algorithmic recommendation". How the relevant provisions should apply in specific cases, and what liability and duty of care a platform bears, merit further study.
2.2 China's Judicial Status and Policy Exploration
At the judicial level, some decisions, represented by iQIYI v. a short video platform, adopted the "red flag standard", found that the platform "should have known" of the infringing content drawn from a hit series, and thereby broke through the boundary of the traditional "notice-and-delete" rule, reflecting an expansive judicial interpretation of the duty of care in the context of technological innovation (Beijing Haidian District People's Court, 2018). In terms of industry governance, although the Regulations on the Administration of Algorithmic Recommendation of Internet Information Services advocate that algorithms be oriented "upward and toward good", their specific copyright protection provisions remain too rough, lacking clear and operable implementation rules.
Therefore, some scholars suggest drawing on the EU Digital Single Market Copyright Directive to promote platform adoption of filtering technologies such as Content ID and thereby build a preventive liability system. Other scholars oppose introducing that Directive's mandatory filtering mechanism, arguing that it may inhibit innovation and emphasizing the principle of technological neutrality in determining platform liability. Chinese legislation, for its part, has not yet included filtering technology within the scope of mandatory obligations (Tang, 2017 … Li, 2025).
Currently, legislation lags behind technological development, leaving the criteria for determining platform liability oscillating between judicial discretion and technological circumvention. In judicial practice, courts have not yet formed a unified standard for judging a platform's duty of care in the algorithmic recommendation scenario, and theorists and practitioners disagree over where the boundary of liability lies (Sin, 2024). The principle of "technological neutrality" has also gradually been exposed as a loophole open to abuse: some platforms invoke it to circumvent the obligation of active review, reflecting a risk of disorder in the rule system (Peng & Ding, 2022). How to establish a mechanism that balances technological innovation and rights protection, so as to regulate infringement arising from short video platforms' algorithmic recommendation, has become an important issue to which the copyright system of the digital era must respond.
3 THEORETICAL CONTROVERSIES AND ADJUDICATIVE DISAGREEMENTS
In judicial practice, platforms often invoke the "safe harbor principle" in their defense, arguing that upon receiving notice they took down or blocked the infringing content. As a result, the traditional "notice-and-delete" mechanism is in a predicament and infringement continues to worsen (Peng & Ding, 2022). At the same time, as algorithmic recommendation technology grows ever more widespread, platforms advocate "technological neutrality" as a means of avoiding both the obligation to review algorithmically recommended content and the liability arising from it. This has led to divergence, in scholarship and in judicial practice, over how to determine and apportion the duty of care a platform must assume.
3.1 Dilemmas in the Application of the Safe Harbor Principle
The "safe harbor rule" was created by the Digital
Millennium Copyright Act, enacted in the United
States, whose traditional "notice-and-takedown"
mechanism relies on manual review. However, short
video platforms actively intervene in the
dissemination of content through algorithms (e.g.,
dynamically adjusting search weights, implementing
personalized push), gradually building a new
governance framework of "algorithmic notification-
algorithmic deletion" and transferring the obligation
of censorship to algorithms. This shift has led to the
traditional "notice-and-takedown" mechanism falling
into an inefficient cycle - copyright owners are not
willing to defend their rights due to the difficulty of
proof and low compensation, while platforms have
formed a path of dependence and passively wait for
notices of infringement from rights holders (Li, 2025
& Xu & Wei, 2025). However, faced with a massive
amount of short video content, algorithms are
difficult to correctly and accurately identify fair use,
resulting in fair use content being mistakenly deleted,
and the anti-notification program, which should be a
relief mechanism, is difficult to play a substantive
role due to technical barriers and procedural idleness,
which objectively results in an imbalance between
copyright protection and freedom of expression (Li,
2025).
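The over-removal problem can be seen in a toy example. The sketch below, under openly hypothetical assumptions (crude title-token matching in place of real content fingerprinting), shows how a fully automated notice-and-delete loop sweeps up a commentary clip that would likely qualify as fair use along with the genuinely infringing upload.

```python
def similarity(video_title, claimed_work):
    """Crude token-overlap score against the claimed work's title; real
    systems compare content fingerprints, not titles."""
    a = set(video_title.lower().split())
    b = set(claimed_work.lower().split())
    return len(a & b) / max(len(b), 1)

def handle_notice(library, notice, threshold=0.8):
    """Automated 'notice-and-delete': remove anything scoring above the
    threshold, with no human review of possible fair use."""
    removed = [v for v in library if similarity(v, notice) >= threshold]
    kept = [v for v in library if similarity(v, notice) < threshold]
    return removed, kept

library = ["Yanxi Raiders episode 5 full clip",
           "Yanxi Raiders costume analysis commentary",  # likely fair use
           "cooking tutorial"]
removed, kept = handle_notice(library, "Yanxi Raiders")
print(removed)  # the commentary clip is swept up with the true infringement
```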
3.2 The Contradiction Between Platform Responsibility and Technological Neutrality
The core competitiveness of a platform as an information intermediary lies in the efficiency of its algorithm, which can accurately recommend goods and content based on users' preferences and browsing data, thereby enhancing the platform's commercial value. When recommending content, however, the platform may be driven by commercial interests to prioritize the content of certain copyright holders while ignoring the rights and interests of others, achieving differentiated management. Such selective copyright protection can produce unfair market competition and jeopardize the interests of copyright holders and users.
When faced with allegations of copyright infringement, platforms often claim technological neutrality and evade responsibility on the grounds that the algorithms run automatically and that they cannot control their own recommendation results (Zhou, 2023). This has triggered reflection on technology neutrality: is the technology truly neutral, and should the platform take responsibility for the results of algorithmic recommendation? Algorithmic recommendation is not a completely neutral technology. Driven by its commercial interests, and in order to maximize traffic and commercial returns, the platform actively intervenes in content distribution through parameter adjustments. For example, it may increase the recommendation weight of certain popular or paid content and reduce the probability that other content is recommended. Such intervention forms the basis for finding that the platform "should know" of infringement under Article 1197 of the Civil Code: the platform should know that the content it recommends may carry a risk of infringement, yet recommends it anyway, and should therefore bear liability for infringement. This shows that in algorithmic recommendation the platform is not a passive technology provider but a subject with subjective intent and the capacity for control, and it should bear a duty of care. Given the non-neutrality of algorithmic recommendation, the platform's obligations should also be expanded. From the standpoint of foreseeability, the platform, as developer and manager of the algorithm, should foresee the risk of infringement and other negative effects that algorithmic recommendation may bring. From the standpoint of control, platforms can adjust and optimize their algorithms through technical means so as to control the recommendation results. Platforms therefore need to take responsibility for recommendation results and, especially for popular content, assume a higher audit obligation. For example, platforms should apply technical filtering (e.g., video fingerprinting) and manual review to popular dramas and well-known works, forming a "selective" protection mechanism.
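The "selective" protection mechanism just described can be sketched as a two-stage screen: exact fingerprint matching against a curated library of popular works, with uploads whose metadata mentions those works routed to manual review. Everything in the sketch (the fingerprint stand-in, the library, the titles) is an illustrative assumption, not a description of any platform's deployed filter.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Stand-in for a perceptual video/audio fingerprint; a real system
    would match transformed copies, not only byte-identical files."""
    return hashlib.sha256(content).hexdigest()

# Hypothetical curated fingerprint library of popular protected works.
PROTECTED = {fingerprint(b"hit-drama-ep1-master"): "Hit Drama ep. 1"}
HOT_TITLES = {"hit drama"}  # works currently under heightened scrutiny

def screen_upload(content: bytes, title: str) -> str:
    if fingerprint(content) in PROTECTED:
        return "block"          # exact match with a protected work
    if any(t in title.lower() for t in HOT_TITLES):
        return "manual_review"  # popular works get the higher audit duty
    return "publish"

print(screen_upload(b"hit-drama-ep1-master", "ep1 reupload"))     # block
print(screen_upload(b"my-reaction", "Hit Drama reaction video"))  # manual_review
print(screen_upload(b"cat-video", "my cat sleeping"))             # publish
```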
3.3 Differences in Judicial Practice on the Recognition of the Duty of Care of Platforms
3.3.1 Typological Analysis of Typical Cases
The differences in determining a platform's duty of care in the algorithmic recommendation scenario are concentrated in how judicial decisions weigh the principle of "technological neutrality" against the "should know" standard. Sorting the typical cases into two types yields a "strict standard" path and a "lenient standard" path of adjudication.
First, the strict standard, exemplified by iQIYI v. ByteDance (Beijing Haidian District People's Court, 2018). In that case, the platform used an algorithm to push to users a large number of clips infringing the copyright in Yanxi Raiders. The court broke through the traditional "notice-and-delete" mechanism and held that the platform's failure to take necessary measures, despite knowing of large-scale infringement by its users, constituted aiding infringement. As Table 1 shows, the court also reasoned that the platform was "knowingly" at fault from three aspects: technical interference, commercial profitability, and technical feasibility. The final decision rejected the platform's "technology neutrality" defense and imposed a higher duty of care on the platform. The case demonstrates a strict standard of adjudication, and the judgment's delineation of the scope of the platform's "due diligence" has also been a source of controversy.
The second is the lenient standard, exemplified by the copyright dispute over The General History of China in the Museum (Beijing Intellectual Property Court, 2024). In that case, the platform recommended infringing documentaries uploaded by users through a collaborative filtering algorithm but did not set specific recommendation rules. As Table 1 shows, the court strictly applied the standard of "knowledge or substantial assistance" and held that, with respect to the infringing videos other than those repeatedly uploaded by the user "Duo Moumou", the platform's recommendations were generated automatically from user behavioral data rather than resulting from the platform's active intervention.
Table 1: Comparison of adjudication standards between iQIYI v. ByteDance and the copyright dispute over The General History of China in the Museum.

| Dimension | iQIYI v. ByteDance | The General History of China in the Museum |
|---|---|---|
| Whether the platform actively intervened with the technology | The platform actively intervened, using algorithms to actively recommend infringing content. | The platform did not actively intervene; it only recommended through collaborative filtering algorithms and set no special recommendation rules. |
| Platform commercial profitability | The platform profited from actively intervening by recommending and distributing infringing content. | The recommendations were generated automatically from user behavioral data, not by the platform's active intervention, and the platform had no intention of profiting from infringing content. |
| Technical feasibility | The platform had relatively sophisticated copyright filtering technology but did not take the necessary measures to block distribution of the infringing content. | The platform had relatively mature copyright filtering technology and did not stop repeat infringement commensurate with its distribution capacity, but it removed the other infringing videos in a timely manner. |
| Scope of the platform's "due diligence" | The platform had sufficient conditions, capacity and reasonable grounds to know that the user had committed the infringement in question. | The platform's algorithmic recommendation was personalized recommendation tailored to user behavioral data, different from active editorial recommendation, and the platform did not know that the personalized recommendations included infringing videos. |
The holdings in this case reflect the judiciary's neutral and tolerant stance towards algorithmic recommendation technology and reject the view that platforms adopting algorithms should bear a higher degree of responsibility.
3.3.2 Focus of Divergence in Court Decisions
Algorithmic recommendation technology has had an impact on the traditional tort liability system, reflected mainly in diverging interpretations of the elements of "should know" and "necessary measures" in Article 1197 of the Civil Code. In judicial practice, courts have determined these two elements very differently, and the determination of a platform's duty of care and liability has likewise generated considerable disagreement. The "should know" standard rests on the traditional "red flag" standard, under which the fact of infringement is obvious and easily recognized, but the emergence of algorithmic recommendation technology makes this standard difficult to apply (Chen, 2023). Is there a presumption that a platform "should have known" simply because it uses algorithmic recommendation techniques? If the algorithm has difficulty recognizing infringing videos when they are first uploaded, does the platform bear a "due diligence" obligation? These questions require further study.
4 PATHS TO RECONFIGURE THE BOUNDARIES OF RESPONSIBILITY
4.1 Amendment of the Principle of "Technology Neutrality"
By introducing a standard of "substantial contribution to infringement", the neutrality defense is negated where the algorithm design significantly increases the risk of infringement (e.g., collaborative filtering that recommends popular infringing content). The "initiative" and "purpose" of the algorithm design are key to determining liability: if a platform deliberately guides users to infringing content through collaborative filtering and heat weighting, its behavior lies beyond the scope of technological neutrality. In the ByteDance case, for example, the court found that the platform actively recommended infringing short videos through its algorithms, which significantly improved the efficiency of their dissemination and constituted a substantial promotion of infringement, so the platform could not invoke the principle of "technological neutrality" to exempt itself from liability. In addition, the recommender system transparency provisions of the EU's Digital Services Act (DSA) require platforms to disclose the logic of their algorithmic recommendations, assisting courts in assessing whether an algorithm tends toward infringement. For example, if a platform's algorithm gives priority to unauthorized film and television clips, the platform may bear liability for indirect infringement on account of the algorithm's "inducing" design, even though it did not upload the content itself.
Judicial determination of a platform's subjective fault can be aided by algorithmic transparency requirements. Under an algorithm filing system, platforms would be required to file core recommendation algorithm parameters and update records with the regulatory authorities to ensure traceability after the fact; China's Regulations on the Administration of Algorithmic Recommendation of Internet Information Services already set out filing requirements, which could be extended to copyright infringement scenarios. Under an obligation to provide explanatory reports, platforms in infringement litigation would be required to submit a report explaining the algorithm's decision-making logic and demonstrating that reasonable measures were taken to prevent the proliferation of infringing content; if the report contains logical contradictions or evades key issues, the platform is presumed to be at fault.
As for third-party technical audits, independent technical organizations can be introduced to conduct compliance audits of algorithms, focusing on the effectiveness of their copyright filtering measures through indicators such as comparison coverage and false positive rate.
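As a minimal sketch of how an auditor might compute these two indicators from a labeled evaluation set (the field names are assumptions of this illustration, not a standardized audit format):

```python
def audit_metrics(samples):
    """Compute the two indicators from a labeled evaluation set:
    coverage            = share of truly infringing items the filter flagged;
    false positive rate = share of legitimate items wrongly flagged."""
    infringing = [s for s in samples if s["infringing"]]
    legitimate = [s for s in samples if not s["infringing"]]
    coverage = sum(s["flagged"] for s in infringing) / max(len(infringing), 1)
    fpr = sum(s["flagged"] for s in legitimate) / max(len(legitimate), 1)
    return coverage, fpr

samples = [{"infringing": True,  "flagged": True},
           {"infringing": True,  "flagged": False},
           {"infringing": False, "flagged": True},
           {"infringing": False, "flagged": False}]
cov, fpr = audit_metrics(samples)
print(f"coverage={cov:.0%}, false positive rate={fpr:.0%}")  # 50%, 50%
```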
4.2 Setting a Dynamic Standard of Duty of Care
4.2.1 Tiered Model of Obligations
Based on the network tort liability provisions of Article 1195 of the Civil Code, combined with the "notice-and-delete" rule of Article 42 of the E-Commerce Law and the "necessary limits of technical measures" of Article 24 of the Cybersecurity Law, a tiered model should be adopted in which platform liability is positively correlated with the platform's degree of control over the algorithm. The platform's obligations should accordingly be judged comprehensively on its control over the algorithm (e.g., whether it actively sets recommendation rules), the popularity of the infringing content, and any record of repeated infringement. At the first level, where the platform designs the algorithmic rules entirely independently, it must assume obligations of prior filtering (e.g., deploying a copyright fingerprinting system) and real-time monitoring of hot content. At the second level, where third-party algorithm services are used, the compliance of the third-party algorithms must be audited and reports on the handling of infringing content submitted regularly. At the third level, where the platform only provides basic recommendation functions, it fulfills the "notice-and-delete" obligation but must take measures to restrict or block repeat infringing users.
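A schematic encoding of this tiered model might look as follows; the tier criteria are a simplified reading of the text above, not statutory language.

```python
def duty_tier(designs_own_rules: bool, uses_third_party_algorithm: bool) -> dict:
    """Map a platform's degree of algorithmic control to the obligations
    of the corresponding tier in the model sketched above."""
    if designs_own_rules:
        return {"tier": 1, "duties": ["prior filtering (copyright fingerprinting)",
                                      "real-time monitoring of hot content"]}
    if uses_third_party_algorithm:
        return {"tier": 2, "duties": ["audit third-party algorithm compliance",
                                      "periodic infringement-handling reports"]}
    return {"tier": 3, "duties": ["notice-and-delete",
                                  "restrict or block repeat infringers"]}

print(duty_tier(designs_own_rules=True, uses_third_party_algorithm=False))
```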
4.2.2 Technical Feasibility Considerations
Based on technical feasibility and filtering effectiveness, platforms should adopt filtering measures that match their algorithmic capabilities (such as keyword blocking and comparison against a copyright library). Small and medium-sized platforms can implement basic measures such as keyword blocking and MD5 hash comparison, while large platforms can deploy AI image recognition and audio fingerprinting technologies (e.g., YouTube Content ID) to intercept infringing content accurately, or explore blockchain evidence preservation and automatic authorization via smart contracts, so as to build a full-chain "creation-dissemination-rights defense" copyright protection ecosystem.
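For the small and medium platform baseline, a minimal sketch of keyword blocking plus MD5 comparison follows; the library contents and keywords are invented for illustration.

```python
import hashlib

# Hypothetical copyright library of MD5 digests of licensed master files.
COPYRIGHT_LIBRARY = {hashlib.md5(b"licensed-film-master").hexdigest()}
BLOCKED_KEYWORDS = {"full movie", "complete episode"}

def basic_filter(content: bytes, title: str) -> bool:
    """Return True if the upload should be blocked. MD5 comparison only
    catches byte-identical re-uploads, which is why the text reserves
    perceptual fingerprinting (e.g., Content ID) for larger platforms."""
    if hashlib.md5(content).hexdigest() in COPYRIGHT_LIBRARY:
        return True  # exact copy of a known protected file
    return any(kw in title.lower() for kw in BLOCKED_KEYWORDS)

print(basic_filter(b"licensed-film-master", "random title"))  # True
print(basic_filter(b"re-edited clip", "Full Movie HD"))       # True
print(basic_filter(b"my-own-vlog", "daily vlog"))             # False
```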
4.3 Construction of a Multi-Dimensional Co-Governance Mechanism
The reconfiguration of the liability boundary should balance copyright protection against technological development. The platform, as the beneficiary of the algorithm, should bear primary responsibility for data security under Article 9 of the Data Security Law; at the same time, "one-size-fits-all" censure should be avoided and platforms allowed to optimize their algorithms gradually within the scope of technical feasibility. For rights holders, the exceptions to the "safe harbor principle" should be improved and the standard of proof for substantive infringement clarified. For users, room for fair use must be protected, avoiding excessive filtering that would inhibit secondary creation and cultural exchange. Cooperation between platforms and rights holders can be realized through a copyright pre-authorization database and an efficient notice-and-delete mechanism, alongside the introduction of algorithmic ethics review and third-party technical assessment. In the future, market-based mechanisms such as "algorithmic liability insurance" could be explored to diversify platforms' compliance risk, while the industry is encouraged to form its own technical self-regulatory standards.
5 CONCLUSION
At a time when intelligent algorithmic recommendation is booming, the copyright infringement liability of short video platforms has become an increasingly prominent issue. From amending the principle of "technology neutrality" to establishing a dynamic duty of care standard and improving the infringement relief mechanism, this series of measures aims to balance technological innovation against copyright protection. As technology continues to evolve, however, new forms of infringement and complex issues will keep emerging. In the context of globalization, short videos are disseminated without regard to borders, and international cooperation on short video copyright protection will therefore become increasingly close. Countries need to strengthen exchange and collaboration, jointly formulate internationally accepted rules and standards for short video copyright protection, combat cross-border infringement, and create a favorable international environment for the healthy development of the short video industry. Going forward, continuous attention must be paid to the development of the industry, and the relevant legal system and governance measures continuously improved, so that short video platforms keep innovating on a legal and compliant track and a win-win between technological progress and copyright protection is realized.
AUTHORS' CONTRIBUTIONS
All authors contributed equally; their names are listed in alphabetical order.
REFERENCES
B. Chen, 2023. Multi-dimensional thinking of platform
algorithmic regulation: from case-by-case infringement
adjudication to algorithmic comprehensive governance.
L. A. J. 3, 91-94.
Beijing Haidian District People's Court. (2021, December
31). Civil Judgment [(2018) Jing 0108 Min Chu No.
49421].
https://www.sohu.com/a/518790526_121123754
Beijing Intellectual Property Court. (2024, December 19).
Civil Judgment [(2024) Jing 73 Min Zhong No.180].
C. Y. Le, Z. X. Wang, J. P. Zhang, et al., 2025. Research on user algorithm response behavior under intelligent recommendation of short-video platforms. Libr. Dev. 4, 1-28.
G. B. Peng, Y. W. Ding, 2022. The Copyright Dispute of Short Video Platform in Intelligent Communication and Its Governance Path: An Appraisal of the First Case of Algorithmic Recommendation. S. J. R. 9, 86-96.
H. Zhou, 2021. Governance of Imbalance of Interests in Copyright License Contracts: A Mirror of the EU Digital Single Market Copyright Directive. Intellect. Prop. 5, 41-55.
J. M. Sin, 2024. Controversy and Optimization of Judicial
Determination of Copyright Duty of Care for Short
Video Platforms under Algorithmic Recommendation.
J. S. U. (P. S. S.). 26, S1, 106-111.
K. Y. Xu, J. Wei, 2025. Study on Selective Copyright
Protection Behavior of Short Video Platform
Algorithms. S. S. I. G. 2, 38-52, 286.
S. H. Tang, 2017. Study on Copyright Exceptions for Text
and Data Mining in Big Data Environment - A
Perspective on the EU's Proposal for the DSM
Copyright Directive. Intellect. Prop. 10, 109-116.
S. H. Zhou, 2023. A Study on the Determination of Short Video Platforms' Duty of Care in the "First Case of Algorithmic Recommendation". S. S. I. G. 4, 64-72.
X. Y. Li, 2025. Systematic Regulation of Algorithmic
Recommendation Infringement on Short Video
Platforms. H. L. S. 6, 147-163.
Y. Wan, 2021. China's Choice of Mandatory Filtering
Mechanism in Copyright Law. S. I. L. B. 6, 184-196.