3 THEORETICAL CONTROVERSIES AND ADJUDICATIVE DISAGREEMENTS
In judicial practice, platforms frequently invoke the "safe harbor principle" as a defense, arguing that upon receiving notice they took down or blocked the infringing content. This leaves the traditional "notice-and-takedown" mechanism in a predicament, while the phenomenon of infringement continues to worsen (Peng & Ding, 2022). At the same time, as algorithmic recommendation technology grows ever more widespread, platforms invoke "technological neutrality" to avoid the obligation to review algorithmically recommended content, as well as the liability arising therefrom. These phenomena have produced divergences, both in academia and in judicial practice, over how the platform's duty of care should be determined and apportioned.
3.1 Dilemmas in the Application of the
Safe Harbor Principle
The "safe harbor rule" originated in the Digital Millennium Copyright Act enacted in the United States, whose traditional "notice-and-takedown" mechanism relies on manual review. Short video platforms, however, actively intervene in the dissemination of content through algorithms (e.g., dynamically adjusting search weights and implementing personalized push), gradually building a new governance framework of "algorithmic notification and algorithmic deletion" that transfers the review obligation to algorithms. This shift has trapped the traditional notice-and-takedown mechanism in an inefficient cycle: copyright owners are reluctant to enforce their rights because proof is difficult and compensation is low, while platforms have developed path dependence and passively wait for infringement notices from rights holders (Li, 2025; Xu & Wei, 2025). Moreover, faced with massive volumes of short video content, algorithms struggle to identify fair use correctly and accurately, with the result that fair use content is mistakenly deleted; meanwhile the counter-notification procedure, which should serve as a relief mechanism, can hardly play a substantive role owing to technical barriers and procedural idleness. The objective result is an imbalance between copyright protection and freedom of expression (Li, 2025).
3.2 The Contradiction Between
Platform Responsibility and
Technological Neutrality
The core competitiveness of a platform acting as an information intermediary lies in the efficiency of its algorithm, which can accurately recommend goods and content based on users' preferences and browsing data, enhancing the platform's commercial value. When recommending content, however, a platform may be driven by commercial interests to prioritize pushing the content of certain copyright holders while ignoring the rights and interests of others, thereby achieving differentiated management. Such selective copyright protection can lead to unfair market competition and jeopardize the interests of copyright holders and users alike.
When faced with allegations of copyright infringement, platforms often claim technological neutrality and evade responsibility on the ground that the algorithms run automatically and that the platforms cannot control the results of their own recommendations (Zhou, 2023). This has prompted reflection on technological neutrality: is the technology truly neutral, and should the platform bear responsibility for the results of algorithmic recommendation? Algorithmic recommendation is not a completely neutral technology; it is driven by the platform's commercial interests. To maximize traffic and commercial gain, the platform actively intervenes in the distribution of content through parameter adjustments. For example, it may increase the recommendation weight of certain popular or paid content while reducing the probability that other content is recommended. Such intervention forms the basis for a finding of constructive knowledge ("should have known") of infringement under Article 1195 of the Civil Code: the platform should know that the content it recommends carries a risk of infringement, yet recommends it anyway, and should therefore bear liability for infringement. This shows that in algorithmic recommendation the platform is not a passive technology provider but a subject with subjective intent and the ability to control, and it should accordingly assume a duty of care. Given the non-neutrality of algorithmic recommendation, the platform's obligations should also be expanded. From the perspective of foreseeability, the platform, as the developer and manager of the algorithm, should foresee the infringement risks and other negative effects that algorithmic recommendation may bring. From the perspective of control ability, platforms can adjust and optimize their algorithms through technical means to control the