Assessment Model," integrating prompt engineering,
parameter tuning, and post-generation optimization
into originality evaluations — later expanded in the
Tencent Dreamwriter case, which recognized
"parameter adjustments as manifestations of aesthetic
judgment" (Shenzhen Court, 2019).
2.2 Creative Agency Substitution
A "Fictitious Human Author" test framework
evaluates AI-assisted outputs by hypothetically
removing technological elements. If the residual
content fails traditional originality standards,
exclusion stems from expressive deficiencies, not
authorship debates. This aligns with the U.S. Ninth
Circuit's "Creator Eligibility Precondition" in the
Monkey Selfie case and the U.K.'s THJ Systems Ltd
v Sheridan ruling denying protection to "mechanical
intellectual achievements" (English Court of Appeal,
2023). Such jurisprudence reflects a paradigm shift
from "subject-centric" to "object-centric"
evaluations, circumventing AI personhood debates
while preserving originality standards.
2.3 Abstract Frame
Copyright disputes arise from tensions between legal
"human authorship" requirements and AI autonomy.
Low-intervention scenarios (e.g., single prompts)
face originality challenges under the "Human Author
Principle," while high-intervention processes (e.g.,
iterative parameter adjustments) may qualify as
derivative works. China recognizes copyright in
curated AI outputs, and the EU proposes "sufficient control"
criteria. Persistent ambiguities include technological
opacity in contribution quantification and training
data’s copyright risks.
3 GLOBAL RESEARCH LANDSCAPE
The UK’s Copyright, Designs and Patents Act 1988
marked a milestone by stipulating that computer-generated
works belong to the "person by whom the
arrangements necessary for the creation are
undertaken" (British Parliament, 1988). Legislative
notes clarify that such persons must make "substantial
contributions," potentially encompassing developers,
operators, or project organizers. This provision
acknowledges non-human creative output, expanding
"creative input" to include system design and opening
a third path for AI copyright protection.
The U.S. Copyright Office and courts uphold
"human authorship," requiring demonstrable human
intellectual contribution. The Feist v. Rural precedent
established that originality demands "minimal
creativity," while the Naruto v. Slater ruling
reinforced that copyright excludes non-human actors
(United States Supreme Court, 1991; United States
Court of Appeals for the Ninth Circuit, 2017).
Japan’s 2009 Copyright Act amendments
introduced an "information analysis clause,"
exempting AI’s non-expressive data use from
infringement. A 2018 extension distinguished
"learning" from "generation," with scholars like Ueno
Tatsuhiro cautioning against conflating the two to
avoid stifling innovation.
Germany’s Copyright Act restricts authorship to
natural persons, requiring human intellectual input for
protection. A 2015 patent case recognized
intellectual property in AI software, but scholars
question "developer rights," advocating
context-specific assessments of contribution
(German Federal Patent Court, 2015).
Divergences between common and civil law
systems reflect deeper theoretical orientations: the
former emphasizes "sweat of the brow" labor, while
the latter prioritizes "author’s rights" tied to personal
expression. These differences manifest in varied
challenges — common law systems grapple with
defining "substantial labor," whereas civil law
systems must reconcile "intellectual creation"
standards with non-human agency.
4 EMERGING RISKS AND CHALLENGES
4.1 Copyrightability Determination
Most jurisdictions require human creative
contributions, rendering purely AI-generated outputs
unprotected. China’s Copyright Law conceptualizes
works as "externalizations of human personality,"
while U.S. and EU frameworks demand "human
authorship." AI outputs, as algorithmic syntheses of
training data, often lack original thought, exemplified
by courts rejecting protection for mechanically
compiled AI articles. The EU’s Artificial Intelligence
Act further complicates this by mandating "public
interest alignment" for AI-generated content, raising
questions about quality thresholds for copyright