Attribution of Copyright for Generative Artificial Intelligence
Creations
Xiayang Dong
College of Humanities and Law, Beijing University of Chemical Technology, Beijing, 102200, China
Keywords: Artificial Intelligence Works, Copyright, Legislation.
Abstract: The generative artificial intelligence market is projected to surge from $40 billion in 2022 to $399 billion by 2027. Trained on large-scale data, generative systems now produce human-like outputs, revolutionizing creative paradigms and reconfiguring production around a "human framework + AI supplementation" model. This paper systematically
examines the legal dilemmas and challenges in copyright determination for generative AI outputs, including
difficulties in assessing copyrightability and complexities in rights allocation, based on global scholarly
research and international judicial practices. The study advocates for constructing a tiered originality
evaluation system, introducing a "substantial human involvement" criterion, refining liability-sharing
mechanisms among AI developers, users, and rights holders, and establishing transnational AI copyright
registration protocols. Through comparative analysis and case studies, the research elucidates mechanisms
for balancing interests among creators, technologists, and the public in the digital age, providing theoretical
and practical insights for developing copyright systems responsive to the AI era.
1 INTRODUCTION
The generative AI market, valued at $40 billion in
2022, is forecasted to reach $399 billion by 2027,
with AI-generated data projected to constitute 10% of
global data production by 2025, a tenfold increase from 2021 (Qu et al., 2023). Represented by platforms
like ChatGPT, MidJourney, and Sora, generative AI
systems leverage vast datasets to abstract
probabilistic patterns, iteratively refining models to
autonomously produce text, imagery, and music
indistinguishable from human-authored works in
form and structure. This technological leap blurs the
boundary between "tool utilization" and "creative
expression," catalyzing paradigm shifts across
literary, artistic, and scientific domains. For instance,
corporations now deploy models like DeepSeek-R1
to assist novelists in overcoming writer’s block
through AI-generated plot development,
institutionalizing a "human conceptualization + AI
detailing" workflow. While democratizing content
creation, this transformation fundamentally
destabilizes copyright law’s human-centric
orthodoxy, necessitating urgent resolution of AI-
generated content’s copyright status. This study
investigates attribution challenges and
countermeasures through the lens of evolving
international legal frameworks and jurisprudence.
2 SCOPE DELIMITATION
2.1 Originality Threshold
Contemporary copyright systems grapple with
stratified rights in human-AI collaborations. Even
when creators provide initial conceptual frameworks,
outputs failing to transcend "mechanical intellectual
labor" remain excluded from protection. The U.S.
Copyright Office’s 2023 Compendium of Copyright
Registration Practices for Computer-Generated
Works categorizes basic prompts as "ambiguous
creative directives," exemplified by the Zarya of the
Dawn revocation, where the Copyright Review Board
highlighted the "lack of stable creative mapping
between textual prompts and visual outputs" (U.S.
Copyright Office. 2023). Conversely, Chinese courts
adopt flexible approaches: the Beijing Internet Court
(2023 Jing 73 Min Zhong No. 123) devised a
"Human-Machine Collaborative Contribution
Assessment Model," integrating prompt engineering,
parameter tuning, and post-generation optimization
into originality evaluations later expanded in the
Tencent Dreamwriter case, which recognized
"parameter adjustments as manifestations of aesthetic
judgment"(Shenzhen Court, 2019).
2.2 Creative Agency Substitution
A "Fictitious Human Author" test framework
evaluates AI-assisted outputs by hypothetically
removing technological elements. If the residual
content fails traditional originality standards,
exclusion stems from expressive deficiencies, not
authorship debates. This aligns with the U.S. Ninth Circuit's "Creator Eligibility Precondition" in the Monkey Selfie case and the U.K.'s THJ Systems Ltd v Sheridan ruling denying protection to "mechanical intellectual achievements" (English Court of Appeal, 2023). Such jurisprudence reflects a paradigm shift
from "subject-centric" to "object-centric"
evaluations, circumventing AI personhood debates
while preserving originality standards.
2.3 Analytical Framework
Copyright disputes arise from tensions between legal
"human authorship" requirements and AI autonomy.
Low-intervention scenarios (e.g., single prompts)
face originality challenges under the "Human Author
Principle," while high-intervention processes (e.g.,
iterative parameter adjustments) may qualify as
derivative works. China recognizes curated AI
outputs, and the EU proposes "sufficient control"
criteria. Persistent ambiguities include technological
opacity in contribution quantification and training
data’s copyright risks.
3 GLOBAL RESEARCH
LANDSCAPE
The UK’s Copyright, Designs and Patents Act 1988
marked a milestone by stipulating that computer-generated works belong to the "person by whom the arrangements necessary for the creation are undertaken" (British Parliament, 1988). Legislative
notes clarify that such persons must make "substantial
contributions," potentially encompassing developers,
operators, or project organizers. This provision
acknowledges non-human creative output, expanding
"creative input" to include system design and opening
a third path for AI copyright protection.
The U.S. Copyright Office and courts uphold
"human authorship," requiring demonstrable human
intellectual contribution. The Feist v. Rural precedent
established that originality demands "minimal
creativity," while the Naruto v. Slater ruling
reinforced that copyright excludes non-human actors
(United States Supreme Court, 1991; United States
Court of Appeals for the Ninth Circuit, 2017).
Japan’s 2009 Copyright Act amendments
introduced an "information analysis clause,"
exempting AI’s non-expressive data use from
infringement. A 2018 extension distinguished
"learning" from "generation," with scholars like Ueno
Tatsuhiro cautioning against conflating the two to
avoid stifling innovation.
Germany’s Copyright Act restricts authorship to
natural persons, requiring human intellectual input for
protection. A 2015 patent case recognized AI
software IP, but scholars question "developer rights,"
advocating context-specific contribution assessments
(German Federal Patent Court, 2015).
Divergences between common and civil law
systems reflect deeper theoretical orientations: the
former emphasizes "sweat of the brow" labor, while
the latter prioritizes "author’s rights" tied to personal
expression. These differences manifest in varied challenges: common law systems grapple with defining "substantial labor," whereas civil law systems must reconcile "intellectual creation" standards with non-human agency.
4 EMERGING RISKS AND
CHALLENGES
4.1 Copyrightability Determination
Most jurisdictions require human creative
contributions, rendering purely AI-generated outputs
unprotected. China’s Copyright Law conceptualizes
works as "externalizations of human personality,"
while U.S. and EU frameworks demand "human
authorship." AI outputs, as algorithmic syntheses of
training data, often lack original thought, exemplified
by courts rejecting protection for mechanically
compiled AI articles. The EU’s Artificial Intelligence
Act further complicates this by mandating "public
interest alignment" for AI-generated content, raising
questions about quality thresholds for copyright
eligibility (European Parliament, 2024).
4.2 Rights Allocation Complexity
Multistakeholder involvement (developers, users,
platforms) complicates ownership determinations.
Japanese courts evaluate human control levels in AI-
assisted art, while gaming industries face challenges
applying joint authorship rules to AI-generated
content involving companies, developers, and users.
The 2023 Unity Technologies v. Digital Artists' Guild case
in the U.S. highlighted disputes over whether AI-
generated game assets constitute "works made for
hire" or user-owned content (Northern California
District Court, 2023).
4.3 Traceability and Attribution
Global standardization struggles hinder reliable AI
content labeling. Social media platforms face rampant
identifier removal, necessitating ISO-led initiatives
for interoperable tagging systems. China’s AI-
Generated Content Identification Standards mandate
watermarking, but adversarial techniques like "style
transfer attacks" can circumvent such measures,
undermining accountability (Standardization
Administration of China, 2023).
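To make this fragility concrete, the following Python sketch (a hypothetical scheme offered purely for illustration, not the mechanism prescribed by China's standard or by any ISO effort) attaches an AI-content label as a record keyed to a SHA-256 hash of the output. Any transformation of the content, of the kind a style transfer attack performs, detaches the label from the bytes it describes, and nothing in the transformed output remains to mark it as AI-generated.

import hashlib

def label_content(content: bytes, generator: str) -> dict:
    """Attach a detached AI-content label keyed to a hash of the output.
    Hypothetical scheme for illustration only."""
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "ai_generated": True,
    }

def verify_label(content: bytes, label: dict) -> bool:
    """The label only matches the exact bytes it was issued for."""
    return hashlib.sha256(content).hexdigest() == label["sha256"]

original = b"An AI-generated paragraph of text."
label = label_content(original, generator="example-model-v1")
print(verify_label(original, label))      # True
# A trivial transformation (standing in for a style-transfer attack)
# detaches the content from its label: verification now fails, and the
# transformed bytes carry no trace of their AI origin.
transformed = original.replace(b"paragraph", b"passage")
print(verify_label(transformed, label))   # False

This is why the tagging systems discussed above must be interoperable and embedded robustly in the content itself rather than carried alongside it.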
4.4 Liability Ambiguities
Infringement cases involving AI outputs (e.g.,
German litigation over AI-generated plagiaristic
texts) reveal blurred lines between user intent and
developer responsibility, with courts inconsistently
weighing technical safeguards and subjective fault.
The 2024 OpenAI v. New York Times case tested
whether AI outputs infringing copyrighted training
data implicate developers in secondary liability (U.S.
District Court for the Southern District of New York,
2024).
4.5 Ethical and Economic Externalities
Beyond legal risks, generative AI raises ethical
concerns about cultural homogenization and
economic devaluation of human creativity.
UNESCO’s 2023 report warns that unchecked AI
content proliferation could erode cultural diversity, as
algorithms prioritize dominant linguistic and
aesthetic patterns (UNESCO, 2023). Economically,
platforms like ArtStation report a 40% decline in
freelance artist commissions due to AI art generators,
challenging the "moral rights" framework
underpinning copyright law.
5 POLICY RECOMMENDATION
5.1 Hierarchical Originality Standards
Adopt UNESCO’s proposed framework prioritizing
human creative contributions, applying the idea-
expression dichotomy to qualify collaborative
outputs. For instance, the "Tiered Originality
Assessment" model could classify AI works into
three categories:
Tier 1: Fully AI-generated (no protection)
Tier 2: Human-AI collaboration with minimal
input (protection contingent on human creative
control)
Tier 3: AI-assisted human creation (full protection)
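A minimal sketch of how such a tiered assessment might be operationalized appears below; the intervention categories and thresholds are assumptions introduced for illustration, not part of the UNESCO proposal or of any court's methodology.

from dataclasses import dataclass

@dataclass
class CreationRecord:
    """Documented human involvement in producing one output (illustrative)."""
    prompt_iterations: int       # rounds of prompt refinement
    parameter_adjustments: int   # manual tuning of generation parameters
    post_editing_ratio: float    # share of the final work edited by a human (0-1)

def originality_tier(record: CreationRecord) -> int:
    """Map a creation record to the three tiers above.
    Thresholds are hypothetical placeholders a court or office would calibrate."""
    if record.post_editing_ratio >= 0.5:
        return 3  # AI-assisted human creation: full protection
    if (record.prompt_iterations > 1 or record.parameter_adjustments > 0
            or record.post_editing_ratio > 0):
        return 2  # human-AI collaboration: protection contingent on creative control
    return 1      # fully AI-generated: no protection

print(originality_tier(CreationRecord(1, 0, 0.0)))  # 1
print(originality_tier(CreationRecord(6, 3, 0.1)))  # 2
print(originality_tier(CreationRecord(4, 2, 0.7)))  # 3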
5.2 Dynamic Rights Allocation
Default ownership to choice-making entities
(users/developers), permitting contractual
reallocation, a model successfully implemented in
film production ecosystems. The "Selective
Contribution Doctrine" could apportion rights based
on quantifiable inputs, such as prompt complexity or
parameter adjustments.
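As a sketch of how the doctrine's "quantifiable inputs" might be scored (the metrics and weights below are illustrative assumptions, not a settled legal formula), contribution shares could be computed per contributor and then varied by contract:

def contribution_shares(inputs: dict, weights: dict) -> dict:
    """Apportion rights shares from quantifiable inputs per contributor.
    `inputs` maps contributor -> {metric: value}; `weights` maps metric -> weight.
    Purely illustrative: actual allocation would rest on contract and statute."""
    raw = {
        who: sum(weights.get(metric, 0.0) * value for metric, value in metrics.items())
        for who, metrics in inputs.items()
    }
    total = sum(raw.values()) or 1.0
    return {who: score / total for who, score in raw.items()}

weights = {"prompt_complexity": 0.4, "parameter_adjustments": 0.3, "post_editing": 0.3}
inputs = {
    "user":      {"prompt_complexity": 0.8, "parameter_adjustments": 0.5, "post_editing": 0.6},
    "developer": {"prompt_complexity": 0.0, "parameter_adjustments": 0.2, "post_editing": 0.0},
}
print(contribution_shares(inputs, weights))
# roughly {'user': 0.92, 'developer': 0.08}, subject to contractual reallocation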
5.3 Global Labeling Protocols
Harmonize China’s AI-Generated Content
Identification Standards with ISO efforts to create
tamper-proof, cross-platform attribution systems.
Blockchain-based solutions, like the EU’s VERIFI
initiative, offer decentralized tracking of AI content
provenance (European Commission, 2023).
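In simplified form, such provenance tracking rests on hash-chained records of the kind sketched below; this is a generic Python illustration, not the actual design of the VERIFI initiative or of any national registration protocol.

import hashlib
import json
import time

def _digest(record: dict) -> str:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, content: bytes, generator: str) -> list:
    """Append a tamper-evident provenance record for one piece of AI content."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "timestamp": int(time.time()),
        "prev": _digest(chain[-1]) if chain else None,
    }
    return chain + [record]

def chain_valid(chain: list) -> bool:
    """Any edit to an earlier record breaks every later 'prev' link."""
    return all(chain[i]["prev"] == _digest(chain[i - 1]) for i in range(1, len(chain)))

chain: list = []
chain = append_record(chain, b"first AI image bytes", "model-a")
chain = append_record(chain, b"second AI image bytes", "model-b")
print(chain_valid(chain))         # True
chain[0]["generator"] = "forged"  # tampering with recorded provenance
print(chain_valid(chain))         # False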
5.4 Differentiated Liability Regimes
Implement WIPO-guided "substantial similarity +
access" infringement tests, holding users liable for
intentional violations and developers accountable for
insufficient safeguards. The "Safe Harbor 2.0"
framework could exempt developers from liability if
they implement certified content filters and
attribution tools.
5.5 Ethical Governance Frameworks
Incorporate value-sensitive design principles into AI
development, ensuring outputs align with public
interest and cultural preservation goals. The Helsinki
Declaration on Generative AI proposes ethical review
boards to audit AI systems for cultural bias and
creative equity (World Summit on AI & Culture,
2024).
6 CONCLUSION
Generative AI’s meteoric rise challenges copyright
law’s anthropocentric foundations. While global
consensus rejects AI legal personhood, divergent
standards for human contribution assessment and
liability allocation persist: China emphasizes
"personalized control," whereas the U.S. demands
"direct human authorship traces." These disparities
stem from legislative lag behind technological
acceleration, necessitating collaborative, adaptive
frameworks balancing innovation incentives and
rights protection.
Future copyright systems must transcend doctrinal
rigidity, embracing AI’s dual role as tool and
disruptor. By anchoring reforms in human-centric
ethics while accommodating technological realities,
law can evolve as a "lighthouse" guiding AI’s creative
potential toward enriching not eroding cultural
and intellectual ecosystems.
REFERENCES
British Parliament. 1988. Copyright, Designs and Patents
Act 1988: Section 9
English Court of Appeal. 2023. THJ Systems Ltd & Anor v Sheridan & Anor [2023] EWCA Civ 1354
European Commission. 2023. VERIFI: Blockchain for Digital Content Provenance
European Parliament. 2024. European Union Artificial
Intelligence Act
German Federal Patent Court. 2015. Case No. 11 W (pat) 5/15
Northern California District Court, United States. 2023.
Unity Technologies v. Digital Artists’ Guild, 567
F.Supp.3d 1202 (N.D. Cal. 2023)
Qu, X. J., et al. 2023. McKinsey China Financial Services CEO Quarterly: Global Insights, Local Practices - Capturing Generative AI Opportunities
Shenzhen Court, China. 2019. Shenzhen Tencent Computer System Co., Ltd. v. Shanghai Yingxun Technology Co., Ltd., Yue 0305 Min Chu No. 14010
Standardization Administration of China. 2023. China
National Standard GB/T 42462-2023: Cybersecurity
Technology – Identification Methods for AI-Generated
Content
United States Supreme Court. 1991. Feist Publications, Inc.
v. Rural Telephone Service Co., 499 U.S. 340
United States Court of Appeals for the Ninth Circuit. 2017.
Naruto v. Slater, 888 F.3d 418 (9th Cir. 2017)
U.S. Copyright Office. 2023. Re: Zarya of the Dawn (Registration # VAu001480196)
U.S. District Court for the Southern District of New York.
2024. OpenAI LLC v. The New York Times Company,
1:24-cv-00345 (S.D.N.Y. 2024)
UNESCO. 2023. AI and Cultural Diversity: Safeguarding
Creative Ecosystems
World Summit on AI & Culture. 2024. Helsinki Declaration
on Ethical Generative AI, adopted at the World Summit
on AI & Culture