have not yet constructed an effective legal normative
framework that is deeply compatible with the creative
characteristics of generative artificial intelligence.
Now that algorithmic autonomy has become a prominent
feature of the creative process, should institutional
innovation hold to the baseline of human-centeredness,
or should it cultivate a new rights ecology adapted to
human-machine symbiosis?
In light of the paradigm crisis that generative
artificial intelligence poses to the legal system at
this stage, this paper focuses on reforming the
existing rules governing legal subjectivity and
proposes responses to the three dilemmas facing
current law: first, resolving the ‘tool-subject’
binary opposition and creating normative space for
algorithmic autonomy within the civil and commercial
legal system; second, constructing a traceable and
quantifiable contribution-identification system along
the ‘data-algorithm-output’ transmission chain; and
third, balancing incentives for technological
innovation with fairness and justice in the
distribution of rights and interests, so that
technological advantage does not harden into a
monopoly of rights. By proposing a ‘dynamic rights
generation model’ and a ‘gradient weight evaluation
mechanism’, this paper seeks to open a new
institutional channel for human-machine collaborative
creation, allowing copyright law to retain its
vitality in the era of artificial intelligence.
2 DECONSTRUCTION OF THE
PARADIGM CRISIS OF LEGAL
SUBJECTIVITY
As algorithmically generated content comes to account
for more than 60% of the digital content market, the
legal system faces new challenges. As algorithmic
systems gradually break through the boundary of tools
and display quasi-subjective characteristics, does the
subject category of current copyright law, centered on
natural persons and legal persons, conflict
institutionally with this evolving technological
civilization (WIPO, 2024)? Traditional civil and
commercial law theory has fallen into the dual dilemma
of ‘absence of subjectivity’ and ‘imbalance in the
distribution of interests’ when addressing ownership
disputes over generative artificial intelligence
(Coase, 1960). A paradigm shift, achieved by expanding
the theory of legal subjectivity and through
institutional innovation, is urgently needed.
At present, judicial practice consistently adheres to
the logic of ‘instrumentalism’, treating artificial
intelligence as a creative medium rather than a
creative subject. The U.S. Copyright Office’s 2023
‘Generative Artificial Intelligence Copyright Policy
Statement’ takes ‘human creative control’ as the core
standard for the attribution of rights (United States
Copyright Office, 2023). In
the ‘Feilin case’, a typical dispute over generative
artificial intelligence, Feilin Law Firm used legal
data analysis software to automatically generate a
‘Judicial Big Data Analysis Report on the Film and
Television Entertainment Industry’, covering data
screening, statistics, and the design of visual charts.
The court held that the core content of the report had
been completed automatically by the algorithm and
lacked the ‘original expression of a natural person’,
and therefore did not constitute a work in the sense of
copyright law. Even so, the court recognized the user’s
investment in operating the software and granted it the
right to prohibit unauthorized dissemination by others
(Beijing Internet Court, 2018). This judgment
reflects the prevailing legal attitude towards
generative artificial intelligence: it remains anchored
in the traditional ‘tool theory’, treats generative
artificial intelligence as a creative tool, and takes
whether the user’s intervention in the generation
process amounts to original labor as the criterion of
judgment. However, when the generated content exceeds
the developer’s preset range, the causal link between
the user’s instructions and the output is severed by
the autonomy of the algorithm: the developer is not
directly involved in content generation, and the user
cannot fully control the generated result. At this
point, the ‘tool theory’ standard of ownership falls
into a theoretical dilemma. For example, an image
produced by deepfake technology through adversarial
training is neither a design result obtained directly
by the developer nor a creation that the user can fully
control.
At the same time, as the autonomy of modern algorithms
grows, the limitations of the ‘natural person-legal
person’ dual subject structure of traditional civil and
commercial law have become increasingly apparent. On
the one hand, responsibility for AI-generated content
is difficult to identify clearly, and the technological
black box blurs the boundaries of responsibility among
developers, users, and AI systems; on the other hand,
the mechanism of income distribution faces a risk of
imbalance. It is difficult to quantify and