Intelligence and Autonomy (Roberts et al., 2020).
Nations may establish graduated regulatory systems
calibrated to their levels of digital economic
development. However, such fragmented legislation
creates overlapping compliance costs for enterprises
operating across jurisdictions. Another proposal is
regional collaborative regulation. For instance, the
European Union and the United States have engaged
in technical cooperation, such as the third EU-U.S.
Joint Technology Competition Policy Dialogue
(TCPD) held in Washington in March 2023, aimed at
consolidating collaborative outcomes and ensuring
fair competition in the digital domain (Von Struensee,
2021). Nonetheless, this model risks entrenching
technological hegemony, and regional governance
rules are difficult to scale globally.
Additionally, scholars advocate bilateral
coordination mechanisms. In August 2020, Singapore
and Australia signed the Singapore-Australia Digital
Economy Agreement (SADEA), accompanied by
memoranda of understanding (MoUs) to promote
Artificial Intelligence best practices and shared
ethical governance frameworks (Shen & Zhao, 2023).
While bilateral models enhance regulatory efficiency,
they entail issue-linkage risks, as some agreements
impose conditions such as data localization
requirements or market-access concessions.
Based on this analysis, this study concludes that
although the current international legal framework for
regulating Artificial Intelligence remains incomplete,
it is nonetheless the only pathway capable of
reconciling national sovereignty with the uneven
technological development and digital economic
disparities underlying Artificial Intelligence. A
multi-stakeholder governance model not only
accelerates Artificial Intelligence advancement but
also fosters a more universally equitable global legal
governance environment.
3 CURRENT LEGAL REGULATION OF ARTIFICIAL INTELLIGENCE
3.1 Domestic Legal Pathways
In 2017, the State Council of China issued the New
Generation Artificial Intelligence Development Plan
(AIDP), establishing a national Artificial Intelligence
strategy through 2030. Managed by the Ministry of
Science and Technology (MOST), the AIDP aims to
guide private enterprises toward ethical Artificial
Intelligence development and deployment aligned with
state values. To this end, the Chinese government has
incentivized Artificial Intelligence initiatives by
private enterprises, provided such initiatives conform
to its values and objectives. However, the AIDP's
failure to define artificial intelligence precisely
has given rise to a series of interpretive and
enforcement difficulties.
The 2023 Artificial Intelligence Law established
a "classified and tiered regulatory" framework. On
October 18 of the same year, the Cyberspace
Administration of China released the Global
Artificial Intelligence Governance Initiative, which
calls for establishing a risk-level testing and
evaluation system, safeguarding personal privacy and
data security throughout Artificial Intelligence
research, development, and application, upholding the
principles of fairness and non-discrimination, and
improving Artificial Intelligence ethical guidelines,
norms, and accountability mechanisms (Guo & Xu, 2024).
However, challenges persist, including overly
abstract principles, regulatory frameworks lacking
practical implementation experience, and instances of
regulatory absence or overreach.
The United States primarily adopts an “interstate
legislation + federal guidance” model to regulate
Artificial Intelligence. For example, California’s
proposed comprehensive Artificial Intelligence
framework, Assembly Bill 331, requires companies
developing critical Artificial Intelligence products (in
employment, education, housing, etc.) to conduct
impact assessments, provide notice and opt-out rights
to California residents, and implement governance
plans with reasonable administrative and technical
safeguards to address algorithmic discrimination
risks (Raja & John, 2019). This law would be
enforced by the California Attorney General and
includes limited private litigation rights. Although the
bill remains under committee review, California’s
legislature tends to proactively regulate tech policy
issues regardless of federal or other states’ actions.
Meanwhile, New York State has proposed
requirements for bias audits of Artificial Intelligence
recruitment tools. However, measures such as impact
assessments may, to some extent, impede Artificial
Intelligence enterprises' research, development, and
growth. Moreover, because these measures are oriented
solely toward domestic technological advancement,
they diminish opportunities and efficiency for
international collaboration, restrict the progress of
global Artificial Intelligence projects, and hinder
the synergistic development of worldwide data flows
and the digital economy.