some AI systems, their decision-making logic is difficult to trace, and developers often shirk their responsibilities on the grounds of technological neutrality. In this regard, legislation should compel providers of high-risk AI tools (such as deepfake programs and automated recommendation systems) to disclose their core algorithmic frameworks and to embed interpretability modules. For example, a "technical path description" could be attached to the output of generative AI, recording its data sources and decision-making basis, and a third-party audit mechanism could be introduced at the algorithm training stage to ensure that the development process complies with ethical norms. Only by breaking through these technical barriers can judicial practice obtain an objective basis for determining "subjective knowledge" and "assisting acts," and can a balance be struck between the principle of technological neutrality and the principle of liability adaptation.
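To make the idea of a "technical path description" concrete, the sketch below is a minimal illustration rather than a prescribed implementation: it assumes a hypothetical Python record that bundles a generated output with its declared data sources, decision basis, and a tamper-evident hash; the field names and model identifier are invented for illustration.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json


@dataclass
class TechnicalPathDescription:
    """Provenance record attached to a single piece of generated output."""
    model_name: str            # model/version that produced the output (hypothetical)
    data_sources: list[str]    # training or retrieval sources relied upon
    decision_basis: str        # short natural-language rationale for the output
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def attach_to(self, output_text: str) -> dict:
        """Bundle the output with its provenance and a tamper-evident hash."""
        record = {"output": output_text, "provenance": asdict(self)}
        record["sha256"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode("utf-8")
        ).hexdigest()
        return record


# Example: wrap one generated paragraph with its technical path description.
description = TechnicalPathDescription(
    model_name="example-generative-model-v1",
    data_sources=["licensed-news-corpus", "public-domain-books"],
    decision_basis="Summarised the three highest-ranked retrieved passages.",
)
print(json.dumps(description.attach_to("Generated summary text ..."), indent=2))
```

A record of this kind, serialized alongside each output, would give auditors and courts a fixed reference point when reconstructing how a particular result was produced.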
5.2 Strengthen Industry Supervision
The rapid iteration and wide application of AI technology require that industry supervision shift from passive response to active prevention and control. To achieve this goal, it is necessary to build a three-in-one supervisory framework of "legal norms, enterprise self-discipline, and technical support," forming a risk prevention and control system that covers the entire life cycle of AI through multi-level institutional design, criminal compliance incentives, and the coordination of technology and law.
First, establish a multi-level, multi-dimensional supervision system that achieves comprehensive coverage from legislation to practice. At the legislative level, the enactment of special laws such as the AI Law should be accelerated, clarifying bottom-line requirements for technology research and development, data use, and product deployment. For example, it could be stipulated that the design of AI tools must embed an ethical review mechanism and that the development of algorithm models with an obvious criminal orientation is prohibited. At the same time, industry access standards should be refined through administrative regulations, and enterprises should be required to complete safety assessments and filing before entering the market, so as to ensure the legality of technology application. At the level of administrative supervision, cross-departmental cooperation mechanisms need to be strengthened: for example, the Internet Information Office, the public security organs, and the science and technology administration could jointly establish an "AI Safety Supervision Committee" to conduct regular special inspections of high-risk areas (such as deepfakes and automated recommendation systems) and impose dynamic penalties on non-compliant enterprises. Technical supervision needs to rely on third-party testing institutions, which can conduct transparent reviews of the operating logic and outputs of AI systems through means such as algorithm auditing and data traceability, thereby avoiding the supervisory blind spots created by "black-box operations."
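As one hedged illustration of what "algorithm auditing and data traceability" could mean in code, the sketch below (all class and field names are hypothetical) keeps an append-only, hash-chained log of individual AI decisions, so that a third-party reviewer can later verify that no record has been altered or removed.

```python
import hashlib
import json
from typing import Optional


class DecisionAuditLog:
    """Append-only, hash-chained log of AI decisions for third-party review."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, input_summary: str, decision: str, model_version: str) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "GENESIS"
        body = {
            "input_summary": input_summary,   # e.g. a hash or redacted form of the input
            "decision": decision,
            "model_version": model_version,
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode("utf-8")
        ).hexdigest()
        self.entries.append(body)

    def verify_chain(self) -> Optional[int]:
        """Return the index of the first tampered entry, or None if the log is intact."""
        prev_hash = "GENESIS"
        for i, entry in enumerate(self.entries):
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode("utf-8")
            ).hexdigest()
            if body["prev_hash"] != prev_hash or recomputed != entry["hash"]:
                return i
            prev_hash = entry["hash"]
        return None


log = DecisionAuditLog()
log.record("user-123 feed request", "recommended item 42", "recsys-v2.1")
log.record("user-456 feed request", "recommended item 7", "recsys-v2.1")
print("first tampered entry:", log.verify_chain())  # None means the log is intact
```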
Second, promote corporate criminal compliance programs and internalize risk prevention and control as a conscious practice of industry development. The experience of the EU's AI Liability Directive can be drawn upon to require AI enterprises to establish compliance management systems covering risk identification, internal control, and emergency response. For example, an enterprise developing a face recognition system should evaluate in advance the risk that the system could be used for illegal surveillance or identity theft and embed a "usage scenario restriction" function in the algorithm; at the data collection stage, explicit user consent and anonymization should be adopted to avoid privacy violations. Enterprises that actively fulfill their compliance obligations can be granted policy incentives such as tax reductions and priority in market access; conversely, enterprises that tolerate the abuse of their technology should face heavier administrative penalties, and the criminal liability of the responsible persons may even be pursued. In addition, industry associations should take the lead in formulating AI Ethical Guidelines that guide enterprises in integrating the concept of "technology for good" throughout the product process; for example, generative AI tools could be required to carry "deepfake risk warnings" to reduce the possibility of technology abuse at the source.
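The "usage scenario restriction" idea can be sketched as a compliance guard that runs before any biometric processing takes place; the example below is an assumption-laden illustration in Python, with the purpose categories and the face recognition stub invented for the example rather than drawn from any actual enterprise system.

```python
from enum import Enum


class Purpose(Enum):
    ACCESS_CONTROL = "access_control"        # e.g. unlocking a company entrance
    DEVICE_UNLOCK = "device_unlock"
    MASS_SURVEILLANCE = "mass_surveillance"  # prohibited under the compliance policy
    IDENTITY_RESALE = "identity_resale"      # prohibited under the compliance policy


ALLOWED_PURPOSES = {Purpose.ACCESS_CONTROL, Purpose.DEVICE_UNLOCK}


class UsageRestrictionError(RuntimeError):
    """Raised when a caller requests a prohibited usage scenario."""


def recognize_face(image_bytes: bytes, declared_purpose: Purpose) -> str:
    """Run face recognition only for declared, compliant purposes."""
    if declared_purpose not in ALLOWED_PURPOSES:
        # The check fails closed: the model is never invoked for prohibited uses.
        raise UsageRestrictionError(
            f"purpose '{declared_purpose.value}' is outside the permitted scenarios"
        )
    # Placeholder for the real model call; returns a dummy identifier here.
    return "person-id-placeholder"


# Permitted scenario: proceeds to the (stubbed) recognition call.
print(recognize_face(b"...", Purpose.ACCESS_CONTROL))

# Prohibited scenario: blocked before any biometric processing happens.
try:
    recognize_face(b"...", Purpose.MASS_SURVEILLANCE)
except UsageRestrictionError as err:
    print("blocked:", err)
```

Failing closed in this way expresses, directly in code, the obligation to pre-evaluate misuse risks and restrict the system to its permitted scenarios.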
Third, deepen the integration of technology and law to improve the accuracy and efficiency of judicial governance. On the one hand, judicial personnel's understanding of AI technology should be improved through professional training and interdisciplinary cooperation. For example, when hearing cases involving algorithmic recommendation, judges should grasp basic machine learning principles and be able to distinguish a "technically neutral push" from "malicious inducement behavior"; when determining "subjective knowledge," technical experts can be engaged to analyze system logs and algorithm parameters to judge