services rely on collecting and analyzing large amounts of users' personal data, including their financial information, consumption habits, and social activities. Although these data help AI algorithms better understand and predict user needs, they pose a serious threat to user privacy once leaked or misused. In recent years, multiple data breaches have shown that even top financial institutions cannot completely avoid data security issues. The disclosure of personal privacy information can lead to serious consequences for consumers, such as damaged social standing and even loss of job opportunities. The illegal trade and abuse of personal information seriously infringes the rights of financial consumers and damages social relations. Because individuals possess full and independent legal personality, they independently enjoy legal protection of their personal and property rights. As an independent object of rights, personal information is an important component of individual rights and should be protected by law.
However, in the era of artificial intelligence, many consumer financial institutions use "big data", "artificial intelligence", and other means to illegally trade and abuse consumers' personal information, leaving consumers to suffer frequent harassment by consumer-finance phone calls and SMS messages, as well as malicious and abusive debt collection. Such behavior not only causes significant damage to financial consumers' social lives; trafficking in and abusing information without the consent of the parties concerned is also a serious violation of financial consumer rights. Therefore, financial institutions need to strengthen data protection measures to ensure data security throughout collection, transmission, and storage, and to prevent unauthorized access to and use of data.
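To make this concrete, the following minimal Python sketch shows one such protective measure, encrypting a sensitive record before it is stored or transmitted. It assumes the third-party cryptography package; the field names are hypothetical, and a production system would obtain its key from a dedicated key management service rather than generating it in place.

```python
# Minimal sketch: symmetric encryption of sensitive consumer data at rest.
# Assumes the third-party "cryptography" package; field names are hypothetical,
# and a real system would load the key from a key management service.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # illustrative only; never generate keys in place
cipher = Fernet(key)

record = {"customer_id": "C1024", "income": 58000, "phone": "555-0199"}

# Encrypt before the record reaches storage or the network.
token = cipher.encrypt(json.dumps(record).encode("utf-8"))

# Only services holding the key (authorized access) can recover the data.
restored = json.loads(cipher.decrypt(token).decode("utf-8"))
assert restored == record
```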
Secondly, algorithmic prejudice and discrimination are an important risk in personalized AI applications. AI systems rely on historical data for training, and that data may contain bias and discrimination. If left unchecked, AI systems can inadvertently amplify these biases, leading to unfair treatment of certain groups of users when they access financial services. For example, in a credit scoring system, certain groups may be assessed as high risk because of biases in the historical data, making it difficult for them to get loans. In Nick Bostrom and Eliezer Yudkowsky's hypothetical experiment on whether a machine learning algorithm should accept or reject a home mortgage application, "algorithmic discrimination" is easy to find when examining the neural network's decision-making process and results: the loan approval rate for Black applicants is significantly lower than that for white applicants (Cheng, 2021).
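The disparity described in that experiment can be quantified directly. As a minimal sketch using entirely hypothetical approval decisions, the code below computes per-group approval rates and the disparate impact ratio; a common heuristic, the "four-fifths rule", flags ratios below 0.8.

```python
# Minimal sketch: quantify approval-rate disparity between two groups.
# The decisions are hypothetical; each pair is (group, approved?).
decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(group):
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = approval_rate("A"), approval_rate("B")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)   # disparate impact ratio
print(f"approval A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:   # the common "four-fifths rule" threshold
    print("warning: potential algorithmic discrimination")
```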
Once users are classified into a "top" or "bottom" group, the financial information, advertising, and product recommendations they receive will match only the identity the machine learning algorithm has assigned to them by default. The complexity and limitations of machine learning algorithms thus produce "algorithmic discrimination" in consumer finance scenarios, keeping the "digital bottom" permanently at the digital bottom: qualified potential customers may be denied fair access to consumer financial services because of this "discrimination", or be unable to properly enjoy those services because of "price discrimination".
To avoid this situation, financial institutions need to embed the principles of fairness and transparency into data processing and algorithm design, and regularly review and adjust their models to ensure that AI systems treat all users fairly.
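One concrete form that such review and adjustment can take, offered here only as an illustrative example rather than a method prescribed above, is rebalancing the training data. The sketch below implements the classic reweighing idea of Kamiran and Calders on hypothetical data: each (group, label) combination receives a weight that makes group membership statistically independent of the favorable outcome, and the weights can then be passed to any learner that accepts per-sample weights.

```python
# Minimal sketch of the Kamiran-Calders "reweighing" idea, on hypothetical data:
# weight each (group, label) cell by P(group) * P(label) / P(group, label),
# making group membership statistically independent of the favorable outcome.
from collections import Counter

samples = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
n = len(samples)
group_counts = Counter(g for g, _ in samples)
label_counts = Counter(y for _, y in samples)
cell_counts = Counter(samples)

weights = [
    group_counts[g] * label_counts[y] / (n * cell_counts[(g, y)])
    for g, y in samples
]
# After weighting, both groups have the same weighted approval rate; the
# weights can be passed to any learner, e.g. fit(X, y, sample_weight=weights).
print(weights)   # [0.75, 0.75, 1.5, 1.5, 0.75, 0.75]
```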
3.3 On the Risks of Artificial Intelligence Applied in Risk Management
Although the application of artificial intelligence in risk management has greatly improved its efficiency and effectiveness, it also brings a series of significant potential risks and challenges. Data quality and bias are unavoidable key issues in AI-based risk management. These systems rely on large amounts of high-quality data for training and performing tasks, and any bias or uncertainty in the data may seriously affect the model's output.
The complexity and diversity of financial data, together with the uneven and inconsistent quality of its sources, mean that data problems can skew risk assessment and prediction results, leading to wrong decisions.
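A first line of defense against such problems is validating data before it reaches the model. The following sketch illustrates the kind of basic checks meant here, on a hypothetical loan table with illustrative column names and thresholds: missing values, implausible ranges, and under-represented subgroups.

```python
# Minimal sketch: basic data-quality checks before model training.
# Column names and thresholds are hypothetical.
from collections import Counter

rows = [
    {"income": 42000, "age": 34, "region": "north"},
    {"income": None, "age": 29, "region": "south"},
    {"income": 58000, "age": 210, "region": "north"},   # implausible age
]

missing = sum(1 for r in rows if any(v is None for v in r.values()))
implausible = sum(1 for r in rows if not 18 <= r["age"] <= 100)
region_counts = Counter(r["region"] for r in rows)

print(f"rows with missing values: {missing}/{len(rows)}")
print(f"rows with implausible ages: {implausible}")
# An under-represented subgroup is a bias warning sign, not just a data bug.
for region, count in region_counts.items():
    if count / len(rows) < 0.4:   # illustrative representation threshold
        print(f"subgroup {region!r} under-represented: {count}/{len(rows)}")
```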
Model transparency is another challenge AI faces in risk management. Deep-learning-based models in particular have a "black box" character that makes their decision-making processes difficult to explain. In the financial field, addressing this lack of transparency is important because regulators and users around the world need to understand the basis of a model's decisions. If an AI system's decision-making process is not transparent, financial institutions will find it difficult to verify and explain its risk assessment results, which may lead users and regulators to mistrust the AI system and create legal risk.
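One widely used way to partially open the black box is a model-agnostic explanation technique such as permutation importance (SHAP values are a common alternative). The sketch below, using synthetic data and hypothetical feature names, trains a classifier and measures which inputs actually drive its predictions; it assumes scikit-learn and NumPy are installed.

```python
# Minimal sketch: a model-agnostic explanation via permutation importance.
# Data and feature names are hypothetical; assumes scikit-learn and NumPy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))     # columns: income, debt_ratio, age
y = (X[:, 1] > 0).astype(int)     # "default" driven mostly by debt_ratio

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# A large score means shuffling that feature degrades accuracy substantially,
# i.e. the model's decisions genuinely depend on it.
for name, score in zip(["income", "debt_ratio", "age"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```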
Although the application of artificial intelligence in risk management has great potential and advantages, its potential risks and challenges cannot be ignored. Financial institutions need to ensure that the application of AI technology in risk management is