new situations, different communities, beliefs, and
lifestyles, common and appropriate legal decisions
cannot be made for every situation. This has created
legal gaps and risks.
Views on the legal status of AI center on the
question of whether AI should be positioned as an
object, thing, or product, or as a non-human subject.
The view that treats AI as an object argues that the
rights and responsibilities of AI can develop only in
a very limited way and that these can be regulated
through an insurance system (Perennou, 2019).
Since AI is evidently coded by humans, the most
intuitive position, and legally the most reasonable
one, is to regard it as a thing. On this view, AI
cannot be a subject of rights; it should instead be
accepted as an object that can be defined by
ownership, with insurance companies acting as
intermediaries in compensating the damages that AI
may cause (Akkurt, 2019).
The concept of electronic personhood was
proposed in the Recommendation on Civil Law Rules
on Robotics (27 January 2017), prepared by the
European Parliament (EP) Committee on Legal
Affairs, as a possible solution to some fundamental
issues at the intersection of robotics and law. Given
the autonomous characteristics of AI, the electronic-
person concept is considered more appropriate than
the object concept (Yenice, 2024).
Since there is still no clear consensus on its legal
status, it remains unclear whom the law should punish
and hold responsible, in which areas, and how. For
example, if an autonomous vehicle causes an
accident, how will responsibility be determined? Who
will be responsible if a robotic device used in surgery
causes the death of a patient?
In addition, the decision-making processes of AI
systems are often referred to as a "black box"
(Öztemel, 2012). The lack of transparency in these
systems makes legal oversight and accountability
difficult. The term "black box" refers to the opacity,
to human observers, of the data an AI system uses
and of its decision-making processes. In other words,
"black box" AI systems are primarily opaque neural
networks whose inputs and operations are visible to
neither the user nor other interested parties
(MacCarthy, 2020). For this very reason, the
explainable AI (XAI) initiative aims to explain the
decision-making processes underlying such large and
complex systems in terms and formats
understandable to experts in the field (Angelov et al.,
2021).
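To make this concrete, the following minimal sketch illustrates one widely used post-hoc XAI technique, permutation feature importance, in Python. It is an illustration under stated assumptions, not the method of Angelov et al.: the dataset, the model, and the use of the scikit-learn library are assumptions introduced here for the example.

    # Minimal XAI sketch (illustrative, not the authors' method):
    # permutation feature importance probes an opaque model by shuffling
    # each input feature and measuring the resulting drop in accuracy.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # An otherwise opaque ("black box") ensemble model.
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Features whose permutation hurts test accuracy most are the ones
    # the model actually relies on; this yields a human-readable ranking.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean),
                    key=lambda p: -p[1])
    for name, score in ranked[:5]:
        print(f"{name}: {score:.3f}")

Such a ranking gives human observers an auditable account of which inputs drove a model's behavior, which is exactly the kind of transparency that legal oversight presupposes.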
4 INTERNATIONAL LEGAL
APPROACHES
To find solutions to the ethical, social, and
security problems of AI systems, international
organizations such as the OECD and the European
Parliament have concluded that a set of rules and
frameworks should be established, and have
introduced comprehensive regulations intended to
reduce AI's ethical and social harms and to protect
human life from its negative effects (Güner, 2019).
The negative impacts of AI use on human rights
have led to increased concerns in this area at the
national and international levels. Accordingly, the
"Ethics Guidelines for Trustworthy AI," whose draft
was published on December 18, 2018 by the
European Commission's High-Level Expert Group
on Artificial Intelligence to guide AI designs based
on human rights, sets out various requirements aimed
at addressing these concerns. These requirements
are: maintaining fundamental human rights;
technical robustness and security, together with
privacy (privacy of private life) and data
management, which are closely related to the
principle of prevention of harm; transparency, which
is closely related to the principle of explainability;
diversity, non-discrimination, and fairness, together
with accountability, which are closely related to the
principle of fair treatment; and ensuring social
well-being and protecting the environment, which
are closely related to the principle of prevention of
harm (Singil, 2022).
4.1 OECD AI Principles
The OECD Principles on Artificial Intelligence
promote AI that is innovative, trustworthy, and
ethical. These principles were adopted by the OECD
member countries on May 22, 2019, and are among
the first global AI principles to be signed by
governments. Non-OECD countries such as
Argentina, Brazil, Colombia, Costa Rica, Peru, and
Romania also adhere to them (OECD, 2024).
Although OECD recommendations are not legally
binding, the framework they establish and the
decisions taken on their basis have become the
foundation of international standards, and
governments have prepared their own legislation
within this framework (Güner, 2019).
The principles state that AI systems should benefit
society and support inclusive growth and sustainable
development. They also emphasize that AI should be
developed in a way that respects the rule of law and
individual rights, always keeping the transparency
criterion at the forefront and ensuring its