in traffic accidents every year worldwide, and about
90% of these accidents are caused by human error.
However, driverless technology supported by AI can
substantially reduce the probability of human error:
by analyzing sensor data in real time, AI can predict
and avoid potential hazards and lower the incidence
of accidents.
In addition, AI optimizes vehicle path planning and
traffic management, which reduces traffic congestion
and improves overall transportation efficiency.
Through machine learning and big-data analysis, AI
can also continuously refine a vehicle's self-diagnosis
and predictive-maintenance functions, improving the
reliability of the system (Bhardwaj, 2024).
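The path-planning capability mentioned above can be illustrated with a minimal sketch: shortest-path search on an occupancy grid. The grid, coordinates, and function name are illustrative, not taken from any cited system; real planners use far richer maps and cost functions.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on a 4-connected occupancy grid.
    grid[r][c] == 1 marks an obstacle; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Reconstruct the route by walking parent links back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # goal unreachable

grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],   # a wall of obstacles with one gap
    [0, 0, 0, 0],
]
path = plan_path(grid, (0, 0), (2, 0))
```

Breadth-first search guarantees the fewest grid steps; production planners replace it with A* or sampling-based methods, but the data flow (map in, route out) is the same.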
However, it is worth noting that although AI has
rapidly advanced driverless technology, it still falls
short in some respects. For example, when a driverless
car faces complex road conditions for which the
system has never processed similar data, it may make
wrong decisions. Likewise, under extreme weather
conditions, degraded sensor data can lead AI to
comparable errors.
Therefore, smarter AI models and better sensors must
be explored so that future AI can quickly analyze
unfamiliar data sets and make correct decisions
efficiently while driving under adverse conditions.
This paper summarizes the relevant models and
algorithms that AI uses in driverless technology,
points out the profound impact the arrival of AI will
have on society, and discusses how governments
should take appropriate measures to prevent the harm
AI may cause in the face of these problems.
2 THE KEY ROLE OF AI IN
DRIVERLESS TECHNOLOGY
AI plays a key role in the field of driverless driving,
and it does so in several specific ways. First, AI
enables self-driving vehicles to perceive the
surrounding environment through advanced sensors
such as cameras, radar, lidar and GPS, and to make
real-time decisions. Second, AI improves the safety
of autonomous driving through algorithms, especially
deep-learning-based methods, which have made
significant progress in key components such as
vehicle perception, object detection and planning.
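The real-time decision step can be illustrated with a toy rule that a planner might apply after perception: brake when the time-to-collision with the vehicle ahead drops below a threshold. The function name, threshold, and all values are hypothetical; real systems combine many such signals.

```python
def should_brake(gap_m, ego_speed_mps, lead_speed_mps, ttc_threshold_s=3.0):
    """Toy real-time decision rule: brake when time-to-collision (TTC)
    with the lead vehicle falls below a threshold (all values illustrative)."""
    closing_speed = ego_speed_mps - lead_speed_mps
    if closing_speed <= 0:
        return False          # not closing in, so no collision course
    ttc = gap_m / closing_speed   # seconds until contact at current speeds
    return ttc < ttc_threshold_s

# 25 m gap, closing at 10 m/s -> TTC = 2.5 s, below the 3 s threshold
decision = should_brake(25.0, 20.0, 10.0)
```

The sketch also shows why perception accuracy matters: an error of a few metres in the estimated gap directly shifts the TTC and can flip the decision.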
An Intel report projected that self-driving
technology will reduce users' commuting times and
save hundreds of thousands of lives over the next
decade. Neural networks have a long history in this
field: the first self-driving vehicle controlled by a
neural network, demonstrated in 1988, generated
steering commands from camera images and laser
rangefinder data. Finally, and most importantly, since
the decision-making process of autonomous driving
systems is opaque to humans, explainable AI (XAI) is
needed to provide transparency in the decision-making
process, enhance user trust in the system, and meet
regulatory requirements (Shahin et al., 2024).
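One common family of XAI techniques is perturbation-based attribution: occlude one input at a time and measure how much the model's output changes. The sketch below applies this idea to a hypothetical linear "brake urgency" score; the feature names and weights are invented for illustration only.

```python
def brake_score(features):
    # Hypothetical "brake urgency" model: a weighted sum of normalized inputs.
    weights = {"obstacle_proximity": 0.6, "speed": 0.3, "rain": 0.1}
    return sum(weights[k] * features[k] for k in weights)

def feature_importance(features, baseline=0.0):
    """Perturbation-based attribution: zero out each feature in turn and
    record how much the score drops. Larger drop = more influential input."""
    base = brake_score(features)
    importance = {}
    for name in features:
        perturbed = dict(features, **{name: baseline})
        importance[name] = base - brake_score(perturbed)
    return importance

scores = feature_importance(
    {"obstacle_proximity": 0.9, "speed": 0.5, "rain": 1.0})
```

For a linear model the attribution simply recovers weight times input, but the same probe works on an opaque deep network, which is what makes it useful for transparency and regulatory review.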
Humans perceive the outside world mainly through
the senses. Similarly, a car perceives the outside
world mainly through sensors, which act as the
"senses" of the car and detect changes in the external
environment. Sensors play an extremely important
role in autonomous driving and are key to the
decision-making process of autonomous vehicles.
The data they collect are heterogeneous and
multimodal, and these data are further integrated to
form effective decision-making rules. In an
autonomous driving system, sensors are the basis for
perceiving the vehicle's surroundings; they also
provide detailed information about objects around the
vehicle, roads and traffic conditions.
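The heterogeneous, multimodal character of this data can be made concrete as one time-stamped record that bundles readings from different modalities, plus a simple decision rule built on top of it. The field names, units, and threshold are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SensorFrame:
    """One time-stamped bundle of heterogeneous sensor readings (illustrative)."""
    timestamp_s: float
    camera_objects: list = field(default_factory=list)  # labels from a detector
    lidar_min_range_m: float = float("inf")             # nearest point-cloud hit
    gps_speed_mps: float = 0.0

def obstacle_alert(frame, safe_range_m=5.0):
    # A decision rule integrating two modalities: something is both seen by the
    # camera and measured as close by the lidar.
    return frame.lidar_min_range_m < safe_range_m and bool(frame.camera_objects)

frame = SensorFrame(12.5, ["pedestrian"], 3.2, 8.0)
```

Requiring agreement between modalities before raising an alert is a crude form of the data integration the text describes: each modality alone is less trustworthy than the combination.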
Sensors can detect obstacles, identify road signs,
and measure vehicle speed and position, which greatly
improves driving safety. Sensors are divided into
short-range, medium-range and long-range according
to the transmission range of their wireless technology.
Specific sensor types include cameras, millimetre
wave radar (MMW-Radar), GPS, inertial
measurement units (IMU), lidar, ultrasonic sensors
and communication modules. Each sensor has its own
characteristics. For example, cameras are widely used
for environmental observation and produce high-
resolution images, but are affected by lighting and
weather conditions. LiDAR can estimate distance and
generate point-cloud data by emitting laser pulses and
measuring the time they take to reflect back, but it is
expensive. In short, each sensor has its own
advantages and disadvantages (Ignatious et al., 2022).
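LiDAR's time-of-flight ranging, and one simple way to exploit complementary sensors, can be sketched as follows. The lidar computation follows directly from the physics stated above (distance = c · t / 2, since the pulse travels out and back); the fusion step is a minimal inverse-variance weighted average, with all variances and values chosen for illustration.

```python
C = 299_792_458.0  # speed of light in m/s

def lidar_range(round_trip_time_s):
    """Time-of-flight ranging: the pulse travels out and back, so halve c*t."""
    return C * round_trip_time_s / 2.0

def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted fusion of two independent range estimates:
    the less noisy sensor receives the larger weight."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    return (w_a * est_a + w_b * est_b) / (w_a + w_b)

r_lidar = lidar_range(200e-9)             # a 200 ns round trip ~ 30 m
r_fused = fuse(r_lidar, 0.01,             # precise lidar estimate
               31.0, 1.0)                 # coarser camera-based estimate
```

Because the lidar's assumed variance is 100× smaller, the fused estimate stays close to the lidar reading while still being nudged by the camera; this weighting idea underlies Kalman-filter-style fusion discussed next.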
It is obvious that a powerful sensor system is
needed to truly realize unmanned driving, so sensor
fusion is performed to complement the strengths and
weaknesses of individual sensors and achieve efficient
operation. Sensor fusion combines data from multiple
sensors to obtain more accurate and reliable
environmental information than any single sensor can
provide. Scholars have studied many sensor-fusion
strategies; this paper mainly discusses three
important fusion levels. (1) Low-level Fusion (LLF):