
Automatic driving target detection methods fall mainly into two families: Faster-RCNN (Zhou et al., 2021), which operates on RGB images, and PointNet, which operates on point cloud data. RGB-based methods emphasize visual recognition, whereas point cloud methods are better suited to capturing three-dimensional structure. In addition, multi-modal fusion combines data from different sensors and thereby improves perception, localization, and decision-making. Future research is likely to make progress on three fronts: improving sensor data quality, optimizing multi-modal fusion algorithms, and enhancing model generalization. First, improving sensor data quality is the foundation of safe operation under complex road conditions; accurate data acquisition in bad weather is especially critical. Second, optimized multi-modal fusion will let autonomous vehicles combine data from cameras, LiDAR, and radar more effectively, strengthening environmental awareness and decision-making. Third, enhancing model generalization will be key to coping with different geographical environments and uncertainty, where robustness must improve substantially. Progress on these fronts will advance the safety and reliability of autonomous driving systems in the real world and prepare them for wider deployment.
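To make the multi-modal fusion idea concrete, the following is a minimal late-fusion sketch: camera detections are associated with LiDAR-derived detections (projected to the image plane) by intersection-over-union, and matched pairs receive an averaged confidence. All function names, the box format, and the 0.5 IoU threshold are illustrative assumptions, not taken from the papers cited above.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0


def fuse_detections(camera_dets, lidar_dets, iou_thresh=0.5):
    """Late fusion sketch: pair each camera detection (box, score) with the
    best-overlapping LiDAR detection; a matched pair gets the mean of the
    two confidence scores, an unmatched camera detection keeps its own."""
    fused = []
    for box_c, score_c in camera_dets:
        best = max(lidar_dets, key=lambda d: iou(box_c, d[0]), default=None)
        if best is not None and iou(box_c, best[0]) >= iou_thresh:
            fused.append((box_c, 0.5 * (score_c + best[1])))
        else:
            # No corroborating LiDAR evidence: fall back to the camera score.
            fused.append((box_c, score_c))
    return fused
```

For example, a camera detection at (0, 0, 10, 10) with score 0.8 and a LiDAR detection at (1, 1, 10, 10) with score 0.6 overlap with IoU 0.81, so they fuse to a confidence of 0.7. Real systems replace this heuristic with learned feature-level fusion, but the association-then-combine structure is the same.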
REFERENCES
Daily, M., Medasani, S., Behringer, R., & Trivedi, M.,
2017. Self-driving cars. Computer, 50(12), pp. 18–23.
Dickmann, J., et al., 2016. Automotive radar: The key
technology for autonomous driving: From detection
and ranging to environmental understanding. In 2016
IEEE Radar Conference (RadarConf), Philadelphia,
PA, USA, pp. 1–6.
Faisal, A., et al., 2019. Understanding autonomous vehicles:
A systematic literature review on capability, impact,
planning, and policy. Journal of Transport and Land
Use, 12(1), pp. 45–72.
Hussain, M. I., Azam, S., Rafique, M. A., Sheri, A. M., &
Jeon, M., 2022. Drivable region estimation for self-
driving vehicles using radar. IEEE Transactions on
Vehicular Technology, 71(6), pp. 5971–5982.
Liu, Z., Zhang, X., He, H., & Zhao, Y., 2022. Robust target
recognition and tracking of self-driving cars with radar
and camera information fusion under severe weather
conditions. IEEE Transactions on Intelligent
Transportation Systems, 23(7), pp. 6640–6653.
Ni, J., Shen, K., Chen, Y., Cao, W., & Yang, S. X., 2022.
An improved deep network-based scene classification
method for self-driving cars. IEEE Transactions on
Instrumentation and Measurement, 71, pp. 1–14.
Niranjan, D. R., VinayKarthik, B. C., & Mohana, 2021.
Deep learning-based object detection model for
autonomous driving research using CARLA simulator.
In 2021 2nd International Conference on Smart
Electronics and Communication (ICOSEC), Trichy,
India, pp. 1251–1258.
Paigwar, A., Erkent, O., Wolf, C., & Laugier, C., 2019.
Attentional PointNet for 3D-object detection in point
clouds. In Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition (CVPR)
Workshops.
Ren, M., He, P., & Zhou, J., 2021. Improved shape-based
distance method for correlation analysis of multi-radar
data fusion in self-driving vehicles. IEEE Sensors
Journal, 21(21), pp. 24771–24781.
Shahian Jahromi, B., Tulabandhula, T., & Cetin, S., 2019.
Real-time hybrid multi-sensor fusion framework for
perception in autonomous vehicles. Sensors, 19(19),
4357.
Silva, A. L., Oliveira, P., Duraes, D., Fernandes, D., Nevoa,
R., Monteiro, J., Melo-Pinto, P., & Machado, J., 2023.
A framework for representing, building, and reusing
novel state-of-the-art three-dimensional object
detection models in point clouds targeting self-driving
applications. Sensors, 23(23), 6427.
Simhambhatla, R., Okiah, K., Kuchkula, S., & Slater, R.,
2019. Self-driving cars: Evaluation of deep learning
techniques for object detection in different driving
conditions. SMU Data Science Review, 2(1), pp. 1–23.
Wang, L., & Goldluecke, B., 2021. Sparse-PointNet: See
further in autonomous vehicles. IEEE Robotics and
Automation Letters, 6(4), pp. 7049–7056.
Yang, T., & Lv, C., 2022. A secure sensor fusion
framework for connected and automated vehicles under
sensor attacks. IEEE Internet of Things Journal, 9(22),
pp. 22357–22365.
Zhou, Y., Wen, S., Wang, D., Mu, J., & Richard, I., 2021.
Object detection in autonomous driving scenarios based
on an improved Faster-RCNN. Applied Sciences,
11(11), 11630.
DAML 2024 - International Conference on Data Analysis and Machine Learning