Table 2: Model performance.
Metric Value
Accuracy 87.63%
Precision 89.69%
Recall 86.14%
F1-score 87.88%
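For context, the values in Table 2 follow the standard confusion-matrix definitions of these metrics. The short sketch below is written for this discussion and is not taken from the paper (the function name classification_metrics is ours); it shows how the metrics are computed and that the reported F1-score is consistent with the reported precision and recall (their harmonic mean).

```python
# Minimal sketch: accuracy, precision, recall and F1 from confusion-matrix
# counts for a binary "fatigued" vs. "not fatigued" classifier.
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)   # of all "fatigued" predictions, how many were correct
    recall = tp / (tp + fn)      # of all truly fatigued cases, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of P and R
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Check against Table 2: 2 * 0.8969 * 0.8614 / (0.8969 + 0.8614) ≈ 0.8788,
# i.e. the reported F1-score (87.88%) matches the reported precision and recall.
```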
7 CONCLUSIONS
This work focuses on improving the fatigue driving recognition model and on designing a fatigue driving alert system for the IoV. The work has three main advantages. First, it combines image and trajectory information for fatigue driving recognition: the driver's facial expression is captured from the image data, while the vehicle's driving pattern is analyzed from the trajectory data, which improves the accuracy of identifying the fatigued driving state. Second, the two kinds of data are fused at the model level, so recognition does not rely on manually set thresholds; compared with existing multimodal decision-making algorithms, the recognition process is more flexible and stable (an illustrative fusion sketch follows this paragraph). Finally, by combining the fatigue driving recognition system with the IoV, an in-vehicle and inter-vehicle fatigue driving warning system is designed, which provides a new solution for applying fatigue driving recognition algorithms.
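As a rough illustration of what model-level fusion can look like, the sketch below concatenates an image-branch embedding with a trajectory-branch embedding and feeds the result to a shared classifier, so the final decision comes from one learned model rather than per-modality thresholds. This is a minimal sketch written for this discussion, not the paper's exact architecture; the module name FusionClassifier, the embedding sizes, and the two-class output are assumptions.

```python
# Minimal sketch (PyTorch) of model-level fusion: embeddings from an image
# branch and a trajectory branch are concatenated and classified jointly.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, img_dim: int = 512, traj_dim: int = 64, hidden: int = 128):
        super().__init__()
        # img_dim / traj_dim: assumed sizes of the per-modality embeddings,
        # e.g. from a CNN over face images and an RNN over trajectory points.
        self.head = nn.Sequential(
            nn.Linear(img_dim + traj_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),   # two classes: normal vs. fatigued
        )

    def forward(self, img_feat: torch.Tensor, traj_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([img_feat, traj_feat], dim=-1)  # feature-level fusion
        return self.head(fused)                           # class logits

# Usage: logits = FusionClassifier()(torch.randn(8, 512), torch.randn(8, 64))
```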
Based on this study, future research directions are as follows. First, further train the model: the current version is limited by time and resources, and its accuracy can still be improved. Second, experiment with other model frameworks and data sources and train on more datasets to find a better solution for fatigue driving recognition. Finally, in the short term, complete the development of the simulation application for the IoV system and, in the long term, apply the design to a real IoV system; a rough sketch of the kind of inter-vehicle warning message such a system would exchange is given below.
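As an illustration of the inter-vehicle side of the warning design, the sketch below shows the kind of message an IoV node might broadcast to nearby vehicles once the recognition model flags the driver as fatigued. The message fields, the FatigueWarning name, and the send callback are assumptions made for this sketch, not the paper's actual protocol.

```python
# Minimal sketch of an inter-vehicle fatigue warning broadcast (assumed
# message format; the real IoV protocol and fields may differ).
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class FatigueWarning:
    vehicle_id: str             # pseudonymous ID of the reporting vehicle
    timestamp: float            # UNIX time when fatigue was detected
    latitude: float             # last known position of the vehicle
    longitude: float
    fatigue_probability: float  # model confidence for the "fatigued" class

def maybe_broadcast(prediction: int, probability: float, state: dict, send) -> None:
    """Broadcast a warning to nearby vehicles when the model predicts fatigue."""
    if prediction == 1:  # 1 = fatigued, as output by the fused classifier
        msg = FatigueWarning(vehicle_id=state["id"], timestamp=time.time(),
                             latitude=state["lat"], longitude=state["lon"],
                             fatigue_probability=probability)
        send(json.dumps(asdict(msg)))  # e.g. over a simulated V2V channel
```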
AUTHORS' CONTRIBUTIONS
All authors contributed equally, and their names are listed in alphabetical order.