capable of deepfake detection and, therefore, good at unveiling minute manipulations (M. Venkateswarlu, 2025; N. Saravanan et al., 2025).
Deepfake generation methods, including face swapping, expression manipulation, and synthesis-based facial animation, differ in the traits they embed and therefore require a contrasting range of capabilities for accurate identification. ResNet50 excels at feature extraction, which allows it to withstand variation in these attributes (H. Chen, 2025).
Due to its strong generalizability across varied datasets, it can distinguish deepfakes originating from different sources and formats, a valuable property for real-world deployment. The very high accuracy shows the promise of deep learning-based methods such as ResNet50. Future work will increase dataset diversity, enhance pre-processing, and boost accuracy with ensemble learning. Attention mechanisms and adversarial training will improve robustness, and further optimization will minimize computational overhead. Extracting both image and audio features for the detection model is expected to increase performance further.
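As an illustrative sketch of the ensemble direction mentioned above, the following PyTorch/torchvision snippet combines ResNet50 and MobileNetV2 outputs; the tooling, the two-class real/deepfake label convention, and the simple probability averaging are assumptions for illustration, not the exact pipeline used in this work.

import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative ensemble of the two backbones compared in this work.
# Each backbone's final layer is replaced with a two-class (real / deepfake)
# head; the trained weights and label convention are assumptions here.
resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = torch.nn.Linear(resnet.fc.in_features, 2)

mobilenet = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
mobilenet.classifier[1] = torch.nn.Linear(mobilenet.last_channel, 2)

def ensemble_predict(batch):
    # batch: float tensor of shape (N, 3, 224, 224), ImageNet-normalised.
    resnet.eval()
    mobilenet.eval()
    with torch.no_grad():
        p_resnet = F.softmax(resnet(batch), dim=1)
        p_mobilenet = F.softmax(mobilenet(batch), dim=1)
    # Simple probability averaging; weighted or stacked ensembles are
    # equally possible variants of the future work described above.
    return (p_resnet + p_mobilenet) / 2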
7 CONCLUSIONS
The deepfake detection system was designed, trained, and tested to differentiate between real and deepfake images using the ResNet50 algorithm. The results showed that the ResNet50 model performed significantly better than the MobileNetV2 model, achieving between 91.81 % and 97.87 % detection accuracy, while MobileNetV2 achieved 86.37 % to 89.56 %. This wide margin reflects the ResNet50 model's ability to detect minute anomalies and manipulations in facial images, which is essential for high-accuracy applications. In terms of stability, MobileNetV2 showed a standard deviation of 1.01267, indicating some variability in performance across the different datasets, whereas the lower standard deviation of 0.86375 for the ResNet50 model indicates consistently good performance across varying datasets.
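For reference, the stability figures quoted above correspond to the standard deviation of per-dataset accuracies. The short sketch below illustrates that computation with hypothetical placeholder values, not the actual per-dataset results reported in this work.

import statistics

# Hypothetical per-dataset accuracies (placeholders only; the paper reports
# the resulting ranges and standard deviations, not these raw values).
resnet50_acc = [91.81, 94.3, 96.1, 97.87]
mobilenetv2_acc = [86.37, 87.2, 88.9, 89.56]

for name, acc in (("ResNet50", resnet50_acc), ("MobileNetV2", mobilenetv2_acc)):
    print(f"{name}: mean = {statistics.mean(acc):.2f}, "
          f"std = {statistics.stdev(acc):.5f}")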
REFERENCES
A. V. Santhosh Babu et al., “Performance analysis on cluster-based intrusion detection techniques for energy efficient and secured data communication in MANET,” International Journal of Information Systems and Change Management, Aug. 2019, Accessed: Feb. 03, 2025. [Online]. Available: https://www.inderscienceonline.com/doi/10.1504/IJISCM.2019.101649
A. Qadir, R. Mahum, M. A. El-Meligy, A. E. Ragab, A.
AlSalman, and M. Awais, “An efficient deepfake video
detection using robust deep learning,” Heliyon, vol. 10,
no. 5, p. e25757, Mar. 2024.
C. Yang, S. Ding, and G. Zhou, “Wind turbine blade
damage detection based on acoustic signals,” Sci Rep,
vol. 15, no. 1, p. 3930, Jan. 2025.
C. Wang, C. Shi, S. Wang, Z. Xia, and B. Ma, “Dual-Task
Mutual Learning with QPHFM Watermarking for
Deepfake Detection.” Accessed: Feb. 03, 2025.
[Online]. Available:
https://doi.org/10.1109/LSP.2024.3438101
D. Zhu, C. Li, Y. Ao, Y. Zhang, and J. Xu, “Position
detection of elements in off-axis three-mirror space
optical system based on ResNet50 and LSTM,” Opt
Express, vol. 33, no. 1, pp. 592–603, Jan. 2025.
E. Şafak and N. Barışçı, “Detection of fake face images
using lightweight convolutional neural networks with
stacking ensemble learning method,” PeerJ Comput
Sci, vol. 10, p. e2103, Jun. 2024.
H. Chen, G. Hu, Z. Lei, Y. Chen, N. M. Robertson, and S. Z. Li, “Attention-Based Two-Stream Convolutional Networks for Face Spoofing Detection.” Accessed: Feb. 03, 2025. [Online]. Available: https://doi.org/10.1109/TIFS.2019.2922241
I. N. K. Wardana, “Design of mobile robot navigation
controller using neuro-fuzzy logic system,” Computers
and Electrical Engineering, vol. 101, p. 108044, Jul.
2022.
K. Stehlik-Barry and A. J. Babinec, Data Analysis with
IBM SPSS Statistics. Packt Publishing Ltd, 2017.
L. Pham, P. Lam, T. Nguyen, H. Nguyen, and A. Schindler,
“Deepfake Audio Detection Using Spectrogram-based
Feature and Ensemble of Deep Learning Models.”
Accessed: Feb. 03, 2025. [Online]. Available:
https://doi.org/10.1109/IS262782.2024.10704095
L. Si et al., “A Novel Coal-Gangue Recognition Method for Top Coal Caving Face Based on IALO-VMD and Improved MobileNetV2 Network.” Accessed: Feb. 03, 2025. [Online]. Available: https://doi.org/10.1109/TIM.2023.3316250
L. Zhou, C. Ma, Z. Wang, Y. Zhang, X. Shi, and L. Wu, “Robust Frame-Level Detection for Deepfake Videos with Lightweight Bayesian Inference Weighting.” Accessed: Feb. 03, 2025. [Online]. Available: https://doi.org/10.1109/JIOT.2023.3337128
M. C. Gursesli, S. Lombardi, M. Duradoni, L. Bocchi, A.
Guazzini, and A. Lanata, “Facial Emotion Recognition
(FER) Through Custom Lightweight CNN Model:
Performance Evaluation in Public Datasets.” Accessed: