
in brushstroke techniques. Such ambiguities align
with the findings of Vuttipittayamongkol et al. (2020),
who emphasized that addressing overlapping features
is critical to improving model
performance. These factors
collectively make it challenging for the model to
clearly distinguish between styles, necessitating more
sophisticated approaches to mitigate their impact.
To address these challenges, several
improvements were made to the data processing for
this experiment. The training data were rigorously
screened to filter out transitional-style artworks,
ensuring the representativeness and purity of each
style category and thereby reducing feature confusion
between categories. Furthermore, the sample size was
increased, particularly for categories with complex or
easily confused stylistic features; the expanded
artwork samples further strengthened the model's
grasp of stylistic diversity.
Future research could focus on introducing multi-
label classification methods to enable the model to
identify multiple stylistic features that may coexist
within a single artwork. This approach aligns with the
advancements discussed by Coulibaly et al., who
proposed a Multi-Branch Neural Network (MBNN)
framework for multi-label classification (Coulibaly et
al., 2022). Their work highlights the potential of
combining multitask learning and transfer learning to
enhance the performance of classification models,
particularly for datasets with overlapping features or
complex label structures (Coulibaly et al., 2022). By
applying similar methodologies, art style
classification systems can better reflect the
complexity and diversity of art styles.
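The practical difference between the single-label setup used in this study and the multi-label extension proposed above can be illustrated with a minimal sketch (not from the paper; the style names, logits, and 0.5 threshold are purely illustrative). Replacing a softmax over mutually exclusive classes with independent per-label sigmoids lets the model flag several coexisting styles in one artwork:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax: probabilities sum to 1,
    # so exactly one style "wins".
    e = np.exp(logits - logits.max())
    return e / e.sum()

def sigmoid(logits):
    # Independent per-label probabilities: each style is
    # scored on its own, so several can exceed a threshold.
    return 1.0 / (1.0 + np.exp(-logits))

# Hypothetical logits for a transitional artwork that mixes
# two related styles (indices 0 and 1).
styles = ["Impressionism", "Post-Impressionism", "Cubism", "Baroque"]
logits = np.array([2.1, 1.8, -1.5, -2.0])

# Single-label view: softmax forces one winner.
single = styles[int(np.argmax(softmax(logits)))]

# Multi-label view: sigmoids thresholded at 0.5 report
# every style with sufficient evidence.
multi = [s for s, p in zip(styles, sigmoid(logits)) if p > 0.5]

print(single)  # Impressionism
print(multi)   # ['Impressionism', 'Post-Impressionism']
```

In training, this change corresponds to swapping categorical cross-entropy for a per-label binary cross-entropy, which is the standard loss for multi-label image classification.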
Additionally, incorporating external information,
such as the creation dates of artworks or background
information about the artists, could provide richer
contextual support for classification and enhance the
model’s recognition capability. Coulibaly et al.
(2022) also emphasized the role of external
information, incorporated through pre-trained feature
extractors and attention mechanisms, in improving
classification accuracy. Inspired by this, future
models could leverage contextual data to more
effectively identify complex and transitional styles.
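One simple way to realize this idea is early fusion: concatenating the image embedding with encoded metadata before the classification head. The sketch below is illustrative only (the 1280-dimensional embedding width matches MobileNetV2's final feature depth, but the year range, artist vocabulary size, and random embedding are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1280-d image embedding, the width produced by a
# MobileNetV2 backbone after global average pooling.
image_embedding = rng.standard_normal(1280)

# External metadata: creation year scaled to [0, 1] over an
# assumed covered period, plus a small one-hot artist vector.
year = (1905 - 1400) / (2000 - 1400)
artist_onehot = np.zeros(50)
artist_onehot[7] = 1.0  # hypothetical artist index

# Early fusion: the classifier head would be trained on this
# combined vector instead of the image features alone.
fused = np.concatenate([image_embedding, [year], artist_onehot])

print(fused.shape)  # (1331,)
```

Attention-based fusion, as referenced above, would instead learn how much weight to give the contextual features per example, but the concatenation baseline is the natural starting point.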
4 CONCLUSIONS
This study successfully developed a deep learning-
based system for art style classification, utilizing the
MobileNetV2 model combined with techniques such
as data augmentation and transfer learning.
The system achieved satisfactory accuracy in
classifying eight representative art styles. This
achievement not only provides technical support for
automated art style recognition but also offers
valuable insights into the intersection of artificial
intelligence and cultural heritage preservation.
However, despite these successes, the system's
performance is still limited by challenges such as
stylistic overlap between art styles and the
scarcity of diverse annotated datasets. These limitations
indicate that there is room for improvement in data
preprocessing and feature extraction.
Building on the findings in Section 3.3, future
work could focus on introducing multi-label
classification methods to better capture the
coexistence of multiple stylistic features within a
single artwork. Additionally, integrating contextual
data, such as creation dates or artist backgrounds,
could enhance classification robustness and provide
richer insights. As highlighted by Yu et al. (2021),
combining transfer learning with external contextual
data is a promising approach to address the challenges
of multi-label classification, offering improved model
versatility and generalization. Furthermore, the
outcomes of this study contribute not only to art
education and cultural dissemination but also to
potential applications in cultural heritage
preservation and digital management, thereby driving
technological innovation in the art domain.
REFERENCES
Coulibaly, S., Kamsu-Foguem, B., Kamissoko, D., &
Traore, D. (2022). Deep convolution neural network
sharing for the multi-label images classification.
Machine Learning with Applications, 10, 100422.
Gulzar, Y. (2023). Fruit image classification model based
on MobileNetV2 with deep transfer learning technique.
Sustainability, 15(3), 1906.
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2017).
ImageNet classification with deep convolutional neural
networks. Communications of the ACM, 60(6), 84-90.
Maharana, K., Mondal, S., & Nemade, B. (2022). A review:
Data pre-processing and data augmentation techniques.
Global Transitions Proceedings, 3(1), 91-99.
Saleh, B., & Elgammal, A. (2015). Large-scale
classification of fine-art paintings: Learning the right
metric on the right feature. arXiv preprint
arXiv:1505.00855.
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen,
L. C. (2018). MobileNetV2: Inverted residuals and linear
bottlenecks. In Proceedings of the IEEE conference on
computer vision and pattern recognition (pp. 4510-
4520).
Art-Style Classification Using MobileNetV2: A Deep Learning Approach