reliance on local receptive fields may limit its ability
to integrate cross-magnification information
effectively. Future research could explore self-
attention-based architectures, such as ViT or Swin
Transformer, to improve global feature extraction.
Additionally, leveraging DenseNet or other feature-
reuse networks may enhance robustness, especially in
small-sample scenarios.
Second, although the EBHI dataset was used to
validate the proposed approach, further experiments
on larger and more diverse pathological datasets are
necessary to assess the generalizability and stability
of the method. Additionally, in real-world clinical
practice, pathologists rely not only on static images
but also on clinical history and lesion evolution over
time. Future research should explore multimodal fusion models that integrate multi-magnification information with other clinical data to enhance diagnostic decision-making.
Overall, while this study demonstrates the
effectiveness of stepwise learning and multi-
magnification fusion, further improvements in model
selection, dataset diversity, and clinical applicability
are necessary to enhance the practical deployment of
such methods in pathology.
4 CONCLUSION
This study systematically investigates the impact of
multi-magnification information on pathological
image classification by designing and validating three
learning strategies: Single-Magnification Training,
Multi-Channel Fusion, and Stepwise Cumulative
Learning. The experiments, conducted using
ResNet50 on the EBHI dataset, demonstrate the
effectiveness of the proposed strategies in enhancing
classification performance.
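As a rough illustration of the Stepwise Cumulative Learning strategy, the following Python sketch fine-tunes a single ResNet50 while the training pool grows by one magnification level per stage. The staging order, data handling, and hyperparameters are illustrative assumptions (PyTorch/torchvision), not the exact configuration used in this study.

# Minimal sketch of stepwise cumulative learning with ResNet50.
# Assumptions: PyTorch/torchvision; datasets_by_magnification is an ordered
# list of (magnification, Dataset) pairs; hyperparameters are illustrative.
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import models

def build_resnet50(num_classes):
    # ImageNet-pretrained backbone with a new classification head
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def train_one_stage(model, loader, device, epochs=5, lr=1e-4):
    # Fine-tune the shared model on the current cumulative training pool
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model

def stepwise_cumulative_training(datasets_by_magnification, num_classes, device):
    # Each stage adds one magnification level to the training pool and
    # continues fine-tuning the same network, so features learned at earlier
    # magnifications are accumulated rather than relearned.
    model = build_resnet50(num_classes).to(device)
    pool = []
    for magnification, dataset in datasets_by_magnification:
        pool.append(dataset)
        loader = DataLoader(ConcatDataset(pool), batch_size=32, shuffle=True)
        model = train_one_stage(model, loader, device)
    return model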
In Single-Magnification Training, the optimization techniques applied in this study raised classification accuracy at 200× magnification from the previously reported best of 83.81% to 94.64%. Stepwise Cumulative Learning
achieved the highest accuracy among all strategies,
particularly in malignant pathology detection, where
it further improved classification accuracy to 98.27%
on 400× test images. Additionally, the study highlights the impact of different strategies for filling in missing-magnification images, showing that the Strict Filtering approach yields the best classification performance (96.06%).
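For reference, a minimal sketch of the Strict Filtering idea is given below, under the assumption that it retains only cases for which every required magnification is available; the data layout and magnification list are illustrative, not the exact implementation used in this study.

# Minimal sketch of Strict Filtering (assumed behaviour: discard any case that
# lacks an image at one or more required magnifications). The dictionary
# layout {case_id: {magnification: image_path}} is an assumption.
REQUIRED_MAGNIFICATIONS = ("40x", "100x", "200x", "400x")  # illustrative list

def strict_filter(cases):
    return {
        case_id: images
        for case_id, images in cases.items()
        if all(m in images for m in REQUIRED_MAGNIFICATIONS)
    }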
These findings suggest that progressively
incorporating low-magnification information
enhances the model’s ability to extract discriminative
features, improving overall classification accuracy.
Moreover, this study validates the suitability of the
EBHI dataset for multi-magnification learning
research, providing a useful reference for future
dataset selection.
In summary, this study presents a novel
optimization approach for multi-magnification
pathological image classification, laying the
groundwork for future advancements in intelligent
pathology image analysis.
REFERENCES
Altman, D. G., & Bland, J. M., 1994. Diagnostic tests. 1:
Sensitivity and specificity. BMJ: British Medical
Journal, 308(6943), 1552.
Das, K., Karri, S. P. K., Roy, A. G., Chatterjee, J., & Sheet,
D., 2017. Classifying histopathology whole-slides
using fusion of decisions from deep convolutional
network on a collection of random multi-views at multi-
magnification. In 2017 IEEE 14th International
Symposium on Biomedical Imaging (ISBI 2017), pp.
1024-1027. IEEE.
Hao, R., Namdar, K., Liu, L., Haider, M. A., & Khalvati, F.,
2021. A comprehensive study of data augmentation
strategies for prostate cancer detection in diffusion-
weighted MRI using convolutional neural networks.
Journal of Digital Imaging, 34, 862-876.
Hashimoto, N., Fukushima, D., Koga, R., Takagi, Y., Ko,
K., Kohno, K., et al., 2020. Multi-scale domain-
adversarial multiple-instance CNN for cancer subtype
classification with unannotated histopathological
images. In Proceedings of the IEEE/CVF conference on
computer vision and pattern recognition, pp. 3852-
3861.
He, K., Zhang, X., Ren, S., & Sun, J., 2016. Deep residual
learning for image recognition. In Proceedings of the
IEEE conference on computer vision and pattern
recognition, pp. 770-778.
Hu, W., Li, C., Rahaman, M. M., Chen, H., Liu, W., Yao,
Y., et al., 2023. EBHI: A new Enteroscope Biopsy
Histopathological H&E Image Dataset for image
classification evaluation. Physica Medica, 107, 102534.
Khan, A. A., Arslan, M., Tanzil, A., Bhatty, R. A., Khalid,
M. A. U., & Khan, A. H., 2024. Classification of colon
cancer using deep learning techniques on
histopathological images. Migration Letters, 21(S11),
449-463.
Kim, S. H., Koh, H. M., & Lee, B. D., 2021. Classification
of colorectal cancer in histological images using deep
neural networks: An investigation. Multimedia Tools
and Applications, 80(28), 35941-35953.
Malik, J., Kiranyaz, S., Kunhoth, S., Ince, T., Al-Maadeed,
S., Hamila, R., & Gabbouj, M., 2019. Colorectal cancer