Authors:
Pranav Jeevan, Nikhil Kurian and Amit Sethi
Affiliation:
Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai, India
Keyword(s):
Histopathology, Classification, Vision-Transformer, Token-Mixers, Generalization.
Abstract:
Convolutional neural networks (CNNs) are widely used in medical image analysis, but their performance degrades when the magnification of testing images differs from that of training images. The inability of CNNs to generalize across magnification scales can result in sub-optimal performance on external datasets. This study evaluates the robustness of various deep learning architectures for breast cancer histopathological image classification when the magnification scale differs between the training and testing stages. We compare the performance of multiple deep learning architectures, including CNN-based ResNet and MobileNet, self-attention-based Vision Transformers and Swin Transformers, and token-mixing models such as FNet, ConvMixer, MLP-Mixer, and WaveMix. The experiments are conducted on the BreakHis dataset, which contains breast cancer histopathological images at multiple magnification levels. We show that the performance of WaveMix is invariant to the magnification of the training and testing data and that it provides stable and high classification accuracy. These evaluations are critical for identifying deep learning architectures that can robustly handle domain shifts such as changes in magnification scale.
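The abstract describes a cross-magnification protocol: train a classifier on BreakHis images at one magnification and evaluate it on images at a different magnification. Below is a minimal PyTorch sketch of such a protocol, not the authors' implementation; the directory layout, the backbone (a torchvision ResNet-18 standing in for any of the compared models), and the hyperparameters are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): cross-magnification evaluation on BreakHis.
# Assumes images are organized into per-magnification ImageFolder directories, e.g.
# breakhis/40X/{benign,malignant}/... and breakhis/100X/{benign,malignant}/...
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical directory layout; adjust paths to a local copy of BreakHis.
train_set = datasets.ImageFolder("breakhis/40X", transform=tfm)   # training magnification
test_set  = datasets.ImageFolder("breakhis/100X", transform=tfm)  # different testing magnification
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
test_loader  = DataLoader(test_set, batch_size=32)

# Any backbone can be dropped in here (ResNet, MobileNet, ViT, WaveMix, ...);
# a torchvision ResNet-18 is used purely for illustration.
model = models.resnet18(num_classes=2).to(device)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    model.train()
    for x, y in train_loader:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# Evaluate on the held-out magnification to probe robustness to scale changes.
model.eval()
correct = total = 0
with torch.no_grad():
    for x, y in test_loader:
        pred = model(x.to(device)).argmax(dim=1).cpu()
        correct += (pred == y).sum().item()
        total += y.numel()
print(f"Cross-magnification accuracy: {correct / total:.3f}")
```

Repeating this loop for each (train magnification, test magnification) pair yields the accuracy matrix used to compare how strongly each architecture degrades under a magnification shift.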