
shape, and brightness. Because there are not enough
ophthalmologists in India, screening each patient manually is
time-consuming. The Indian Diabetic Retinopathy Image Dataset
(IDRiD) consists of 512 images, each with a resolution of
4288 × 2848 pixels. The dataset covers five DR severity grades
and three DME grades and provides the DR and DME severity level
for each image. It also provides annotations of normal retinal
structures and DR lesions.
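For orientation, the sketch below shows one way such per-image DR/DME grade labels and fundus images could be loaded in Python; the CSV file name, column names, and image file pattern are assumptions for illustration, not the dataset's official layout.

```python
import pandas as pd
from PIL import Image

# Hypothetical layout: a CSV of per-image grades plus a folder of fundus images.
labels = pd.read_csv("idrid_grading_labels.csv")   # assumed columns below
print(labels["Retinopathy grade"].value_counts())  # distribution over the five DR grades
print(labels["Risk of macular edema"].value_counts())  # three DME grades

img = Image.open("images/IDRiD_001.jpg")           # assumed file-name pattern
print(img.size)                                    # (4288, 2848) per the dataset description
```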
2 LITERATURE SURVEY
Hua et al. proposed a design called TFA-Net, in which a Twofold
Feature Augmentation (TFA) mechanism is connected to a backbone
convolutional network. The backbone uses several convolution
blocks to extract representational information at different
scales (Bilal et al., 2021). The TFA mechanism is built in two
stages: a Reverse Cross-Attention (RCA) stream is deployed
first, and weight-sharing convolution kernels are then
employed.
M. M. Abdelsalam and M. A. Zahran proposed a multifractal
geometry-based technique for early DR detection. The method
analyzes macular optical coherence tomography angiography
(OCTA) images for the early detection of non-proliferative
diabetic retinopathy (NPDR) (Chaudhary and Pachori, 2022).
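The multifractal descriptors themselves are not detailed here; as a rough illustration of fractal-geometry analysis of retinal vasculature, the sketch below estimates a simple (mono-fractal) box-counting dimension from a binary vessel mask. It is only a stand-in for the authors' multifractal measures.

```python
import numpy as np

def box_counting_dimension(mask: np.ndarray, sizes=(2, 4, 8, 16, 32, 64)) -> float:
    """Estimate the fractal (box-counting) dimension of a binary vessel mask."""
    counts = []
    for s in sizes:
        # Trim so the image tiles evenly, then count boxes containing any foreground pixel.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        tiles = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(tiles.any(axis=(1, 3))))
    # Slope of log(count) versus log(1/size) gives the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```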
X. Zeng et al. proposed automated diagnosis of diabetic
retinopathy by classifying color retinal fundus photographs
into two categories (Dharmana and Aiswarya, 2020). Their work
uses transfer learning to train a convolutional neural network
model with a Siamese-like topology.
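A hedged sketch of what a Siamese-like, transfer-learned classifier can look like is given below; the backbone choice, the paired inputs, and the two-way head are assumptions, since the exact design is not described here.

```python
import torch
import torch.nn as nn
from torchvision import models

class SiameseFundusNet(nn.Module):
    """Two-branch network with shared weights, e.g. comparing a pair of fundus images."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)  # transfer learning
        backbone.fc = nn.Identity()              # keep the 512-d embedding
        self.backbone = backbone
        self.classifier = nn.Linear(512 * 2, 2)  # two categories: DR / no DR

    def forward(self, img_a, img_b):
        feat_a = self.backbone(img_a)            # the same weights process both branches
        feat_b = self.backbone(img_b)
        return self.classifier(torch.cat([feat_a, feat_b], dim=1))
```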
L. Qiao et al. suggested a system that uses convolutional
neural networks to analyze fundus images for the presence of
microaneurysms. Deep learning is a key component, and the
system is accelerated by Graphics Processing Units (GPUs) (Hua
et al., 2020).
K. Shankar et al. proposed an automated Hyperparameter Tuning
Inception-v4 (HPTI-v4) model to recognize and categorize DR in
color fundus images (Shankar et al., 2020). Contrast limited
adaptive histogram equalization (CLAHE) is applied during
preprocessing to raise the contrast of the fundus image, and
the preprocessed image is then segmented using a
histogram-based segmentation model.
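A minimal CLAHE preprocessing sketch using OpenCV is shown below; the clip limit, tile size, and green-channel choice are common defaults rather than the settings of the cited work.

```python
import cv2

def clahe_green_channel(path: str, clip_limit: float = 2.0, tile_grid=(8, 8)):
    """Raise fundus-image contrast with CLAHE on the green channel."""
    bgr = cv2.imread(path)
    green = bgr[:, :, 1]  # green channel usually shows retinal lesions most clearly
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(green)
```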
J. Wang et al. proposed a CNN-based multi-label classification
ensemble model that identifies one or more fundus illnesses
directly from a retinal fundus image. Each model has two
components: the first is an EfficientNet-based feature
extraction network, and the second is a proprietary
classification neural network for multi-label classification.
The final recognition result is a fusion of the output
probabilities from the individual models. Training and testing
were conducted on the dataset released by the Peking University
International Competition on Ocular Disease Intelligent
Recognition (ODIR 2019) (Wang et al., 2020b).
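A rough sketch of one ensemble member and a simple probability fusion follows; EfficientNet-B0 is used as a stand-in backbone, the eight-label head reflects the ODIR 2019 categories, and averaging is only one possible fusion rule, not necessarily the one used in the cited work.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_LABELS = 8  # ODIR 2019 defines eight disease labels

def make_member() -> nn.Module:
    """One ensemble member: EfficientNet feature extractor + multi-label head."""
    m = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
    m.classifier = nn.Sequential(nn.Dropout(0.2), nn.Linear(1280, NUM_LABELS))
    return m

def ensemble_predict(members, x: torch.Tensor) -> torch.Tensor:
    """Fuse per-model sigmoid probabilities by averaging."""
    with torch.no_grad():
        probs = [torch.sigmoid(m.eval()(x)) for m in members]
    return torch.stack(probs).mean(dim=0)
```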
Juan Wang et al. proposed a hierarchical multi-task deep
learning architecture for diagnosing the DR-related features
and the DR severity of fundus photographs concurrently (Wang et
al., 2020a). A hierarchical framework is proposed to account
for the relationship between DR severity levels and DR-related
features.
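For illustration only, a flat two-head multi-task sketch is given below (shared backbone, one head for DR-related features, one for severity); the hierarchical coupling between the two tasks and the head sizes are assumptions, not the cited architecture.

```python
import torch.nn as nn
from torchvision import models

class MultiTaskDRNet(nn.Module):
    """Shared backbone with one head for DR-related features and one for DR severity."""
    def __init__(self, num_features: int = 4, num_grades: int = 5):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        backbone.fc = nn.Identity()
        self.backbone = backbone
        self.feature_head = nn.Linear(2048, num_features)  # e.g. lesion-presence labels
        self.severity_head = nn.Linear(2048, num_grades)   # DR grades 0-4

    def forward(self, x):
        z = self.backbone(x)
        return self.feature_head(z), self.severity_head(z)
```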
M. D. Alahmadi et al. created a deep neural network that
employs style and content recalibration to scale informative
regions for diabetic retinopathy classification. To draw
attention to texture details in the style representation, the
texture attention module applies a high-pass filter, and to
identify the most informative area of the input image, the
spatial normalization module uses a convolutional approach
(Alahmadi, 2022).
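As a small sketch of a high-pass filtering step of this kind (the cited work's exact filter is not specified here; a fixed Laplacian kernel is assumed):

```python
import torch
import torch.nn.functional as F

def high_pass(x: torch.Tensor) -> torch.Tensor:
    """Apply a fixed Laplacian kernel as a simple high-pass filter to emphasize texture.

    x: (N, C, H, W) feature map or image batch.
    """
    kernel = torch.tensor([[0., -1., 0.],
                           [-1., 4., -1.],
                           [0., -1., 0.]], device=x.device, dtype=x.dtype).view(1, 1, 3, 3)
    kernel = kernel.expand(x.size(1), 1, 3, 3).contiguous()  # one filter per channel
    return F.conv2d(x, kernel, padding=1, groups=x.size(1))
```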
W. Nazih et al. proposed an automated method for determining
the severity of DR in fundus images. To capture long-range
correlations in images, they developed a vision transformer
deep learning pipeline (Nazih et al., 2023). Transfer learning
was employed to train the large vision model on a limited
dataset, and the model was trained and evaluated on the
real-world FGADR dataset.
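A minimal transfer-learning sketch in this spirit is shown below, using the torchvision ViT-B/16 as an assumed stand-in for the cited pipeline and a five-grade head; freezing the pretrained encoder is one way to cope with a small dataset.

```python
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained vision transformer with a new 5-way DR-severity head.
vit = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
for p in vit.parameters():
    p.requires_grad = False                                  # freeze the encoder
vit.heads.head = nn.Linear(vit.heads.head.in_features, 5)    # new head stays trainable
```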
Zhou et al. developed a methodology for generating
high-resolution DR images that works well with grading and
lesion data. The synthesized data can improve grading model
performance, especially for images with high DR grades (Zhou et
al., 2020).
Natarajan Chidambaram et al. focused on an automated CAD system
that can identify and categorize exudates in DR. Prior research
mostly concentrated on region-based techniques, such as the
Hough transform, watershed transform, and region growing
approaches, to segment the optic disc (Chidambaram and Vijayan,
2018).
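As an illustration of the Hough-transform approach to optic-disc localization mentioned above, the OpenCV sketch below detects a circular candidate region; the blur and radius parameters are illustrative assumptions, not tuned values.

```python
import cv2
import numpy as np

def detect_optic_disc(path: str):
    """Locate a roughly circular optic-disc candidate via the Hough circle transform."""
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress noise before circle detection
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                               param1=100, param2=30, minRadius=40, maxRadius=120)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)  # strongest circle: center and radius
    return x, y, r
```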
The system of Bindhumol et al. makes use of transfer learning
models, namely ResNet50 and EfficientNetB5. A comparison of the
two models' classification and confusion matrix results showed
that ResNet50 classified the DR images better than
EfficientNetB5 (Bindhumol et al., 2022).
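A hedged sketch of such a transfer-learning setup and of a confusion-matrix comparison follows; the five-class head and the evaluation helpers are assumptions for illustration, not the cited configuration.

```python
import torch.nn as nn
from torchvision import models
from sklearn.metrics import classification_report, confusion_matrix

def build_resnet50(num_classes: int = 5) -> nn.Module:
    """ImageNet-pretrained ResNet50 with a new head for DR grading (class count assumed)."""
    m = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    m.fc = nn.Linear(m.fc.in_features, num_classes)
    return m

def compare(y_true, preds_resnet, preds_effnet):
    """Side-by-side classification reports and confusion matrices for the two backbones."""
    for name, y_pred in [("ResNet50", preds_resnet), ("EfficientNetB5", preds_effnet)]:
        print(name)
        print(classification_report(y_true, y_pred))
        print(confusion_matrix(y_true, y_pred))
```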
Meher Madhu Dharmanan et al. presented an effective,
straightforward, and precise feature extraction technique based
on blob detection and image preprocessing. In the proposed
paradigm, testing