(Hollon et al. 2020) presented a parallel approach that combines deep convolutional neural networks (CNNs) with stimulated Raman histology (SRH), a label-free optical imaging technique, to predict disease in near real time. Their CNNs, trained on more than 2.5 million SRH images, can diagnose brain tumors in the operating room in under 150 seconds, which is orders of magnitude faster than conventional intraoperative diagnosis that typically takes 20–30 minutes.
(Arif F et al. 2022) built the proposed system on a deep learning classifier and the Berkeley wavelet transform (BWT) in order to enhance performance and streamline medical image segmentation. Significant features are extracted from each segmented tissue using the gray-level co-occurrence matrix (GLCM) approach and then optimized with a genetic algorithm. The performance of the approach was evaluated on measures including accuracy, sensitivity, specificity, spatial overlap, AVME, FoM, Jaccard's coefficient, and the Dice coefficient.
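As a rough illustration of the GLCM step described above, the following sketch computes a small set of texture features from a segmented tissue patch using scikit-image; the chosen distances, angles, and texture properties are assumptions rather than the exact configuration used by Arif et al., and the subsequent genetic-algorithm optimization is not shown.

```python
# Minimal sketch of GLCM texture-feature extraction from a segmented tissue
# region, assuming scikit-image; distances, angles, and properties are
# illustrative, not the exact settings of Arif et al. (2022).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(region: np.ndarray) -> np.ndarray:
    """Compute a small GLCM feature vector for an 8-bit grayscale region."""
    glcm = graycomatrix(
        region,
        distances=[1],                     # pixel-pair offset
        angles=[0, np.pi / 4, np.pi / 2],  # three orientations
        levels=256,
        symmetric=True,
        normed=True,
    )
    props = ["contrast", "homogeneity", "energy", "correlation"]
    # Average each property over the angles to obtain one value per property.
    return np.array([graycoprops(glcm, p).mean() for p in props])

# Example: features for a random 64x64 "tissue patch".
patch = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
print(glcm_features(patch))
```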
(Alsaif et al. 2022) report that the suggested approach performs exceptionally well with respect to the initial cluster centers and cluster size. Segmentation is carried out using BWT-based methods, which are limited in computational speed and accuracy. The paper proposes a method for segmenting brain tissue that requires very little human intervention, the primary aim being to speed up patient identification for neurosurgeons and other human experts. Compared with state-of-the-art techniques, the method reaches an accuracy of 98.5% on the test data. There is still room for improvement in computational time, system complexity, and memory usage when executing the algorithms. The same methodology can also be applied to identifying and examining diseases in other organs, such as the kidney, liver, or lungs, and several classifiers could be combined with optimization techniques.
Using the Faster R-CNN deep learning architecture, (R. Sa et al. 2017) propose a method to detect intervertebral discs in X-ray images. The network is employed to improve the accuracy and efficiency of intervertebral disc recognition, a vital stage in diagnosing spinal problems. Their methodology shows significant improvements in detection accuracy over traditional approaches, highlighting the potential of Faster R-CNN for medical image processing and demonstrating how advanced deep learning methods can strengthen diagnostic capability in radiology. Traditional machine learning methods require manually engineered features for classification, whereas deep learning systems can be developed to yield accurate classification results without human feature extraction. Because the first dataset contains a large number of MRI images, a 23-layer CNN is used to build the initial models.
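For context, the sketch below shows how a Faster R-CNN detector can be applied to a radiograph using torchvision; the COCO-pretrained weights merely stand in for the disc detector trained by R. Sa et al. (2017), and the confidence threshold is an illustrative choice.

```python
# Minimal sketch of running a Faster R-CNN detector on a radiograph with
# torchvision; the COCO-pretrained weights stand in for the intervertebral-disc
# detector of R. Sa et al. (2017), so the detected classes are illustrative.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# A dummy single-channel X-ray replicated to 3 channels, values in [0, 1].
xray = torch.rand(1, 512, 512).repeat(3, 1, 1)

with torch.no_grad():
    detections = model([xray])[0]   # dict with boxes, labels, scores

# Keep only confident detections, e.g. candidate disc locations.
keep = detections["scores"] > 0.5
print(detections["boxes"][keep])
```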
(Alanazi, Muhannad Faleh et al. 2022) build convolutional neural networks (CNNs) from scratch with varying numbers of layers to evaluate how well they perform on brain magnetic resonance imaging (MRI). The 22-layer, binary-classification (tumor or no tumor) isolated-CNN model is then reused, re-adjusting the neuron weights via the transfer-learning concept, to classify brain MRI images into tumor subclasses. The transfer-learned model reaches a high accuracy of 95.75% on MRI images from the same MRI machine. It has also been validated on brain MRI images from another machine to verify its generality, flexibility, and reliability for future real-time application, achieving a high accuracy of 96.89% on this previously unseen brain MRI dataset. The recommended deep learning model therefore generalizes well across acquisition settings.
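The transfer-learning step can be pictured with the following minimal sketch: a network trained for binary tumor/no-tumor classification is reused, with its final layer replaced, to classify tumor subclasses. A torchvision ResNet-18 stands in for the paper's 22-layer isolated CNN, and the checkpoint path, number of subclasses, and learning rate are assumptions.

```python
# Minimal sketch of the transfer-learning idea described by Alanazi et al.
# (2022); the ResNet-18 backbone, checkpoint name, subclass count, and
# learning rate are illustrative stand-ins, not the paper's exact setup.
import torch
import torch.nn as nn
from torchvision.models import resnet18

# 1) Binary model (tumor vs. no tumor), assumed already trained elsewhere.
binary_model = resnet18(num_classes=2)
# binary_model.load_state_dict(torch.load("binary_tumor_model.pt"))  # hypothetical checkpoint

# 2) Transfer learning: keep the learned weights, swap the classification head
#    for the tumor-subclass problem (e.g. glioma / meningioma / pituitary).
num_subclasses = 3
binary_model.fc = nn.Linear(binary_model.fc.in_features, num_subclasses)

# 3) Fine-tune with a small learning rate so earlier layers are only gently
#    re-adjusted, mirroring the weight re-adjustment step in the paper.
optimizer = torch.optim.Adam(binary_model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

dummy_mri = torch.rand(4, 3, 224, 224)                 # batch of MRI slices
dummy_labels = torch.randint(0, num_subclasses, (4,))
loss = criterion(binary_model(dummy_mri), dummy_labels)
loss.backward()
optimizer.step()
```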
3 METHODOLOGY
A hybrid approach for brain tumor detection using a CNN combined with ResNet50 is proposed; a detailed description is given below.
The methodology follows a three-step process. First, the model is trained on the data. Second, various pooling techniques are applied. Finally, classifiers are applied to the extracted features. The final features are obtained by concatenating the pooling-layer outputs with those of the ResNet50 model, ultimately yielding a concatenated feature vector of size 4096 × 1. This is because the final pooling layers of each pre-trained CNN model aim to capture the features most relevant to the target class rather than irrelevant ones.
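A minimal sketch of this feature-concatenation step is given below, assuming PyTorch; since only the final 4096 × 1 size is stated, the 2048 + 2048 split between the ResNet50 branch and the custom CNN branch, as well as the custom branch's layer sizes, are assumptions.

```python
# Minimal sketch of the feature-concatenation step; the 2048 + 2048 split that
# yields the 4096-d vector and the custom-CNN layer sizes are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class HybridFeatureExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        # Branch 1: pre-trained ResNet50 with its classification head removed,
        # leaving the 2048-d global-average-pooled feature vector.
        backbone = resnet50(weights="IMAGENET1K_V1")  # downloads ImageNet weights
        backbone.fc = nn.Identity()
        self.resnet_branch = backbone

        # Branch 2: a small custom CNN pooled to another 2048-d vector.
        self.cnn_branch = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 2048), nn.ReLU(),
        )

    def forward(self, x):
        # Concatenate the two pooled feature vectors into one 4096-d descriptor.
        return torch.cat([self.resnet_branch(x), self.cnn_branch(x)], dim=1)

features = HybridFeatureExtractor()(torch.rand(2, 3, 224, 224))
print(features.shape)   # torch.Size([2, 4096])
```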
Figure 2 presents the suggested hybrid deep learning model. It performs radiography classification using two base models and a head model. Concatenating the CNN features with those of ResNet50 results in a single feature vector, which is passed to a deep neural network classifier whose output metrics are then examined. The two pre-trained models are recommended because of their short training times and straightforward structure.
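The deep neural network classifier that consumes the concatenated feature vector could be sketched as follows; the hidden-layer width, dropout rate, and number of output classes are illustrative assumptions.

```python
# Minimal sketch of the deep-neural-network classifier head applied to the
# 4096-d concatenated feature vector; layer sizes and class count are assumed.
import torch
import torch.nn as nn

classifier_head = nn.Sequential(
    nn.Linear(4096, 512),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(512, 2),        # e.g. tumor vs. no tumor
)

logits = classifier_head(torch.rand(2, 4096))   # features from the hybrid extractor
print(logits.shape)                             # torch.Size([2, 2])
```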