followed by interval-free reconstruction, with the window width set to 1,450 HU and the window level set to -831.1 HU to ensure accurate identification of lesions. After the relevant imaging examination and its associated data (in Digital Imaging and Communications in Medicine, DICOM, format) are imported, the AI pulmonary nodule diagnosis system works in two stages, identification and verification. The computer-aided quantitative parameter system then performs data preprocessing, single-shot detector model training, and nodule measurement, automatically delineating the nodule edge; this series of operations yields the three-dimensional nodule parameters, including the long diameter, short diameter, volume, and maximum cross-sectional area.
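To make the measurement step concrete, the following is a minimal sketch in Python, assuming the nodule has already been segmented into a binary voxel mask with known spacing; the function name, the use of NumPy and scikit-image, and the convention of measuring diameters on the largest axial slice are illustrative assumptions rather than the actual implementation of the commercial system.

```python
import numpy as np
from skimage import measure

def nodule_parameters(mask, spacing):
    """Estimate basic 3-D nodule parameters from a binary mask.

    mask    : 3-D boolean array (z, y, x), True inside the nodule
    spacing : (dz, dy, dx) voxel spacing in mm
    Returns volume (mm^3), maximum cross-sectional area (mm^2),
    and long/short diameters (mm) on the largest axial slice.
    """
    dz, dy, dx = spacing
    volume = mask.sum() * dz * dy * dx

    # Cross-sectional area of each axial slice; keep the largest one.
    slice_areas = mask.sum(axis=(1, 2)) * dy * dx
    k = int(np.argmax(slice_areas))
    max_area = float(slice_areas[k])

    # Long/short diameters from the largest axial slice
    # (assumes roughly isotropic in-plane spacing).
    props = measure.regionprops(mask[k].astype(int))[0]
    long_d = props.major_axis_length * dx
    short_d = props.minor_axis_length * dx

    return volume, max_area, long_d, short_d
```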
However, the study by Li Juan, Tang Xiangyu, et al. applied AI analysis to intracranial hematoma using two datasets (Li, Tang, 2021). Dataset 1 contained 9,594 plain craniocerebral CT scan images; the 223 patients with positive intracranial hemorrhage were used as the test set, and the rest were used as the training set. Dataset 2 contained 819 CT images with manually delineated bleeding foci, of which 74 were used as a test set to verify the consistency between algorithmic and manual segmentation.
All CT images were first imported in standard DICOM format. Data preprocessing included image correction, skull removal, and grayscale normalization.
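As an illustration of the reading and intensity-normalization steps, the sketch below loads one DICOM slice with pydicom, converts the raw pixel values to Hounsfield units, and rescales a display window to the [-1, 1] range mentioned later in the text. The window values in the usage comment are a common brain-window choice, not the setting reported in the cited study, and the function names are illustrative.

```python
import numpy as np
import pydicom

def load_slice_hu(path):
    """Read one DICOM slice and convert raw pixel values to Hounsfield units."""
    ds = pydicom.dcmread(path)
    slope = float(getattr(ds, "RescaleSlope", 1.0))
    intercept = float(getattr(ds, "RescaleIntercept", 0.0))
    return ds.pixel_array.astype(np.float32) * slope + intercept

def window_and_normalize(hu, level, width):
    """Clip the HU image to a display window and rescale it to [-1, 1]."""
    lo, hi = level - width / 2.0, level + width / 2.0
    hu = np.clip(hu, lo, hi)
    return 2.0 * (hu - lo) / (hi - lo) - 1.0

# Example (assumed values): a typical brain window of level 40 HU and
# width 80 HU; the exact window used in the cited study is not stated.
# image = window_and_normalize(load_slice_hu("slice.dcm"), level=40, width=80)
```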
In the axial data, the positions of the two endpoints of the brain midline were detected with deep learning, and the axial CT image was rotated accordingly so that the brain was aligned automatically. The brain tissue area was then segmented automatically with deep learning, and interfering structures in the image, including the skull, were removed automatically.
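The alignment step can be sketched as follows, assuming the two midline endpoints have already been predicted as (row, column) coordinates by the detection network; the use of SciPy for the rotation and the coordinate conventions are assumptions and may differ from the study's implementation.

```python
import numpy as np
from scipy import ndimage

def align_to_midline(image, top_pt, bottom_pt):
    """Rotate an axial slice so the detected brain midline becomes vertical.

    image             : 2-D axial CT slice
    top_pt, bottom_pt : (row, col) coordinates of the two midline endpoints
                        predicted by the detection network (assumed given).
    """
    d_row = bottom_pt[0] - top_pt[0]
    d_col = bottom_pt[1] - top_pt[1]
    # Angle between the detected midline and the image's vertical axis.
    # Depending on the coordinate convention, the sign may need flipping.
    angle_deg = np.degrees(np.arctan2(d_col, d_row))
    return ndimage.rotate(image, angle_deg, reshape=False, order=1)
```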
After the gray levels were normalized to [-1, 1], a residual network (ResNet) was called to classify each image according to five bleeding types, for a total of six labels. A recurrent neural network (a long short-term memory network, LSTM) was then called to correct the per-slice classification results.
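A hedged PyTorch sketch of this two-stage design is given below: a 2-D ResNet extracts features from each slice, and an LSTM refines the per-slice predictions using context from neighbouring slices. The choice of ResNet-18, the bidirectional LSTM, the layer sizes, and the class names are illustrative assumptions, not the published architecture.

```python
import torch.nn as nn
import torchvision

class SliceClassifierWithLSTM(nn.Module):
    """Per-slice ResNet features -> bidirectional LSTM -> 6-label output."""

    def __init__(self, num_labels=6, hidden=256):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)  # single-channel CT
        backbone.fc = nn.Identity()          # keep the 512-d slice features
        self.backbone = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_labels)

    def forward(self, volume):
        # volume: (batch, slices, 1, H, W)
        b, s, c, h, w = volume.shape
        feats = self.backbone(volume.reshape(b * s, c, h, w)).reshape(b, s, -1)
        feats, _ = self.lstm(feats)          # correct each slice with context
        return self.head(feats)              # (batch, slices, num_labels)
```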
For the bleeding-focus segmentation task, a V-network (VB-Net) model was trained after image preprocessing; through voxel statistics and spacing conversion, the algorithm and model can automatically obtain the statistics of each bleeding focus and calculate the hematoma volume, which is a considerable improvement.
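The volume calculation itself reduces to counting segmented voxels and scaling by the voxel spacing. The sketch below shows this under the assumption that separate bleeding foci are identified by connected-component labelling; the function name and the use of SciPy are illustrative.

```python
import numpy as np
from scipy import ndimage

def hematoma_volumes_ml(mask, spacing):
    """Per-lesion hematoma volumes (in millilitres) from a binary 3-D mask.

    mask    : boolean array produced by the segmentation model
    spacing : (dz, dy, dx) voxel spacing in mm
    """
    voxel_volume_mm3 = float(np.prod(spacing))
    labels, n = ndimage.label(mask)                 # separate bleeding foci
    counts = np.bincount(labels.ravel())[1:n + 1]   # voxels per lesion
    return counts * voxel_volume_mm3 / 1000.0       # mm^3 -> mL
```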
Jinxiu Cai et al. studied a deep learning-based chest X-ray (CXR) image classification model (Cai, 2022), using the VGG16 network to classify different types of chest X-rays and successfully distinguishing between adult anteroposterior, lateral, bedside, and infant X-rays. The model showed high accuracy (94%–100%) on the test set and in external validation, and was able to automatically screen out qualified images for subsequent disease diagnosis. This research is characterized by the fact that deep learning models raise the level of automation in image classification, reduce manual errors, and improve the working efficiency of the imaging department. In the future, the model is expected to be further optimized and extended to more image quality evaluation scenarios.
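The cited study trained a VGG16 network; one common way to set up such a four-class classifier with torchvision is sketched below. The pretrained-weights choice and the replacement of the final fully connected layer are typical practice, not necessarily the paper's exact training configuration.

```python
import torch.nn as nn
import torchvision

def build_cxr_classifier(num_classes=4):
    """VGG16 adapted for CXR view classification
    (e.g. adult anteroposterior, lateral, bedside, infant)."""
    # ImageNet pretraining is a common, assumed starting point here.
    model = torchvision.models.vgg16(weights="IMAGENET1K_V1")
    model.classifier[6] = nn.Linear(4096, num_classes)  # replace the head
    return model
```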
Ramadhan Hardani Putra et al. reviewed the
application of artificial intelligence in digital dental
radiology, covering multiple aspects such as caries
diagnosis, periodontal bone loss analysis, cyst and
tumor classification (Ramadhan, 2022). The review showed that deep learning (DL) and
convolutional neural networks (CNNs) excel in
dental image analysis, automatically identifying
complex image patterns and providing quantitative
analysis. This review is notable for demonstrating the potential of deep learning for a
wide range of applications in dental imaging,
especially in terms of automated diagnosis and image
quality enhancement. In the future, with the
expansion of datasets and the improvement of
algorithms, deep learning is expected to play a greater
role in dental clinical practice.
Finally, Xian Chang et al. studied the identification of key parameters of lumbar X-rays based on a deep learning model, constructing a fully convolutional neural network based on U-Net with attention to automatically measure the intervertebral space height index, lumbar spine motion angle, and segmental mobility (Xian, 2024). The average IoU
of the model on the test set reached 0.940 and the Dice
coefficient was 0.980, showing high segmentation
accuracy.
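For reference, the two reported overlap metrics are computed from the predicted and ground-truth masks as in the minimal NumPy sketch below; the function name is illustrative and the study's own evaluation code is not described.

```python
import numpy as np

def iou_and_dice(pred, target):
    """Intersection-over-Union and Dice coefficient for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return iou, dice
```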
A distinctive feature of this study is that the automatic measurement of lumbar spine imaging
parameters is realized through deep learning
technology, which reduces the error and workload of
manual measurement, and improves the accuracy of
clinical decision-making. In the future, the model is
expected to be further optimized and applied to more
spine image analysis scenarios. In summary, the
contribution of deep learning in the field of X-ray imaging is mainly reflected in automation, high precision, and
high efficiency. Through different deep learning
models, researchers have successfully solved the
problems of medical image classification,
segmentation, and parameter measurement, and
significantly improved the efficiency and accuracy of