accuracy of unobstructed face recognition. Research
on masked face recognition is therefore particularly
urgent and important, and in recent years it has
become an active research direction within the field
of face recognition.
3 FACE RECOGNITION IN AN
OCCLUDED ENVIRONMENT
Although face recognition technology has matured, it
still faces many challenges in real life, because a face
cannot be guaranteed to be unoccluded at test time: it
may be covered by masks, hair, or hats, or affected by
occlusion-like artifacts caused by uneven external
lighting. Especially since the outbreak of COVID-19
in 2020, people's health awareness has gradually
increased, and many choose to wear masks in public
places, which greatly increases the difficulty of face
detection. The study of masked face recognition has
therefore become an inevitable choice. In this chapter,
I will introduce five methods for masked face
recognition.
Xu et al. proposed a face image recognition method
based on a cycle-consistent generative adversarial
network (CycleGAN) (Xu et al., 2022). The model
reconstructs the entire image and outputs an image
that restores the original facial features, thereby
completing face inpainting; it is trained with two
pairs of generators and discriminators to ensure the
accuracy of the repair. After the repair is completed,
the residual network ResNet-50 is used to extract
facial features, and the RegularFace loss function is
introduced to handle the impact of inter-class
distances on classification. Although the repair
quality is affected by factors such as whether the
occlusion is linear or nonlinear and by the size of the
occluded area, leading to large variation in repair
results, face detection on the repaired images is, in
general, significantly more accurate.
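The core idea of the RegularFace loss is an inter-class ("exclusive") regularizer that pushes the classifier's per-class weight vectors apart on the hypersphere. The following NumPy sketch illustrates that idea only; the function name and exact formulation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def regularface_penalty(weights: np.ndarray) -> float:
    """Inter-class regularizer in the spirit of RegularFace (illustrative).

    weights: (C, d) matrix of per-class weight vectors from the
    classification layer. For each class, find its most similar other
    class (highest cosine similarity) and average these maxima;
    minimizing this term pushes class centers apart.
    """
    # L2-normalize each class weight vector.
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = w @ w.T                      # pairwise cosine similarities
    np.fill_diagonal(cos, -np.inf)     # exclude self-similarity
    return float(np.mean(cos.max(axis=1)))
```

For mutually orthogonal class vectors the penalty is 0; when two classes share a direction it rises toward 1, which is why minimizing it spreads the classes out.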
Zhou et al. proposed a block-based occluded face
recognition algorithm combined with convolutional
neural networks (Zhou et al., 2018). The algorithm
obtains facial feature points through a coarse-to-fine
auto-encoder network (CFAN) and divides the face
into four areas: left eye, right eye, mouth, and nose.
After the blocking is completed, an occlusion
discrimination network based on InceptionV3 is
trained to judge occlusion in each area, and feature
fusion and similarity detection are carried out
according to the discrimination results to obtain the
final facial features. The literature compares this
method with the classical Sparse Representation-
based Classification (SRC), Group Sparse
Representation-based Classification (GSRC), and
Robust Sparse Coding (RSC) algorithms in terms of
the occluded part and the occluded area (Wright et
al., 2008; Yang & Zhang, 2010). When a large area is
covered, the algorithm maintains extremely high
accuracy. However, under sunglasses occlusion it is
less accurate than RSC, possibly because occlusion
of too many feature blocks leads to the loss of eye
features, so the algorithm still needs improvement.
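The fusion step can be pictured as follows: per-region features are kept or dropped according to the occlusion flags, and similarity is computed over the regions visible in both faces. This is a minimal sketch of that idea under assumed data shapes, not the paper's network or fusion rule.

```python
import numpy as np

def fuse_features(region_feats, occluded):
    """Concatenate per-region features, dropping regions flagged occluded."""
    kept = [f for f, occ in zip(region_feats, occluded) if not occ]
    v = np.concatenate(kept)
    return v / np.linalg.norm(v)

def similarity(feats_a, occ_a, feats_b, occ_b):
    """Cosine similarity over regions visible in BOTH faces."""
    either = [oa or ob for oa, ob in zip(occ_a, occ_b)]
    va = fuse_features(feats_a, either)
    vb = fuse_features(feats_b, either)
    return float(va @ vb)
```

A face compared against itself scores 1.0 whatever regions are masked out, since the same visible regions are fused on both sides.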
Zhou et al. optimized the scale-invariant feature
transform (SIFT) algorithm for faces (Zhou & Lai,
2011). The traditional SIFT algorithm can find most
of the matching key points between images, but some
mismatches remain. They therefore proposed a new
matching rule: take a key point in one image and find
the two key points in the other image closest to it; if
the nearest distance divided by the second-nearest
distance is below a proportional threshold, the pair of
nearest key points is considered a match. Experiments
on the AR face database and the Manchester face
database showed that the recognition rate of the
optimized method was about 10% higher than that of
the traditional SIFT algorithm.
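The distance-ratio rule described above can be sketched directly over descriptor arrays. The 0.8 threshold below is an illustrative choice, not a value taken from the cited paper.

```python
import numpy as np

def ratio_match(desc_a, desc_b, ratio=0.8):
    """Match descriptors from image A to image B with the distance-ratio rule.

    For each descriptor in A, find its two nearest neighbours in B and
    keep the match only if nearest < ratio * second-nearest, which
    rejects ambiguous (near-tied) matches.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches
```

When the two nearest candidates are almost equidistant, the ratio is close to 1 and the match is discarded, which is exactly the mismatch-filtering behaviour the optimization targets.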
Li et al. proposed a masked face recognition method
based on detecting and eliminating the areas where a
face differs from the average face (Li et al., 2015).
The method obtains an error face image by taking the
difference between the test face and the average face
formed from the training pictures, then segments the
error image to obtain a description of the occluded
area. This is a very critical step in the whole
algorithm, as it determines how the occluded area is
later removed from both the test set and the training
set. After removing the corresponding occluded
parts, the training set and test set form a new data set.
The computational cost of this algorithm is relatively
small, which reduces the difficulty of
implementation. However, experiments have shown
that the segmentation of the error face image is poor
in the presence of lighting changes, and the impact of
light intensity on the algorithm needs to be further
explored.
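The difference-and-segment step can be sketched as thresholding the error against the average face. The threshold value and the simple per-pixel segmentation are assumptions for illustration; the paper's segmentation is more involved.

```python
import numpy as np

def occlusion_mask(test_face, train_faces, thresh=0.25):
    """Estimate an occlusion mask by differencing against the average face.

    test_face: (H, W) grayscale image in [0, 1]; train_faces: (N, H, W).
    Pixels whose error exceeds the (illustrative) threshold are flagged.
    """
    avg_face = train_faces.mean(axis=0)
    error = np.abs(test_face - avg_face)   # the "error face image"
    return error > thresh                   # True where likely occluded

def strip_occluded(face, mask):
    """Keep only the pixels not flagged as occluded (flattened vector)."""
    return face[~mask]
```

Applying `strip_occluded` with the same mask to every training and test image yields the reduced data set on which recognition is then performed.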
Li et al. proposed an algorithm that combines
mind-evolution-based machine learning with local
features (Li G. & Li W., 2014). Considering that face
recognition in practical applications will be affected
by uncertainties such as