Image Copy-Move Forgery Detection using Color Features and
Hierarchical Feature Point Matching
Yi-Lin Tsai and Jin-Jang Leou
Department of Computer Science and Information Engineering, National Chung Cheng University,
Chiayi 621, Taiwan
Keywords: Copy-Move Forgery Detection, Hierarchical Feature Point Matching, Color Feature, Iterative Forgery
Localization.
Abstract: In this study, an image copy-move forgery detection approach using color features and hierarchical feature
point matching is proposed. The proposed approach contains three main stages, namely, pre-processing and
feature extraction, hierarchical feature point matching, and iterative forgery localization and post-processing.
In the proposed approach, Gaussian-blurred images and difference of Gaussians (DoG) images are constructed.
Hierarchical feature point matching is employed to find matched feature point pairs, in which two matching
strategies, namely, group matching via scale clustering and group matching via overlapped gray level
clustering, are used. Based on the experimental results obtained in this study, the performance of the proposed
approach is better than those of three comparison approaches.
1 INTRODUCTION
Copy-move forgery, a common type of image forgery,
copies one or more regions and pastes them elsewhere
within the same image (Cozzolino, Poggi, and Verdoliva, 2015). Some
image processing operations, such as transposition,
rotation, scaling, and JPEG compression, can make
forged images more convincing. To deal with copy-move
forgery detection (CMFD), many CMFD approaches
have been proposed, which can be roughly divided
into three categories: block-based, feature point-
based, and deep neural network based.
Cozzolino, Poggi, and Verdoliva (2015) used the
circular harmonic transform (CHT) to extract image
block features; a fast approximate nearest-neighbor
search approach (called PatchMatch) is used to handle
the invariant features efficiently. Fadl and Semary
(2017) proposed a block-based CMFD approach using
Fourier transform for feature extraction. Bi, Pun, and
Yuan (2016) proposed a CMFD approach using
hierarchical feature matching and multi-level dense
descriptor (MLDD).
Amerini, et al. (2011) proposed a feature point-
based CMFD approach using scale invariant feature
transform (SIFT) (Lowe, 2004) for feature point
extraction. Amerini, et al. (2013) developed a CMFD
approach based on J-linkage, which effectively handles
the geometric transformations of duplicated regions. Pun,
Yuan, and Bi (2015) proposed a CMFD approach
using feature point matching and adaptive
oversegmentation. Warif, et al. (2017) proposed a
CMFD approach using symmetry-based SIFT feature
point matching. Silva, et al. (2015) presented a CMFD
approach using multi-scale analysis and voting
processes. Jin and Wan (2017) proposed an improved
SIFT-based CMFD approach. Li and Zhou (2019)
developed a CMFD approach using hierarchical
feature point matching. Huang and Ciou (2019)
proposed a CMFD approach using superpixel
segmentation, Helmert transformation, and SIFT
feature point extraction (Lowe, 2004). Chen, Yang,
and Lyu (2020) proposed an efficient CMFD approach
via clustering SIFT keypoints and searching the
similar neighborhoods to locate tampered regions.
Zhong and Pun (2020) proposed a CMFD scheme
using a Dense-InceptionNet. Dense-InceptionNet is
an end-to-end multi-dimensional dense-feature
connection deep neural network (DNN), which
consists of pyramid feature extractor, feature
correlation matching, and hierarchical post-processing
modules. Zhu, et al. (2020) proposed a CMFD
approach using an end-to-end neural network based on
adaptive attention and residual refinement network
(AR-Net). Islam, Long, Basharat, and Hoogs (2020)
proposed a generative adversarial network with a
Tsai, Y. and Leou, J.
Image Copy-Move Forgery Detection using Color Features and Hierarchical Feature Point Matching.
DOI: 10.5220/0010492301530159
In Proceedings of the International Conference on Image Processing and Vision Engineering (IMPROVE 2021), pages 153-159
ISBN: 978-989-758-511-1
Copyright © 2021 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved
dual-order attention model to detect and locate copy-
move forgeries. In this study, an image copy-move
forgery detection approach using color features and
hierarchical feature point matching is proposed.
This paper is organized as follows. The proposed
image copy-move forgery detection approach is
described in Section 2. Experimental results are
addressed in Section 3, followed by concluding
remarks.
2 PROPOSED APPROACH
2.1 System Architecture
As shown in Figure 1, in this study, an image copy-
move forgery detection approach using color features
and hierarchical feature point matching is proposed.
The proposed approach contains three main stages,
namely, pre-processing and feature extraction,
hierarchical feature point matching, and iterative
forgery localization and post-processing.
Figure 1: Framework of the proposed approach.
2.2 Pre-processing and Feature
Extraction
Let ๐ด
๎ฏ™
(
๐‘ฅ,๐‘ฆ
)
, 1โ‰ค๐‘ฅโ‰ค๐‘€, 1โ‰ค๐‘ฆโ‰ค๐‘, be the input
RGB color image with size ๐‘€ร—๐‘. The input color
image will be converted from RGB color space to HSI
color space and the intensity component image (I) is
enhanced by histogram equalization, which is
converted from HSI color space back to RGB color
space, denoted as ๐ด
๎ฏ‹๎ฏ€๎ฎป
(
๐‘ฅ,๐‘ฆ
)
,1โ‰ค๐‘ฅโ‰ค๐‘€, 1โ‰ค๐‘ฆโ‰ค
๐‘. To extract enough feature points, in this study,
๐ด
๎ฏ‹๎ฏ€๎ฎป
(
๐‘ฅ,๐‘ฆ
)
is enlarged by 2ร—2 linear interpolation,
denoted as ๐ธ
๎ฏ‹๎ฏ€๎ฎป
(
๐‘ฅ,๐‘ฆ
)
, 1โ‰ค๐‘ฅโ‰ค2๐‘€, 1โ‰ค๐‘ฆโ‰ค2๐‘.
Then, image ๐ธ
๎ฏ‹๎ฏ€๎ฎป
(
๐‘ฅ,๐‘ฆ
)
is converted into gray-level
image ๐ธ
๎ฏš๎ฏฅ๎ฏ”๎ฏฌ
(
๐‘ฅ,๐‘ฆ
)
, 1โ‰ค๐‘ฅโ‰ค2๐‘€, 1โ‰ค๐‘ฆโ‰ค2๐‘, which
is convolved with Gaussian filters of different scales.
Gaussian-blurred image ๐ฟ
(
๐‘ฅ,๐‘ฆ,๐‘š
๎ฐˆ
๐œŽ
)
,1โ‰ค๐‘ฅโ‰ค2๐‘€,
1โ‰ค๐‘ฆโ‰ค2๐‘, is computed as
๐ฟ
(
๐‘ฅ,๐‘ฆ,๐‘š
๎ฐˆ
๐œŽ
)
=๐บ
(
๐‘ฅ,๐‘ฆ,๐‘š
๎ฐˆ
๐œŽ
)
โจ‚๐ธ
๎ฏš๎ฏฅ๎ฏ”๎ฏฌ
(
๐‘ฅ,๐‘ฆ
)
,
๐›ผ=0,1,โ€ฆ,4,
(1)
where ๐บ
(
๐‘ฅ,๐‘ฆ,๐‘š
๎ฐˆ
๐œŽ
)
denotes the Gaussian kernel, ๐‘š
is a constant (here, ๐‘š=
โˆš
2
), โจ‚ denotes the
convolution operator, and ๐œŽ denotes a prior
smoothing value (here, ๐œŽ=1.6). Difference of
Gaussians (DoG) image ๐ท๎ตซ๐‘ฅ,๐‘ฆ,๐‘š
๎ฐ‰
๐œŽ๎ตฏ,1 โ‰ค ๐‘ฅโ‰ค
2๐‘€,1โ‰ค๐‘ฆโ‰ค2๐‘, is computed as
๐ท
๎ตซ
๐‘ฅ,๐‘ฆ,๐‘š
๎ฐ‰
๐œŽ
๎ตฏ
=๐ฟ
๎ตซ
๐‘ฅ,๐‘ฆ,๐‘š
๎ฐ‰๎ฌพ๎ฌต
๐œŽ
๎ตฏ
โˆ’๐ฟ
๎ตซ
๐‘ฅ,๐‘ฆ,๐‘š
๎ฐ‰
๐œŽ
๎ตฏ
,
๐›ฝ=0,1,2,3.
(2)
As shown in Figure 2 (Lowe, 2004), multiple octaves
are constructed; each octave contains five Gaussian-blurred
images and four DoG images. The first scale value of
the i-th octave is m^(2(i−1)) σ. The first octave is of size
2M × 2N, the second octave (after down-sampling) is of
size M × N, and so on.
Within an octave, to detect the local maxima and
minima of D(x, y, m^β σ), if the value of a pixel is larger
(or smaller) than those of its 8 neighbors in the same
image and those of the 2×9 neighbors in the two
neighboring DoG images of adjacent scales, this
pixel is detected as a feature point. Note that the first
and last DoG images in each octave do not yield
feature points.
Figure 2: Schematic diagram of the Gaussian-blurred
images and DoG images (Lowe, 2004).
Second, using edge and contrast thresholds, all
candidate feature points are refined so that
unstable extrema among the SIFT feature points can be
filtered out. The extremum value is computed as

D(F̂) = D + (1/2) (∂D/∂F)ᵀ F̂,   (3)

F̂ = −(∂²D/∂F²)⁻¹ (∂D/∂F),   (4)
where ๐น=(๐‘ฅ,๐‘ฆ,๐œŽ)
๎ฏ
and ๐‘‡ is a transpose. All
extrema with
|
๐ท
(
๐‘ฅ๎ทœ
)|
being less than ๐‘
๎ฏ›
(set to 0.1) are
discarded.
Third, to achieve rotational invariance, a gradient
magnitude μ(x, y, m^β σ) and a guiding direction
θ(x, y, m^β σ), defined as

μ(x, y, m^β σ) = √(d_x² + d_y²),   (5)

θ(x, y, m^β σ) = tan⁻¹(d_y / d_x),   (6)

d_x = D(x+1, y, m^β σ) − D(x−1, y, m^β σ),   (7)

d_y = D(x, y+1, m^β σ) − D(x, y−1, m^β σ),   (8)

are allocated to each surviving feature point. A generic
SIFT feature point P_k can be described as a four-dimensional
vector, i.e.,

P_k = (x_k, y_k, m_s σ, θ_k), k = 1, 2, …, n,   (9)

where (x_k, y_k) denotes the feature point coordinates, n
denotes the total number of feature points, and m_s σ
and θ_k denote the scale and guiding direction of P_k,
respectively.
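Eqs. (5)-(8) amount to central differences over a DoG image; a minimal array-wide sketch follows, with `arctan2` standing in for tan⁻¹ so the full 360° range is covered:

```python
import numpy as np

def gradient_mag_dir(D):
    # per-pixel gradient magnitude and guiding direction of a DoG image D
    dx = np.zeros_like(D)
    dy = np.zeros_like(D)
    dx[1:-1, :] = D[2:, :] - D[:-2, :]      # Eq. (7): D(x+1,y) - D(x-1,y)
    dy[:, 1:-1] = D[:, 2:] - D[:, :-2]      # Eq. (8): D(x,y+1) - D(x,y-1)
    mu = np.sqrt(dx**2 + dy**2)             # Eq. (5)
    theta = np.degrees(np.arctan2(dy, dx))  # Eq. (6), full 360 degrees
    return mu, theta
```

Border pixels are left at zero here, since their central differences are undefined.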
Figure 3: Schematic diagram of the feature point descriptor
(Lowe, 2004): (a) gradient magnitudes and guiding
directions in an 8×8 region around a central feature point, (b)
a 2×2 descriptor, (c) a 16×16 region around a central feature
point, (d) a 128-dimensional descriptor.
As shown in Figure 3, an eight-direction histogram
is formed from the gradient magnitudes and guiding
directions of feature points within a 4×4 region;
it has 8 quantized histogram entries covering 360°,
with the length of each arrow denoting its gradient
magnitude. In a 16×16 region around a central
feature point, 16 eight-direction histograms are
generated, resulting in 128-dimensional (16 × 8) row-vector
descriptors ω_k = (ω_{k,1}, ω_{k,2}, …, ω_{k,128}), k =
1, 2, ..., n. For P_k, let ED_k, k = 1, 2, ..., n−1,
denote the Euclidean distances between descriptor ω_k
and the other (n−1) descriptors. Let ratio R be defined as

R = ED_1 / ED_2,   (10)

where ED_1 and ED_2 denote the smallest and second-smallest
Euclidean distances, respectively. If ratio R
is less than threshold N_t (N_t = 0.6), the feature point P_1
having the smallest Euclidean distance ED_1 is a
matching feature point of P_k. Each P_k, k = 1, 2, ..., n,
having a matching feature point, together with its matching
feature point, i.e., a matching feature point pair, is
kept; otherwise, it is discarded.
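The 2NN ratio test of Eq. (10) can be sketched as follows; the brute-force distance computation is an illustrative choice, and `n_t` mirrors the threshold N_t = 0.6:

```python
import numpy as np

def ratio_match(desc, n_t=0.6):
    # desc: (n, 128) descriptor matrix (any width works for the sketch)
    pairs = []
    n = len(desc)
    for k in range(n):
        d = np.linalg.norm(desc - desc[k], axis=1)
        d[k] = np.inf                       # exclude the self-distance
        order = np.argsort(d)
        ed1, ed2 = d[order[0]], d[order[1]]
        if ed2 > 0 and ed1 / ed2 < n_t:     # Eq. (10): R = ED1 / ED2 < N_t
            pairs.append((k, int(order[0])))
    return pairs
```

Each returned pair (k, j) records that P_j is the matching feature point of P_k.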
2.3 Hierarchical Feature Point
Matching
In this study, a modified version of hierarchical
feature point matching (Li and Zhou, 2019) is
employed, in which two matching strategies, namely,
group matching via scale clustering and group
matching via overlapped gray level clustering, are
used.
Because Gaussian-blurred images are grouped by
octave, feature points detected at different scales are
clustered accordingly and can be processed
separately. In this study, matching procedures are
performed separately in each single high-resolution
octave and jointly in the low-resolution octaves.
Note that feature points in high-resolution octaves are
much sparser than feature points in low-resolution
octaves. In addition, feature points in low-resolution
octaves have higher recognition capabilities and can
strongly resist large-scale resizing attacks.
Based on the scale values, the remaining feature points
are divided into three categories: C_1 = {P_k | γ_1 ≤ m_s σ < γ_2},
C_2 = {P_k | γ_2 ≤ m_s σ < γ_3}, and C_3 = {P_k | m_s σ ≥ γ_3},
where γ_i denotes the scale value of
the second DoG image in the i-th octave. Note that C_1
contains the first octave, C_2 contains the second
octave, and C_3 contains the other octaves. Feature
point matching schemes are performed separately on
C_1, C_2, and C_3.
Because any feature point ๐‘ƒ
๎ฏž
and its matching
feature point ๐‘ƒ
๎ฌต
have similiar pixel values, feature
points in cluster ๐ถ
๎ฏœ
, ๐‘–=1,2,3, can divided into several
overlapped ranges by pixel (gray) values. In this study,
the range [0, 1, โ€ฆ, 255] of pixel (gray) values is
split into ๐‘ˆ overlapped ranges,
U = ⌊(255 − n_1) / (n_1 − n_2)⌋ + 1,   (11)

where n_1 denotes the range size and n_2 denotes the
overlap size (n_1 > n_2). Let

C_{i,j} = {P_k | a_j ≤ G_r(P_k) < b_j, P_k ∈ C_i}, j = 1, 2, …, U,   (12)

a_j = (j − 1) × (n_1 − n_2),   (13)

b_j = min(a_j + n_1, 255),   (14)

where G_r(P_k) denotes the average gray value of the 9
pixels in the 3×3 region centered at P_k. Then, feature
point matching schemes are performed separately in
C_{i,j}, i = 1, 2, 3, j = 1, 2, …, U. Let

Q = ⋃ Q_{i,j}, i ∈ {1, 2, 3}, j = 1, 2, …, U,   (15)

where Q_{i,j} denotes the set containing the matched
feature point pairs of C_{i,j}.
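The overlapped-range construction of Eqs. (11)-(14) can be sketched as below; the values n_1 = 80 and n_2 = 16 are illustrative, since the paper leaves n_1 and n_2 as parameters:

```python
def overlapped_ranges(n1=80, n2=16):
    # Eq. (11): number of overlapped gray-level ranges (floor division)
    u = (255 - n1) // (n1 - n2) + 1
    ranges = []
    for j in range(1, u + 1):
        a = (j - 1) * (n1 - n2)   # Eq. (13): lower bound a_j
        b = min(a + n1, 255)      # Eq. (14): upper bound b_j
        ranges.append((a, b))
    return ranges
```

Each feature point is then assigned to every range whose interval [a_j, b_j) contains its average gray value G_r(P_k), so a point near a range boundary belongs to two ranges.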
2.4 Iterative Forgery Localization and
Post-processing
For feature point-based copy-move forgery detection,
two problems arise. First, when multiple
replications are performed, the homography is usually
not unique and the number of duplicated areas is
uncertain. Second, matched feature point pairs
usually have no matching order, and the original and
forged points usually cannot be distinguished by feature
point matching. In this study, a modified version of
iterative localization (Li and Zhou, 2019) without
segmentation and clustering processes is employed,
which contains four steps: elimination of isolated
matched feature point pairs, estimation of local
homography, homography verification and inlier
selection, and forgery localization using color
information and scale.
Because copy-move forgery is usually performed
on a contiguous region, isolated matched feature point
pairs can be detected and removed. For each matched feature point
pair (P_k, P_k') ∈ Q, let c_k and c_k' denote the numbers
of neighboring matched feature points of P_k and P_k',
respectively, with distances smaller than a threshold N_o (here,
N_o = 100); the matched feature point pair (P_k, P_k')
is discarded if max{c_k, c_k'} < 2. Let M denote
the set containing the remaining matched feature point
pairs ∈ Q. In this study, a portion of the matched pairs in
two corresponding local regions is used to estimate
an affine matrix. First, a matched feature point pair
(P_k, P_k') ∈ M is randomly selected; then all the
neighboring matched feature points close to P_k and
P_k' are recorded as E_k and E_k', respectively, i.e.,
E_k = {P_q | ∀P_q ∈ M, ED(P_q, P_k) < N_w},   (16)

E_k' = {P_q | ∀P_q ∈ M, ED(P_q, P_k') < N_w},   (17)
where ๐‘
๎ฏช
denotes a hyper-parameter (๐‘
๎ฏช
=100) and
๐ธ๐ท(โˆ™) returns the Euclidean distance. Let โ„ณ
๎ฏž
denote
the set containing all the matched feature point pairs
close to
(
๐‘ƒ
๎ฏž
,๐‘ƒ
๎ฏž
๏‡ฒ
)
โˆˆโ„ณ. Then, RANSAC algorithm
(Gonzalez and Woods, 2018) is employed to estimate
homography ๐ป
๎ฏž
between the correspondences of
matched feature point pairs in โ„ณ
๎ฏž
.
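A minimal sketch of the neighborhood gathering of Eqs. (16)-(17) and the affine estimation follows; plain least squares stands in for the RANSAC step here, which is only reasonable under the assumption of outlier-free pairs:

```python
import numpy as np

def neighborhood(pairs, k, n_w=100.0):
    # Eqs. (16)-(17): matched pairs whose two endpoints lie within
    # distance N_w of the selected pair (P_k, P_k'), respectively
    pk, pk2 = pairs[k]
    return [(p, q) for p, q in pairs
            if np.linalg.norm(p - pk) < n_w and np.linalg.norm(q - pk2) < n_w]

def fit_affine(pairs):
    # least-squares affine H with (x', y', 1)^T = H (x, y, 1)^T;
    # a stand-in for the RANSAC estimation, assuming no outliers
    src = np.array([p for p, _ in pairs], dtype=float)
    dst = np.array([q for _, q in pairs], dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])
    coef, *_ = np.linalg.lstsq(A, dst, rcond=None)  # 3x2 coefficients
    H = np.eye(3)
    H[:2, :] = coef.T
    return H
```

At least three non-collinear correspondences are needed for the fit to be determined.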
To eliminate incorrect homography estimations, a
homography verification and inlier selection approach
using the guiding direction θ_k obtained in SIFT feature
point extraction is employed. The guiding direction
difference θ_k' − θ_k should be consistent with the
estimated affine homography H_k for each proper
matched feature point pair (P_k, P_k'). The matched
feature point pair (P_k, P_k') is discarded if

g(P_k, P_k', H_k) = |θ_k' − θ_k − θ_H| > N_θ,   (18)

where θ_H is the estimated rotation calculated from H_k
and N_θ denotes a threshold (here, N_θ = 15). Let M̃_k
denote the set containing the remaining matched
feature point pairs in M_k after RANSAC homography
verification. A matched feature point pair ๐‘ƒ
๎ฏž
(
๐‘ฅ
๎ฏž
,๐‘ฆ
๎ฏž
)
and ๐‘ƒ
๎ฏž
๏‡ฒ
(
๐‘ฅ
๎ฏž
๏‡ฒ
,๐‘ฆ
๎ฏž
๏‡ฒ
)
, will be related by
๏‰Œ
๐‘ฅ
๎ฏž
๏‡ฑ
๐‘ฆ
๎ฏž
๏‡ฑ
1
๏‰โ‰ˆ๐ป
๎ฏž
๎ตญ
๐‘ฅ
๎ฏž
๐‘ฆ
๎ฏž
1
๎ตฑ.
(19)
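The rotation-consistency check of Eq. (18) can be sketched as follows, assuming H_k stores its affine part in the upper-left 2×2 block as in Eq. (19); the modulo arithmetic handling angle wrap-around is an implementation assumption of this sketch:

```python
import numpy as np

def rotation_of(H):
    # rotation angle theta_H (degrees) implied by the affine part of H
    return np.degrees(np.arctan2(H[1, 0], H[0, 0]))

def keep_pair(theta_k, theta_k2, H, n_theta=15.0):
    # Eq. (18): keep (P_k, P_k') only when the guiding-direction difference
    # theta_k' - theta_k agrees with the rotation implied by H_k
    d = abs((theta_k2 - theta_k - rotation_of(H) + 180.0) % 360.0 - 180.0)
    return d <= n_theta
```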
Using the guiding information, the set M_H is defined as

M_H = {⟨P_k, P_k'⟩ | ‖H_k P_k − P_k'‖₂² < ε, g(P_k, P_k', H_k) ≤ N_θ}.   (20)

The improved homography H̃_k is defined as

H̃_k = argmin_H Σ_{⟨P_k, P_k'⟩ ∈ M_H} ‖H P_k − P_k'‖₂².   (21)
In this study, a dense-field forgery localization
algorithm (Li and Zhou, 2019) is employed. For each
feature point in M_H, a local circular dubious field is
defined as

r_k = τ σ_k,   (22)

where τ denotes a parameter (here, τ = 16). Two
dubious regions S and S' are established for the matched
feature point pairs in M_H. The dubious regions are refined
by color information, and each feature point in ๐‘† is
defined as
๐‘ƒ
โˆ—
=๐ป
๎ทก
๎ฏž
๐‘ƒ
๎ฏž
,๐‘ƒ
๎ฏž
โˆˆ๐‘†.
(23)
In Equation (23), if the color vectors of ๐‘ƒ
๎ฏž
and ๐‘ƒ
โˆ—
are close, they might be copy-move feature points, i.e.,
๐‘ƒ
๎ฏž
is the original feature point and ๐‘ƒ
โˆ—
is a copy-move
forgery feature point. Let ๐’ฌ
๎ฌต
be the set containing all
the matched feature points in ๐‘†, i.e.,
๐’ฌ
๎ฌต
={๐‘ƒ
๎ฏž
,๐‘ƒ
โˆ—
|
max(|๐‘…
(
๐‘ƒ
๎ฏž
)
โˆ’๐‘…
(
๐‘ƒ
โˆ—
)
๎ดค
๎ดค
๎ดค
๎ดค
๎ดค
๎ดค
๎ดค
|,
๎ธซ๐บ
(
๐‘ƒ
๎ฏž
)
โˆ’๐บ
(
๐‘ƒ
โˆ—
)
๎ดค
๎ดค
๎ดค
๎ดค
๎ดค
๎ดค
๎ดค
๎ธซ,๎ธซ๐ต
(
๐‘ƒ
๎ฏž
)
โˆ’๐ต
(
๐‘ƒ
โˆ—
)
๎ดค
๎ดค
๎ดค
๎ดค
๎ดค
๎ดค
๎ดค
๎ธซ)<๐‘
๎ฏฅ๎ฏš๎ฏ•
},
๐‘ƒ
๎ฏž
โˆˆ๐‘†,
(24)
๐‘Š
(
๐‘ƒ
โˆ—
)
๎ดค
๎ดค
๎ดค
๎ดค
๎ดค
๎ดค
๎ดค
๎ดค
=๎ท๐‘Š(๐‘ƒ
๎ฏž
)/๐‘‰
๎ฏ‰
๎ณ–
โˆˆ๐œด(๎ฏ‰
๎ณ–
)
, ๐‘Šโˆˆ
{
๐‘…,๐บ,๐ต
}
,
(25)
where ๐‘…(๐‘ƒ
๎ฏž
), ๐บ
(
๐‘ƒ
๎ฏž
)
, and ๐ต(๐‘ƒ
๎ฏž
) denote the RGB
values of feature point ๐‘ƒ
๎ฏž
,๐‘‰ denotes a normalization
factor, ฮฉ(๐‘ƒ
๎ฏž
) denotes a 3ร—3 patch centered at ๐‘ƒ
๎ฏž
,
and ๐‘
๎ฏฅ๎ฏš๎ฏ•
denotes a parameter (here, ๐‘
๎ฏฅ๎ฏš๎ฏ•
=10).
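The color test of Eq. (24), with the 3×3 patch averaging of Eq. (25), can be sketched as below; the array indexing of points and patches, and V = 9, are illustrative assumptions:

```python
import numpy as np

def mean_color(img, p):
    # W-bar(P): mean R, G, B over the 3x3 patch centered at p (Eq. (25), V = 9)
    x, y = p
    return img[x - 1:x + 2, y - 1:y + 2].reshape(-1, 3).mean(axis=0)

def is_copy_move(img, pk, pstar, n_rgb=10.0):
    # Eq. (24): accept as a copy-move pair when the largest per-channel
    # difference of the mean colors is below N_rgb
    return float(np.max(np.abs(mean_color(img, pk) - mean_color(img, pstar)))) < n_rgb
```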
On the other hand, each point in ๐‘†
๏‡ฑ
is defined as
๐‘ƒ
โˆ—
๏‡ฑ
=๐ป
๎ทก
๎ฏž
๎ฌฟ๎ฌต
๐‘ƒ
๎ฏž
๏‡ฒ
,๐‘ƒ
๏‡ฒ
โˆˆ๐‘†
๏‡ฑ
.
(26)
Similarly, let ๐’ฌ
๎ฌถ
be the set containing all the
matched feature points in ๐‘†
๏‡ฑ
. If a feature point
belonging to ๐’ฌ
๎ฌต
โˆช๐’ฌ
๎ฌถ
, this feature point will be
marked as forgery feature point ๐ด
๎ฏ™๎ฏข๎ฏฅ๎ฏš๎ฏ˜๎ฏฅ๎ฏฌ
(๐‘ฅ,๐‘ฆ). The
above procedure is iterated (here, 15 iterations) to find
all the forgery feature points. Then, all the forgery
feature points are grouped as forgery regions. To make
forgery regions more accurately, morphological close
operator is used to obtain the final forgery regions
๐ด
๎ฏ™๎ฏœ๎ฏก๎ฏ”๎ฏŸ
(๐‘ฅ,๐‘ฆ) in the image.
3 EXPERIMENTAL RESULTS
The proposed approach has been implemented on an
Intel Core i7-7700K 4.20 GHz CPU with 32 GB main
memory on a Windows 10 64-bit platform using
MATLAB 9.4 (R2018a). To evaluate the
effectiveness of the comparison and proposed
approaches, the FAU dataset (Christlein, et al., 2012) and
the CMH1 dataset (Silva, et al., 2015) are employed. The FAU
dataset consists of 48 high-resolution uncompressed
PNG color images, whereas CMH1 consists of 23
copy-move forged images.
In this study, based on the final detected forgery
region map and the ground truth map GT, precision
and recall are employed as two performance metrics.
Additionally, based on precision and recall, the f_1
score, computed as

f_1 = 2 × (precision × recall) / (precision + recall),   (27)

is employed as the third performance metric.
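The three metrics can be computed from binary maps as below; the sketch assumes non-empty detected and ground-truth regions (no zero-division guard):

```python
import numpy as np

def scores(detected, gt):
    # precision, recall, and the f1 score of Eq. (27) from boolean maps,
    # counting pixel-wise true positives against the ground truth GT
    tp = np.logical_and(detected, gt).sum()
    precision = tp / detected.sum()
    recall = tp / gt.sum()
    f1 = 2.0 * precision * recall / (precision + recall)
    return precision, recall, f1
```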
To evaluate the performance of the proposed
approach, three comparison approaches, namely,
Amerini, et al. (2013), Pun, et al. (2015), and Li, et al.
(2019) are employed. The final detected forgery
region maps of three comparison approaches and the
proposed approach for two images of FAU dataset are
shown in Figures 4 and 5. In terms of average
๐‘๐‘Ÿ๐‘’๐‘๐‘–๐‘ ๐‘–๐‘œ๐‘›, ๐‘Ÿ๐‘’๐‘๐‘Ž๐‘™๐‘™, and ๐‘“
๎ฌต
score, performance
comparisons of the three comparison approaches and
the proposed approach for FAU and CMH1 datasets
are listed in Tables 1 and 2, respectively.
Table 1: In terms of average precision, recall, and f_1
score, performance comparisons of three comparison
approaches and the proposed approach on the FAU dataset.

Approaches               precision   recall   f_1 score
Amerini, et al. (2013)   0.359       0.887    0.455
Pun, et al. (2015)       0.966       0.655    0.753
Li, et al. (2019)        0.921       0.773    0.842
Proposed                 0.938       0.815    0.859
Table 2: In terms of precision, recall, and f_1 score,
performance comparisons of three comparison approaches
and the proposed approach on the CMH1 dataset.

Approaches               precision   recall   f_1 score
Amerini, et al. (2013)   0.942       0.935    0.940
Pun, et al. (2015)       0.929       0.920    0.924
Li, et al. (2019)        0.985       0.960    0.972
Proposed                 0.978       0.972    0.975
Based on the experimental results listed in Tables
1 and 2, the proposed approach achieves a good balance
between precision and recall, as well as larger f_1
scores, compared with the three comparison
approaches. Based on the experimental results shown
in Figures 4 and 5, the final detected forgery region
maps of the proposed approach are better than those
of the three comparison approaches.
Figure 4: Final detected forgery region maps of
"red_tower_copy" in the FAU dataset: (a) original image,
(b) ground truth, (c)-(f) the forgery region maps detected by
Amerini, et al.'s approach (2013), Pun, et al.'s approach
(2015), Li, et al.'s approach (2019), and the proposed
approach, respectively.
Figure 5: Final detected forgery region maps of
"noise_pattern_copy" in the FAU dataset: (a) original image,
(b) ground truth, (c)-(f) the forgery region maps detected by
Amerini, et al.'s approach (2013), Pun, et al.'s approach
(2015), Li, et al.'s approach (2019), and the proposed
approach, respectively.
4 CONCLUDING REMARKS
In this study, an image copy-move forgery detection
approach using color features and hierarchical feature
point matching is proposed. Based on the
experimental results obtained in this study, the
performance of the proposed approach is better than
those of three comparison approaches.
ACKNOWLEDGMENTS
This work was supported in part by Ministry of
Science and Technology, Taiwan, ROC under Grants
MOST 108-2221-E-194-049 and MOST 109-2221-E-
194-042.
REFERENCES
Amerini, I., Ballan, L., Caldelli, R., Del Bimbo, A., and
Serra, G., 2011. A SIFT-based forensic method for
copy-move attack detection and transformation
recovery. IEEE Trans. on Information Forensics and
Security, 6(3), 1099-1110.
Amerini, I., et al., 2013. Copy-move forgery detection and
localization by means of robust clustering with J-
linkage. Signal Processing: Image Communication,
28(6), 659-669.
Bi, X., Pun, C. M., and Yuan, X. C., 2016. Multi-level dense
descriptor and hierarchical feature matching for copy-
move forgery detection. Information Sciences, 345, 226-
242.
Chen, H., Yang, X., and Lyu, Y., 2020. Copy-move forgery
detection based on keypoints clustering and similar
neighborhood search algorithm. IEEE Access, 8, 36863-
36875.
Christlein, V., et al., 2012. An evaluation of popular copy-
move forgery detection approaches. IEEE Trans. on
Information Forensics and Security, 7(6), 1841-1854.
Cozzolino, D., Poggi, G., and Verdoliva, L., 2015. Efficient
dense-field copyโ€“move forgery detection. IEEE Trans.
on Information Forensics and Security, 10(11), 2284-
2297.
Fadl, S. M. and Semary, N. A., 2017. Robust copy-move
forgery revealing in digital images using polar
coordinate system. Neurocomputing, 265, 57-65.
Gonzalez, R. C. and Woods, R. E., 2018. Digital Image
Processing, 4th edition. Prentice Hall, Upper Saddle
River, NJ.
Huang, H. Y. and Ciou, A. J., 2019. Copy-move forgery
detection for image forensics using the superpixel
segmentation and the Helmert transformation.
EURASIP Journal on Image and Video Processing, 1,
1-16.
Islam, A., Long, C., Basharat, A., and Hoogs, A., 2020.
DOA-GAN: dual-order attentive generative adversarial
network for image copy-move forgery detection and
localization. In IEEE/CVF Conf. on Computer Vision
and Pattern Recognition (CVPR), 4675-4684.
Jin, G. and Wan, X., 2017. An improved method for SIFT-
based copy move forgery detection using non-maximum
value suppression and optimized J-linkage. Signal
Processing: Image Communication, 57, 113-125.
Li, Y. and Zhou, J., 2019. Fast and effective image copy-
move forgery detection via hierarchical feature point
matching. IEEE Trans. on Information Forensics and
Security, 14(5), 1307-1322.
Lin, X., et al., 2016. SIFT keypoint removal and injection
via convex relaxation. IEEE Trans. on Information
Forensics and Security, 11(8), 1722-1735.
Lowe, D. G., 2004. Distinctive image features from scale-
invariant keypoints. Int. Journal of Computer Vision,
60(2), 91-110.
Pun, C., Yuan, X., and Bi, X., 2015. Image forgery detection
using adaptive oversegmentation and feature point
matching. IEEE Trans. on Information Forensics and
Security, 10(8), 1705-1716.
Silva, E., et al., 2015. Going deeper into copy move forgery
detection: exploring image telltales via multi-scale
analysis and voting processes. Journal of Visual
Communication and Image Representation, 29, 16-32.
Warif, N. B. A., et al., 2017. SIFT-symmetry: a robust
detection method for copy-move forgery with reflection
attack. Journal of Visual Communication and Image
Representation, 46, 219-232.
Zhong, J. L. and Pun, C. M., 2020. An end-to-end Dense-
InceptionNet for image copy-move forgery detection.
IEEE Trans. on Information Forensics and Security, 15,
2134-2146.
Zhu, Y., et al., 2020. AR-Net: adaptive attention and
residual refinement network for copy-move forgery
detection. IEEE Trans. on Industrial Informatics,
16(10), 6714-6723.