2 THE PROPOSED METHOD
The proposed method for regularization parameter
assignment is conceived as a pre-processing phase
within a general restoration strategy. To make the pa-
per self-contained and to exploit all the ingredients of
the overall strategy adopted in the experimental part
of the work, we briefly outline the salient aspects of a
restoration strategy developed and presented in a pre-
vious study. It consists of an iterative neural method
that uses a gradient descent algorithm to minimize
a local cost function derived from a traditional global
constrained least-squares measure (Gallo et al., 2008).
In particular, the degradation measure to be minimized is a local cost function E(x, y) defined at any point (x, y) in an M × N image:

E(x, y) = (1/2) [g(x, y) − (h ∗ f̂)(x, y)]² + (1/2) λ(x, y) [(d ∗ f̂)(x, y)]²     (2)
where (h ∗ f̂)(x, y) denotes the convolution between a blur filter h centered at a point (x, y) of the restored image f̂ and the restored image f̂ itself, and (d ∗ f̂)(x, y) denotes the convolution between a high-pass filter d centered at a point (x, y) of the restored image f̂ and the restored image f̂ itself.
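As a concrete illustration, the per-pixel cost of Eq. (2) can be sketched in Python. The text does not fix the high-pass filter d, so the discrete Laplacian is assumed here as a common choice, and λ is assumed to be supplied as a per-pixel array:

```python
import numpy as np
from scipy.ndimage import convolve, laplace

def local_cost(g, f_hat, h, lam):
    """Local cost E(x, y) of Eq. (2): data-fidelity term plus a
    weighted smoothness term, evaluated at every pixel."""
    # Data term: squared residual between g and the blurred estimate h * f_hat.
    residual = g - convolve(f_hat, h, mode="reflect")
    # Smoothness term: response of a high-pass filter d applied to f_hat
    # (here the discrete Laplacian, an assumed choice for d).
    highpass = laplace(f_hat, mode="reflect")
    return 0.5 * residual**2 + 0.5 * lam * highpass**2
```

Minimizing this cost trades off fidelity to the observed data against smoothness, with λ(x, y) controlling the balance locally at each pixel.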
A multilayer perceptron model, trained with
the supervised back propagation learning algo-
rithm (Rumelhart et al., 1986), was adopted to com-
pute the regularization parameter based on specific
local information extracted from the degraded image
g(x, y) previously scaled in a range [0, 1]. The neural
learning task accomplished within the training phase can be formulated as a search for the best approximation of the function λ(x, y) = Y(S_m), where S_m represents a set of statistical measures extracted directly from the degraded image. The present work uses S_m = (S_1(x, y), S_2(x, y), S_3), where S_1 is the local variance computed directly on the degraded image and S_2 is the local variance computed on the degraded image smoothed with a Gaussian low-pass filter. In particular, S_1 is the variance calculated in a 3 × 3 window and S_2 is the variance calculated in a 5 × 5 window.
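A minimal sketch of computing S_1 and S_2, using the standard identity var = E[x²] − (E[x])²; the Gaussian smoothing sigma is an assumption, since the text does not specify it:

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def local_variance(img, size):
    """Local variance in a size x size window: E[x^2] - (E[x])^2."""
    mean = uniform_filter(img, size=size, mode="reflect")
    mean_sq = uniform_filter(img**2, size=size, mode="reflect")
    return np.maximum(mean_sq - mean**2, 0.0)  # clip tiny negative round-off

def statistical_measures(g, sigma=1.0):
    """S1: 3x3 local variance of g; S2: 5x5 local variance of a
    Gaussian-smoothed copy of g. The smoothing sigma is an assumption."""
    s1 = local_variance(g, size=3)
    s2 = local_variance(gaussian_filter(g, sigma=sigma), size=5)
    return s1, s2
```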
The joint use of S_1 and S_2 is motivated by the need to preserve image features during restoration. S_3 is a constant value derived from the histogram of S_1. In particular, S_3 is the variance value corresponding to the peak of the histogram. This is an important feature because it is directly correlated with the amount of noise in the degraded image, and we know that λ should be proportional to the amount of noise in the data (Inoue et al., 2003).
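The extraction of S_3 can be sketched as follows; the number of histogram bins and the use of the modal bin centre are assumptions not fixed by the text:

```python
import numpy as np

def s3_from_histogram(s1, bins=64):
    """S3: the variance value at the peak of the histogram of S1.
    The bin count and the use of the bin centre are assumptions."""
    counts, edges = np.histogram(s1.ravel(), bins=bins)
    peak = np.argmax(counts)                      # index of the modal bin
    return 0.5 * (edges[peak] + edges[peak + 1])  # centre of the modal bin
```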
The training set presented to the neural network for the supervised learning task consists of N pairs ((S_1, S_2, S_3), λ̂_j)_n, where n = 1, ..., N. The second components of the training examples, λ̂_j, are the expected outputs for the corresponding input components; they consist of regularization values obtained from successful restoration processes, as explained in Section 2.1.
The trained network is expected to be able to gen-
eralise, i.e. to associate adequate regularization val-
ues with degraded input images never seen during
training.
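The supervised set-up can be illustrated with scikit-learn's MLPRegressor. The network size, the synthetic training pairs, and the toy target profile below are all placeholders for the real pairs produced by Algorithm 1:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the training pairs ((S1, S2, S3), lambda_hat):
# real pairs would come from Algorithm 1 applied to restored images.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 3))  # (S1, S2, S3) triples
y = 0.1 + 0.5 * X[:, 2]                   # toy profile: lambda grows with S3

# A small multilayer perceptron trained with backpropagation,
# mirroring the supervised set-up described in the text.
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X, y)

# After training, the network maps unseen (S1, S2, S3) triples to
# regularization values, i.e. it generalises the learned profile.
lam_pred = mlp.predict(X[:5])
```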
2.1 Regularization Profile Construction
Representative samples of the function λ(x, y) = Y(S_m) must be presented to the network during the training phase if we want the network to learn it. Algorithm 1 describes in detail the procedure used to build the sample pairs ((S_1, S_2, S_3), λ̂_j)_n, while Figure 1 shows an example of the tabular data produced by the same algorithm.
Figure 1: The columns list the ISNR values obtained restor-
ing the image with a set of different constant λ values. Each
column identifies a group of pixels with variance included
in a prefixed range. The best λ value for a given range of
variance corresponds to the highest ISNR values obtained
(in bold) .
Our approach compares the improvement in signal-to-noise ratio (ISNR) measures calculated on a set of restored pixels f̂(x, y), all having a statistical measure included in an interval I_i ≤ S(x, y) < I_{i+1}. We then choose the best λ̂, i.e. the one corresponding to the highest ISNR. The result of this approach is an approximation of the function λ(x, y) = Y(S_m) representing the regularization profile with which to compute the regularization parameter.
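The bin-wise selection can be sketched as follows, assuming the usual ISNR definition 10 log₁₀(‖g − f‖² / ‖f̂ − f‖²) restricted to the pixels of each variance interval, where f is the original undegraded image:

```python
import numpy as np

def isnr(f, g, f_hat, mask=None):
    """Improvement in SNR (dB) over an optional pixel mask, e.g. the
    pixels whose statistical measure falls in [I_i, I_{i+1})."""
    if mask is None:
        mask = np.ones_like(f, dtype=bool)
    num = np.sum((g[mask] - f[mask]) ** 2)      # degradation energy
    den = np.sum((f_hat[mask] - f[mask]) ** 2)  # residual restoration error
    return 10.0 * np.log10(num / den)

def best_lambda_per_bin(f, g, restorations, s_measure, bin_edges):
    """For each variance interval [I_i, I_{i+1}), pick the lambda whose
    restoration yields the highest ISNR on that group of pixels.
    `restorations` maps each constant lambda to its restored image."""
    profile = {}
    for i in range(len(bin_edges) - 1):
        mask = (s_measure >= bin_edges[i]) & (s_measure < bin_edges[i + 1])
        if not mask.any():
            continue  # no pixels fall in this variance interval
        profile[(bin_edges[i], bin_edges[i + 1])] = max(
            restorations, key=lambda lam: isnr(f, g, restorations[lam], mask))
    return profile
```

The returned mapping from variance intervals to λ values is the tabulated regularization profile that the network is then trained to approximate.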
The training set is built by applying Algorithm 1 to a
set of images representative of a given domain. To be
exhaustive, each image in turn must be degraded with
different levels of noise and different kinds of blur.
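One possible way to generate such degraded training images; the blur kinds, blur strengths, and noise levels here are illustrative, not those used in the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def degrade(f, blur="gaussian", strength=1.5, noise_sigma=0.01, seed=0):
    """Produce one degraded training image in [0, 1]: blur f, then add
    white Gaussian noise. Blur types and levels are assumptions."""
    if blur == "gaussian":
        g = gaussian_filter(f, sigma=strength)
    else:  # simple uniform (box) blur as a second kind of degradation
        g = uniform_filter(f, size=int(strength))
    rng = np.random.default_rng(seed)
    return np.clip(g + rng.normal(0.0, noise_sigma, f.shape), 0.0, 1.0)

# Each image yields several training instances, one per (blur, noise) setting,
# e.g. [("gaussian", 1.0, 0.005), ("gaussian", 2.0, 0.02), ("box", 3, 0.01)].
```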
ASSIGNING AUTOMATIC REGULARIZATION PARAMETERS IN IMAGE RESTORATION