We have
\[
J_1(f) = -\ln p(g \mid f, \theta_\varepsilon ; M) - \ln p(f \mid \theta_2 ; M)
       = \| g - H f \|^2 + \lambda \, \Omega(f)
\qquad (11)
\]
where λ = 1/θ_ε and Ω(f) = −ln p(f | θ_2 ; M). Two
families of priors can be distinguished:
separable:
\[
p(f) \propto \exp\Big[ -\theta_f \sum_j \phi(f_j) \Big]
\qquad (12)
\]
and Markovian:
\[
p(f) \propto \exp\Big[ -\theta_f \sum_j \phi(f_j - f_{j-1}) \Big]
\qquad (13)
\]
where different expressions have been used for the
potential function φ(·) (Bouman and Sauer, 1993;
Green, 1990; Geman and McClure, 1985), with great
success in many applications.
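As an illustration (not part of the original formulation), the two prior energies in (12) and (13) can be evaluated numerically. The sketch below assumes a Huber-type potential, one common robust choice among those cited; the exact form of φ is application-dependent, and the 1-D signal, identity forward operator H, and parameter values are purely hypothetical:

```python
import numpy as np

def potential(t, delta=1.0):
    """Huber-type potential phi(t): quadratic near zero, linear in the tails
    (one common choice; the papers cited use various forms)."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t**2, delta * (a - delta / 2.0))

def omega_separable(f, theta_f=1.0):
    """Separable prior energy, Eq. (12): theta_f * sum_j phi(f_j)."""
    return theta_f * potential(f).sum()

def omega_markovian(f, theta_f=1.0):
    """Markovian prior energy, Eq. (13): theta_f * sum_j phi(f_j - f_{j-1})."""
    return theta_f * potential(np.diff(f)).sum()

def criterion(f, g, H, lam, omega):
    """Regularized criterion of Eq. (11): ||g - H f||^2 + lambda * Omega(f)."""
    r = g - H @ f
    return r @ r + lam * omega(f)

# Tiny 1-D illustration with an identity forward operator (hypothetical data).
rng = np.random.default_rng(0)
f = np.array([0.0, 0.0, 1.0, 1.0, 1.0])          # piecewise-constant signal
H = np.eye(5)
g = H @ f + 0.1 * rng.standard_normal(5)          # noisy observation
J_sep = criterion(f, g, H, lam=0.5, omega=omega_separable)
J_mar = criterion(f, g, H, lam=0.5, omega=omega_markovian)
```

Note that the Markovian energy penalizes differences between neighbouring pixels, so a piecewise-constant signal has a lower prior energy under (13) than under the separable form (12).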
Still, this family of priors cannot give a precise
model for the unknown image in many applications,
due to the global image homogeneity assumption. For
this reason, we have chosen in this paper to use a non-
homogeneous prior model which takes into account the
assumption that the unknown image is composed of a
finite number of homogeneous materials. This implies
the introduction of a hidden image z = {z(r), r ∈ R},
which associates each pixel f(r) with a label (class)
z(r), where R represents the whole space of the image sur-
face. All pixels with the same label z(r) = k share the
same properties. Indeed, we use a Potts model to rep-
resent the dependence between the hidden variable pix-
els, as we will see in the next section. Meanwhile,
we propose two models for the unknown image f: an in-
dependent mixture of Gaussians and a Gauss-Markov
model. However, this choice of prior makes it impos-
sible to get an analytical expression for the maximum a
posteriori (MAP) or posterior mean (PM) estimator.
Consequently, we will use the variational Bayes tech-
nique to calculate an approximate form of this law.
The rest of this paper is organized as follows. In
section 2, we give more details about the proposed
prior models. In section 3, we employ these priors
within the Bayesian framework to obtain a joint posterior
law of the unknowns (image pixels, hidden variable,
and the hyperparameters, including the region statisti-
cal parameters and the noise variance). Then, in sec-
tion 4, we use the variational Bayes approxima-
tion in order to have a tractable approximation of the joint
posterior law. In section 5, we show an image restora-
tion example. Finally, we conclude this work in sec-
tion 6.
2 PROPOSED
GAUSS-MARKOV-POTTS
PRIOR MODELS
As we introduced in the previous section, the main
assumption here is the piecewise homogeneity of the
restored image. This model corresponds to a number
of applications where the studied image is composed
of a finite number of materials, for example, muscle
and bone, or gray and white matter in medical images.
Another application is non-destructive testing
(NDT) imaging in industrial applications, where the stud-
ied materials are, in general, composed of air-metal
or air-metal-composite. This prior model has al-
ready been used in several works for several applica-
tions (Mohammad-Djafari, Humblot and Mohammad-
Djafari, 2006; Féron et al., 2005).
In fact, this assumption permits us to associate a label
(class) z(r) with each pixel of the image f. The ensem-
ble of these labels z forms a K-color image, where K
corresponds to the number of materials, and R rep-
resents the entire image pixel area. We call this dis-
crete-valued variable a hidden field, which represents
the segmentation of the image.
Moreover, all pixels f_k = {f(r), r ∈ R_k} which
have the same label k share the same probabilistic
parameters (class means m_k and class variances v_k),
with ∪_k R_k = R. Indeed, these pixels have a spatial struc-
ture, while we assume here that pixels from differ-
ent classes are a priori independent, which is natural
since they belong to different materials. This will
be a key assumption when introducing the Gauss-Markov
prior model of the source later in this section.
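The label-field decomposition described above can be made concrete with a small sketch (illustrative only, with hypothetical class parameters and grid size): a hidden label image z partitions the pixel grid R into disjoint regions R_k, and every pixel in R_k is drawn with the class parameters (m_k, v_k).

```python
import numpy as np

K = 3                                          # number of materials (classes)
rng = np.random.default_rng(1)
z = rng.integers(0, K, size=(8, 8))            # hidden field: one label per pixel
m = np.array([0.0, 100.0, 200.0])              # hypothetical class means m_k
v = np.array([1.0, 4.0, 9.0])                  # hypothetical class variances v_k

# f(r) | z(r) = k  ~  N(m_k, v_k): every pixel draws from its class's Gaussian.
f = m[z] + np.sqrt(v[z]) * rng.standard_normal(z.shape)

# The regions R_k = {r : z(r) = k} are disjoint and cover the grid: U_k R_k = R.
regions = [np.flatnonzero(z.ravel() == k) for k in range(K)]
```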
Using the former assumption, we can give the
prior probability law of a pixel, knowing its class, as a
Gaussian (homogeneity inside the same class):
\[
p(f(r) \mid z(r) = k, m_k, v_k) = \mathcal{N}(m_k, v_k)
\qquad (14)
\]
This will give a Mixture of Gaussians (MoG) model
for the pixel p(f(r)). It can be written as follows:
\[
p(f(r)) = \sum_k a_k \, \mathcal{N}(m_k, v_k)
\quad \text{with} \quad a_k = P(z(r) = k)
\qquad (15)
\]
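As a quick numerical check (not part of the original text), the marginal in (15) can be evaluated on a grid and verified to integrate to one; the two-class weights, means, and variances below are purely hypothetical (e.g. dark "air" and bright "metal" grey levels):

```python
import numpy as np

def gauss(x, m, v):
    """Gaussian density N(x; m, v) with mean m and variance v."""
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2.0 * np.pi * v)

def mog_density(x, a, m, v):
    """Marginal MoG density of Eq. (15): p(f(r)) = sum_k a_k N(m_k, v_k)."""
    return sum(ak * gauss(x, mk, vk) for ak, mk, vk in zip(a, m, v))

# Hypothetical two-material example.
a = np.array([0.3, 0.7])        # a_k = P(z(r) = k); must sum to 1
m = np.array([20.0, 180.0])     # class means m_k
v = np.array([25.0, 100.0])     # class variances v_k

x = np.linspace(-100.0, 300.0, 20001)
p = mog_density(x, a, m, v)
mass = p.sum() * (x[1] - x[0])  # Riemann sum: should be close to 1
```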
Another important point is the prior modeling of
the spatial interaction between the different elements of
the prior model. This study is concerned with two in-
teractions: between pixels of the image within the same class,
f = {f(r), r ∈ R}, and between elements of the hidden variable,
z = {z(r), r ∈ R}. In this paper, we assign a Potts
model to the hidden field z in order to obtain more ho-
mogeneous classes in the image. Meanwhile, we
present two models for the image pixels f: the first
is independent, while the second is a Gauss-Markov
model. In the following, we give the prior probability
VISAPP 2008 - International Conference on Computer Vision Theory and Applications