tor with the total variation regularization in Aubert
and Aujol (2008). The model facilitates handling
multiplicative gamma noise. While effective to some
extent, traditional models may struggle to handle
complex noise patterns and may introduce undesirable
artifacts in the denoised images.
Deep learning models use CNNs to learn the relation
between clean and noisy images. By leveraging
large datasets and learning complex patterns directly
from data, CNNs have demonstrated remarkable
performance in various image denoising tasks Jebur et al.
(2024). These models are widely studied in SAR
image denoising as well. Chierchia et al. proposed a
residual-based learning model in Chierchia et al.
(2017), which has faster convergence. However,
training involves using a large multitemporal SAR
image to approximate a clean image. A Bayesian
despeckling method inspired by blind-spot denoising
networks and incorporating a TV regularizer is
employed by Molini et al. in Molini et al. (2021).
We consider a CNN model based on the Bayesian MAP
approach.
2 DATA FIDELITY TERMS USING
BAYESIAN MAP
According to the Bayes rule,
\[
P(U|V) = \frac{P(V|U)\,P(U)}{P(V)}, \qquad (4)
\]
where $P(U|V)$ is the conditional probability of the
random variable $U$ given $V$. Here, we use the above
Bayes rule and try to restore the image by maximizing
the posterior probability $P(x|x_0)$ given by
\[
P(x|x_0) = \frac{P(x_0|x)\,P(x)}{P(x_0)}. \qquad (5)
\]
That is,
\[
\max_x P(x|x_0) = \max_x P(x_0|x)\,P(x), \qquad (6)
\]
since the term $P(x_0)$, the probability of the observed
image $x_0$, is a constant w.r.t. $x$ that can be neglected.
Assume that the speckles in SAR images follow
Poisson noise. Then the likelihood
$P(x_0|x)$ is given as
\[
P(x_0|x) = \frac{\exp(-x)\,x^{x_0}}{x_0!}. \qquad (7)
\]
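The negative logarithm of the likelihood in (7) is exactly the Poisson fidelity term $x - x_0 \log x$ that appears later in the derivation, up to the constant $\log(x_0!)$. A minimal sketch verifying this identity (the helper name `poisson_nll` is illustrative, not from the paper):

```python
import math

def poisson_nll(x, x0):
    """Negative log of the Poisson likelihood in Eq. (7):
    -log P(x0 | x) = x - x0*log(x) + log(x0!),
    where x is the clean pixel intensity and x0 the observed one."""
    return x - x0 * math.log(x) + math.lgamma(x0 + 1)

# Sanity check against the pmf of Eq. (7) directly:
x, x0 = 3.0, 4
pmf = math.exp(-x) * x**x0 / math.factorial(x0)
assert abs(poisson_nll(x, x0) - (-math.log(pmf))) < 1e-9
```

Since $\log(x_0!)$ does not depend on $x$, it is dropped from the objective in the subsequent minimization.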
One can consider the images $x$ and $x_0$ as sets of
independent pixels $x_i$ (the joint
probability equals the product of the marginal
probabilities of each random variable $x(x_i)$); therefore, (6)
can be written as
\[
\max_x P\big(x(x_i)\,|\,x_0(x_i)\big) = \max_x \prod_{i=1}^{N} P\big(x_0(x_i)\,|\,x(x_i)\big)\,P\big(x(x_i)\big), \qquad (8)
\]
where N is the total number of image samples.
Since the function $\log$ is a monotone function,
maximizing $P(x|x_0)$ is equivalent to minimizing the
negative log-likelihood, and hence from (7) and (8),
we can obtain the following:
\[
\min_x \left( \sum_{i=1}^{N} \Big( x(x_i) - x_0(x_i)\log\big(x(x_i)\big) \Big) - \sum_{i=1}^{N} \log\Big(P\big(x(x_i)\big)\Big) \right) \qquad (9)
\]
where the prior of $x$, say $P(x)$, follows a regularization
prior. For the sake of simplicity, we drop the pixel index $x_i$, thus
we get
\[
\min_x \{-\log P(x|x_0)\} = \min_x \big( x - x_0 \log x + \lambda\,\varphi(x) \big), \qquad (10)
\]
where $\varphi(x)$ is the regularization term arising from the prior on $x$. Many
authors consider $\varphi(x)$ to be the total variation of $x$.
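The objective in (10) can be sketched numerically for a discrete image. A minimal sketch, assuming a simple anisotropic first-order total variation as the regularizer; the helper names (`tv`, `map_objective`) are illustrative, not from the paper:

```python
import numpy as np

def tv(x):
    """Anisotropic first-order total variation ||grad x||_1,
    computed with forward finite differences."""
    return np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()

def map_objective(x, x0, lam):
    """Discrete version of Eq. (10): Poisson data fidelity plus
    lambda times the TV prior. The constant log(x0!) term is
    dropped since it does not depend on x."""
    fidelity = np.sum(x - x0 * np.log(x))
    return fidelity + lam * tv(x)

# For a constant unit image the TV term vanishes and the
# fidelity reduces to the number of pixels (log 1 = 0).
x = np.ones((4, 4))
obj = map_objective(x, x, 0.1)
assert abs(obj - 16.0) < 1e-9
```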
3 MAP MODEL WITH HOTV
REGULARIZATION
We implemented a deep learning model using a Convolutional
Neural Network (CNN) architecture designed
for Higher Order Total Variation (HOTV).
Generally, we use the loss function of the HOTV
model as in (3). In this paper, we designed the model
for the Poisson + HOTV loss function, which works
well to restore SAR/aerial images distorted with
Poisson noise. We consider the objective function
as in (10) with the assumption that the prior
$\varphi(x)$ follows HOTV, provided the noise follows
the Poisson distribution. That is, the fidelity
term is $x - x_0 \log x$ and the prior regularization of $x$ is
$\varphi(x) = \|\nabla^2 x\|_1$.
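One plausible discretization of the HOTV term $\|\nabla^2 x\|_1$ uses second-order finite differences; the stencil below (with the cross term counted twice, as in a symmetric Hessian) is an assumption for illustration, not necessarily the authors' exact choice:

```python
import numpy as np

def hotv(x):
    """l1 norm of second-order finite differences of x, one
    discretization of ||Hessian(x)||_1 (assumed stencil):
    |d2x/dx2| + |d2x/dy2| + 2*|d2x/dxdy|, summed over pixels."""
    dxx = np.diff(x, n=2, axis=0)
    dyy = np.diff(x, n=2, axis=1)
    dxy = np.diff(np.diff(x, axis=0), axis=1)
    return np.abs(dxx).sum() + np.abs(dyy).sum() + 2 * np.abs(dxy).sum()

# A linear ramp has zero second derivatives, so HOTV vanishes;
# this is the key advantage of HOTV over first-order TV, which
# penalizes such smooth gradients and causes staircasing.
ramp = np.outer(np.arange(5.0), np.ones(5))
assert hotv(ramp) == 0.0
```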
We also consider a custom loss
function that integrates the HOTV regularizer with
different Bayesian data fidelity terms to address
different noise types. So the model can be easily
adapted to handle other noise distributions, such as
Gamma, Gaussian, and Rayleigh, by modifying the
data fidelity term. We experimented with the model
with three variations of loss functions: Gamma +
HOTV, Gaussian + HOTV, and Rayleigh + HOTV. By
employing the same architecture and dataset of original
images, we evaluated the performance of these