PSF Smooth Method based on Simple Lens Imaging
Dazhi Zhan, Weili Li, Zhihui Xiong, Mi Wang and Maojun Zhang
College of Information System and Management, National University of Defense Technology, Changsha, 410073, China
zhan938002@gmail.com
Keywords:
Simple Lens Optics, Computational Photography, Chromatic Aberration, Point Spread Function, Image Deconvolution.
Abstract:
Compared with modern camera lenses, a simple lens system can be more attractive for many scientific applications in terms of cost and weight. However, the simple lens system suffers from optical aberrations that limit its applicability. Recent research combined single lens optics with complex post-capture correction methods to correct these artifacts. In this study, we first estimate the spatially variant point spread function (PSF) through blind image deconvolution with total variation (TV) regularization. The PSF is then smoothed to enhance robustness. A sharp image is finally recovered through fast non-blind deconvolution. Experimental results show that our method is on par with state-of-the-art deconvolution approaches and has an advantage in suppressing ringing artifacts.
1 INTRODUCTION
Single lens optics with spherical surfaces often suf-
fer from optical aberrations, such as geometric distor-
tion, chromatic aberration, spherical aberration, and
coma (Mahajan, 1991). These aberrations dramatically degrade image quality. Thus, modern cameras combine dozens of different lens elements to compensate for aberrations. However, optical aberrations are inevitable, and lens design always involves a trade-off among various parameters. A complicated lens combination significantly increases the cost and weight of camera objectives. With the recent development of unmanned aerial vehicles (UAVs) and action cameras, such as the GoPro, simple lens systems have become appealing. However, a simple lens system still exhibits unavoidable artifacts because of its simple structure. Recently, an alternative approach that uses a single lens element rather than a sophisticated lens design, combined with computational photography, was developed for high-quality imaging.
(Heide, 2013) and (Schuler, 2011) combined a simple lens system with an image deconvolution method to approach the image quality of single lens reflex (SLR) cameras. Their work offers a way out of the contradiction between image quality and the complexity of imaging equipment. Unlike the methods presented by Schuler and Heide, the approach in this work requires no pinhole light source, dark room, or complex high-precision calibration procedure to estimate the PSF.
Figure 1: The correction scheme based on simple lens optics: a simple lens with chromatic aberration is corrected through computational photography (image deblurring).
In this work, we directly use a blind deconvolution method to estimate the PSF. Because the PSF is spatially variant, the original image is divided into a number of patches, and the PSF of each patch is estimated. Exploiting the similarity of adjacent PSFs, each patch is filtered and rearranged back into its original position, which makes the method robust and suppresses ringing artifacts. After the spatially variant PSF is estimated, a fast non-blind deconvolution method is used to obtain a sharp image. Our work demonstrates that high-quality images can be acquired with a simple lens design and a computational photography method. The simple lens system is potentially useful for many scientific applications, such as UAV imaging, astronomical imaging, remote sensing, and medical imaging.
The remainder of this paper is organized as fol-
lows. Section 2 provides a review of related work.
Section 3 introduces direct PSF estimation by blind
deconvolution, subsequent processing of the PSF fil-
ter, and a fast non-blind deconvolution method to re-
store sharp images. Section 4 presents the experi-
ments and a comparison with the results of other methods. Section 5 concludes the study.
2 RELATED WORK
2.1 Simple Lens Imaging
The idea of simple optics with computationally corrected aberrations was first proposed by (Schuler, 2011). Their work presented an approach that alleviates image degradations caused by imperfect optics by correcting optical aberrations. In the calibration step, the optical aberrations are encoded in a spatially variant PSF measured in a completely dark room, with point light sources emitting light through a sufficiently small aperture. However, the method is difficult to reproduce because it relies on a highly complicated and sophisticated device to measure the PSF. In addition, lens aberrations depend to a certain extent on the lens settings (aperture, focus, and zoom), which cannot be modeled trivially.
(Heide, 2013) built on Schuler's research and improved simple lens imaging. He proposed a new cross-channel prior for color images that can handle large and complex blur kernels. First-order primal-dual convex optimization was used to incorporate the prior and guarantee convergence to the global optimum. Treating PSF estimation as a deconvolution problem, Heide used a calibration pattern and a TV prior to ensure the robustness of per-channel spatially variant PSF estimation. However, their method requires highly sophisticated experimental procedures and a large amount of computation time.
(Li, 2015) combined image and sparse kernel pri-
ors to estimate space-variant PSF in blind decon-
volution and applied a fast non-blind deconvolution
method based on the hyper-Laplacian prior to acquire
a final clear image. Nevertheless, their method is un-
suitable for solving chromatic aberrations.
2.2 Image Deconvolution
Although blind deconvolution is ill-posed, the prob-
lem still provides a fertile ground for novel process-
ing methods. Blind image deconvolution approaches
can be classified into two categories: separative and
joint. In the separative approach, PSF is identified
and later used to restore the original image in combi-
nation with a blurred image. This approach can gen-
erally be divided into two stages: blur kernel identifi-
cation or PSF estimation and non-blind image decon-
volution. The other class of existing deconvolution
methods is the joint approach, in which the original
image and blur kernel are identified simultaneously.
Accordingly, several workaround methods, such as
maximum a posteriori estimation (Stockham and Cannon, 1975),
Bayesian methods (Lee and Cho, 2013), adaptive cost
functions, alpha-matte extraction, and edge localiza-
tion (Xu, 2013), are required to produce good results.
If the PSF has been obtained, the problem is called non-blind deconvolution, in which the image is restored from the blurred observation and the known PSF. Compared with blind deconvolution, non-blind deconvolution is an easier problem that can be addressed with a high-quality deconvolution algorithm. Evidently, directly dividing the blurred image by the kernel in the frequency domain does not work. Even when the PSF has been estimated, non-blind deconvolution remains an ill-posed problem: ringing artifacts and loss of color are observed even if a highly accurate kernel is provided.
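To make this ill-posedness concrete, the following minimal Python sketch (not from the paper) blurs a synthetic image and then divides by the kernel in the frequency domain; even 1% noise is amplified wherever the kernel spectrum is small, so the naive estimate ends up farther from the sharp image than the blurred input itself. All names and sizes are illustrative assumptions.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def psf2otf(psf, shape):
    """Zero-pad the PSF to the image size and circularly shift its centre to the origin."""
    otf = np.zeros(shape)
    otf[:psf.shape[0], :psf.shape[1]] = psf
    return fft2(np.roll(otf, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1)))

rng = np.random.default_rng(0)
sharp = rng.random((128, 128))                            # stand-in for a sharp image
yy, xx = np.mgrid[-4:5, -4:5]
psf = np.exp(-(xx ** 2 + yy ** 2) / 4.0)
psf /= psf.sum()                                          # 9x9 Gaussian blur kernel
K = psf2otf(psf, sharp.shape)
blurred = np.real(ifft2(fft2(sharp) * K)) + 0.01 * rng.standard_normal(sharp.shape)

naive = np.real(ifft2(fft2(blurred) / K))                 # noise explodes where |K| is tiny
print(np.abs(blurred - sharp).mean(), np.abs(naive - sharp).mean())
```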
Sparse natural image priors have been utilized to
improve image restoration in non-blind deconvolution
in which PSF is known (Cannon, 1976). Iteratively
reweighted least squares (IRLS) (Levin and Fergus,
2007) and variable substitution schemes (Black and
Rangarajan, 1996) have been employed to constrain
the solution. To suppress ringing artifacts, Yuan et al.
(Yuan and Sun, 2007) proposed a progressive proce-
dure to gradually add back image details.
3 METHOD
In this chapter, we split the image into patches and es-
timate the spatially variant PSF through blind decon-
volution. The blind deconvolution method proposed
(Krishnan, 2011) is used to estimate the PSFs. The
original l
1
regularization of k is a TV prior regulariza-
tion to improve the accuracy of PSFs estimation. The
processes of x and k update are introduced in Section
3.2. After the PSFs are estimated, they are smoothed
by computing the weighted averages of neighboring
patches(introduced in Section 3.3). Finally, a sharp
image is obtained through fast non-blind deconvolu-
tion in Section 3.4.
3.1 Image Deblur Model
The primary challenge in achieving these goals is that
simple lenses with spherical interfaces exhibit aberra-
tions, i.e., high-order deviations from the ideal linear
thin lens model. These aberrations cause rays from
object points to focus imperfectly onto a single im-
age point; complicated PSFs that vary over the image plane are thus created. These PSFs need to be
removed through deconvolution. The effect becomes
more pronounced at large apertures where many off-
axis rays contribute to image formation. Many previ-
ous studies assumed that blurring is a spatially invari-
ant convolution process.
$$B = I \otimes K \qquad (1)$$
where $\otimes$ is the convolution operator, $I$ is the original image to recover, $B$ is the observed blurred image, and $K$ is the blur kernel (or point spread function). Recovering the original image $I$ from the blurred image $B$ is the so-called image deblurring problem. Based on this degradation model, the general regularized restoration model is:
$$\min_{I} \; \|I \otimes K - B\|_2^2 + \lambda \varphi(I) \qquad (2)$$
The first term $\|I \otimes K - B\|_2^2$ is the fidelity term; it keeps the restored image consistent with the blurred observation and thus ensures the plausibility of the restoration result. The second term $\varphi(I)$ is the regularization term; it encodes prior knowledge about the kernel or the original image, which guarantees that the characteristics of the restored image are similar to those of a clear image. $\lambda$ is the regularization parameter that balances the fidelity term against the regularization.
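The degradation model (1) and the regularized objective (2) can be written down directly in code. The sketch below is illustrative only: it assumes an anisotropic total-variation regularizer for $\varphi(I)$ and same-size convolution; the variable names and parameter values are not taken from the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def objective(I, K, B, lam):
    """Fidelity term ||I*K - B||_2^2 plus lam * phi(I), with phi chosen as anisotropic TV."""
    fidelity = np.sum((convolve2d(I, K, mode="same") - B) ** 2)
    phi = np.abs(np.diff(I, axis=1)).sum() + np.abs(np.diff(I, axis=0)).sum()
    return fidelity + lam * phi

# Forward model of Eq. (1): B = I convolved with K, plus noise
rng = np.random.default_rng(1)
I_true = rng.random((64, 64))
K = np.ones((7, 7)) / 49.0
B = convolve2d(I_true, K, mode="same") + 0.005 * rng.standard_normal((64, 64))
print(objective(I_true, K, B, lam=0.01))
```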
3.2 Blind Kernel Estimation
In contrast to the simple lenses of Heide and Schuler,
some improvements are achieved by adding one or
two lenses, each of which is designed to correct the
chromatic aberrations of a single lens. With the re-
duction in chromatic aberrations, the complexity of
the PSF of the optical system also decreases. Dif-
ferent from PSF estimation with a calibration pat-
tern (Heide, 2013; Brauers, 2010; Joshi, 2008), the current kernel estimation is performed in the image's high-frequency regions through blind deconvolution, where a large amount of the image's texture information resides. Given the blurred and noisy patches of the input image $y$, we generate the high-frequency version by using two discrete filters, namely, $\nabla_x = [1, -1]$ and $\nabla_y = [1, -1]^T$. The cost function for spatially invariant blurring is:
$$\min_{x,k} \; \|x \otimes k - y\|_2^2 + \frac{\|x\|_1}{\|x\|_2} + \mu \|\nabla k\|_1 \quad \text{s.t.} \quad k \geq 0, \; \sum_i k_i = 1 \qquad (3)$$
The constraints $k \geq 0$ and $\sum_i k_i = 1$ enforce non-negativity and unit energy. Here $x$ is the unknown sharp (high-frequency) image, $k$ is the unknown blur kernel, and $y$ is the input blurred, noisy image. In the cost function, the first term $\|x \otimes k - y\|_2^2$ is a data-fitting term, and the second term $\frac{\|x\|_1}{\|x\|_2}$ is the new regularization on $x$. The third term is a TV regularization of $k$. The PSF neither exhibits radial symmetry nor resembles a simple Gaussian or disc shape, so TV regularization is a robust choice for spatially variant PSF estimation; it helps to reduce noise in the kernel and has good convergence properties. $\mu$ is a regularization parameter that balances the data-fitting term against the kernel prior.
The x sub-problem is expressed as follows:
$$\min_{x} \; \|x \otimes k - y\|_2^2 + \frac{\|x\|_1}{\|x\|_2} \qquad (4)$$
The new regularization term $\frac{\|x\|_1}{\|x\|_2}$ makes this sub-problem non-convex. Once the denominator is fixed at its value from the previous iteration, the problem becomes a convex $l_1$-regularized problem. Numerous fast algorithms have been proposed for $l_1$-regularized problems in the compressed sensing literature, including the well-known iterative shrinkage-thresholding algorithm (ISTA) (Beck and Teboulle, 2009). Krishnan reported that this inner-outer iteration converges effectively despite the non-convexity of the problem.
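A hedged sketch of this inner-outer scheme for the x sub-problem (4) is given below: with $\|x\|_2$ frozen at its value from the previous outer iteration, the inner problem is $l_1$-regularized least squares, which a few ISTA iterations can handle. The step size, iteration count, and the weight lam are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import convolve2d

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def update_x(x, k, y, lam=0.01, inner_iters=20, step=0.5):
    """One outer iteration of the x update: freeze ||x||_2, then run ISTA on the l1 problem."""
    k_flip = k[::-1, ::-1]                               # adjoint of convolution with k
    w = lam / max(np.linalg.norm(x.ravel()), 1e-8)       # effective l1 weight with ||x||_2 fixed
    for _ in range(inner_iters):
        residual = convolve2d(x, k, mode="same") - y
        grad = 2.0 * convolve2d(residual, k_flip, mode="same")
        x = soft_threshold(x - step * grad, step * w)    # gradient step followed by shrinkage
    return x
```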
The k update sub-problem is expressed as follows:
$$\min_{k} \; \|x \otimes k - y\|_2^2 + \mu \|\nabla k\|_1 \qquad (5)$$
IRLS is used to solve this sub-problem. The method sets invalid elements to zero and renormalizes the values so that the constraints on $k$ are retained in the result. We perform IRLS once, and the kernel weights are computed from the kernel of the previous $k$ update. A low solving accuracy suffices for the inner IRLS system, obtained with a few conjugate gradient (CG) iterations. During kernel optimization, after recovering the kernel at the finest level, we threshold small elements of the kernel to zero to reduce noise (Fergus, 2006).
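The kernel post-processing described here (zeroing invalid elements and renormalizing so that the constraints of Eq. (3) still hold) can be summarized in a few lines; the relative threshold below is an assumed value, not the one used in the experiments.

```python
import numpy as np

def project_kernel(k, rel_thresh=0.05):
    """Enforce k >= 0 and sum(k) = 1 and suppress small, noisy elements."""
    k = np.maximum(k, 0.0)                    # non-negativity
    k[k < rel_thresh * k.max()] = 0.0         # threshold small elements to zero
    s = k.sum()
    return k / s if s > 0 else k              # renormalize to unit energy
```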
3.3 Smoothed Spatially-variant PSF
Accurate estimation of PSF is essential to image de-
convolution. An exact PSF can prevent the occur-
rence of the ringing effect in the image in the process
of deconvolution. Previous studies usually assumed
that PSF does not depend on the position in the im-
age (spatially invariable PSF). However, because of
the properties of optical systems, PSF changes as the
position in the image varies (spatially variant PSF).
Figure 2: (a) IP camera combined with C-mount; (b) self-made simple lens with three lenses.

Fig. 2(a) shows an internet protocol (IP) camera equipped with a 1/1.9'' CMOS sensor, with which images of 1080 × 1920 pixels can be obtained. Fig. 2(b) shows our self-made simple lens system with three lenses at f/35 mm. We capture the observed image with the simple lens system. The images are then split into 6 × 10 patches, and each patch of 180 × 192 pixels is estimated through blind deconvolution.
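The patch decomposition is straightforward; the sketch below splits a 1080 × 1920 image into the 6 × 10 grid of 180 × 192 patches used here. The function name and the commented call to a per-patch kernel estimator are illustrative placeholders.

```python
import numpy as np

def split_patches(img, rows=6, cols=10):
    """Split an image into a rows x cols grid of equally sized patches."""
    h, w = img.shape[:2]
    ph, pw = h // rows, w // cols             # 180 x 192 for a 1080 x 1920 image
    return [[img[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw] for c in range(cols)]
            for r in range(rows)]

# psfs = [[estimate_kernel(p) for p in row] for row in split_patches(blurred)]  # hypothetical
```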
Figure 3: (a) blurred image captured by three lenses (b) cor-
responding spatially-variant PSFs.
Fig.3 shows the image captured by our simple lens
optics. On the right are the PSFs for various positions on the image plane, where each cluster of points represents one PSF. As shown in the figure, the PSF varies across
different locations. Most of the kernels do not resem-
ble a Gaussian shape, and the distribution does not
exhibit radial symmetry. In addition, the kernels are
highly spatially varying, ranging from disc-like struc-
tures to thin stripes (Brauers, 2010). Therefore, the
shift-variant PSF can be modeled by
$$b(x,y) = i(x,y) \otimes k(x_0, y_0; x, y) + n(x,y) \qquad (6)$$
where $i(x,y)$ is the sharp image and $b(x,y)$ is the blurred, noisy image. The spatially varying PSF is expressed by $k(x_0, y_0; x, y)$, which depends on the image patch location. Additive noise is modeled by $n(x,y)$.
Given that the image is divided into image
patches, the number of pixels utilized to estimate PSF
is reduced compared with the estimation that consid-
ers the entire image. Several incorrect PSF estimations can be seen in Fig. 3(b): their energy is spread out rather than clustered, which reduces the robustness and stability of the estimation.
We therefore propose a method that smooths each kernel by using the neighboring PSFs. Because the optical system changes gradually, neighboring PSFs are similar, so the blur kernels of adjacent image patches can be combined by weighted averaging. To this end, the estimated PSFs are arranged consecutively in a new image, the pixels at the same position in each PSF are rearranged into a new block, each block is processed with filters, and the result is rearranged back into its previous layout (a code sketch follows below). The filtering includes a 3 × 3 median filter, which reduces stochastic errors between neighboring patches. If the PSF estimation fails in one block or produces abnormal pixel values, the PSF data are obtained from the neighboring PSFs.
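The median-filtering step can be expressed compactly by stacking the per-patch PSFs into a (rows, cols, R, R) array and filtering each PSF pixel across the 3 × 3 patch neighborhood, which is equivalent to the rearrangement described above. This is a minimal sketch, assuming scipy's median_filter and a final renormalization; it omits the weighted averaging and low-pass filtering steps.

```python
import numpy as np
from scipy.ndimage import median_filter

def smooth_psfs(psfs):
    """psfs: array of shape (rows, cols, R, R) holding one estimated PSF per image patch."""
    # 3x3 median filter across the patch grid, applied independently to every PSF pixel
    smoothed = median_filter(psfs, size=(3, 3, 1, 1), mode="nearest")
    sums = smoothed.sum(axis=(2, 3), keepdims=True)
    return np.where(sums > 0, smoothed / np.maximum(sums, 1e-12), smoothed)  # keep unit energy
```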
The following low-pass filter kernel is also applied:

$$\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix} \qquad (7)$$
Next, invalid values (PSF pixel values below a position-dependent threshold) are set to zero. This procedure reduces the noise near the border of a PSF, where only small values are expected. The thresholding function is defined by:
$$P(x,y) = \begin{cases} 0 & \text{for } p(x,y) < T(x,y) \\ p(x,y) & \text{for } p(x,y) \geq T(x,y) \end{cases} \qquad (8)$$
with the threshold function

$$T(x,y) = 1 - H(x)\,H(y) \qquad (9)$$
$$H(d) = \begin{cases} 1 & \text{for } 0 \leq d \leq \alpha\frac{R}{2} \\ 1 - \left[ \dfrac{d - \alpha\frac{R}{2}}{2(1-\alpha)\frac{R}{2}} \right]^2 & \text{for } \alpha\frac{R}{2} < d \leq \frac{R}{2} \end{cases} \qquad (10)$$
where $P(x,y)$ is the smoothed PSF value of the current pixel, $d$ is the distance between the pixel and the center of the blur kernel, and $R$ is the size of the kernel. The parameter $\alpha$ $(0 < \alpha < 1)$ controls the intensity of the noise reduction.
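The border-noise suppression of Eqs. (8)-(10) is sketched below. The distance $d$ is interpreted per axis, so that $T(x,y) = 1 - H(x)H(y)$ uses the horizontal and vertical distances to the kernel centre; this reading, like the default $\alpha$, is an assumption where the text is ambiguous.

```python
import numpy as np

def H(d, R, alpha):
    """Eq. (10): 1 near the centre, decaying quadratically towards the kernel border."""
    d = np.abs(d).astype(float)
    out = np.ones_like(d)
    far = d > alpha * R / 2.0
    out[far] = 1.0 - ((d[far] - alpha * R / 2.0) / (2.0 * (1.0 - alpha) * R / 2.0)) ** 2
    return out

def suppress_border_noise(psf, alpha=0.5):
    """Eqs. (8)-(9): zero out PSF values below the position-dependent threshold T."""
    R = psf.shape[0]                          # kernel size (R x R)
    d = np.arange(R) - (R - 1) / 2.0          # per-axis distance to the kernel centre
    T = 1.0 - H(d[None, :], R, alpha) * H(d[:, None], R, alpha)
    return np.where(psf >= T, psf, 0.0)
```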
3.4 Fast Non-blind Deconvolution
Once the final kernel k is estimated, the problem
changes to non-blind deconvolution that recovers the
sharp image x from y. Krishnan opted to use the non-
blind deconvolution method from [12]. This algo-
rithm uses a continuation method to solve the follow-
ing cost function:
$$\min_{x} \; \|x \otimes k - y\|_2^2 + \mu \|\nabla_g x\|^{0.8} \qquad (11)$$
where $\nabla_g$ denotes the horizontal and vertical derivative filters $\nabla_x = [1, -1]$ and $\nabla_y = [1, -1]^T$. It can also be
useful to include second-order derivatives or more sophisticated filters (Roth and Black, 2005). Whereas an $l_2$ norm would distribute derivatives evenly over the image, the sparse $l_{0.8}$ norm concentrates derivatives at a small number of pixels, leaving the majority of image pixels constant (Levin and Fergus, 2007). This produces sharp edges, reduces noise, and helps remove unwanted image artifacts such as ringing.
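For reference, the sketch below performs non-blind deconvolution with a gradient-domain prior, but with the $l_2$ (Tikhonov-on-gradients) norm instead of the $l_{0.8}$ norm of Eq. (11), because the $l_2$ case has a closed-form solution in the Fourier domain. It is a simpler stand-in under circular boundary assumptions, not the continuation method used here.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def psf2otf(psf, shape):
    otf = np.zeros(shape)
    otf[:psf.shape[0], :psf.shape[1]] = psf
    return fft2(np.roll(otf, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1)))

def deconv_l2_gradient(y, k, mu=1e-2):
    """argmin_x ||k*x - y||^2 + mu (||dx*x||^2 + ||dy*x||^2), solved per Fourier coefficient."""
    K = psf2otf(k, y.shape)
    Gx = psf2otf(np.array([[1.0, -1.0]]), y.shape)    # horizontal derivative filter
    Gy = psf2otf(np.array([[1.0], [-1.0]]), y.shape)  # vertical derivative filter
    den = np.abs(K) ** 2 + mu * (np.abs(Gx) ** 2 + np.abs(Gy) ** 2)
    return np.real(ifft2(np.conj(K) * fft2(y) / den))
```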
A state-of-the-art single image deblurring method was proposed by Xu et al. (Xu, 2013), who introduced a generalized and mathematically sound $l_0$ sparse expression. In the next section, our results are compared with Xu's to evaluate our method and highlight its improvements.
4 RESULTS
This section presents detailed comparisons of deconvolution methods for correcting the aberrations of our single lens system. The tested real images are all obtained at
ISO 100 and auto exposure. We implement our al-
gorithm and conduct experiments in MATLAB. All
experiments are performed on a computer with dual-
core Intel Core i5 CPU with 2.7 GHz and 8 GB RAM.
Test images with a size of 1080 × 1920 were cap-
tured with an IP camera using our simple lens system
with three individual elements. The PSF estimated with Xu's method is space-invariant, and its size is approximated automatically by the algorithm, whereas the PSF size in our method is fixed to 21. The parameters in Xu's method are set to the combination that produces the best deconvolution result. The computing time of our method is 338 seconds, whereas that of Xu's method is 29 seconds.
The image captured by the simple lens system is shown in Fig. 4; Fig. 4(a) is the original image, which exhibits texture loss because of chromatic aberration. After the deconvolution process, Xu's method can restore most of the contours of the image and can correct chromatic aberration. However, as observed in the left close-up window in Fig. 4(b), Xu's method produces ringing artifacts, particularly in high-frequency regions; for example, around the red character on the yellow background, spurious transverse corrugations between two characters are generated.
By comparison, our method (Fig.4(c)) is clean and
preserves more details. Ringing artifacts usually oc-
cur because of the inaccurate estimation of PSF. By
estimating the spatially varying PSF and smooth-
ing, our method ensures robustness and restores the
blurred image while suppressing ringing artifacts dur-
ing deconvolution.
Figure 4: Deconvolution result comparison of real images. (a) Input blurred image captured by the simple lens camera with three lenses. (b) The deblurred result of (Xu, 2013). (c) The deblurred result of our approach.
To increase the credibility of the results, we compare another set of images under a different lighting condition, and the same conclusions are obtained. Compar-
ison of the characters on the bottle and eyes of the
yellow minion shows that slight out-of-focus blur and
ringing artifacts still exist (Fig.5(b)). With regard to
the hair and flower region, our method restores small
edges and preserves texture details because of the ac-
curate estimation of PSF. Our method outperforms
previous single image deblurring methods in terms of
total visual quality and details.
5 CONCLUSION
By adding one or two individual optical elements to optimize the lens design and applying effective blind deconvolution, we have shown that single lens imaging, which suffers from optical aberrations, can eliminate chromatic aberrations and restore a sharp image.
Figure 5: Deconvolution result comparison of another set of images. (a) Input blurred image captured by the simple lens camera with three lenses. (b) The deblurred result of (Xu, 2013). (c) The deblurred result of our approach.

We estimated spatially variant PSFs through blind deconvolution with a TV prior and smoothed the PSFs by using neighboring image patches. This method of
PSF estimation is robust and highly efficient. Then,
we restored a sharp image through non-blind decon-
volution, and the results are comparable with those
of state-of-the-art deconvolution approaches. Our
method has an advantage in suppressing ringing ar-
tifacts and recovering edge details.
Further improvement can be explored by estab-
lishing a more reasonable means of image division,
such as dividing the image according to the texture
or frequency domain. Another aspect is the current
2D-PSF estimation; future work can estimate 3D-PSF
with image depth of field.
REFERENCES
Beck, A. and Teboulle, M. (2009). A fast iterative shrinkage-thresholding algorithm for linear inverse problems. In SIAM Journal on Imaging Sciences, 2 (1), 233-240.
Black, M. and Rangarajan, A. (1996). The unification of
line processes, outlier rejection and robust statistics
with applications to early vision. In International
Journal of Computer Vision.
Brauers, J. (2010). Direct PSF estimation using a random noise target. In SPIE Electronic Imaging, July.
Cannon, M. (1976). Blind deconvolution of spatially invariant image blurs with phase. In IEEE Trans. Acoust. Speech, Signal Processing, vol. 24, pp. 58-63.
Fergus, R. (2006). Removing camera shake from a single photograph. In ACM Transactions on Graphics, 25 (3), 787-794.
Heide, F. and Rouf, M. (2013). High-quality computational imaging through simple lenses. In ACM Transactions on Graphics, 32 (5), 13-15.
Joshi, N. (2008). PSF estimation using sharp edge prediction. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 18 23-28.
Krishnan, D. (2011). Blind deconvolution using a normal-
ized sparsity measure. In IEEE Conference on Com-
puter Vision Pattern Recognition, 233-240.
Lee, S. and Cho, S. (2013). Recent advances in image de-
blurring. In SIGGRAPH Asia, 1-108.
Levin, A. and Fergus, R. (2007). Deconvolution using nat-
ural image priors. In 26 (3).
Li, W. and Liu, Y. (2015). Computational photography algorithm for quality enhancement of single lens imaging deblurring. In Optik - International Journal for Light and Electron Optics, 126 (21), 2788-2792.
Mahajan, V. (1991). Aberration Theory Made Simple.
Canada.
Roth, S. and Black, M. (2005). A framework for learning image priors. In CVPR.
Schuler, C. J. and Hirsch, M. (2011). Non-stationary correction of optical aberrations. In International Conference on Computer Vision, 32, 659-666.
Stockham, T. and Cannon, T. (1975). Blind deconvolution
through digital signal processing. In Proceedings of
the IEEE, 63 (4), 678-692.
Xu, L. (2013). Unnatural l0 sparse representation for natural
image deblurring. In IEEE Conference on Computer
Vision Pattern Recognition, 1107-1114.
Yuan, L. and Sun, J. (2007). Image deblurring with blurred/noisy image pairs. In ACM SIGGRAPH, vol. 26, p. 1.