Variable Exposure Time Imaging for Obtaining HDR Images
Saori Uda, Fumihiko Sakaue and Jun Sato
Department of Computer Science and Engineering, Nagoya Institute of Technology,
Gokiso, Showa, Nagoya 466-8555, Japan
Keywords:
Computational Photography, HDR Image, Exposure Time.
Abstract:
In this paper, we propose a novel imaging method called variable exposure time imaging for obtaining HDR images from a single image capture. In this method, we control the exposure time pixel by pixel, so each pixel in an image taken by this imaging method is obtained under a different exposure time. We call this image a variable exposure image. By using the variable exposure image, we can synthesize a high dynamic range image efficiently, since the exposure time is optimized pixel by pixel according to the input intensity at each pixel. Experimental results show the efficiency of the proposed imaging method.
1 INTRODUCTION
Obtaining high dynamic range images is very impor-
tant in many computer vision applications. However,
the dynamic range in natural scenes is much wider
than the dynamic range of ordinary cameras. If the intensity of the input scene is above or below the dynamic range of the camera, over exposure or under exposure occurs, as shown in Fig. 1.
In order to avoid the over exposure and under
exposure problems, several methods have been pro-
posed for obtaining high dynamic range (HDR) im-
ages from ordinary cameras (Burt and Kolczynski,
1993; Debevec and Malik, 1997; Mann and Picard,
1995; Aggarwal and Ahuja, 2001; Schechner and Na-
yar, 2001). In these methods, multiple images are
taken by an ordinary camera under different expo-
sure parameters, and these images are combined so
that a single HDR image is obtained. Although these
methods are useful, several image captures are re-
quired for obtaining a single HDR image. Thus, they
are not appropriate for obtaining HDR images in dy-
namic scenes, where objects move during multiple
image captures.
Another way to obtain HDR images is to modify
the imaging system of cameras, so that we can ob-
tain HDR images from a single image capture. For
this objective, some new imaging methods were pro-
posed recently. In these methods, the exposure of
imaging systems is controlled pixel by pixel by us-
ing special devices, such as LCD and LCoS (Nayar
and Mitsunaga, 2000; Nayar et al., 2003; Mannnami
Figure 1: The over exposure and under exposure problems: (a) over exposed image; (b) under exposed image.
et al., 2007). By using these methods, over exposure
and under exposure can be avoided, even if the dynamic range of the input scene is very wide. Although
these methods can obtain HDR images from a single
shot, a systematic delay of exposure control exists in these methods, since they compute the exposure pattern of the current image frame from the image obtained in the previous frame. As a result, these methods cannot cope with rapid changes of the dynamic range in the scene.
In this paper, we propose a new imaging method
which we call variable exposure time imaging. In this
imaging method, each pixel in an image sensor stops
its exposure when the integrated intensity in the pixel
becomes higher than a threshold value. Thus, the ex-
posure time of each pixel varies according to the input
intensity. In this method, not only the observed intensity but also the exposure time is recorded pixel by pixel. Therefore, we can obtain information on the input light not only from the observed intensity but also from the exposure time at each pixel. From the obtained expo-
sure time and intensity in each pixel, an HDR image
can be obtained efficiently from a single image cap-
ture.
2 EXPOSURE MODEL
We first consider the exposure model of digital cam-
eras. Considering the light field in the scene, cam-
eras can be regarded as recorders of light rays. Let
L(u, v,t) be a 3-dimensional continuous light field,
where (u, v) denotes 2D position on the image plane
and t denotes time. Note that the light field in general consists of the position, orientation, and time of light rays, and thus it is 5D. However, in this paper we consider ordinary 2D cameras, and thus we do not consider the orientation of light rays. Hence, L(u, v, t) indicates the integrated intensity of the set of light rays which go through a point (u, v) on the image plane at time t.
By using the light field L(u, v,t), an intensity
I(x, y) obtained by the camera at pixel (x, y) in a single
frame time can be described as follows:
I(x, y) = ∫_0^T ∫_{y−1/2}^{y+1/2} ∫_{x−1/2}^{x+1/2} L(u, v, t) du dv dt    (1)
where T is the exposure time of a single frame. If L(u, v, t) is constant during the exposure time, e.g., when the observed scene is static, Eq. (1) can be rewritten as follows:
I(x, y) = T ∫_{y−1/2}^{y+1/2} ∫_{x−1/2}^{x+1/2} L(u, v, 0) du dv    (2)
Then, Eq.(2) can be described as follows:
I(x, y) = T E(x, y)    (3)
where E(x, y) is the intensity obtained in a unit time
as follows:
E(x, y) = ∫_{y−1/2}^{y+1/2} ∫_{x−1/2}^{x+1/2} L(u, v, 0) du dv.    (4)
Hence, E(x, y) is considered as the intensity of input
light. In an ordinary camera, the exposure time T is
constant for all the pixels, and thus the observed image
intensity I(x, y) is proportional to the input light in-
tensity E(x, y) as follows:
I(x, y) ∝ E(x, y)    (5)
By changing the exposure time T, the range of intensity in the obtained image changes. For example, if T is
small, a dark image is obtained as shown in Fig.2 (a),
and a bright image is obtained if T is large as shown
in Fig.2 (b). In Fig.2 (a), the intensities of indoor part
are very dark, and they do not include enough infor-
mation. In Fig.2 (b), the intensities of outdoor part are
Figure 2: Difference of obtained images under different exposure times: (a) image taken under short exposure time; (b) image taken under long exposure time.
saturated, and they also do not have enough informa-
tion. This is because the brightness of an outdoor scene on a fine day is more than 100,000 lx, while the brightness of an indoor scene is only about 500 lx. As shown in these images, it is difficult to obtain sufficient information from an image taken with a constant exposure time. Thus, we next propose a new imaging method,
which controls the exposure time pixel by pixel.
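To make this limitation concrete, the following small numpy sketch (ours, not from the paper) simulates the constant-exposure model of Eq. (3) with a hypothetical 8-bit sensor and the indoor/outdoor intensity levels quoted above; no single exposure time T serves both regions.

import numpy as np

# Hypothetical unit-time intensities E, loosely matching the paper's example:
# about 500 lx indoors and about 100,000 lx outdoors on a fine day.
E = np.array([500.0, 2000.0, 100000.0])

def ordinary_capture(E, T, full_well=255.0):
    """Constant-exposure model of Eq. (3): I = T * E, quantized and clipped."""
    return np.clip(np.round(T * E), 0.0, full_well)

# A short exposure keeps the outdoor pixel unsaturated but crushes indoor detail,
print(ordinary_capture(E, T=255.0 / 100000.0))  # -> [  1.   5. 255.]
# while a long exposure resolves indoor detail but saturates the outdoor pixel.
print(ordinary_capture(E, T=255.0 / 2000.0))    # -> [ 64. 255. 255.]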
3 VARIABLE EXPOSURE
IMAGING
As described in the previous section, the observed image intensity is proportional to the input light intensity in ordinary cameras. In this section, we propose
a new imaging model, in which the observed image
intensity is not proportional to the input light inten-
sity. In this imaging method, the exposure time for
each pixel changes adaptively in order to obtain the
magnitude of input light intensity accurately without
suffering from over exposure and under exposure.
In this imaging model, observed intensity can be
described as follows:
I(x, y) = ∫_0^T ∫_{y−1/2}^{y+1/2} ∫_{x−1/2}^{x+1/2} R(x, y, t) L(u, v, t) du dv dt    (6)
where R(x, y, t) indicates the transmittance of each pixel: if R(x, y, t) = 1, the input light is accumulated at the image pixel, and if R(x, y, t) = 0, it is not. We control
R(x, y,t) pixel by pixel according to the input light
L(u, v,t) of each pixel in a single exposure time T,
i.e. in a single frame. By using the variable exposure
imaging, we can control the temporal exposure pattern at each pixel in each frame.
3.1 Variable Exposure Time Imaging
Figure 3: The relationship between the input intensity E and the measurements in (a) the ordinary camera and (b) the proposed variable exposure time camera. The blue lines show the image intensity I in (a) and the exposure time T in (b). The red line in (a) shows the resolution dI/dE of the image intensity I in the ordinary camera, and the red line in (b) shows the resolution dT/dE of the exposure time T in the proposed camera.

We next consider variable exposure imaging for obtaining HDR images from single-shot images. In
this imaging method, we accumulate input light at
each pixel, so that the accumulated image intensity
at the pixel becomes a certain constant value. This is
achieved by controlling the transmittance R(x, y, t) at
pixel (x, y) as follows:
R(x, y, t) = { 1 if I(x, y, t) < I_θ; 0 otherwise }    (7)

where I_θ is a threshold, and I(x, y, t) is the accumulated image intensity at pixel (x, y) up to time t as follows:
I(x, y, t) = ∫_0^t ∫_{y−1/2}^{y+1/2} ∫_{x−1/2}^{x+1/2} R(x, y, t′) L(u, v, t′) du dv dt′.    (8)
Then, we record the time t at which R(x, y, t) changes from 1 to 0 as the exposure time T(x, y) of pixel (x, y). In this imaging method, we control the exposure time T(x, y) of each pixel so that the accumulated image intensity I(x, y) stays under the threshold value I_θ. Thus, over exposure does not occur in this method. Unlike in a standard camera, the intensity of the input light is measured as an exposure time. That is, the exposure time T(x, y) is inversely proportional to the intensity of the input light E(x, y) as follows:
T(x, y) = I_θ / E(x, y)    (9)
Thus, unlike the ordinary cameras shown in Eq.(5),
the proposed imaging model measures the input light
intensity E(x, y) according to the following relation-
ship:
T(x, y) ∝ 1 / E(x, y)    (10)
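As a reading aid, here is a minimal numpy simulation of the stopping rule in Eqs. (7)-(9); it is our sketch, with assumed values for I_θ, the frame time, and the number of sub-frame steps. Each pixel integrates its input until the accumulated intensity reaches I_θ, and the recorded stopping time encodes the input intensity.

import numpy as np

def variable_exposure_capture(E, I_theta=100.0, T_frame=1.0, n_steps=1000):
    """Discrete simulation of Eqs. (7)-(8): each pixel accumulates E*dt while
    its transmittance R is 1, and R drops to 0 once I reaches I_theta."""
    dt = T_frame / n_steps
    I = np.zeros_like(E)          # accumulated intensity I(x, y, t)
    T = np.full_like(E, T_frame)  # exposure time; stays T_frame if never stopped
    R = np.ones_like(E)           # binary transmittance of Eq. (7)
    for step in range(n_steps):
        I += R * E * dt
        stopped = (R == 1) & (I >= I_theta)
        T[stopped] = (step + 1) * dt   # record the stopping time T(x, y)
        R[stopped] = 0.0
    return I, T

E = np.array([500.0, 2000.0, 100000.0])  # input intensities from dark to bright
I, T = variable_exposure_capture(E)
print(T)      # [0.2 0.05 0.001]: brighter pixels stop earlier, T = I_theta / E
print(I / T)  # recovers E, anticipating the reconstruction of Eq. (12)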
Figure 4: Output of the variable exposure camera: (a) exposure time image and (b) variable exposure image.

The blue lines in Fig. 3 (a) and (b) show the relationship between the input light intensity E and the measurement I in the ordinary camera and the measurement T in the proposed camera. While the measurement I is proportional to the input E in the ordinary camera, the measurement T is nonlinear in the input E in the proposed camera. The red lines in Fig. 3 (a) and (b) show the resolution of the measurements in both cameras, that is, dI/dE in the ordinary camera and dT/dE in the proposed camera. As shown in these plots, the resolution of the proposed camera is large for small input intensities and small for large input intensities, while the resolution of the ordinary camera is constant regardless of the input intensity. This property of the proposed camera enables us to capture small differences in low input intensities while avoiding the saturation of high input intensities.
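This resolution behavior can be verified by differentiating the two measurement models of Eqs. (3) and (9):

dI/dE = T (constant in E),    |dT/dE| = I_θ / E²,

so the sensitivity of the exposure-time measurement grows as the input E decreases, while the resolution of the intensity measurement is the same everywhere.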
In the proposed method, the observed image intensity I(x, y) becomes the constant I_θ in an ideal case. In practice, however, it does not, since the exposure time is finite and both the exposure time and the image intensity have quantization errors. Therefore, we record both the observed image intensity I(x, y) and the exposure time T(x, y) simultaneously. From this recorded information, we can reconstruct HDR images efficiently, as we describe in the next section.
Fig. 4 shows an example output from the variable
exposure imaging. The left image shows exposure
time and the right image shows observed image in-
tensity, which we call a variable exposure image. In the exposure time image, the brightness of each pixel shows the exposure time of the pixel. In the variable exposure image, almost all pixels have intensities close to I_θ, which indicates that the exposure time of each pixel is controlled appropriately. In addition, bright areas in the exposure time image correspond to dark areas in the variable exposure image. This indicates that the exposure time becomes long when the power of the input light is small.
3.2 HDR Image Reconstruction
We next consider the recovery of HDR images from
the exposure time image T(x, y) and the variable ex-
posure image I(x, y).
The HDR image reconstruction is equivalent to
the estimation of image E(x, y) obtained in a unit
time. In a static scene, the relationship among the
HDR image E(x, y), the exposure time image T(x, y)
and the variable exposure image I(x, y) can be de-
scribed as follows:
I(x, y) = T(x, y)E(x, y). (11)
Therefore, the HDR image E(x, y) can be estimated
as follows:
E(x, y) = I(x, y) / T(x, y)    (12)
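In code, Eq. (12) is a single per-pixel division; the only practical care, which is our addition and not discussed in the paper, is guarding against zero or unrecorded exposure times.

import numpy as np

def reconstruct_hdr(I, T, eps=1e-9):
    """Recover the unit-time intensity image E = I / T of Eq. (12).
    eps (our addition) guards pixels whose recorded exposure time is zero."""
    return I / np.maximum(T, eps)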
Although the proposed method is similar to the existing pixelwise exposure control methods (Nayar et al., 2003; Mannnami et al., 2007), the actual behavior of the proposed method is very different from that of the existing methods, especially in dynamic scenes. In the existing pixelwise exposure control methods (Nayar et al., 2003; Mannnami et al., 2007), the exposure time of each pixel is constant, and the reflectance of the LCoS or the transparency of the LCD is controlled for each pixel. On the contrary, the proposed method controls the exposure time of each pixel. As a result, the exposure time of the proposed method is shorter than that of the existing methods, and thus the motion blur of the proposed method is much smaller than that of the existing methods in dynamic scenes. Also, the existing methods have a systematic delay in controlling the LCoS or LCD; that is, the exposure of each image pixel is controlled one frame after the image capture. This systematic delay is especially serious when the observed image is saturated: if the observed intensity is saturated, we need several frames to control the LCoS or LCD to obtain an unsaturated image, since we do not know how large the input intensity is while the observed image is saturated. On the contrary, the proposed imaging method requires just a single capture for obtaining HDR images, since the exposure time of each image pixel is controlled so that the accumulated image intensity of each pixel becomes I_θ. These properties of the proposed method enable us to decrease the motion blur and the delay of intensity control in observed images when we observe moving objects. Thus, the proposed method provides better results in dynamic scenes.
3.3 Structure of Variable Exposure
Camera
In order to realize our new imaging model, we com-
bine an LCoS (Liquid Crystal on Silicon) device with
two image sensors, i.e., a main image sensor and a measuring image sensor, as shown in Fig. 5. The LCoS de-
vice can control the input light of the main image sen-
sor, and thus by controlling the LCoS device we can
control the exposure time of each image pixel. The exposure time is controlled according to the measurement results of the measuring image sensor, and the observation result of the main image sensor is the variable exposure image.

Figure 5: Variable exposure camera combining an LCoS device and image sensors. The LCoS device controls the exposure time of the main image sensor pixel by pixel.

Figure 6: Variable exposure camera. It consists of an LCoS device and two image sensors.
We next explain the details of our variable exposure camera. First, input light rays pass through an image plane and a relay lens. After that, the rays are split in two directions as shown in Fig. 5: one toward the measuring image sensor and the other toward the LCoS device. The light rays directed to the measuring image sensor are received by that sensor, and its observation results are reflected in the display patterns of the LCoS device. The pixels of the LCoS device, the main image sensor, and the measuring image sensor are in one-to-one correspondence. Therefore, the exposure time of a pixel on the main image sensor can be controlled by changing the reflectance of the corresponding pixel on the LCoS device. The observed result of the measuring image sensor is thus used for controlling the LCoS device.
Figure 7: Observed images taken by an ordinary camera with long exposure time (a) and short exposure time (b).

Figure 8: Variable exposure image (a) and exposure time image (b) taken by the variable exposure camera.

Note that the sampling frequency of the measuring image sensor and the LCoS device is much higher than that of the main image sensor. Therefore, we can control the exposure time of the main image sensor at a sub-frame speed.
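The paper gives no pseudocode for this controller, but the behavior described in this section suggests a per-frame loop of roughly the following shape; the sensor and LCoS interface functions are hypothetical stand-ins for the real hardware.

import numpy as np

def control_one_frame(read_measuring_sensor, set_lcos_reflectance,
                      shape, I_theta, n_subframes, dt):
    """One main-sensor frame of pixel-wise exposure control (our sketch).
    read_measuring_sensor() returns the per-pixel intensity of one sub-frame;
    set_lcos_reflectance(R) uploads a binary per-pixel pattern to the LCoS."""
    I_est = np.zeros(shape)               # intensity accumulated on the main sensor
    T = np.full(shape, n_subframes * dt)  # per-pixel exposure time T(x, y)
    R = np.ones(shape)                    # LCoS pattern: 1 = keep exposing
    set_lcos_reflectance(R)
    for k in range(n_subframes):
        I_est += R * read_measuring_sensor() * dt
        closing = (R == 1) & (I_est >= I_theta)
        T[closing] = (k + 1) * dt         # record when each pixel was closed
        R[closing] = 0.0
        set_lcos_reflectance(R)           # stop exposure for those pixels
    return T  # the main sensor meanwhile integrates the masked light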
Fig. 6 shows a prototype of the variable exposure
camera. In this camera, the frame rate of the measuring image sensor is 10 fps, much higher than the 1 fps frame rate of the main image sensor. The two sensors are synchronized with each other, and the image acquisitions of the two sensors start at the same time.
4 EXPERIMENTAL RESULTS
In this section, we show experimental results from the
proposed HDR imaging method.
We first show results from a static scene. Fig. 7 (a)
and (b) show images taken by an ordinary camera. As
shown in this figure, the long exposure time is good for the indoor scene, but it causes over exposure in the outdoor scene. On the contrary, the short exposure time is good for the outdoor scene, but it causes under exposure in the indoor scene. Thus, we cannot observe both the indoor scene and the outdoor scene simultaneously with an ordinary camera.
Fig. 8 (a) and (b) show the variable exposure im-
age and the exposure time image obtained from our
variable exposure camera. As shown in the exposure
time image, the exposure time of each pixel is con-
trolled according to the input light intensity at each
pixel. The HDR image obtained from these obser-
vations of the variable exposure camera is shown in
Fig. 9. As shown in this figure, we can observe both indoor and outdoor scenes in the image obtained from the variable exposure camera.

Figure 9: HDR image obtained from the variable exposure camera.

Figure 10: Experimental environment.
We next show results from a dynamic scene,
where we have moving objects in the scene. Fig. 10
shows the experimental settings. The target object was set on a moving stage and moved horizontally. In order to widen the dynamic range of the input scene, a strong light source illuminated
the scene partially. As a result, the observed intensity
changed drastically depending on the position of the
moving object. Fig.11 (a) and (b) show images taken
by an ordinary camera. The left image was taken with
long exposure time and the right image was taken with
short exposure time. In these images, the dynamic range of the camera is not sufficient, and thus over exposure
and under exposure occurred. In addition, motion blur
exists in the case of long exposure time as shown in
Fig. 11 (a).
Figure 11: Observed images taken by an ordinary camera with long exposure time (a) and short exposure time (b).

Figure 12: Variable exposure image (a) and exposure time image (b) taken by the variable exposure camera.

Figure 13: The HDR image obtained from the variable exposure camera.

In order to obtain an HDR image of the scene, we took images by using the proposed variable exposure time camera. For comparison, the reflectance of the LCoS was controlled as proposed in (Mannnami et al., 2007) to obtain an HDR image with a constant exposure time.
Fig. 12 shows observed images taken by the vari-
able exposure time camera. The left image is a vari-
able exposure image and the right image is an expo-
sure time image. As shown in Fig. 12 (b), the exposure time is short at pixels with large input intensity and long at pixels with small input intensity. As a result, over exposure is suppressed in the captured image, as shown in Fig. 12 (a). From these two images, an HDR image was computed by the proposed method. The obtained HDR image is shown in Fig. 13. For displaying HDR images, we used logarithmic tone mapping throughout this paper.
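The paper only names the operator, so the following is one minimal reading of "tone mapping with logarithm operation", not necessarily the exact mapping used for the figures.

import numpy as np

def log_tonemap(E, out_max=255.0):
    """Compress an HDR image into display range with a logarithmic curve
    (one minimal reading of the paper's tone mapping)."""
    logE = np.log1p(E - E.min())                   # shift to zero, compress with log
    return out_max * logE / max(logE.max(), 1e-9)  # normalize to [0, out_max]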
For comparison, the HDR image was obtained by
using the existing method (Mannnami et al., 2007),
in which the reflectance of LCoS is controlled pixel
by pixel with a constant exposure time. The exposure time is the same as that of the main camera of the proposed method, i.e., one frame at 1 fps. Fig. 14 (a) shows an ob-
served image and (b) shows an LCoS reflectance im-
age obtained from the previous image frame. From
these images, an HDR image was computed as shown
in Fig. 15. As shown in Fig. 15, although over exposure and under exposure are suppressed, motion blur occurs on the target object, since the exposure time is constant and long. On the contrary, the motion blur
is suppressed in our proposed method as shown in
Fig.13, since the exposure time is controlled in our
method and thus it becomes shorter than that of the
existing method. These results show that the proposed
method can suppress not only over/under exposure but also motion blur in dynamic scenes.

Figure 14: Observations in the existing method (Mannnami et al., 2007): (a) observed image; (b) reflectance of the LCoS at each pixel, obtained from the previous image frame.

Figure 15: The HDR image obtained from the existing method (Mannnami et al., 2007).
5 CONCLUSION
In this paper, we proposed a novel imaging method for obtaining HDR images from a single image capture, which we call variable exposure time imaging. In this imaging method, we control the exposure time of each image pixel according to the accumulated input light intensity at the pixel in a single frame.
While an ordinary camera captures the input scene intensity as the intensity of the observed image, the proposed imaging method captures it as the exposure time of each image pixel. We built a variable exposure time camera by using an LCoS device and two image sensors. The experimental results show that the pro-
posed variable exposure time imaging can suppress
not only over exposure and under exposure, but also
motion blur in dynamic scenes.
REFERENCES
Aggarwal, M. and Ahuja, N. (2001). High dynamic range
panoramic imaging. In Proc. of International Conference on Computer Vision, pages 2–9.
Burt, P. and Kolczynski, R. (1993). Enhanced image capture
through fusion. In Proc. of International Conference
on Computer Vision, pages 173–182.
Debevec, P. and Malik, J. (1997). Recovering high dynamic
range radiance maps from photographs. In Proc. SIG-
GRAPH1997, pages 369–378.
Mann, S. and Picard, R. (1995). On being ‘undigital’ with
digital cameras: Extending dynamic range by combin-
ing differently exposed pictures. In Proc. of IS & T,
pages 442–448.
Mannnami, H., Sagawa, R., Mukaigawa, Y., Echigo, T., and
Yagi, Y. (2007). High dynamic range camera using
reflective liquid crystal. In Proc. ICCV2007, pages
1–8.
Nayar, S., Branzoi, V., and Boult, T. (2003). Adaptive dy-
namic range imaging: optical control of pixel expo-
sures over space and time. In Proc. of International
Conference on Computer Vision, pages 436–443.
Nayar, S. and Mitsunaga, T. (2000). High dynamic range
imaging: Spatially varying pixel exposure. In Proc.
of IEEE Conference on Computer Vision and Pattern
Recognition, pages 472–479.
Schechner, Y. and Nayar, S. (2001). Generalized mosaicing.
In Proc. of International Conference on Computer Vi-
sion, pages 17–24.