Restoration of Temporal Image Sequence from a Single Image Captured by a Correlation Image Sensor

Kohei Kawade, Akihiro Wakita, Tatsuya Yokota, Hidekata Hontani, Shigeru Ando

Abstract

We propose a method that restores a temporal image sequence, which describes how a scene changed during the exposure period, from a single still image captured by a correlation image sensor (CIS). The restored images have higher spatial resolutions than the original still image, and the restored temporal sequence would be useful for motion analysis in applications such as landmark tracking and video labeling. The CIS differs from conventional image sensors in that each pixel of the CIS can directly measure the Fourier coefficients of the temporal change of the light intensity observed during the exposure period. Hence, given a single image captured by the CIS, one can restore the temporal image sequence by computing the Fourier series of the temporal change of the light intensity at each pixel. Through this temporal sequence restoration, one can also reduce motion blur. The proposed method improves the performance of motion blur reduction by estimating the Fourier coefficients of frequencies higher than the measured ones. In this work, we show that the Fourier coefficients of the higher frequencies can be estimated based on the optical flow constraint. Experimental results with images captured by the CIS are presented.
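The core restoration step described above — evaluating a truncated Fourier series of the temporal intensity change at every pixel — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the array layout (`g0` as the time-averaged image, `coeffs[k-1]` as the k-th complex Fourier coefficient per pixel) and the function name are assumptions, and the exposure period is normalized to [0, 1).

```python
import numpy as np

def restore_sequence(g0, coeffs, n_frames=16):
    """Reconstruct a temporal image sequence from per-pixel Fourier
    coefficients measured over the exposure period, normalized to [0, 1).

    g0       : (H, W) real array, the DC component (time-averaged image).
    coeffs   : (K, H, W) complex array; coeffs[k-1] holds the k-th Fourier
               coefficient of the temporal intensity change at each pixel.
    n_frames : number of frames to sample from the restored sequence.
    Returns an (n_frames, H, W) real array of restored frames.
    """
    t = np.linspace(0.0, 1.0, n_frames, endpoint=False)
    # Start every frame from the DC (still-image) component.
    frames = np.broadcast_to(g0, (n_frames,) + g0.shape).astype(float).copy()
    for k, c_k in enumerate(coeffs, start=1):
        # For a real-valued signal, harmonic k contributes
        # 2 * Re{ c_k * exp(j * 2*pi * k * t) } at each pixel.
        phase = np.exp(1j * 2.0 * np.pi * k * t)
        frames += 2.0 * np.real(c_k[None, ...] * phase[:, None, None])
    return frames
```

With only the measured low-order coefficients, the reconstruction is band-limited in time; the paper's contribution is to extend `coeffs` with estimated higher-frequency terms via the optical flow constraint before this synthesis step.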



Paper Citation


in Harvard Style

Kawade, K., Wakita, A., Yokota, T., Hontani, H. and Ando, S. (2017). Restoration of Temporal Image Sequence from a Single Image Captured by a Correlation Image Sensor. In Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 6: VISAPP (VISIGRAPP 2017), ISBN 978-989-758-227-1, pages 181-191. DOI: 10.5220/0006115301810191.


in Bibtex Style

@conference{visapp17,
author={Kohei Kawade and Akihiro Wakita and Tatsuya Yokota and Hidekata Hontani and Shigeru Ando},
title={Restoration of Temporal Image Sequence from a Single Image Captured by a Correlation Image Sensor},
booktitle={Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 6: VISAPP, (VISIGRAPP 2017)},
year={2017},
pages={181-191},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006115301810191},
isbn={978-989-758-227-1},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 6: VISAPP, (VISIGRAPP 2017)
TI - Restoration of Temporal Image Sequence from a Single Image Captured by a Correlation Image Sensor
SN - 978-989-758-227-1
AU - Kawade K.
AU - Wakita A.
AU - Yokota T.
AU - Hontani H.
AU - Ando S.
PY - 2017
SP - 181
EP - 191
DO - 10.5220/0006115301810191