Bio-inspired Model for Motion Estimation using an Address-event Representation

Luma Issa Abdul-Kreem, Heiko Neumann

2015

Abstract

In this paper, we propose a new bio-inspired approach to motion estimation using a Dynamic Vision Sensor (DVS) (Lichtsteiner et al., 2008), introducing an event-based temporal window accumulation that collects pixel activity over a short interval, i.e. several μs. Optic flow is estimated by a new neural model mechanism that is inspired by the motion pathway of the visual system and is consistent with the functionality of the vision sensor, for which new temporal filters are proposed. Since the DVS already generates temporal derivatives of the input signal, we suggest a smoothing temporal filter instead of the biphasic temporal filters introduced by Adelson and Bergen (1985). Our model extracts motion information via a spatiotemporal energy mechanism that is oriented in the space-time domain and tuned in spatial frequency. To balance the activity of individual cells against that of their neighborhood, a normalization step is carried out. We tested the model with different kinds of stimuli undergoing translatory and rotatory motion; the results show accurate flow estimation compared with synthetic ground truth. To demonstrate robustness, we further probed the model with synthetically generated ground-truth stimuli and realistic complex motions, e.g. biological motion and a bouncing ball, with satisfactory results.
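The pipeline sketched in the abstract — accumulating DVS events over a μs-scale window, smoothing in time instead of applying a biphasic filter, computing oriented spatiotemporal energy, and divisively normalizing across channels — can be illustrated with a minimal NumPy sketch. All function names, filter sizes, and parameter values below are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def accumulate_events(events, shape, t_start, dt):
    """Accumulate DVS events (x, y, t, polarity) falling inside a short
    temporal window [t_start, t_start + dt) into a signed 2D map."""
    frame = np.zeros(shape)
    for x, y, t, p in events:
        if t_start <= t < t_start + dt:
            frame[y, x] += 1.0 if p > 0 else -1.0
    return frame

def conv2d_same(img, kern):
    """Plain same-size 2D convolution with edge padding, NumPy only."""
    kh, kw = kern.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kern)
    return out

def gabor_pair(size, wavelength, theta, sigma):
    """Quadrature pair of oriented Gabor filters (even/odd phase)."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    env = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma ** 2))
    return (env * np.cos(2 * np.pi * xr / wavelength),
            env * np.sin(2 * np.pi * xr / wavelength))

def motion_energy(frames, theta, wavelength=4.0, sigma=2.0, tau=2.0):
    """Motion energy for one orientation: smooth the event frames with an
    exponential low-pass in time (a smoothing kernel rather than a biphasic
    one, since the DVS output already approximates a temporal derivative),
    filter with a spatial quadrature pair, and sum the squared responses."""
    w = np.exp(-np.arange(len(frames))[::-1] / tau)
    w /= w.sum()
    smoothed = sum(wi * f for wi, f in zip(w, frames))
    even, odd = gabor_pair(9, wavelength, theta, sigma)
    return conv2d_same(smoothed, even) ** 2 + conv2d_same(smoothed, odd) ** 2

def normalize(energies, sigma_n=1e-3):
    """Divisive normalization: each orientation channel is divided by the
    activity pooled over all channels, balancing each cell's response
    against its neighborhood."""
    pool = sum(energies) + sigma_n
    return [e / pool for e in energies]
```

The exponential temporal weighting stands in for the proposed smoothing filter, and the squared quadrature-pair responses follow the spatiotemporal-energy scheme of Adelson and Bergen (1985); the divisive pooling follows the normalization idea reviewed by Carandini and Heeger (2012).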

References

  1. Adelson, E. and Bergen, J. (1985). Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America A, 2(2):284-299.
  2. Adelson, E. and Movshon, J. (1982). Phenomenal coherence of moving visual patterns. Nature, 300(5892):523-525.
  3. Bayerl, P. and Neumann, H. (2004). Disambiguating visual motion through contextual feedback modulation. Neural Computation, 16(10):2041-2066.
  4. Benosman, R., Ieng, S.-H., Clercq, C., Bartolozzi, C., and Srinivasan, M. (2012). Asynchronous frameless event-based optical flow. Neural Networks, 27:32-37.
  5. Bouecke, J., Tlapale, E., Kornprobst, P., and Neumann, H. (2010). Neural mechanisms of motion detection, integration, and segregation: from biology to artificial image processing systems. EURASIP Journal on Advances in Signal Processing.
  6. Brosch, T. and Neumann, H. (2014). Computing with a canonical neural circuits model with pool normalization and modulating feedback. Neural Computation (in press).
  7. Brox, T., Bruhn, A., Papenberg, N., and Weickert, J. (2004). High accuracy optical flow estimation based on a theory for warping. In Proc. 8th European Conference on Computer Vision, Springer LNCS 3024, T. Pajdla and J. Matas (Eds.), Prague, Czech Republic, pages 25-36.
  8. Caplovitz, G., Hsieh, P., and Tse, P. (2006). Mechanisms underlying the perceived angular velocity of a rigidly rotating object. Vision Research, 46(18):2877-2893.
  9. Carandini, M. and Heeger, D. J. (2012). Normalization as a canonical neural computation. Nature Reviews Neuroscience, 13:51-62.
  10. Challinor, K. L. and Mather, G. (2010). A motion-energy model predicts the direction discrimination and MAE duration of two-stroke apparent motion at high and low retinal illuminance. Vision Research, 50(12):1109-1116.
  11. Dayan, P. and Abbott, L. F. (2001). Theoretical Neuroscience. MIT Press, Cambridge, MA, USA.
  12. De Valois, R., Cottaris, N. P., Mahon, L. E., Elfar, S. D., and Wilson, J. A. (2000). Spatial and temporal receptive fields of geniculate and cortical cells and directional selectivity. Vision Research, 40(27):3685-3702.
  13. Delbruck, T. and Lichtsteiner, P. (2008). Fast sensory motor control based on event-based hybrid neuromorphic-procedural system. IEEE International Symposium on Circuits and Systems, pages 845-848.
  14. Drulea, M. and Nedevschi, S. (2013). Motion estimation using the correlation transform. IEEE Transactions on Image Processing, 22(8).
  15. Emerson, R. C., Bergen, J. R., and Adelson, E. H. (1992). Directionally selective complex cells and the computation of motion energy in cat visual cortex. Vision Research, 32(2):203-218.
  16. Grossberg, S. (1988). Nonlinear neural networks: principles, mechanisms, and architectures. Neural Networks, 1(1):17-61.
  17. Horn, B. and Schunck, B. (1981). Determining optical flow. Artificial Intelligence, 17:185-203.
  18. Lichtsteiner, P., Posch, C., and Delbruck, T. (2008). A 128 × 128 120 dB 15 μs latency asynchronous temporal contrast vision sensor. IEEE Journal of Solid-State Circuits, 43(2).
  19. Litzenberger, M., Belbachir, A. N., Donath, N., Gritsch, G., Garn, H., Kohn, B., Posch, C., and Schraml, S. (2006a). Estimation of vehicle speed based on asynchronous data from a silicon retina optical sensor. IEEE Intelligent Transportation Systems Conference, Toronto, Canada, pages 17-20.
  20. Litzenberger, M., Posch, C., Bauer, D., Belbachir, A. N., Schon, P., Kohn, B., and Garn, H. (2006b). Embedded vision system for real-time object tracking using an asynchronous transient vision sensor. IEEE 12th Digital Signal Processing Workshop and Signal Processing Education Workshop, pages 173-178.
  21. Lucas, B. D. and Kanade, T. (1981). An iterative image registration technique with an application to stereo vision. In Proceedings of the Image Understanding Workshop, pages 121-130.
  22. Lyu, S. and Simoncelli, E. P. (2009). Nonlinear extraction of independent components of natural images using radial gaussianization. Neural Computation, 21:1485-1519.
  23. Ni, Z., Pacoret, C., Benosman, R., Ieng, S., and Regnier, S. (2011). Asynchronous event-based high speed vision for microparticle tracking. Journal of Microscopy, 43(2):1365-2818.
  24. Oldham, K., Myland, J., and Spanier, J. (2010). An Atlas of Functions, Second Edition. Springer Science and Business Media.
  25. Perrone, J. and Thiele, A. (2001). Speed skills: measuring the visual speed analyzing properties of primate MT neurons. Nature Neuroscience, 4(5):526-532.
  26. Ringach, D. L. (2002). Spatial structure and symmetry of simple-cell receptive fields in macaque primary visual cortex. Journal of Neurophysiology, 88(1):455-463.
  27. Silver, R. A. (2010). Neuronal arithmetic. Nature Reviews Neuroscience, 11:474-489.
  28. Simoncelli, E. (1999). Handbook of computer vision and applications, chapter 14, Bayesian multi-scale differential optical flow. Academic Press.
  29. Strout, J. J., Pantle, A., and Mills, S. L. (1994). An energy model of interframe interval effects in single-step apparent motion. Vision Research, 34:3223-3240.
  30. Tschechne, S., Brosch, T., Sailer, R., Egloffstein, N., Abdul-Kreem, L. I., and Neumann, H. (2014a). On event-based motion detection and integration. 8th International Conference on Bio-inspired Information and Communications Technologies, accepted.
  31. Tschechne, S., Sailer, R., and Neumann, H. (2014b). Bio-inspired optic flow from event-based neuromorphic sensor input. ANNPR, Montreal, QC, Canada, Springer LNAI 8774, pages 171-182.
  32. Yo, C. and Wilson, H. (1992). Perceived direction of moving two-dimensional patterns depends on duration, contrast and eccentricity. Vision Research, 32(1):135-147.


Paper Citation


in Harvard Style

Abdul-Kreem L. and Neumann H. (2015). Bio-inspired Model for Motion Estimation using an Address-event Representation. In Proceedings of the 10th International Conference on Computer Vision Theory and Applications - Volume 3: VISAPP, (VISIGRAPP 2015) ISBN 978-989-758-091-8, pages 335-346. DOI: 10.5220/0005311503350346


in Bibtex Style

@conference{visapp15,
author={Luma Issa Abdul-Kreem and Heiko Neumann},
title={Bio-inspired Model for Motion Estimation using an Address-event Representation},
booktitle={Proceedings of the 10th International Conference on Computer Vision Theory and Applications - Volume 3: VISAPP, (VISIGRAPP 2015)},
year={2015},
pages={335-346},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005311503350346},
isbn={978-989-758-091-8},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 10th International Conference on Computer Vision Theory and Applications - Volume 3: VISAPP, (VISIGRAPP 2015)
TI - Bio-inspired Model for Motion Estimation using an Address-event Representation
SN - 978-989-758-091-8
AU - Abdul-Kreem L.
AU - Neumann H.
PY - 2015
SP - 335
EP - 346
DO - 10.5220/0005311503350346