A Generative Traversability Model for Monocular Robot Self-guidance

Michael Sapienza, Kenneth Camilleri

2012

Abstract

In order for robots to be integrated into human active spaces and perform useful tasks, they must be capable of discriminating between traversable surfaces and obstacle regions in their surrounding environment. In this work, a principled semi-supervised (EM) framework is presented for the detection of traversable image regions for use on a low-cost monocular mobile robot. We propose a novel generative model for the occurrence of traversability cues, which are a measure of dissimilarity between safe-window and image superpixel features. Our classification results on both indoor and outdoor image sequences demonstrate the model's generality and adaptability to multiple environments through the online learning of an exponential mixture model. We show that this appearance-based vision framework is robust and can quickly and accurately estimate the probabilistic traversability of an image using no temporal information. Moreover, the reduction in safe-window size compared with the state of the art enables a self-guided monocular robot to roam in closer proximity to obstacles.
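The abstract describes learning an exponential mixture model over scalar traversability cues via EM. The paper's exact features, online update schedule, and component count are not given here, so the following is only a minimal batch sketch of EM for a mixture of exponential densities over non-negative dissimilarity scores; the function name and initialisation scheme are illustrative assumptions, not the authors' implementation:

```python
import math
import random

def em_exponential_mixture(x, k=2, iters=100):
    """Fit a k-component exponential mixture
    p(t) = sum_j w[j] * lam[j] * exp(-lam[j] * t)
    to non-negative data x by batch EM. Returns (weights, rates)."""
    n = len(x)
    mean = sum(x) / n
    # Initialise the rates spread around the reciprocal of the sample mean,
    # with uniform mixing weights.
    lam = [(j + 1) / mean for j in range(k)]
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: soft responsibility of each component for each observation.
        resp = []
        for xi in x:
            dens = [w[j] * lam[j] * math.exp(-lam[j] * xi) for j in range(k)]
            s = sum(dens) or 1e-300  # guard against floating-point underflow
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights and rates from the soft assignments.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / n
            lam[j] = nj / sum(r[j] * xi for r, xi in zip(resp, x))
    return w, lam
```

In a traversability setting, low-rate components capture large dissimilarities (likely obstacles) while high-rate components capture scores concentrated near zero (likely ground resembling the safe window); the posterior responsibilities then give a probabilistic traversability score per superpixel.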



Paper Citation


in Harvard Style

Sapienza, M. and Camilleri, K. (2012). A Generative Traversability Model for Monocular Robot Self-guidance. In Proceedings of the 9th International Conference on Informatics in Control, Automation and Robotics - Volume 2: ICINCO, ISBN 978-989-8565-22-8, pages 177-184. DOI: 10.5220/0003983701770184


in Bibtex Style

@conference{icinco12,
author={Michael Sapienza and Kenneth Camilleri},
title={A Generative Traversability Model for Monocular Robot Self-guidance},
booktitle={Proceedings of the 9th International Conference on Informatics in Control, Automation and Robotics - Volume 2: ICINCO},
year={2012},
pages={177-184},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0003983701770184},
isbn={978-989-8565-22-8},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 9th International Conference on Informatics in Control, Automation and Robotics - Volume 2: ICINCO
TI - A Generative Traversability Model for Monocular Robot Self-guidance
SN - 978-989-8565-22-8
AU - Sapienza M.
AU - Camilleri K.
PY - 2012
SP - 177
EP - 184
DO - 10.5220/0003983701770184
ER -