MULTISENSORY ARCHITECTURE FOR INTELLIGENT SURVEILLANCE SYSTEMS - Integration of Segmentation, Tracking and Activity Analysis

Francisco Alfonso Cano, José Carlos Castillo, Juan Serrano-Cuerda, Antonio Fernández-Caballero

Abstract

Intelligent surveillance systems deal with all aspects of threat detection in a given scene, ranging from segmentation to activity interpretation. The proposed architecture is a step towards detecting and tracking suspicious objects, as well as analysing the activities in the scene. It is important to include different kinds of sensors in the detection process; indeed, their mutual advantages enhance the performance each sensor provides on its own. The results of the multisensory architecture offered in the paper, obtained by testing the proposal on CAVIAR project data sets, are very promising at the three proposed levels, that is, segmentation based on accumulative computation, tracking based on distance calculation, and activity analysis based on a finite state automaton.
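The abstract's third level, activity analysis via a finite state automaton, can be illustrated with a minimal sketch: an automaton driven by the per-frame displacement of a tracked object. The state names ("moving", "stopped", "loitering"), the displacement threshold, and the loiter limit below are illustrative assumptions for exposition; they are not the authors' actual automaton.

```python
# Minimal sketch of an activity-analysis finite state automaton driven by the
# per-frame displacement of one tracked object (e.g. centroid distance between
# consecutive frames, as produced by a distance-based tracker).
# States and thresholds are hypothetical, not taken from the paper.

MOVE_THRESHOLD = 2.0  # pixels/frame below which the object counts as still


def classify_step(displacement, still_frames, loiter_limit=30):
    """Advance the automaton by one frame.

    Returns (new_state, updated_still_frame_count).
    """
    if displacement > MOVE_THRESHOLD:
        return "moving", 0          # any significant motion resets the counter
    still_frames += 1
    if still_frames >= loiter_limit:
        return "loitering", still_frames  # still for too long: flag activity
    return "stopped", still_frames


def analyse(displacements, loiter_limit=30):
    """Run the automaton over a trajectory's per-frame displacements."""
    still = 0
    history = []
    for d in displacements:
        state, still = classify_step(d, still, loiter_limit)
        history.append(state)
    return history
```

For example, a trajectory that moves for two frames and then stays still would progress through "moving" to "stopped" and finally to "loitering" once the still-frame count reaches the loiter limit, at which point a surveillance system could raise an alert.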

References

  1. Ayers, D. and Shah, M. (2001). Monitoring human behavior from video taken in an office environment. Image and Vision Computing, 19(12):833-846.
  2. Davis, J. and Sharma, V. (2007). Background-subtraction in thermal imagery using contour saliency. International Journal of Computer Vision, 71:161-181.
  3. Delgado, A., López, M., and Fernández-Caballero, A. (2010). Real-time motion detection by lateral inhibition in accumulative computation. Engineering Applications of Artificial Intelligence, 23:129-139.
  4. Fernández-Caballero, A., Castillo, J., Martínez-Cantos, J., and Martínez-Tomás, R. (2010). Optical flow or image subtraction in human detection from infrared camera on mobile robot. Robotics and Autonomous Systems, 58:1273-1281.
  5. Fernández-Caballero, A., Castillo, J., Serrano-Cuerda, J., and Maldonado-Bascón, S. (2011). Real-time human segmentation in infrared videos. Expert Systems with Applications, 38:2577-2584.
  6. Gascueña, J. and Fernández-Caballero, A. (2011). Agent-oriented modeling and development of a person-following mobile robot. Expert Systems with Applications, 38(4):4280-4290.
  7. Isard, M. and Blake, A. (1998). Condensation - conditional density propagation for visual tracking. International Journal of Computer Vision, 29:5-28.
  8. Koller, D., Danilidis, K., and Nagel, H.-H. (1993). Model-based object tracking in monocular image sequences of road traffic scenes. International Journal of Computer Vision, 10:257-281.
  9. Lavee, G., Rivlin, E., and Rudzsky, M. (2009). Understanding video events: a survey of methods for automatic interpretation of semantic occurrences in video. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 39(5):489-504.
  10. Lézoray, O. and Charrier, C. (2009). Color image segmentation using morphological clustering and fusion with automatic scale selection. Pattern Recognition Letters, 30:397-406.
  11. Maldonado-Bascón, S., Lafuente-Arroyo, S., Gil-Jiménez, P., Gómez-Moreno, H., and López-Ferreras, F. (2007). Road-sign detection and recognition based on support vector machines. IEEE Transactions on Intelligent Transportation Systems, 8(2):264-278.
  12. Masoud, O. and Papanikolopoulos, N. (2001). A novel method for tracking and counting pedestrians in real-time using a single camera. IEEE Transactions on Vehicular Technology, 50(5):1267-1278.
  13. McCane, B., Galvin, B., and Novins, K. (2002). Algorithmic fusion for more robust feature tracking. International Journal of Computer Vision, 49:79-89.
  14. Moreno-Noguer, F., Sanfeliu, A., and Samaras, D. (2008). Dependent multiple cue integration for robust tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30:670-685.
  15. Natarajan, P. and Nevatia, R. (2008). View and scale invariant action recognition using multiview shape-flow models. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1-8.
  16. Neumann, B. and Möller, R. (2008). On scene interpretation with description logics. Image and Vision Computing, 26:82-101.
  17. Regazzoni, C. and Marcenaro, L. (2000). Object detection and tracking in distributed surveillance systems using multiple cameras. Kluwer Academic Publishers.
  18. Ulusoy, I. and Bishop, C. (2005). Generative versus discriminative methods for object recognition. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2, pages 258-265.
  19. Yilmaz, A., Shafique, K., and Shah, M. (2003). Target tracking in airborne forward looking infrared imagery. Image and Vision Computing, 21(7):623-635.


Paper Citation


in Harvard Style

Alfonso Cano F., Carlos Castillo J., Serrano-Cuerda J. and Fernández-Caballero A. (2011). MULTISENSORY ARCHITECTURE FOR INTELLIGENT SURVEILLANCE SYSTEMS - Integration of Segmentation, Tracking and Activity Analysis. In Proceedings of the 13th International Conference on Enterprise Information Systems - Volume 2: ICEIS, ISBN 978-989-8425-54-6, pages 157-162. DOI: 10.5220/0003477101570162


in Bibtex Style

@conference{iceis11,
author={Francisco Alfonso Cano and José Carlos Castillo and Juan Serrano-Cuerda and Antonio Fernández-Caballero},
title={MULTISENSORY ARCHITECTURE FOR INTELLIGENT SURVEILLANCE SYSTEMS - Integration of Segmentation, Tracking and Activity Analysis},
booktitle={Proceedings of the 13th International Conference on Enterprise Information Systems - Volume 2: ICEIS},
year={2011},
pages={157-162},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0003477101570162},
isbn={978-989-8425-54-6},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 13th International Conference on Enterprise Information Systems - Volume 2: ICEIS
TI - MULTISENSORY ARCHITECTURE FOR INTELLIGENT SURVEILLANCE SYSTEMS - Integration of Segmentation, Tracking and Activity Analysis
SN - 978-989-8425-54-6
AU - Alfonso Cano F.
AU - Carlos Castillo J.
AU - Serrano-Cuerda J.
AU - Fernández-Caballero A.
PY - 2011
SP - 157
EP - 162
DO - 10.5220/0003477101570162