Modeling Human Motion for Predicting Usage of Hospital Operating Room

Ilyes Sghir, Shishir Shah

2016

Abstract

In this paper, we present a system that exploits existing video streams from a hospital operating room (OR) to infer OR usage states. We define OR states that are relevant for assessing OR usage efficiency. We adopt a holistic approach that combines two meaningful human motion features: gestures, or upper-body movements, computed using optical flow, and whole-body movements computed from motion trajectories. The two features are independently modeled for each of the defined OR usage states and eventually fused to obtain a final decision. Our approach is tested on a large collection of videos, and the results show that the combination of both human motion features provides significant discriminative power in understanding usage of an OR.



Paper Citation


in Harvard Style

Sghir I. and Shah S. (2016). Modeling Human Motion for Predicting Usage of Hospital Operating Room. In Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP (VISIGRAPP 2016), ISBN 978-989-758-175-5, pages 328-335. DOI: 10.5220/0005678003280335


in Bibtex Style

@conference{visapp16,
author={Ilyes Sghir and Shishir Shah},
title={Modeling Human Motion for Predicting Usage of Hospital Operating Room},
booktitle={Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2016)},
year={2016},
pages={328-335},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005678003280335},
isbn={978-989-758-175-5},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2016)
TI - Modeling Human Motion for Predicting Usage of Hospital Operating Room
SN - 978-989-758-175-5
AU - Sghir I.
AU - Shah S.
PY - 2016
SP - 328
EP - 335
DO - 10.5220/0005678003280335