
Automatic knowledge-based recognition of low-level tasks in ophthalmological procedures

International Journal of Computer Assisted Radiology and Surgery

Abstract

Purpose

Surgical process models (SPMs) have recently been created for situation-aware computer-assisted systems in the operating room. One important challenge in this area is the automatic acquisition of SPMs. The purpose of this study is to present a new method for the automatic detection of low-level surgical tasks, that is, the sequence of activities in a surgical procedure, from microscope video images only. The level of granularity addressed in this work is that of activities formalized as triplets <action, surgical tool, anatomical structure>.
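For illustration only, a minimal sketch of how such an activity triplet might be represented, assuming Python as the implementation language; the class name, field names, and example values are our own and do not come from the paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Activity:
    """A low-level surgical task formalized as the triplet
    <action, surgical tool, anatomical structure>."""
    action: str     # e.g. "incise" (illustrative value only)
    tool: str       # e.g. "knife" (illustrative value only)
    structure: str  # e.g. "cornea" (illustrative value only)

# At this granularity, a procedure is a sequence of such triplets.
procedure = [
    Activity(action="incise", tool="knife", structure="cornea"),
    Activity(action="inject", tool="cannula", structure="anterior chamber"),
]
```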

Methods

Using the results of our latest work on the recognition of surgical phases in cataract surgeries, and based on the hypothesis that most activities occur in only one or two phases, we created a lightweight ontology, formalized as a hierarchical decomposition into phases and activities. Information concerning the surgical tools, the areas where the tools are used, and three other visual cues was detected through an image-based approach and combined with the current surgical phase within a knowledge-based recognition system. Knowing the surgical phase before activity recognition allows the supervised classification to be adapted to that phase. Multiclass Support Vector Machines (SVMs) were chosen as the classification algorithm.
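The abstract gives no implementation details beyond the choice of multiclass SVMs, so the following is only a minimal sketch of the phase-conditioned classification idea; scikit-learn's SVC (one-vs-one multiclass) stands in for the authors' multiclass SVM, and the class name, feature layout, and label encoding are assumptions:

```python
import numpy as np
from sklearn.svm import SVC

class PhaseConditionedRecognizer:
    """One multiclass SVM per surgical phase: because most activities occur
    in only one or two phases, each per-phase classifier discriminates among
    a much smaller set of activity labels than a single global model would."""

    def __init__(self):
        self.models = {}  # phase label -> trained SVC

    def fit(self, X, activities, phases):
        # X: per-frame feature vectors built from the visual cues (detected
        # tools, areas of tool use, etc.); activities and phases are the
        # per-frame ground-truth labels.
        X, activities, phases = map(np.asarray, (X, activities, phases))
        for phase in np.unique(phases):
            mask = phases == phase
            clf = SVC(kernel="rbf")  # handles multiclass one-vs-one internally
            clf.fit(X[mask], activities[mask])
            self.models[phase] = clf

    def predict(self, x, phase):
        # The current phase is assumed known already (the output of the
        # earlier phase-recognition step), so only the matching
        # phase-specific classifier is consulted.
        return self.models[phase].predict(np.asarray(x).reshape(1, -1))[0]
```

Restricting each classifier to the activities of its own phase is what makes the prior phase knowledge useful: it shrinks the label space each SVM must separate.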

Results

Using a dataset of 20 cataract surgeries and identifying 25 possible pairs of activities, the proposed system achieved a frame-by-frame recognition rate of 64.5%.
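The frame-by-frame recognition rate is the fraction of video frames whose predicted activity matches the ground-truth annotation; a minimal sketch of that metric, with array names that are our own assumptions:

```python
import numpy as np

def frame_recognition_rate(predicted, ground_truth):
    """Fraction of frames whose predicted activity label equals the
    annotated label."""
    predicted = np.asarray(predicted)
    ground_truth = np.asarray(ground_truth)
    return float(np.mean(predicted == ground_truth))

# For the result reported above, this value would be about 0.645.
```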

Conclusions

The addition of human knowledge to traditional bottom-up approaches based on image analysis appears promising for low-level task detection. The results of this work could be used for the automatic indexing of post-operative videos.



Author information


Correspondence to Florent Lalys.

Electronic Supplementary Material

Below is the Electronic Supplementary Material.

ESM 1 (JPG 146 kb)

ESM (MPG 27,581 kb)


Cite this article

Lalys, F., Bouget, D., Riffaud, L. et al. Automatic knowledge-based recognition of low-level tasks in ophthalmological procedures. Int J CARS 8, 39–49 (2013). https://doi.org/10.1007/s11548-012-0685-6

