A user interface for VR-ready 3D medical imaging by off-the-shelf input devices

https://doi.org/10.1016/j.compbiomed.2010.01.006

Abstract

The distinctiveness of clinical environments demands specific solutions in the design of user interfaces for 3D medical imaging that are both usable and practical. In this work, a novel user interface that provides direct interaction in 3D space using off-the-shelf input devices is proposed. The interface, which has been implemented and integrated into an open-source medical image viewer, features a depth-enhanced mouse pointer and a novel rotation technique that uses the object's geometry as the rotation handle. The usability of the proposed approach is evaluated to show its effectiveness for use in professional 3D imaging applications.

Introduction

Over the past decades, non-invasive imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI) have stood out as important tools for medical investigations. Since both CT and MRI scanners generate multiple 2D cross-sections (slices) of tissues, radiologists have learnt how to inspect the human body by considering 2D images and performing a mental reconstruction of anatomical structures to understand their morphology (the so-called “abstract 2D thinking” [1]). Although several techniques to reconstruct 3D objects of anatomical structures have been available for some time, most radiologists have continued to prefer the 2D approach. In all likelihood, there were two main reasons for this preference: firstly, visualization of 3D data used to be a time-consuming process that did not allow an interactive exploration of data [2], whereas exploration of 2D images was a fast and reliable process; secondly, 3D interaction techniques used to be less practical than 2D ones, and the results of manipulation were hard to interpret when 3D objects were visualized on flat, 2D screens.

Today, things are changing. The traditional slice-by-slice way of reviewing is becoming too cumbersome for interpreting the thousands of images that modern scanners can produce every day. For optimal evaluation, a volumetric acquisition requires a 3D visualization of the retrieved information [3]. Moreover, thanks to recent advances in the programmability of graphics processors, parallel implementations of volume rendering algorithms allow real-time interaction with fully detailed 3D medical data [4]. In accordance with this trend, the adoption of virtual reality (VR) visualization technologies in daily clinical practice may be worthwhile. Medical image interpretation usually follows a visual search model that postulates a preattentive global analysis, which provides contextual information, followed by a focusing phase [5]. In the focusing phase, where the understanding of the spatial relationships between structures is of primary importance, VR-enhanced visualization can greatly improve medical image analysis by increasing the radiologist's perception of both the shape and position of anatomical structures.

Nonetheless, virtual reality is not only a visualization technology but also, and most significantly, a communication interface based on interactive 3D visualization. Interacting with 3D worlds is more complex than with 2D WIMP (window, icon, menu, pointing device) interfaces: 3D interaction tasks require the user to manipulate the object's position and orientation across six degrees of freedom (DOF). Additionally, for clinical use in particular, the obtrusiveness of input devices and the time-consuming training required to learn how to use them must also be considered major impediments [6]. Radiology is a highly practical discipline: radiologists perform image interpretation daily, rapidly and with great skill. To fully profit from stereo visualization, they need to focus on the diagnostic task rather than figure out how to interact with the display [7]. Therefore, the challenge is to design specific 3D tools that work just as well as commonly used 2D tools and, above all, that are as easy to use.

These considerations drove us to design a user interface tailored to 3D medical data exploration using well-known, off-the-shelf input devices, which provides techniques for seamless interaction in both desktop and semi-immersive virtual environments (VEs) (see Fig. 1). In particular, the user interface features a depth-enhanced cursor for pointing in 3D space just as on 2D images, and a geometry-based rotation technique that takes advantage of the 3D cursor to perform accurate rotations. Besides a common mouse, a Wiimote, the controller of the Nintendo Wii™ console, can be used to interact at a distance with large displays. The goal is to enhance the practicality of VR-enhanced 3D image analysis by providing radiologists with: (i) easy-to-learn interaction techniques that can be used effectively in both desktop and semi-immersive VEs; (ii) non-cumbersome, off-the-shelf input devices suitable for use in medical facilities.

The remainder of the paper is structured as follows: First, in Section 2, we introduce the guidelines we followed in the design of interaction tools and widgets suitable for 3D medical imaging. In Section 3, we present an overview of the existing methods for mapping 2D input to 3D control. Details of the proposed interaction techniques are presented in Section 4. Results of a user study showing the effectiveness of the proposed interaction techniques are provided and discussed in Section 5. A brief summary concludes the paper in Section 6.

Section snippets

Non-cumbersome VR add-ons

Radiology physicians need to be free to quickly suspend a medical image analysis whenever a more pressing task requires their attention. Therefore, for optimal use in medical facilities, VR equipment must not be cumbersome. Furthermore, with the exception of particular applications (such as virtual endoscopy or colonoscopy), radiologists inspect a single 3D object rather than an entire environment, and so are not disadvantaged by a limited field of vision. Accordingly, semi-immersive environments,

Overview of existing methods

Interaction techniques for manipulating 3D objects most often fall into four categories [12]. In view-based techniques, three different views of the object, usually corresponding to the xy, xz and yz projections, are presented to the user, who can control the orientation of the object in one or two dimensions in each view through a controller, commonly a slider. In controller-based techniques, users can rotate each dimension with a controller in a single view; controllers can overlap the object to be
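
A fourth category, virtual trackballs, is the one most relevant to the rotation technique evaluated later in this paper. As an informal illustration of how such techniques map 2D input to 3D control, the sketch below implements a classic virtual trackball in the spirit of Shoemake's ARCBALL and Bell's trackball (both cited in the references): two successive cursor positions are projected onto a virtual sphere, and the rotation carrying one projected point to the other is built with Rodrigues' formula. This is an illustrative sketch under our own assumptions, not the code of any of the cited systems.

```python
import numpy as np

def to_sphere(x, y, radius=1.0):
    """Project a 2D screen point (coordinates in [-1, 1]) onto a virtual
    sphere; points far from the centre land on a hyperbolic sheet, as in
    Bell's trackball, so the mapping has no hard edge."""
    d2 = x * x + y * y
    r2 = radius * radius
    if d2 <= r2 / 2.0:
        z = np.sqrt(r2 - d2)          # inside: on the sphere proper
    else:
        z = r2 / (2.0 * np.sqrt(d2))  # outside: on the hyperbolic sheet
    return np.array([x, y, z])

def trackball_rotation(p0, p1):
    """Rotation matrix taking the sphere projection of cursor position p0
    to that of p1 (Rodrigues' rotation formula)."""
    a = to_sphere(*p0)
    b = to_sphere(*p1)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    axis = np.cross(a, b)
    s = np.linalg.norm(axis)          # sine of the rotation angle
    c = float(np.dot(a, b))           # cosine of the rotation angle
    if s < 1e-9:                      # cursor did not move: identity
        return np.eye(3)
    k = axis / s                      # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)
```

Dragging the mouse then amounts to accumulating `trackball_rotation` matrices frame by frame, which is what makes such techniques feel like grabbing and spinning the object directly.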

User interface description

In this section we describe in detail the depth-enhanced mouse pointer and the geometry-based rotation technique. Subsequently, we discuss their benefits and limitations and consider why their adoption may be useful in the specific field of 3D medical image analysis. The user interface described hereafter has been integrated into MITO (Medical Imaging TOolkit) [23], a PACS-integrated medical image viewer that is currently under evaluation at the Second Polyclinic of Naples. With the exception of
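
Although the implementation details appear only in the full text, the core idea of a depth-enhanced pointer can be sketched as follows: the 2D cursor position defines a viewing ray, which is cast through the volume until it reaches the first sufficiently opaque voxel; that hit point becomes the cursor's 3D position. The sketch below is a simplified, fixed-step voxel-space illustration; the function name, parameters and traversal strategy are our assumptions, not MITO's actual code.

```python
import numpy as np

def ray_cast_cursor(volume, origin, direction, threshold, step=0.5, max_t=200.0):
    """March along a viewing ray through a 3D scalar volume and return the
    first sample whose intensity reaches `threshold`, i.e. the visible
    surface under the 2D cursor; returns None if the ray hits nothing.
    `origin` and `direction` are expressed in voxel coordinates."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    while t < max_t:
        p = origin + t * direction
        idx = np.floor(p).astype(int)
        if np.all(idx >= 0) and np.all(idx < volume.shape):
            if volume[tuple(idx)] >= threshold:
                return p              # 3D cursor snaps to this surface point
        t += step
    return None
```

Because the cursor's depth is taken from the data itself, pointing in 3D reduces to the familiar 2D gesture of moving the mouse over the rendered image.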

Evaluation

We wanted to assess whether users experienced with the graphical manipulators of medical image viewers would be able to perform 3D rotations more effectively using the geometry-based rotation technique. We were also particularly interested in discovering whether the geometry-based rotation technique performs better when applied to the rotation of 3D objects with a complex, non-convex shape.

Among virtual trackballs, we compared the geometry-based rotation technique with the Two-Axes
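
For context, the Two-Axes Valuator (after Chen et al., cited in the references) maps horizontal mouse displacement to a rotation about the view's vertical axis and vertical displacement to a rotation about the view's horizontal axis. A minimal sketch of this mapping, with an illustrative sensitivity parameter of our own choosing:

```python
import numpy as np

def rot_x(angle):
    """Rotation about the view's horizontal (x) axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, -s],
                     [0.0, s, c]])

def rot_y(angle):
    """Rotation about the view's vertical (y) axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def two_axes_valuator(dx, dy, sensitivity=np.pi):
    """Map a normalized 2D mouse displacement to a 3D rotation:
    horizontal motion (dx) spins the object about the vertical axis,
    vertical motion (dy) about the horizontal axis."""
    return rot_x(sensitivity * dy) @ rot_y(sensitivity * dx)
```

Its simplicity is also its limitation: rotation about the view direction itself is unreachable with a single drag, which is one reason alternative techniques such as geometry-based rotation are worth comparing against it.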

Summary

Thanks to recent advances in both imaging devices and graphics processors, medical image analysis is gradually moving toward direct 3D exploration of data. The enhanced visualization techniques provided by virtual reality technologies have the potential to ease the work of radiologists, by increasing their perception of both the shape and position of anatomical structures. However, the adoption of virtual reality technologies for 3D image interpretation also introduces certain challenges in the

Conflict of interest statement

The authors declare that they have no competing interests.

Acknowledgments

The authors wish to thank Bruno Alfano, director of the Biostructure and Bioimaging Institute of the Italian National Research Council, Arturo Brunetti, professor at the Diagnostic Imaging Medical School of the University of Naples Federico II, and Marco Salvatore, professor of Diagnostic Imaging and director of the department of Biomorphological and Functional Studies of the University Federico II of Naples, for the essential contributions they have given to this work. The Renal Angio CT data

References (33)

  • B. Myers et al., Past, present, and future of user interface software tools, ACM Transactions on Computer–Human Interaction (2000)
  • K. Henriksen et al., Virtual trackballs revisited, IEEE Transactions on Visualization and Computer Graphics (2004)
  • M. Chen et al., A study in interactive 3-D rotation using 2-D control devices, SIGGRAPH Computer Graphics (1988)
  • K. Shoemake, ARCBALL: a user interface for specifying three-dimensional orientation using a mouse, in: Proceedings of...
  • G. Bell, Bell's Trackball, 1988...
  • D.A. Bowman et al., 3D User Interfaces: Theory and Practice (2004)