1 Introduction

Personal protective equipment (PPE) is an important aspect of medical procedures that protects healthcare workers and patients from infectious diseases [1]. PPE protects against droplet, contact, and airborne transmission of pathogens [2]. When used consistently, PPE reduces transmissions and protects both healthcare workers and patients [3]. However, PPE must be correctly donned and doffed to minimize the risk of exposure; it has been shown that failure to adhere to PPE protocols provides opportunities for infections to be transmitted [1].

Despite its importance, researchers have observed that PPE compliance is modest and that practiced procedures vary greatly among healthcare workers. Mitchell et al. [1] observed that only 34% of healthcare workers donned all required PPE. Eye protection was a particular issue, with only 37% of workers complying. In another study, Zellmer et al. [2] observed that less than half (43%) of healthcare workers removed PPE in the correct order. In their meta-analysis, Erasmus et al. [4] found that only 40% of healthcare workers complied with hand hygiene protocols, including glove use. Similarly, Manian and Ponzillo [5] observed that only 41% of surgical healthcare workers properly complied with gown use.

Considering these issues, training interventions are needed to improve PPE use and compliance with protocols [1]. In fact, 9% of healthcare workers report never having received formal training on PPE [6]. However, such interventions face several challenges. First, different institutions and educators use heterogeneous methods and curricula, which result in education and training experiences of variable quality and content [7]. Complicating matters further, Puro and Nicastri [8] have revealed that separate medical organizations recommend conflicting PPE protocols. Another challenge for training interventions is the need for periodic refreshers to maintain knowledge and competence [9]. Some hospitals have developed eLearning modules to address these challenges [10], but research indicates that these eLearning interventions might only be effective for healthcare workers with less than one year of experience [11]. Furthermore, most psychomotor skills cannot be learned from online interventions, only from physical practice [12].

Virtual reality (VR) provides the opportunity to develop unique interventions that overcome these training challenges. Like eLearning modules, VR can provide consistent computer-based lessons that do not vary in quality or content [13] and afford automated assessments [14]. However, unlike other computer-based interventions, VR provides the opportunity to practice gross psychomotor skills [15] and, with force-feedback devices, fine-motor skills [16]. Additionally, by leveraging the sense of “being there” (i.e., presence [17]), VR has been used to arouse greater levels of fear [18], which in turn affords better memory recall [19]. This could be beneficial to experienced healthcare workers receiving periodic refreshers. Finally, VR also provides benefits over real-world simulations: a virtual simulation can be reset instantly, affording more training opportunities, and its recurring costs are negligible because virtual supplies, unlike real-world ones, are free.

Considering the benefits of VR, we have developed a prototype of a VR system for training surgical PPE protocols. The system hardware consists of an Oculus Rift head-mounted display (HMD), a full-body portable tracking system comprised of inertial measurement units (IMUs), two Nintendo Wii Remotes for bimanual interactions, and a high-performance laptop worn in a backpack. Together, these hardware components make the system completely portable and easy to store, which are important requirements for hospitals with limited space [14]. The prototype’s software consists of an instructional module and a practice module, both focused on hand hygiene and the donning of PPE. The instructional module employs error-avoidant training techniques that ensure trainees are only exposed to correct actions [20]. In contrast, the practice module employs an error-management approach, which uses errors as opportunities to learn and encourages experimentation [21]. The practice module also supports automated assessments based on the actions of the trainee.

After discussing related interventions, we discuss the requirements that we identified for using VR as a training intervention for PPE. We then describe how the system hardware and software of our initial prototype address these requirements. We conclude with our plans to evaluate the prototype with subject-matter experts and how the prototype can be extended to become a viable training intervention for PPE.

2 Related Work

Two types of interventions have been used to train PPE use and convey its importance. In situ simulations, which involve replicating real-world events through simulated scenarios, have been used to assess the use of PPE by hospital staff. Phin et al. [22] conducted an in situ simulation of an influenza pandemic and observed the PPE use of the hospital staff. While they concluded that the simulation increased the confidence and preparedness of the staff, they noted that it required large quantities of PPE and generated a large amount of clinical waste. As mentioned, VR would avoid such issues.

A second type of intervention has been the use of eLearning modules. These modules often focus on the cognitive skills associated with donning and doffing PPE, such as identifying the correct sequence. In one example [23], trainees use a mouse to click and drag available PPE onto the appropriate body parts of a virtual avatar while the simulation monitors incorrect placements and sequences. However, prior research indicates that these eLearning modules may only benefit healthcare workers with little experience [11]. Furthermore, psychomotor skills cannot be learned this way [12].

3 Intervention Requirements

The goal of our research was to develop a prototype of a VR system for training PPE protocols. As different healthcare professions have different PPE protocols [5], we decided to limit the scope of our prototype to surgical technologists. Surgical technologists are responsible for preparing operating rooms (ORs) for surgery, managing supplies and equipment during operations, and cleaning up the OR [24]. The Association of Surgical Technologists (AST) has clearly defined standards for PPE, including surgical attire [24] and surgical scrub (i.e., the process of removing microorganisms from the nails, hands, and forearms) [25]. Using the AST standards as a requirements document, we identified the donning and scrub training tasks seen in Table 1.

Table 1. Donning and surgical task requirements based on AST standards

In addition to training-task requirements, there are several logistical requirements for making a VR intervention feasible and practical to use. First, the VR system must be affordable and not cost-prohibitive [26]. Second, the VR system should be located within or near the hospital’s skills lab for accessibility [27]. Third, the system should be compact or easy to store, considering the limited space that most hospitals struggle with [14]. Finally, it should be easy to operate and require little maintenance [28].

4 System Hardware

After considering the identified intervention requirements, we decided to use a full-body, portable VR system similar to the one described in [29]. The system consists of four major components. An Oculus Rift Development Kit 2 (DK2) HMD provides an immersive visual display for the user to view the virtual world. IMU sensors strapped to the user’s major body segments provide full-body tracking capabilities. Two Nintendo Wii Remotes allow for bimanual interactions. Finally, a high-performance laptop runs the training modules and renders the graphics to the HMD. Together, these components provide a full-body VR system that is portable and affordable.

4.1 Head-Mounted Display

Our original portable VR system used an Oculus Rift DK1 as an HMD [29]. Since then, Oculus has released the DK2, which uses a low-persistence display to reduce simulator sickness [30]. The DK2 provides a 100° diagonal field of view (FOV) with a display resolution of 960 × 1080 pixels per eye. It also has a goggle-like form factor with elastic straps and weighs approximately 440 g. In addition to serving as the visual display, the DK2 provides an internal IMU that we use to track the orientation of the user’s head.

4.2 Full-Body, Portable Tracking System

The full-body, portable tracking system uses 16 wireless YEI 3-Space IMUs strapped to the user’s hands, forearms, upper arms, shoulders, feet, calves, thighs, waist, and chest. Again, head tracking is provided by the DK2’s internal IMU. Each IMU includes a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer. Using sensor fusion, the accelerometer’s gravity vector and the magnetometer’s compass vector are used to correct most of the drift (i.e., error accumulation) incurred by the gyroscope’s angular velocity readings. This allows each IMU to accurately report the orientation of the body segment to which it is attached.
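
To illustrate the fusion principle, below is a minimal single-axis complementary filter; the actual YEI sensors perform full quaternion-based fusion onboard, so this function and its parameters are purely illustrative.

```python
def complementary_filter(angle_prev, gyro_rate, ref_angle, dt, alpha=0.98):
    """Single-axis sensor-fusion sketch (illustrative only; the YEI IMUs
    fuse full 3-axis quaternions onboard)."""
    # Integrate the gyroscope's angular velocity: accurate over short
    # intervals, but any bias accumulates into drift over time.
    gyro_angle = angle_prev + gyro_rate * dt
    # Blend in the drift-free reference angle derived from the
    # accelerometer's gravity vector (roll/pitch) or the magnetometer's
    # compass vector (yaw): noisy moment to moment, but it never drifts.
    return alpha * gyro_angle + (1.0 - alpha) * ref_angle
```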

Because the IMUs report only accelerations, not accurate translations, we use a rigid-body skeleton to track the relative movements of the user’s limbs. By applying the reported orientations of the user’s body segments to the corresponding joints, the rigid-body skeleton mimics the relative motions of the user through forward kinematics. However, this does not provide an absolute position for the skeleton.
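
As a concrete sketch, the forward-kinematics pass can be expressed as follows; the data layout and joint ordering here are our own assumptions for illustration, not the system’s actual implementation.

```python
import numpy as np

def forward_kinematics(parent, bone_vec, world_rot, pelvis_pos):
    """Chain IMU-reported segment orientations outward from the pelvis.
    parent[i]    : index of joint i's parent (joints ordered root-first)
    bone_vec[i]  : rest-pose offset of segment i's distal end, in segment space
    world_rot[i] : 3x3 world orientation of segment i, reported by its IMU
    pelvis_pos   : world position of the root joint (the pelvis)
    """
    positions = [np.zeros(3)] * len(parent)
    positions[0] = pelvis_pos  # the pelvis is the root of the hierarchy
    for i in range(1, len(parent)):
        p = parent[i]
        # Each joint sits at its parent joint, offset by the parent's bone
        # vector rotated into world space by the parent's IMU orientation.
        positions[i] = positions[p] + world_rot[p] @ bone_vec[p]
    return positions
```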

As explained in [29], we use an algorithm based on the kinematics of the heels during gait to provide absolute tracking of the skeleton’s position. Whenever a heel joint strikes the ground, we identify it as the new anchor for the rigid-body skeleton. As the user moves, the skeleton’s segments move relative to the pelvis, which serves as the origin of the tracking hierarchy. This results in the anchor no longer being in the same position, as the heels are constantly moving relative to the pelvis during gait. By translating the skeleton’s pelvis to restore the heel to its original position when it became the anchor, the skeleton is propelled through the virtual world in the same manner that the user is moving in the real world.
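
A simplified, single-frame version of this anchoring step is sketched below; the heel-strike test, coordinate frame (z-up), and all names are illustrative assumptions rather than the exact algorithm of [29].

```python
import numpy as np

GROUND_EPSILON = 0.02  # meters; an assumed threshold for heel-ground contact

def anchor_and_translate(pelvis_pos, rel_pos, anchor, anchor_world):
    """One frame of heel-strike anchoring (z-up coordinates assumed).
    pelvis_pos   : world position of the pelvis (np.ndarray, shape (3,))
    rel_pos      : joint positions relative to the pelvis, from forward kinematics
    anchor       : name of the currently anchored heel joint
    anchor_world : world position that heel had when it struck the ground
    """
    # Detect a new heel strike: a heel at ground height becomes the new
    # anchor, frozen at its current world position.
    for heel in ("left_heel", "right_heel"):
        if pelvis_pos[2] + rel_pos[heel][2] <= GROUND_EPSILON:
            anchor, anchor_world = heel, pelvis_pos + rel_pos[heel]
            break
    # Translate the pelvis so the anchored heel returns to its strike
    # position; as the stance heel sweeps backward relative to the pelvis,
    # this propels the skeleton forward exactly as the user walks.
    pelvis_pos = anchor_world - rel_pos[anchor]
    return pelvis_pos, anchor, anchor_world
```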

Another important aspect to our full-body tracking system is the use of anthropometrics (i.e., the study of human measurements) to improve the accuracy of the absolute tracking. As seen in Fig. 1, the lengths of the avatar’s body segments have a large impact on the overall position of the limbs, despite having the same joint orientations. Hence, in order for the avatar’s movements to absolutely match the user’s physical movements, the body segments of the avatar must be scaled to match the lengths of the user’s actual body segments. To avoid measuring each of the user’s body segments, we use anthropometric proportions identified by Drillis et al. [31] to estimate the segment lengths given only the user’s total height and hip height. This improves the accuracy of the tracking system without lengthening the setup time.
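
For illustration, a sketch of this estimation using commonly cited anthropometric stature fractions follows; the fractions are approximate textbook values, and our actual system additionally incorporates the measured hip height.

```python
# Commonly cited anthropometric fractions of total stature H (approximate
# textbook values, listed here for illustration only).
SEGMENT_FRACTIONS = {
    "upper_arm": 0.186,
    "forearm":   0.146,
    "hand":      0.108,
    "thigh":     0.245,
    "calf":      0.246,
    "foot":      0.152,
}

def estimate_segment_lengths(height_m):
    """Estimate every body-segment length from the user's total height."""
    return {name: frac * height_m for name, frac in SEGMENT_FRACTIONS.items()}

# e.g., a 1.75 m user yields a forearm estimate of about 0.26 m
```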

Fig. 1. While our system uses forward kinematics to track relative movements, anthropometrics are used to ensure absolute movements that match the user’s real-world movements.

4.3 Bimanual Handheld Controllers

While our full-body tracking system reports the positions of the user’s hands, it is not capable of determining if the user is grasping or interacting with a virtual object. To afford bimanual interactions, the system includes two Nintendo Wii Remotes. These wireless handheld controllers provide discrete button inputs via Bluetooth for creating 3D interaction techniques. While pressing a button to grab an object is not as realistic as closing one’s fist, we have found that discrete button events are much more usable than noisy input streams, such as those from electromyography devices (e.g., the Myo armband) and computer-vision techniques (e.g., the Leap Motion). However, to mimic the biomechanics of making a fist, we require that both the thumb and index-finger buttons be pressed to grab and hold a virtual object.
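
The resulting grab logic is simple; a minimal sketch follows, in which the button mapping and names are assumptions.

```python
class HandController:
    """Minimal grab/hold/release state for one Wii Remote (a sketch)."""

    def __init__(self):
        self.held = None  # the virtual object currently grasped, if any

    def update(self, thumb_pressed, index_pressed, nearby_object):
        fist_closed = thumb_pressed and index_pressed  # both buttons = fist
        if self.held is None and fist_closed and nearby_object is not None:
            self.held = nearby_object  # grab when the "fist" closes on an object
        elif self.held is not None and not fist_closed:
            self.held = None           # release as soon as either button opens
```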

4.4 High-Performance Laptop

We use an Alienware 15 laptop for computing power in our VR system. In addition to running our training applications, the laptop processes the input data of the DK2, YEI IMUs, and Wii Remotes. It also renders the graphics for the DK2. To keep the system portable, we place the 3.2 kg laptop in a mesh backpack that the user wears.

4.5 Advantages and Limitations

Our VR system hardware offers several advantages with respect to the identified intervention requirements. The DK2 allows users to immersively view the virtual world, PPE, and their avatar, which should afford greater presence [32]. The full-body tracking system and Wii Remotes provide opportunities to practice gross psychomotor skills, such as prewashing the hands (see Fig. 2). Altogether, the components cost less than $7.5K, which is attractive compared to many medical simulators [7]. The system is also self-contained and wireless, which makes it portable and easy to store.

Fig. 2. Left: a user wearing the full-body tracking system to interact with the virtual world. Top right: a third-person perspective of the user’s avatar. Bottom right: the user’s first-person perspective of the virtual world and avatar.

However, there are limitations to our current system. Foremost is the inability to practice fine-motor skills, such as scrubbing between the fingers and using the cuffs of the gown while donning the sterile gloves. In many cases, we have been able to design interactions that simulate these fine-motor skills, such as grabbing near the top of the mask to simulate contouring the pliable noseband. But the lack of precise finger tracking and accurate haptic feedback prohibits developing some psychomotor skills, such as using the cuffs to don gloves.

Another limitation of our system is its lengthy setup time. Altogether, it takes approximately 10 min for a user to be measured, strap on the IMUs, put on the backpack and HMD, and calibrate the tracking system. Additionally, this requires at least one technician to aid the user during setup. However, most medical simulators require technicians to facilitate them, so this does not invalidate our system’s feasibility.

5 System Software

We used Unity 5, a game engine development platform, to create the software for our VR training prototype. Unity 5 has built-in support for certain VR devices, such as the DK2. However, we had to create custom software packages to integrate data from the YEI IMUs and the Nintendo Wii Remotes. Using these packages and Unity, we have prototyped an instructional module and a practice module for training interventions.

5.1 Instructional Module

Our instructional module serves as an introduction to the training tasks. It employs an error-avoidant training approach, in which the trainee is only allowed to perform correct actions [20]. First, the tasks are presented in their proper order to ensure trainees learn the correct sequence of PPE. For each task, a series of steps is presented to the trainee via textual popup windows and corresponding audio (see Fig. 3). The trainee must properly complete each step before the module moves on to the next step. Additionally, interactions not involved with the current step are disabled. For example, the trainee cannot prewash the hands and forearms before donning the safety eyewear. This ensures introductory trainees are not exposed to incorrect actions.
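
A minimal sketch of this error-avoidant gating logic is shown below; the step names and class interface are illustrative, not our exact implementation.

```python
class InstructionalModule:
    """Error-avoidant sequencing: only the current step's interaction is
    enabled, so the trainee can never perform an out-of-sequence action."""

    def __init__(self, steps):
        self.steps = steps  # e.g., ["don_eyewear", "prewash", "don_mask", ...]
        self.index = 0

    def is_enabled(self, interaction):
        # Every interaction except the current step's is disabled.
        return self.index < len(self.steps) and interaction == self.steps[self.index]

    def on_completed(self, interaction):
        # Advance only when the current step is properly completed; the
        # next step's popup window and audio cue would be triggered here.
        if self.is_enabled(interaction):
            self.index += 1
```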

Fig. 3. The instructional module uses textual popup windows and audio to explain the tasks.

5.2 Practice Module

After completing the instructional module, the practice module provides the trainee with opportunities to rehearse donning PPE in the correct sequence and performing the surgical scrub. It uses an error-management approach, in which trainees are permitted to experiment with their decisions and actions in order to actively learn from their mistakes [21]. Unlike the instructional module, the practice module does not proactively provide task information to the trainee. Additionally, all of the task interactions are available at all times. Hence, the trainee can prewash the hands and forearms before donning the safety eyewear. Meanwhile, the system software records the actions of the trainee, including their sequence. This tracked information is used at the end of the module to provide an automated assessment (see Fig. 4). This assessment includes feedback on incorrect or missed actions so that the trainee may correct them.
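
A sketch of how such a sequence comparison can work is shown below; the action names are illustrative, and this is a simplification of our assessment logic.

```python
def assess(performed, protocol):
    """Compare the trainee's logged action sequence against the required
    protocol, returning missed steps and steps performed out of order."""
    missed = [step for step in protocol if step not in performed]
    rank = {step: i for i, step in enumerate(protocol)}
    out_of_order, highest = [], -1
    for action in performed:
        if action in rank:
            # An action is out of order if a later protocol step was
            # already performed before it.
            if rank[action] < highest:
                out_of_order.append(action)
            highest = max(highest, rank[action])
    return missed, out_of_order

# e.g., assess(["prewash", "don_eyewear"], ["don_eyewear", "prewash"])
# -> ([], ["don_eyewear"])
```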

Fig. 4. The practice module uses tracked actions to provide an automated assessment.

5.3 Trigger-Based Interactions

As mentioned, both the instructional module and the practice module provide numerous interactions, such as prewashing the hands and forearms or donning the safety eyewear. During development, we had to choose between physics-based and trigger-based interactions. In a physics-based implementation, for example, donning the safety eyewear would require precisely positioning the eyewear to rest on the ears and nose of the avatar. While physics-based interactions are easier to implement, as the developer can rely on Unity’s physics engine, they are difficult for users to accomplish and usually behave unrealistically, despite being physically simulated. Instead, we developed a series of triggered actions and joint-based deformations for each piece of PPE. For instance, if the trainee grabs one end of the surgical mask and releases it near the avatar’s ear, the mask end is automatically jointed to the avatar’s ear.
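
The pattern is sketched below with hypothetical names; our actual implementation uses Unity trigger colliders and joints rather than the hand-rolled spherical volumes shown here.

```python
import numpy as np

class TriggerZone:
    """A spherical trigger volume around an avatar attachment point
    (illustrative; the real system uses Unity trigger colliders)."""

    def __init__(self, center, radius, joint_name, accepted_tags):
        self.center = np.asarray(center)
        self.radius = radius
        self.joint_name = joint_name        # e.g., "left_ear"
        self.accepted_tags = accepted_tags  # e.g., {"mask_strap_end"}

    def matches(self, position, tag):
        # The zone fires when a released part of an accepted type is inside it.
        return (np.linalg.norm(np.asarray(position) - self.center) <= self.radius
                and tag in self.accepted_tags)

def on_release(part_tag, hand_position, trigger_zones):
    """Return the avatar joint the released PPE part should attach to, if any."""
    for zone in trigger_zones:
        if zone.matches(hand_position, part_tag):
            return zone.joint_name  # e.g., joint the mask strap to the ear
    return None  # no trigger matched; the part simply drops
```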

5.4 Current Limitations

As a prototype, the system’s current software has limitations. Due to limited resources and time, we have not prototyped all of the tasks, PPE, and locations identified in the intervention requirements. Because the first three training tasks (i.e., donning the head cover, scrub suit, and shoe covers) usually occur only once a day, we decided to focus on the remaining tasks that should occur each time the healthcare worker enters the OR. This avoided having to model a third environment (i.e., the changing room) and eliminated some of the more complex trigger-based interactions (e.g., tucking the scrub shirt into the pants).

6 Refining the Prototype

In the near future, we plan to use the Rapid Iterative Testing and Evaluation (RITE) method [33] with medical subject-matter experts to evaluate the potential usefulness of such a VR training intervention and to improve upon our prototype. We have identified a local hospital and medical center to collaborate with on the RITE process and will be recruiting medical students, residents, and surgical staff for these evaluations.

7 Conclusions and Future Work

PPE protocols are important, as they protect both healthcare workers and patients. However, research indicates that PPE compliance is modest and that training interventions are needed. In situ and eLearning interventions have been investigated, but VR provides new opportunities and advantages over these prior methods. To explore these potential benefits, we have developed a full-body, portable VR prototype for training PPE protocols to surgical staff. We first used standards defined by the Association of Surgical Technologists as intervention requirements, and then developed system hardware and software components to address those requirements. The hardware includes an HMD, an IMU-based tracking system, handheld controllers for bimanual interactions, and a high-performance laptop for portable computing power. The software includes an instructional module for introducing the training tasks and a practice module for rehearsing skills and learning from mistakes. In the near future, we will use the RITE approach to evaluate and improve upon the prototype using subject-matter expert feedback.