
1 Digital Human Models for Ergonomic Assessment

The use of Digital Human Models (DHM) in major manufacturing companies such as the automotive industry is nowadays considered state of the art. The type of DHM depends on the use case, e.g. the ergonomic assessment of assembly stations or of new cockpit designs. The primary purpose of DHMs is a 3D representation of the human body [1]. Implementations of body parts, muscles and degrees of freedom differ in their level of detail between DHMs. All DHMs share one advantage: they can be used to analyze both human postures and motions. The DHM IMMA, on which this paper focuses, provides a technological approach that allows “automatic path planning, to find collision free motions of moving objects” [2]. This function reduces the instruction effort and thus increases the usability of the DHM software.

Nonetheless, assessments with DHMs need to include the digital product, its environment and the human model. Typically, the user has to instruct the DHM with mouse and keyboard in order to assess the relevant task sequence. Especially the configuration of different grip types is complex and not straightforward. The user has to make various decisions, such as whether to use one or two hands and where the grip points on the object are. Furthermore, the user has to determine the grip type and the hand aperture. For example, the IMMA manikin provides eleven grip types; basic research has even identified 16 different grip types [3]. A task analysis with IMMA has revealed that defining how to grasp an object requires at least nine interaction steps in a best-case scenario, 18 steps in a worst-case scenario, and eight decisions.

1.1 Objectives

Until now, IMMA has been instructed via mouse and keyboard. Using these traditional input devices is inefficient and requires well-trained users who are experienced in working with the software. Especially the instruction of grasping requires experience, because the user needs to know how human grasping works [4]. As a result, the so-called “right” grasp can only be determined through trial and error. Therefore, we need to develop an easier instruction concept for the grasp configuration of DHMs.

This paper introduces a new concept that enables users to instruct the digital human model intuitively by interacting in a Virtual Reality (VR) environment. The VR environment displays the objects on which the ergonomic assessment is performed. In order to interact with the virtual environment, the software user can either work with a head mounted display (HMD) or a professional multi-sided CAVE (Cave Automatic Virtual Environment [5]). Within these environments a tracking device traces the user’s fingers and hands, and the user performs the different grip types on the relevant objects.

Subsequently, the performed tasks are exported to an exchange file that the DHM can read. The manikin then simulates the corresponding movements and analyzes them based on standardized ergonomic criteria.

In the following sections we describe the problem in more detail. After that we propose a solution based on Virtual Reality and a finger tracking device. Furthermore, we discuss the recognition of virtual grasps and our suggested architecture, and finally we provide a short outlook on possible steps for the future integration of this technology into working environments.

2 Interactions in Virtual Environments to Instruct the DHM

2.1 Virtual Reality Environments

Head mounted displays as well as a professional CAVE provide a VR environment, but they differ from a technological perspective. While users wear HMDs on their heads like glasses, with the display right in front of the eyes, a CAVE can be characterized as a room that people can enter. Inside the CAVE, projectors display the virtual scene on five calibrated walls, and users can move within the room while wearing active shutter glasses that ensure a stereoscopic view. Additionally, an optical tracking system tracks markers on the glasses to calculate the position and viewing direction of the user. This enables the system to render each frame for the individual perspective of the user. For our research we have access to HMDs as well as a five-sided CAVE.

2.2 Hand and Finger Tracking Device

In order to detect different grip types, it is necessary to gain information about the hand and finger positions as well as rotational joint information in the VR environment. Our solution uses a Leap Motion with the Orion Beta 3.1.1 software to track the hand and finger positions. It is a low-cost optical tracking system that provides rotational information about the joints as well as translational information about the position of the real hand and fingers, and it maps this information into the virtual reality environment (Fig. 1, left). Furthermore, Leap Motion offers an asset for Unity3D, a game development engine, to directly access the hand and finger information in the virtual environment.
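
To illustrate how this information can be accessed, the following is a minimal sketch assuming the core Leap C# API (the Unity3D asset wraps the same data in ready-made hand prefabs); class and variable names are our own and not the actual implementation.

using Leap;
using UnityEngine;

// Minimal sketch (not the exact implementation): reads per-frame hand and
// finger data from the Leap Motion service via the core Leap C# API.
public class HandDataReader : MonoBehaviour
{
    private Controller controller;

    void Start()
    {
        controller = new Controller();           // connect to the Leap Motion service
    }

    void Update()
    {
        Frame frame = controller.Frame();        // latest tracking frame
        foreach (Hand hand in frame.Hands)
        {
            Vector palm = hand.PalmPosition;     // translational palm position
            Debug.Log("Palm: " + palm);
            foreach (Finger finger in hand.Fingers)
            {
                Vector tip = finger.TipPosition; // finger tip position
                Vector dir = finger.Direction;   // pointing direction of the finger
                Debug.Log("Finger tip: " + tip + ", direction: " + dir);
            }
        }
    }
}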

Fig. 1. Left: Leap Motion virtual hands in Unity3D. Top right: the implemented grip types, from left to right: finger pinch, power grip, lateral finger pinch. Bottom right: lateral finger pinch grabbing a cylinder in VR.

2.3 Virtual Grasping

There are several works that describe approaches for virtual grasping [6–8]. Whereas their main objective was to optimize virtual grasping itself, we shift towards an approach that describes how virtual grasping can be implemented as a method to instruct a DHM. To detect the different grip types, we use a pattern-matching algorithm. Currently, three grip types are integrated into the application: a power grip, a finger pinch and a lateral finger pinch (Fig. 1, top right). In order to detect the different grip types we use the following data sets: (1) the distance between the surface of the hands and fingers and the object’s surface, to detect a contact; (2) the orientation of each finger joint.
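
Conceptually, each grip type is matched against these two feature sets. The following sketch only illustrates the data involved; the type and member names are our own, not the actual implementation.

using UnityEngine;

// Illustrative data used for grip-type pattern matching (names are our own).
public enum GripType { None, PowerGrip, FingerPinch, LateralFingerPinch }

public struct GraspFeatures
{
    // (1) Surface distances between the hand/finger surfaces and the object surface,
    //     used to detect contact.
    public float[] FingerToObjectDistances;

    // (2) Orientation of each finger joint.
    public Quaternion[] FingerJointRotations;
}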

For example, the algorithm that detects the lateral finger pinch (Fig. 1, bottom right) uses both sets of data: if the distance between the surface of the fingers and the surface of the object is smaller than a certain, definable length, and if the angle between the tip of the thumb and the tip of the index finger is bigger than 80°, the algorithm recognizes the lateral finger pinch.
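
A minimal sketch of this rule in Unity C# could look as follows. The helper name, the contact threshold value and the use of Collider.ClosestPointOnBounds as a surface approximation are assumptions; only the distance condition and the 80° angle criterion are taken from the description above.

using UnityEngine;

// Sketch of the lateral finger pinch rule (assumed helper, not the exact implementation).
public static class LateralPinchDetector
{
    const float ContactThreshold = 0.01f; // definable contact distance in metres (assumed value)
    const float MinAngleDeg = 80f;        // angle criterion from the text

    public static bool Detect(Vector3 thumbTip, Vector3 thumbDir,
                              Vector3 indexTip, Vector3 indexDir,
                              Collider target)
    {
        // (1) Both finger tips must be close enough to the object's surface
        //     (approximated here by the collider bounds).
        float thumbDist = Vector3.Distance(thumbTip, target.ClosestPointOnBounds(thumbTip));
        float indexDist = Vector3.Distance(indexTip, target.ClosestPointOnBounds(indexTip));
        bool contact = thumbDist < ContactThreshold && indexDist < ContactThreshold;

        // (2) The angle between thumb and index finger must exceed 80 degrees.
        bool angleOk = Vector3.Angle(thumbDir, indexDir) > MinAngleDeg;

        return contact && angleOk;
    }
}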

2.4 Software Architecture

We use Unity3D V5.3 as the authoring tool, to which the Leap Motion is connected. Unity3D has a native HMD integration (Oculus Rift CV), and it is also possible to display the scene in the CAVE (Fig. 2).

Fig. 2. Architecture with a shared document that is accessible for both Unity3D and the IMMA software. The recognized grip types and the relevant object are written into the XML file.

An HMD as well as a CAVE can visualize the virtual scene, and the finger tracking device Leap Motion is connected to Unity3D, in which our grip type detection algorithms are implemented. In order to be able to use the developed software with different DHMs, we chose an XML file as a shared document: both Unity3D and the IMMA software can access and manipulate its content. When the user performs tasks in the VR environment, the different grip types as well as the gripped objects are detected and written into the XML file in the high level language (HLL) [9]. The HLL generates the different tasks, which in turn instruct the IMMA manikin.
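
The export step could be sketched as follows; the element and attribute names are hypothetical, since the actual schema is defined by the HLL, and only the idea of a shared XML document comes from the architecture described above.

using System.Xml;

// Sketch of appending a recognized grasp to the shared XML exchange file.
// Element and attribute names are hypothetical; the real schema follows the HLL.
public static class GraspExport
{
    public static void AppendGrasp(string xmlPath, string gripType, string objectName, string hand)
    {
        var doc = new XmlDocument();
        doc.Load(xmlPath);                        // shared document, also read by the IMMA software

        XmlElement grasp = doc.CreateElement("Grasp");
        grasp.SetAttribute("type", gripType);     // e.g. "LateralFingerPinch"
        grasp.SetAttribute("object", objectName); // the gripped object in the scene
        grasp.SetAttribute("hand", hand);         // "left" or "right"

        doc.DocumentElement.AppendChild(grasp);
        doc.Save(xmlPath);
    }
}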

3 Outlook

Wischniewski [10] suggests that in order to enable engineers without specific knowledge to use DHMs, their usability has to be improved. Our approach tries to minimize the complexity of defining the different grip types and their configurations for assembly instructions of manikins. Yet, gaps remain in several areas of research.

The software works well for the integrated grip types. Nevertheless, the finger tracking device is not accurate and stable enough, since it is an optical tracking system: occlusions occur when fingers lie in the same plane as seen from the tracking device. This leads to partially incorrect finger positions and joint rotations, and to twitching fingers. In addition, we have noticed problems with correct thumb recognition. Thus, the software cannot fully satisfy our demands.

In order to use the VR environment to fully instruct the IMMA manikin, we need to integrate the algorithms for the remaining grip types into Unity3D. Furthermore, it is necessary to develop a user interface that avoids forcing the user to switch between the VR environment and the traditional 2D display.

Additionally, we will integrate a feedback system that informs the user when the software detects a virtual grip. To this end, we plan future research on different feedback modalities such as haptic or visual feedback [11]. As explained above, the Leap Motion does not provide enough accuracy regarding finger and hand position data. Thus, we will integrate other tracking devices such as data gloves, which should provide more accurate and stable data.