Article

A Study on Interaction of Gaze Pointer-Based User Interface in Mobile Virtual Reality Environment

Department of Software, Catholic University of Pusan, Busan 46252, Korea
* Author to whom correspondence should be addressed.
Symmetry 2017, 9(9), 189; https://doi.org/10.3390/sym9090189
Submission received: 7 August 2017 / Revised: 29 August 2017 / Accepted: 6 September 2017 / Published: 11 September 2017

Abstract

This research proposes a gaze pointer-based user interface to provide user-oriented interaction suitable for virtual reality environments on mobile platforms. For this purpose, mobile platform-based three-dimensional interactive content is produced to test whether the proposed gaze pointer-based interface increases user satisfaction through interactions in a mobile virtual reality environment. The gaze pointer-based interface—the most common input method for mobile virtual reality content—is designed by considering four types: the visual field range, the feedback system, multi-dimensional information transfer, and background colors. The performance of the proposed gaze pointer-based interface is analyzed by conducting experiments on whether it offers motives for user interest, enhances immersion, provides new experiences, and allows convenient operation of the content. In addition, it is verified whether any negative psychological factors, such as VR sickness, fatigue, difficulty of control, and discomfort in using the contents, are caused. Finally, through the survey experiment, this study confirms that different ideal gaze pointer-based interfaces can be designed in the mobile VR environment according to presence and convenience.

1. Introduction

Virtual reality (VR) is a technology that provides users with immersive experiences similar to reality by stimulating their senses in virtual spaces. VR technologies, including computer graphics, have been developing rapidly, and a wide variety of new VR equipment has become available, such as head-mounted displays (HMDs), hardware from Leap Motion and Virtuix Omni, and 360 VR-ready cameras. These VR technologies are used in education and various other fields, including health care, and contents that can increase user satisfaction and immersion have been developed [1]. With high-end smart devices becoming more capable and more easily accessible to the general user, many mobile platforms for VR are becoming available. As a result, VR contents for mobile environments have been developed in a large number of areas such as education, tourism, and medicine.
A display device such as an HMD is required for users to view the visual information of VR contents with a realism that allows immersion. On mobile platforms, the display presents a binocular parallax screen, and the content is transmitted to a mobile-specific HMD in such a way as to create a scene with stereoscopic perception. A key advantage of a mobile device is that it allows a general user to obtain an HMD easily and immediately experience a variety of visual information anywhere. At present, new input processing techniques are increasingly necessary to expand the user's activity range and roles and to facilitate immersive interaction. As the input processing techniques of a VR system are similar to those of game interfaces, a gamepad such as the Xbox 360 Controller (Microsoft, Redmond, WA, USA) can be readily incorporated as the input technique in 3D VR systems. When an input technique optimized for VR content is required, an exclusive controller such as Samsung's Rink (Suwon, Korea) is used. In addition, a variety of VR equipment, ranging from the Leap Motion (Leap Motion, Inc., San Francisco, CA, USA) hardware to the data glove that tracks the finger movements of users and the treadmill that maximizes the user's immersive experience based on motion perception, is used for input processing. From the portability standpoint of mobile platforms, however, such additional devices can be a burden in terms of high price, hardware requirements, and integration into the development environment. For this reason, most mobile VR applications use gaze as the default input method. Lee et al. [2] also found, through a survey experiment comparing gaze with controllers such as the gamepad, that gaze input in mobile platform VR helps improve the user's immersion. In terms of user convenience the gamepad has advantages, but if a VR application considers presence based on immersion, then the gaze pointer must be used appropriately. However, in most applications and studies, gaze serves only as a simple input function for selecting menus or objects, and is not considered an important factor for improving immersion and presence in VR. In other words, it is necessary to design a gaze-based interface specific to the mobile platform, and to study its positive or negative influence on the actual user's psychology and on presence in VR.
Several researchers (including Hayhoe et al. [3]) have demonstrated that gaze is the most important factor when people interact with virtual objects in a virtual space. In addition, studies have shown that a gaze model enhances the user's concentration and presence in a virtual environment [4,5]. Therefore, gaze must be considered a primary factor in the design of an input processing technique suitable for mobile VR environments. Thus, this study focuses on interaction using only gaze, and proposes a method to provide users with highly immersive mobile VR environments. We design an interface by classifying the process in which the user perceives the environment and uses the gaze pointer to achieve an objective through a stated behavior in the mobile VR environment. The proposed interface covers the range within which the virtual space is recognized through the gaze pointer, the interaction with virtual objects or the environment, and the display of the information generated in the process. This study also adds, as an interface element, the background color that is continuously transmitted to the user during gaze interaction. Based on this, the proposed gaze pointer-based interface is designed with four types: visual field range, feedback, information transfer, and background color. The four types of user interfaces that constitute the proposed gaze interaction are detailed as follows:
  • The effects of the user’s gaze range in the interaction with the VR content are analyzed.
  • A feedback system for selecting a virtual object is designed, based on the gaze pointer, by dividing it into the four components of sound, transparency, circular slider (e.g., gazefuse), and button user interface (UI).
  • The effect of the virtual space's information transfer on the user's psychology during the interactive process is examined by separating the transfer into 2D and 3D.
  • The effects of the background color on the user’s immersion and concentration in a VR environment are analyzed using color psychology.
The proposed four types of user interface for gaze interaction are subdivided into 12 experimental cases. Through a survey experiment, this study conducts a satisfaction analysis for all the cases, considering both the positive and negative factors. In addition, we confirm the meaningful relation between presence, which is an important factor in mobile VR, and the proposed interface. Through this process, we propose a novel direction for the design of optimized gaze interaction in mobile VR environments.
Section 2 presents the related research on the user interfaces for VR, visual element analysis, and study of psychological analysis. In Section 3, the method used for producing interactive VR content needed for the gaze pointer-based user interface designed in this study is explained. In Section 4, the core technique for the gaze pointer-based interface proposed in this study is explained. Section 5 describes the experiments used for the proposed method. Section 6 presents the conclusions and future research directions.

2. Related Works

Several VR technologies that provide a realistic experience by stimulating the user's senses in a 3D virtual space using interactive content have been designed and developed. In the 1960s, Sutherland [6] studied an HMD system through which a user could experience visual sensations, and showed that a variety of visual effects in the HMD could give users a sense of immersion and realism. Since then, additional research has been conducted on haptic systems and motion platforms that deliver to users a physical response belonging to a virtual world [7,8,9,10]. Research that enables the production of interactive VR content in better and more realistic environments is also ongoing [11,12]. For example, to provide presence in VR by narrowing the gap between the virtual and the real, studies that reflect the user's hands and walking in the virtual world have been proposed [13,14]. Moreover, a study [15] has analyzed the impacts of VR on life in general through various approaches, such as psychology, neuroscience, and social phenomena.
Recently, as mobile HMDs have become more popular, VR contents using 360 images provide users with stereoscopic perception and new experiences [16]. In virtual space, for a user to have an interesting and immersive interaction with virtual objects or the virtual environment, an input processing technique that suits the platform is essential. Even though a user interface in VR can increase interest and immersion, it can also interrupt immersion instead; hence, the interface should be designed with caution. Richards and Taylor [17] compared the learning effect between a 2D simulation tool and a 3D virtual world, and showed that, with regard to the learning effect produced by information transferred to the user, objects generated by a 2D model supported learning better than 3D objects. In addition, it was confirmed that in an interface composed of images or menus, buttons generated in 2D made user immersion and manipulation relatively easier than data composed of 3D objects [18], and that providing the user with a 2D graphical user interface (GUI) with a window scheme inside the 3D virtual space allowed faster and more accurate selection and greater convenience [19].
In the case of mobile VR platforms, the mobile device must be placed within the HMD, so the methods available to process input are relatively limited. For this reason, the user's gaze is mainly used in simple VR content. During their study on eye–hand coordination in object manipulation, Johansson et al. [20] discovered that the user's perception of the space starts with gaze feature points. Pelz et al. [21] analyzed the visual perception process by monitoring participants washing their hands. Thus, the first step in which a user starts to interact in a VR environment can be seen to start from gaze. Many studies have implemented immersive interaction by designing a 3D interface with gaze tracking in a VR environment [22,23]. Gaze is therefore a simple but core element of perception and interaction in a VR environment. In this regard, Kokkinara et al. [24] analyzed the presence of users in VR using various approaches, including viewpoints. In addition, the effects of a gaze model using avatars on user concentration and immersion in the environment have been studied [4,5]. This study aims to analyze immersion and presence by designing gaze-based interaction from the application perspective, as a user interface consisting of several steps.
In VR environments, factors that cause severe resistance in users sometimes exist, because the virtual environment certainly differs from the actual environment. Kolasinski [25] studied the factors that generate motion sickness—a core component that interferes with user immersion in the virtual environment. The study analyzed the causes of motion sickness by breaking them down into the user's individual characteristics, such as age and experience; simulation mechanisms, such as content refresh rate and frames per second; and the duration and degree of manipulation of the contents. Since then, studies have analyzed whether the user's posture while viewing VR contents or the visual elements transmitted via the HMD interfere with the immersive experience [26,27,28,29,30]. Approached with the same concept, the user interface can be an important factor in increasing immersion and convenience; nevertheless, it may also be a factor that causes motion sickness and the like. Kim et al. [31] studied the fatigue that occurs when the user experiences difficulty operating the input processing techniques commonly used in currently available VR input devices. The more difficult the interaction is for the user, the more fatigued they become, and this can interrupt an immersive experience. A user interface that balances user convenience against user discomfort is therefore needed. Accordingly, this study proposes a mobile platform VR environment that is ideal for users by carrying out a questionnaire experiment analyzing users' immersion, presence, and VR sickness for all cases of the proposed user interface for gaze interaction.
This study thus implemented interactive VR content suitable for the mobile environment and designed, in a variety of ways, a gaze pointer-based interface that can easily provide interaction with virtual objects while increasing the user's interest and immersion during the interaction process. For the four types of the proposed gaze pointer-based interface, experiments and analysis were carried out with general users as subjects. By considering factors related to interest, immersion, usability, VR sickness, dizziness, etc., an innovative mobile user interface for a VR environment is presented.

3. Interactive Virtual Reality Contents

This study aims to produce interactive 3D content based on mobile VR, and proposes a gaze pointer-based interface that arouses interest, provides enhanced immersion, and allows convenient manipulation. For this, the interactive 3D content to be used for the experiment is produced first. To analyze the utility of the proposed gaze pointer-based interface, it is necessary to select content that can appropriately reflect user interaction. Lee et al. [2] derived the experimental result that a memory-based board game is a suitable choice of VR content in terms of concentration and tension. Thus, in this study, interactive content in which the user interacts with virtual objects using memory in a VR environment is produced.
The progress flow of the content is as follows:
(1) One of the 15 objects, which consist of three colors and five shapes, is randomly selected and generated in the virtual space, one by one. The user memorizes the objects in the order in which they are generated.
(2) Once the objects have been generated in order, the 15 objects are arranged randomly on the table in front of the user, ensuring that they do not overlap.
(3) From the 15 randomly arranged objects, the user selects the objects one by one, from memory, in the order in which they appeared.
(4) The content determines whether the objects selected in order match the answer and delivers the result.
Figure 1 illustrates the flow of the proposed content. The algorithm of Lee et al. [2] was used for (1) the automatic generation of the memory objects and (2) the random placement of non-overlapping objects. Throughout steps (1) to (4), the user needs an interface for interacting with the virtual objects and receiving information; it is therefore important to design an interface optimized for user-oriented interaction.
We constructed a development environment by integrating the Google VR Software Development Kit (Google VR SDK, Google, Mountain View, CA, USA) with the Unity 3D engine in order to produce the interactive content for use in a VR environment on a mobile device. The Google VR SDK provides a core element (GvrMain) for rendering the binocular parallax scene passed through the HMD, based on the user's head movement, to match the Unity 3D environment. As shown in Figure 2, the GvrMain prefab helps render the mobile screen by dividing the image into one transferred to the left eye (Main Camera Left) and one to the right eye (Main Camera Right). Because the mobile device is attached to the HMD, the motion of the user's head corresponds to the motion of the mobile device, and the SDK provides an element (Head) that maps the mobile device's position and rotation information to the user's head motion. The binocular image remains consistent with the orientation of the user's head and is rendered through the HMD, so that the image has stereoscopic perception. A minimal sketch of this head-to-camera mapping is given below.
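The following Unity C# sketch (our illustration, not the GvrMain source) shows how a device's gyroscope rotation can be mapped onto a camera so the rendered view follows the user's head, which is what the SDK's Head element provides automatically; the axis-conversion fix is the commonly used one and may need adjustment for a particular device mounting or screen orientation.

```csharp
using UnityEngine;

// Minimal head-tracking sketch: drive the camera's rotation from the
// mobile device's gyroscope so the view follows the user's head.
public class SimpleHeadTracker : MonoBehaviour
{
    void Start()
    {
        Input.gyro.enabled = true;  // turn on the device gyroscope
    }

    void Update()
    {
        // Convert the right-handed gyro attitude into Unity's left-handed
        // frame; this is the commonly used fix and may need adjustment for
        // a particular device orientation.
        Quaternion q = Input.gyro.attitude;
        transform.localRotation = Quaternion.Euler(90f, 0f, 0f) *
                                  new Quaternion(q.x, q.y, -q.z, -q.w);
    }
}
```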

4. Gaze Pointer-Based Interface

4.1. Interface Overview

In the proposed content, as in any typical interactive VR content, the interactive process with the user is important for sending and receiving the information necessary to proceed. The content provides the user with the information needed to proceed, and the user comprehends the purpose from that information and interacts with the virtual objects. During the interactive process, the content determines whether the result of the process is right or wrong, and then provides the user with the result again. In a mobile VR application, the user recognizes and acts on this sequence of processes primarily by using the gaze pointer. Therefore, it is necessary to design an interface that is most suitable for each situation in the mobile VR environment and to provide this interface to the users. In this study, a gaze pointer-based interface is designed and proposed by dividing it into four types according to the content's progress flow. Figure 3 illustrates the four proposed interface types. VR content provides users with an experience as if they were present within the virtual space, and the visual field range encompasses the entire space. Verifying whether adjusting the visual field range assists the user's interaction is designed as the first interface type. The second type is the feedback system, which ensures that the user's action is carried out exactly during the interactive process between the user and the virtual object. Feedback can be provided in various ways depending on the purpose of the content; in this study, the feedback system is configured with four components, and the aim is to design a system suitable for the VR environment through experiments. The third type is the interface on which the user checks information, which generally uses a method that places a 2D sprite on the screen. If the information can instead be confirmed using a 3D object, multi-dimensional information transfer is designed to determine whether it can increase the realism of the VR content. Finally, the fourth type is a design that considers the effect of the virtual space's background color on the content's progress.

4.2. Visual Field Range

In the VR environment, the field of vision of a user wearing the HMD becomes the entire virtual space. Because of this, unlike typical 3D interactive content, VR content can provide a feeling of immersion and realism. In general, there is no spatial limitation when placing the 3D virtual objects constituting the scene. However, for interface objects that communicate information to the user, the visual field is a major factor: if information required for processing the content falls outside the visual field and cannot be identified, a critical problem can arise.
This study designs the visual field from two viewpoints. First, when the visual field is expanded to the entire virtual space, we consider whether the interaction has a positive effect, provided the user does not miss any important information. Next, we consider whether limiting the VR visual field to the user's gaze helps support user-oriented interaction. Figure 4 illustrates the two proposed visual field ranges.
As in the content progress flow, Figure 4a shows the visual field expanded to the entire virtual space during the process of randomly generating one of the 15 objects used to test the user's memory. That is, a memory object may be created anywhere in the virtual space, without any correlation to the user's gaze. As mentioned earlier, if an object that needs to be remembered is not recognized, the content cannot progress properly. Thus, for smooth progress, the object memorization process is separated step by step across the entire visual field range. Table 1 describes this: the user recognizes the location of the generated memory object by hearing a sound, and the content proceeds after the user validates the generated object with their gaze. With this method, the necessary information cannot be missed.
Figure 4b limits the generation location of the object to within the gaze range. The user's gaze can roam the entire virtual space freely, but the actual location where the memory object is generated is limited to the gaze area. In this case, even though the user is looking around the virtual space, the necessary information is not missed. However, the user may experience less, since they do not experience the entire space as in the first case. A sketch of the two spawn policies follows.
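As an illustration of the two visual field policies, the following Unity C# sketch (class and parameter names are ours) generates a spawn position either anywhere around the user or within a cone around the current gaze direction.

```csharp
using UnityEngine;

// Sketch of the two spawn policies for memory objects.
public static class SpawnRange
{
    // (a) Entire space: any direction on the unit sphere at a fixed radius.
    public static Vector3 EntireSpace(Vector3 userPos, float radius)
    {
        return userPos + Random.onUnitSphere * radius;
    }

    // (b) Limited range: a random direction within maxAngle degrees
    // of the user's current gaze (head forward) direction.
    public static Vector3 WithinGaze(Transform head, float radius, float maxAngle)
    {
        Quaternion offset = Quaternion.Euler(
            Random.Range(-maxAngle, maxAngle),
            Random.Range(-maxAngle, maxAngle), 0f);
        Vector3 dir = offset * head.forward;
        return head.position + dir * radius;
    }
}
```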
In designing the VR content, the maximum visual field range was tested and analyzed to study whether expanding it provides better and more immersive interactions, or adversely affects interaction (e.g., through VR sickness). Designing a systematic visual range is also an objective of this study.

4.3. Feedback System

The core of the interface is to send and receive the correct information without discomfort to the user during interaction. This requires the interface to incorporate a feedback system. In general, the process of selecting menus in VR content uses a feedback method based on the gaze and a slider GUI. However, when interacting with 3D objects rather than with menus, diverse approaches are possible. This study designs the feedback system around interactions with 3D virtual objects by dividing it into four components, as shown in Figure 5: sound, transparency of 3D objects, a circular slider (e.g., gazefuse), and a button UI.
During the content's progress flow, once the user has memorized the randomly generated objects, the user tries to produce the correct answer by selecting the 15 objects spread on the table one by one, in the memorized order. The process of matching the correct answer is supported by the four-component feedback system. The feedback system starts with the user selecting a remembered object through gaze, as follows. First, the direction the camera is facing (provided by the Google VR SDK)—in other words, the user's gaze—is represented as a ray, and the presence or absence of an object is determined through collision detection between the ray and the objects. An arbitrary point (reticle) is set where the gaze ray lands, to express the exact orientation of the user's gaze in the virtual space. In addition, by using the Unity 3D component (Physics Raycaster) that automatically calculates the ray heading from the camera position toward the front of the camera, the point at which the gaze is directed is represented accurately. A minimal sketch of this gaze ray and reticle logic is given below.
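The following Unity C# sketch is our simplification of what the SDK's raycaster provides; the class and field names are illustrative.

```csharp
using UnityEngine;

// Gaze ray sketch: cast a ray from the camera along its forward vector
// and place a reticle at the hit point (or at a default depth).
public class GazeReticle : MonoBehaviour
{
    public Camera vrCamera;          // camera driven by the user's head
    public Transform reticle;        // small marker object for the gaze point
    public float defaultDistance = 5f;

    void Update()
    {
        Ray gazeRay = new Ray(vrCamera.transform.position,
                              vrCamera.transform.forward);
        RaycastHit hit;
        if (Physics.Raycast(gazeRay, out hit))
        {
            reticle.position = hit.point;   // gaze rests on an object
        }
        else
        {
            // No collision: keep the reticle at a fixed depth in the scene.
            reticle.position = gazeRay.GetPoint(defaultDistance);
        }
    }
}
```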
The user looks at the desired object for a certain duration using gaze, and then receives feedback confirming the choice made. Until the object is selected, a signal must be passed to the user indicating that the current operation is proceeding normally. The first component uses a sound effect for this purpose: when the user selects an object through gaze, a sound is delivered to the user for a certain period of time, making the user aware that the object is currently being selected. The second component is the transparency effect, in which the transparency of the selected object is adjusted over a defined time from the moment the object is first viewed. This can be seen as an interface for realistic interaction that applies a visual effect to 3D objects. The third component is the circular slider, a method that makes the user aware of time by expressing its passing with a circular 2D slider. This method is used frequently in other VR content, and this study aims to verify whether approaches other than this standard feedback could help user interaction. The last component is the button UI method, which lets the user make the selection directly, without waiting a certain period of time. When the user looks at an object, a button is activated above it, and if the object is the intended one, the user selects it by gazing at the button once more. This method removes the waiting process, but may introduce the inconvenience of making a selection one more time. Algorithm 1 shows the core logic of the four-component feedback system.
Algorithm 1 Four-component feedback system.
    mode_fb ← feedback system mode.
    selectTime ← object gazing time record value.
    procedure FeedbackSystemProcess(mode_fb)
        select memory object using gaze.
        if mode_fb = button_ui then
            activate the button UI above the selected object.
            clicking the activated button terminates the feedback.
        else
            check gazing time using selectTime.
            if selectTime < defined time then
                if mode_fb = sound then
                    play object selection sound.
                else if mode_fb = transparency then
                    reduce object's transparency according to selectTime.
                else if mode_fb = circular_slider then
                    advance circular slider clockwise according to selectTime.
                end if
                accumulate the elapsed time into selectTime.
            else
                terminate the feedback for the selected object.
            end if
        end if
    end procedure
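As a concrete illustration of one branch of Algorithm 1, the following Unity C# sketch implements the transparency component; the class name and dwell duration are our assumptions, and the object's material must use a transparent rendering mode for the alpha change to be visible.

```csharp
using UnityEngine;

// Transparency feedback sketch: fade the gazed object out over
// dwellTime seconds, then report that the selection is confirmed.
public class TransparencyFeedback : MonoBehaviour
{
    public float dwellTime = 2f;     // assumed dwell duration in seconds
    private float selectTime = 0f;   // accumulated gazing time

    // Call every frame while the gaze rests on the target object.
    public bool Gazing(Renderer target)
    {
        selectTime += Time.deltaTime;
        Color c = target.material.color;
        // Alpha falls from 1 to 0 as the gaze dwell time accumulates.
        c.a = Mathf.Clamp01(1f - selectTime / dwellTime);
        target.material.color = c;
        return selectTime >= dwellTime;  // true once selection is confirmed
    }

    // Call when the gaze leaves the object before the dwell completes.
    public void ResetGaze(Renderer target)
    {
        selectTime = 0f;
        Color c = target.material.color;
        c.a = 1f;
        target.material.color = c;
    }
}
```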

4.4. Multi-Dimensional Information

During the process in which the user selects the remembered objects in sequence, an interface is needed to carry the information of the selected objects and display it on the screen. Much as in games, 3D interactive content primarily uses 2D sprites to display information such as items and scores at the bottom of the screen (or elsewhere). To determine whether the information transfer method can also be an important factor in user interaction in a VR environment, a 3D interface method is designed in addition to the traditional 2D method, and the two are compared in this study. The 2D information interface uses Unity 3D's sprite function to find the sprite corresponding to the index of the user's selected object and then lists the sprites in order at a specified location. Algorithm 2 shows the process of generating the 2D sprite interface using the index of the selected object.
Algorithm 2 2D information interface utilizing sprite.
    sel_idx ← index of the object selected by the user.
    sel_cnt ← number of selected objects.
    store the 2D sprites of the 15 objects in Array_2DSprite.
    store the locations of the 2D sprites to be generated in Array_Obj.
    procedure 2DInformationTransferInterface(sel_idx, sel_cnt)
        Array_Obj(sel_cnt).sprite = Array_2DSprite(sel_idx)
        sel_cnt++
    end procedure
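A Unity C# rendering of Algorithm 2 might look like the following sketch; the class and field names are ours, and the UI image slots are assumed to be pre-placed on a screen-space canvas.

```csharp
using UnityEngine;
using UnityEngine.UI;

// 2D information interface sketch: copy the sprite of the selected
// object into the next free UI slot, in selection order.
public class SpriteInfoPanel : MonoBehaviour
{
    public Sprite[] objectSprites;   // 2D sprites for the 15 objects
    public Image[] slots;            // pre-placed UI images on the canvas
    private int selCnt = 0;          // number of objects selected so far

    public void OnObjectSelected(int selIdx)
    {
        if (selCnt < slots.Length)
        {
            slots[selCnt].sprite = objectSprites[selIdx];
            selCnt++;
        }
    }
}
```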
Next is the method of configuring the interface using 3D objects. If the content flow and its composition of 3D objects satisfy the conditions for composing the interface, we can determine whether the use of 3D objects is effective. Since the interactive content of this study needs to display the objects selected during the user's memory procedure, it is possible to present the selected objects through a 3D interface. The 3D interface is implemented as if the user were directly moving the selected objects and placing them on a table. Algorithm 3 shows the procedure of generating the user's selected objects, in order, at determined locations on the table, and Figure 6 shows the proposed multi-dimensional information interface. The 3D interface is implemented using Unity 3D's prefab function and the object duplication function (the Instantiate function).
Algorithm 3 3D information interface utilizing virtual object.
    sel_idx ← index of the object selected by the user.
    sel_cnt ← number of selected objects.
    store the 3D objects of the 15 objects in Array_3DObj.
    pos_init ← initial position (3D vector) of the 3D object to be generated.
    pos_obj ← position of the 3D object to be generated from the selected object.
    pos_obj = pos_init
    procedure 3DInformationTransferInterface(sel_idx, sel_cnt, pos_init, pos_obj)
        list the 3D objects in a 2×4 structure:
        pos_obj.x += regular interval.
        Instantiate(Array_3DObj(sel_idx), pos_obj)
        if sel_cnt = 4 then
            pos_obj.x = pos_init.x
            pos_obj.z += step.
        end if
    end procedure
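A Unity C# rendering of Algorithm 3 might look like the following sketch; it realizes the same 2×4 layout with a modulo computation, and the class name, interval, and step values are our assumptions.

```csharp
using UnityEngine;

// 3D information interface sketch: instantiate a copy of each selected
// object prefab on the table in a 2x4 grid, in selection order.
public class ObjectInfoPanel : MonoBehaviour
{
    public GameObject[] objectPrefabs;   // prefabs of the 15 objects
    public Vector3 posInit;              // first grid position on the table
    public float interval = 0.3f;        // spacing along x (assumed)
    public float step = 0.3f;            // row spacing along z (assumed)
    private int selCnt = 0;

    public void OnObjectSelected(int selIdx)
    {
        Vector3 pos = posInit;
        pos.x += (selCnt % 4) * interval;  // column within the row of four
        pos.z += (selCnt / 4) * step;      // advance to the next row
        Instantiate(objectPrefabs[selIdx], pos, Quaternion.identity);
        selCnt++;
    }
}
```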

4.5. Background Color

The final interface type in the present study concerns the background color. The background color is a relatively indirect element compared to the previous three types; however, it can be a factor for controlling the interaction in the VR environment. A variety of studies have examined the impact of color on users' immersion and psychology while experiencing content. For example, Manav [32] showed that bright colors such as green mitigate negative factors such as eye fatigue and motion sickness. Moreover, color plays a role in expressing people's attraction and repulsion, and it helps people to be aware of their various emotions (e.g., bright colors help lift the user's mood) [33]. Because color affects the user's emotions, this study implements the background color as one of the interface types that can increase the user's interaction.
By dividing the interface background colors into four, identifying and organizing the emotions associated with each color, and then setting these as background colors for the proposed interactive content, this study analyzes the emotions users feel through experiments. Table 2 summarizes the types and emotions of the background colors used in this study [32,33].
The results of setting different background colors in the same VR environment are shown in Figure 7.
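As an illustration, switching the background condition between trials can be as simple as the following Unity C# sketch (class and method names are ours); it clears the camera to a solid color instead of rendering a skybox, with the color values standing in for the four patterns of Table 2.

```csharp
using UnityEngine;

// Background color condition sketch: apply one of the four background
// patterns to the VR camera before a trial starts.
public class BackgroundCondition : MonoBehaviour
{
    public Camera vrCamera;

    public void Apply(string colorName)
    {
        Color c;
        switch (colorName)
        {
            case "black": c = Color.black; break;
            case "blue":  c = Color.blue;  break;
            case "green": c = Color.green; break;
            default:      c = Color.white; break;
        }
        // Render a solid color instead of the skybox behind the scene.
        vrCamera.clearFlags = CameraClearFlags.SolidColor;
        vrCamera.backgroundColor = c;
    }
}
```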

5. Experimental Results and Analysis

5.1. Environment

For the implementation of the immersive 3D content production and VR technology proposed in this study, we used Unity 3D 5.3.4f1 (Unity Technologies, San Francisco, CA, USA) and the Google VR SDK (gvr-unity-sdk, Google, Mountain View, CA, USA). The PC environment used for development was equipped with an Intel Core i5-4690 (Intel Corporation, Santa Clara, CA, USA), 8 GB of random access memory (RAM), and a GeForce GTX 960 GPU (NVIDIA, Santa Clara, CA, USA). The mobile device used for the experiment was a Samsung Galaxy S6 (Suwon, Korea), with a Baofeng Mojing 4 (Beijing, China) HMD.
The experiment was divided broadly into two steps. First, the proposed VR-based interactive content was produced, and it was verified that all four types of the gaze pointer-based interface operate accurately. The overall process flow of the interactive content was the same regardless of the interface: a first scene indicating the start of the content, a game scene in which the figures to be memorized are randomly placed for the user to answer correctly, and the process of checking correct answers. Each interface suitable for the content's progress flow was implemented in various user-centered ways to reveal their differences. Figure 8 shows the progression of the mobile VR-based interactive content (Supplementary Video S1).
In the following section, the effect of the proposed gaze pointer-based interface on the actual user's psychology is tested and analyzed. The gaze pointer-based interface is divided into the four types of visual field range, feedback system, multi-dimensional information, and background color, and these four types are segmented into a total of 12 cases. Based on the proposed interactive content, the experiment was carried out through a survey on the user interface. The survey distinguishes negative factors such as VR sickness, fatigue, and inconvenience from positive factors such as immersion, convenience, and interest. The survey was conducted with a total of 35 participants aged 12 to 89, whose VR content experience ranged from novice to experienced. Figure 9 shows the operation of the produced mobile platform-based VR content and scenes of users experiencing it. The experiment was conducted in the same environment for all participants. Users who participated in the experiment selected one of the positive and negative responses for each interface and recorded a score of one to five points. The positive factors comprised the items of sense of immersion, interest arousal, convenient control, and new experience; the negative factors comprised the items of VR sickness, fatigue, discomfort, and inappropriateness for VR content. A higher score indicated a stronger intensity for the item.

5.2. Satisfaction

The first experiment compares the visual field ranges of the gaze pointer-based interface. The visual field was designed with two components: expanded to the entire range of space, and limited to the range of gaze. The responses to both visual fields were checked. Figure 10 shows that when the entire space was set as the visual field, 62.8% of respondents indicated that they felt positive, while 89.1% gave a positive answer for the limited visual field range. The average satisfaction score for the interaction among those who answered positively was 3.7 points when the visual field range was the entire space, and 4.1 points when it was limited to the user's view volume. Thus, satisfaction with the interaction is higher for the limited visual field. Looking at the details, the limited visual field range provided convenience to users. Additionally, setting the entire space as the visual field helped the users to have more interest; however, expanding the visual field to the entire space also caused VR sickness.
Next are the results of the experiment on the four-component feedback system. The respondents were first asked to rank the four feedback components by priority, and then to record their reactions to each. As seen in Figure 11, the circular slider and the transparency change received the highest preference, followed by the sound, with the button UI last. Statistically, those who responded positively to the circular slider and the transparency change represented 85.7%—the highest ratio. The sound scored highest, with 4.2 points, in terms of satisfaction with the interaction. On analysis, the circular slider—the feedback method generally used in much VR content—gave convenience to the user. In contrast, because the transparency feedback uses the 3D object itself, it comparatively increased the users' interest and immersion and provided a new method; however, transparency was found to be the only method that caused VR sickness. It was determined that the 3D UI could induce higher user interest and immersion, but caused VR sickness for users who were not familiar with VR content. Sound, being a different feedback channel than the circular slider, does not demand visual 3D feedback from the users as a 3D object does, so it can draw out interest and enhance immersion without causing VR sickness; thus, it showed higher user satisfaction. The button UI was intended to provide an environment in which the user can select quickly, but received a low satisfaction response due to considerable user discomfort.
The following experiment is a comparative study of the multi-dimensional information transfer interface. In general, respondents familiar with interactive content (e.g., games) are familiar with the 2D interface, which makes the response to the 3D interface an important point. Figure 12 shows that 89.1% of all respondents responded positively to both the 2D and 3D information, and the satisfaction score for the interaction reached 3.9 for both; with both methods, users felt that the interaction was fully satisfactory. Looking closely, the important point is that the 2D interface was similar to existing interfaces, which provided convenience to the user, whereas the 3D interface aroused user interest because it was a new method. However, it was confirmed that the 3D method could also cause VR sickness in a portion of the users.
Lastly comes the experiment designed to study the effect of background color on the user's emotion. This experiment was set up with four different color patterns in the background, and the users were asked to rate the patterns with a point score. The experimental results showed preference in the order of black, blue, green, and lastly white. Black backgrounds were preferred because they caused less eye fatigue and made it easier to focus. The blue background was placed second because it contrasted well with the content's objects for smooth progress, was easier on the eyes, and made focusing easier. Green and white were recognized as colors that caused relatively more eye fatigue and hindered focus on the objects (Figure 13).
The proposed gaze pointer-based interface was designed in many ways, and the test results show that, in mobile platform VR content—which has considerable limitations in input processing techniques—the gaze pointer-based interface design had different effects on interaction satisfaction. The 2D interface primarily used in existing VR content was found to provide convenience due to its relatively familiar environment. Interfaces expanded into the 3D VR environment induced the user's interest and enhanced immersion, giving higher satisfaction than the 2D environment; however, such interfaces should be designed carefully because they can cause VR sickness among users who are not familiar with the VR environment. If sound is utilized properly, it can compensate for the drawbacks of the 3D virtual interface while also boosting user interest and immersion. Lastly, regarding the background color, one should consider the overall color pattern constituting the content, so that the user can focus easily and have an immersive experience in a comfortable environment.

5.3. Presence

Next is a questionnaire experiment to derive accurate and reliable results for presence, which is an important element of VR applications. The presence questionnaire items from Witmer et al. [34] were used to analyze the presence, convenience, and accuracy of each of the four types of user interface for the proposed gaze interaction, using an objective questionnaire. After the same participants experienced each of the four interface types, their answers to the following two questions were recorded as in the previous experiment: first, a question regarding presence ((a) “How much did the visual aspects of the environment involve you?”); and second, a question about convenience and accuracy in the interaction process ((b) “How well could you move or manipulate objects in the virtual environment?”). The responses to the two questions were recorded on a scale of 1 (Not at all) to 7 (Completely).
The first survey confirmed the responses to the visual field range, consisting of the entire space and the limited visual field. Figure 14 shows the results of the surveys on the visual field range. First, the analysis of presence shows that it is better to set the entire space (average: 6.37) than the limited visual field range (average: 5.94) (Figure 14a). However, in terms of convenience and accuracy in recognizing objects in the virtual space, the limited visual field range (average: 6.40) provided better satisfaction (Figure 14b). To obtain statistically significant results, the Wilcoxon test was performed on the questionnaire data, and the significance probabilities (p-values) for the two graphs were calculated as 4.0794 × 10⁻³ and 2.4733 × 10⁻⁵, respectively. In other words, the null hypothesis was rejected, and the analysis showed that the entire space was significantly better for presence, while the limited visual field range was significantly better for convenience and accuracy.
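For reference, the following self-contained C# sketch shows how such a paired Wilcoxon signed-rank p-value can be computed with a normal approximation (reasonable for n = 35 paired ratings); the paper does not state the software used, and this version omits tie and continuity corrections of the variance, so its p-values may differ slightly from a statistics package.

```csharp
using System;
using System.Linq;

// Paired Wilcoxon signed-rank test, normal approximation.
static class Wilcoxon
{
    public static double TwoSidedP(double[] x, double[] y)
    {
        // Signed differences, zeros removed.
        var d = x.Zip(y, (a, b) => a - b).Where(v => v != 0.0).ToArray();
        int n = d.Length;

        // Rank the absolute differences, averaging ranks over ties.
        var abs = d.Select(Math.Abs).ToArray();
        var sorted = abs.OrderBy(v => v).ToArray();
        double[] rank = new double[n];
        for (int i = 0; i < n; i++)
        {
            int first = Array.IndexOf(sorted, abs[i]);
            int last = Array.LastIndexOf(sorted, abs[i]);
            rank[i] = (first + last) / 2.0 + 1.0;  // average 1-based rank
        }

        // Sum of ranks of the positive differences (W+).
        double wPlus = d.Select((v, i) => v > 0 ? rank[i] : 0.0).Sum();

        // Null distribution of W+ under the normal approximation.
        double mean = n * (n + 1) / 4.0;
        double sd = Math.Sqrt(n * (n + 1) * (2.0 * n + 1) / 24.0);
        double z = (wPlus - mean) / sd;
        return 2.0 * (1.0 - Phi(Math.Abs(z)));
    }

    // Standard normal CDF via the Abramowitz–Stegun approximation.
    static double Phi(double z)
    {
        double t = 1.0 / (1.0 + 0.2316419 * z);
        double poly = t * (0.319381530 + t * (-0.356563782 +
                      t * (1.781477937 + t * (-1.821255978 + t * 1.330274429))));
        double pdf = Math.Exp(-z * z / 2.0) / Math.Sqrt(2.0 * Math.PI);
        return 1.0 - pdf * poly;
    }
}
```

Usage would be, for example, `double p = Wilcoxon.TwoSidedP(entireSpaceScores, limitedRangeScores);` on the two paired rating vectors.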
Next, the experimental results for the four-component feedback system are discussed. As in the satisfaction surveys (Figure 11), Figure 15 shows high values for transparency (average: 6.31, 6.32), followed by the circular slider (average: 6.16, 5.78), sound (average: 5.94, 5.62), and button UI (average: 4.80, 4.47). The Wilcoxon test for the feedback system compared the other three components against transparency. For presence, the significance probability of transparency versus the circular slider is 2.8750 × 10⁻¹, versus sound 5.0001 × 10⁻³, and versus the button UI 2.3044 × 10⁻⁶. That is, although transparency shows a clear difference in presence compared with sound and the button UI, it recorded only a relatively higher value than the circular slider, without a clear difference between the two. Further, the results for manipulability show clear differences between transparency and the circular slider (1.6308 × 10⁻³), sound (1.3069 × 10⁻⁴), and button UI (8.7560 × 10⁻⁷).
The third is multi-dimensional information: the result of a questionnaire survey on how to display the information arising from an interaction. As shown in Figure 16, the results are similar to the satisfaction analysis (Figure 12). The presence of the 3D interface recorded a relatively high value (average: 6.47), whereas the convenience and accuracy of the relatively familiar 2D interface (average: 6.23) was higher. The Wilcoxon test shows that the significance probabilities in Figure 16a,b are 5.9083 × 10⁻⁶ and 1.4843 × 10⁻³, which means that the 3D interface clearly improved presence, and the 2D interface clearly improved manipulability.
The final questionnaire concerns background color. The results are similar to those of the previous satisfaction test; however, in terms of manipulability, background color had no significant effect on users (Figure 17). The Wilcoxon test, using black as the basis, calculated the significance probabilities of the other colors in terms of presence as 5.7677 × 10⁻⁴, 6.8254 × 10⁻⁵, and 6.7956 × 10⁻⁷, resulting in rejection of the null hypothesis. However, in terms of manipulability, all of the significance probabilities were calculated to be 0.05 or more, exhibiting no significant difference.

6. Conclusions

In this study, we proposed various designs for a gaze pointer-based interface that can improve a number of psychological factors, including arousing the user's interest and providing enhanced immersion, in VR content for the mobile platform. To this end, we developed a memory-based board game as interactive VR content for the mobile environment, based on previous questionnaire-derived studies [2]. Using the HMD, the proposed content transmitted a scene with a sense of space and realism to the user, and by inducing concentration through the user's memory, it provided an environment enabling an immersive VR experience. To forward the necessary information to the user as the content progresses, and to control objects within the virtual space, a gaze pointer-based interface was designed as the most appropriate input method for the mobile environment. The gaze pointer-based interface is divided into four types—visual field range, feedback system, multi-dimensional information transfer, and background color—and subdivided into 12 cases to systematically evaluate the satisfaction of interaction and presence in VR. A mobile application was developed with the different interfaces applied to the same content, and the test results were derived by having general users experience them.
This study analyzes the survey experiment from two aspects: the satisfaction of the interaction and the presence in the mobile VR environment in relation to the gaze pointer-based interface. First, regarding satisfaction, the satisfaction rate was high for the limited visual field range, and the circular slider and the transparency were equally high in the feedback system. Multi-dimensional information transfer showed high satisfaction for both 2D and 3D information. Finally, the background colors black and blue showed high satisfaction. A point to be noted here is the result of segmenting the user's psychology into positive (interest, immersion, new experience, and convenience) and negative (VR sickness, fatigue, difficulty, and discomfort) factors. Across satisfaction, the cases of the proposed interface differed from each other in terms of convenience and immersion. From the point of user convenience, familiar user interfaces similar to existing forms recorded high scores; for example, the limited visual field range and the 2D information in the multi-dimensional information transfer recorded many positive responses for the convenience factor. In particular, although the circular slider and the transparency recorded the same overall value in the feedback system, only the circular slider showed a high selection ratio for convenience. However, when analyzed based on immersion, a high ratio was recorded when the widest possible space and the three-dimensional interface were utilized: the entire-space visual field range and the 3D information in the multi-dimensional information transfer have high immersion ratios, and in the feedback system, unlike convenience, the transparency recorded a high immersion ratio. The important point here is that the interface cases with a high ratio of immersion were also frequently selected for the interest and new experience items. These results are mirrored in the presence survey for the VR environment: in terms of presence, an interface that comprises three-dimensional feedback and a 3D menu while utilizing the space as a whole is ideal, and conversely, from the point of view of convenience, results similar to the satisfaction survey were obtained. Therefore, in order to provide presence through high immersion in the mobile VR environment, a 3D-based interface should be considered in various ways. In addition, one of the best choices of background color is the black or blue type, which gives a sense of concentration or stability. However, VR sickness, a negative factor, was recorded only in interface cases based on perceptional depth and spatial impression; that is, the higher the immersion, the higher the possibility of inducing VR sickness. In conclusion, when designing a 3D-based interface, a verification process for VR sickness must precede deployment to provide an ideal environment for the user. In another respect, the sound component of the feedback system was relatively low in satisfaction and presence, but immersion, convenience, and interest were evenly recorded; therefore, designing a multimodal interface that combines visual and auditory factors can be a new approach in mobile VR.
In the future, we will study interaction methods that reduce VR sickness while increasing immersion by sharing the interaction with other senses (e.g., hearing). Specifically, by utilizing equipment such as that from Leap Motion, and by appropriately combining more direct control through hand movement with sound, the interface can be designed to take advantage of the eyes, ears, and hands. In addition, a study of interaction methods that give high satisfaction will be conducted by systematically analyzing the proposed interaction's effect on the user's sense of immersion and on the psychological factors affecting VR sickness. Furthermore, by applying different VR input processing techniques to interfaces applicable to the mobile platform, we will continue to analyze and validate, from various angles, the performance of the input processing techniques that general users will encounter.

Supplementary Materials

The following are available online at www.mdpi.com/2073-8994/9/9/189/s1, Video S1: A Study on Interaction of Gaze Pointer-Based User Interface in Mobile Virtual Reality Environment.

Acknowledgments

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. NRF-2017R1D1A1B03030286).

Author Contributions

Mingyu Kim, Jiwon Lee, Changyu Jeon and Jinmo Kim conceived and designed the experiments; Mingyu Kim, Jiwon Lee and Changyu Jeon performed the experiments; Mingyu Kim, Jiwon Lee and Jinmo Kim analyzed the data; Jinmo Kim wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Feisst, M.E. Enabling virtual reality on mobile devices: Enhancing students’ learning experience. In Proceedings of the SPIE Sustainable Design, Manufacturing, and Engineering Workforce Education for a Green Future, Strasbourg, France, 28–30 March 2011; SPIE: Bellingham, WA, USA, 2011; Volume 8065. id. 80650P. [Google Scholar]
  2. Lee, J.; Kim, M.; Jeon, C.; Kim, J. A study on gamepad/gaze based input processing for mobile platform virtual reality contents. J. Korea Comput. Graph. Soc. 2016, 22, 31–41. [Google Scholar]
  3. Hayhoe, M.M.; Shrivastava, A.; Mruczek, R.; Pelz, J.B. Visual memory and motor planning in a natural task. J. Vis. 2003, 3, 49–63. [Google Scholar] [CrossRef] [PubMed]
  4. Garau, M.; Slater, M.; Vinayagamoorthy, V.; Brogni, A.; Steed, A.; Sasse, M.A. The impact of avatar realism and eye gaze control on perceived quality of communication in a shared immersive virtual environment. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems 2003, Ft. Lauderdale, FL, USA, 5–10 April 2003; ACM: New York, NY, USA, 2003; pp. 529–536. [Google Scholar]
  5. Vinayagamoorthy, V.; Garau, M.; Steed, A.; Slater, M. An eye gaze model for dyadic interaction in an immersive virtual environment: Practice and experience. Comput. Graph. Forum 2004, 23, 1–11. [Google Scholar] [CrossRef]
  6. Sutherland, I.E. A head-mounted three dimensional display. In Proceedings of the Fall Joint Computer Conference, (Part I AFIPS’68), San Francisco, CA, USA, 9–11 December 1968; ACM: New York, NY, USA, 1968; pp. 757–764. [Google Scholar]
  7. Ortega, M.; Coquillart, S. Prop-based haptic interaction with co-location and immersion: An automotive application. In Proceedings of the IEEE International Workshop on Haptic Audio Visual Environments and their Applications, Ottawa, ON, Canada, 1–2 October 2005; IEEE Computer Society: Washington, DC, USA, 2005; p. 6. [Google Scholar]
  8. Schissler, C.; Nicholls, A.; Mehra, R. Efficient HRTF-based spatial audio for area and volumetric sources. IEEE Trans. Vis. Comput. Graph. 2016, 22, 1356–1366. [Google Scholar] [CrossRef] [PubMed]
  9. Lee, J.-W.; Kim, M.-K.; Kim, J.-M. A Study on immersion and VR sickness in walking Interaction for immersive virtual reality applications. Symmetry 2017, 9, 78. [Google Scholar] [CrossRef]
  10. Kim, M.-K.; Jeon, C.-G.; Kim, J.-M. A Study on immersion and presence of a portable hand haptic system for immersive virtual reality. Sensors 2017, 17, 1141. [Google Scholar] [CrossRef]
  11. Schulze, J.P.; Hughes, C.E.; Zhang, L.; Edelstein, E.; Macagno, E. CaveCAD: A tool for architectural design in immersive virtual environments. In Proceedings of the SPIE Electronic Imaging the Engineering Reality of Virtual Reality, San Francisco, CA, USA, 2–6 February 2014; SPIE: Bellingham, WA, USA, 2014; Volume 9012. id. 901208. [Google Scholar]
  12. Jeong, K.-S.; Lee, J.-W.; Kim, J.-M. A Study on new virtual reality system in maze terrain. Int. J. Hum. Comput. Interact. 2017. [Google Scholar] [CrossRef]
  13. Schorr, S.B.; Okamura, A. Three-dimensional skin deformation as force substitution: Wearable device design and performance during haptic exploration of virtual environments. IEEE Trans. Haptics 2017. [Google Scholar] [CrossRef] [PubMed]
  14. Lee, J.; Jeong, K.; Kim, J. MAVE: Maze-based immersive virtual environment for new presence and experience. Comput. Animat. Virtual Worlds 2017, 28, e1756. [Google Scholar] [CrossRef]
  15. Slater, M.; Sanchez-Vives, M.V. Enhancing our lives with immersive virtual reality. Front. Robot. AI 2016, 3. [Google Scholar] [CrossRef]
  16. Hoberman, P.; Krum, D.M.; Suma, E.A.; Bolas, M. Immersive training games for smartphone-based head mounted displays. In Proceedings of the 2012 IEEE Virtual Reality Workshops, Orange County, CA, USA, 4–8 March 2012; IEEE Computer Society: Washington, DC, USA, 2012; pp. 151–152. [Google Scholar]
  17. Richards, D.; Taylor, M. A comparison of learning gains when using a 2D simulation tool versus a 3D virtual world: An experiment to find the right representation involving the Marginal Value Theorem. Comput. Educ. 2015, 86, 157–171. [Google Scholar] [CrossRef]
  18. Coninx, K.; Van Reeth, F.; Flerackers, E. A hybrid 2D/3D user interface for immersive object modeling. In Proceedings of the 1997 Conference on Computer Graphics International, Hasselt-Diepenbeek, Belgium, 23–27 June 1997; IEEE Computer Society: Washington, DC, USA, 1997; p. 47. [Google Scholar]
  19. Andujar, C.; Argelaguet, F. Anisomorphic ray-casting manipulation for interacting with 2D GUIs. Comput. Graph. 2007, 31, 15–25. [Google Scholar] [CrossRef] [Green Version]
  20. Johansson, R.S.; Westling, G.; Bäckström, A.; Flanagan, J.R. Eye–hand coordination in object manipulation. J. Neurosci. 2001, 21, 6917–6932. [Google Scholar] [PubMed]
  21. Pelz, J.; Canosa, R.; Babcock, J.; Barber, J. Visual perception in familiar, complex tasks. In Proceedings of the 2001 International Conference on Image Processing, Thessaloniki, Greece, 7–10 October 2001; IEEE Computer Society: Washington, DC, USA, 2001; pp. 12–15. [Google Scholar]
  22. Antonya, C. Accuracy of gaze point estimation in immersive 3D interaction interface based on eye tracking. In Proceedings of the 2012 12th International Conference on Control Automation Robotics Vision (ICARCV), Guangzhou, China, 5–7 December 2012; IEEE Computer Society: Washington, DC, USA, 2012; pp. 1125–1129. [Google Scholar]
  23. Sidorakis, N.; Koulieris, G.A.; Mania, K. Binocular eye-tracking for the control of a 3D immersive multimedia user interface. In Proceedings of the 2015 IEEE 1st Workshop on Everyday Virtual Reality (WEVR), Arles, France, 23 March 2015; IEEE Computer Society: Washington, DC, USA, 2015; pp. 15–18. [Google Scholar]
  24. Kokkinara, E.; Kilteni, K.; Blom, K.J.; Slater, M. First person perspective of seated participants over a walking virtual body leads to illusory agency over the walking. Sci. Rep. 2016, 6, 28879. [Google Scholar] [CrossRef] [PubMed]
  25. Kolasinski, E.M. Simulator Sickness in Virtual Environments; Technical Report DTIC Document; US Army Research Institute for the Behavioral and Social Sciences: Ft. Belvoir, VA, USA, 1995. [Google Scholar]
  26. Duh, H.B.L.; Parker, D.E.; Philips, J.O.; Furness, T.A. Conflicting motion cues to the visual and vestibular self-motion systems around 0.06 Hz evoke simulator sickness. Hum. Factors 2004, 46, 142–154. [Google Scholar] [CrossRef] [PubMed]
  27. Moss, J.D.; Muth, E.R. Characteristics of head-mounted displays and their effects on simulator sickness. Hum. Factors 2011, 53, 308–319. [Google Scholar] [CrossRef] [PubMed]
  28. Reason, J. Motion sickness adaptation: A neural mismatch model. J. R. Soc. Med. 1978, 71, 819–829. [Google Scholar] [PubMed]
  29. Stoffregen, T.A.; Smart, L.J., Jr. Postural instability precedes motion sickness. Brain Res. Bull. 1998, 47, 437–448. [Google Scholar] [CrossRef]
  30. Han, S.-H.; Kim, J.-M. A Study on immersion of hand interaction for mobile platform virtual reality contents. Symmetry 2017, 9, 22. [Google Scholar] [CrossRef]
31. Kim, Y.; Lee, G.A.; Jo, D.; Yang, U.; Kim, G.; Park, J. Analysis on virtual interaction-induced fatigue and difficulty in manipulation for interactive 3D gaming console. In Proceedings of the 2011 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 9–12 January 2011; IEEE Computer Society: Washington, DC, USA, 2011; pp. 269–270. [Google Scholar]
  32. Manav, B. Color-emotion associations and color preferences: A case study for residences. Color Res. Appl. 2007, 32, 144–150. [Google Scholar] [CrossRef]
  33. Ou, L.C.; Luo, M.R.; Woodcock, A.; Wright, A. A study of colour emotion and colour preference. Part I: Colour emotions for single colours. Color Res. Appl. 2004, 29, 232–240. [Google Scholar] [CrossRef]
  34. Witmer, B.G.; Jerome, C.J.; Singer, M.J. The factor structure of the presence questionnaire. Presence Teleoper. Virtual Environ. 2005, 14, 298–312. [Google Scholar] [CrossRef]
Figure 1. Flow of the proposed interactive content: (a) Generate random objects in order and have the user memorize them; (b) Arrange the objects on the table without overlap; (c) Select the objects in the memorized order; (d) Deliver the result to the user.
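For illustration, the non-overlapping arrangement in step (b) of Figure 1 can be realized with simple rejection sampling. The following is a minimal Unity C# sketch, not taken from the paper; tableCenter, tableExtents, and minSeparation are hypothetical parameters:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of Figure 1 step (b): place memory objects on the table so that no two overlap.
public class TableArranger : MonoBehaviour
{
    public Vector3 tableCenter = new Vector3(0f, 0.8f, 1.5f); // assumed table-top position
    public Vector2 tableExtents = new Vector2(0.5f, 0.3f);    // half-size of the table top (x, z)
    public float minSeparation = 0.15f;                        // minimum distance between objects

    // Returns up to 'count' non-overlapping positions via rejection sampling.
    public List<Vector3> Arrange(int count, int maxAttempts = 1000)
    {
        var positions = new List<Vector3>();
        for (int attempt = 0; positions.Count < count && attempt < maxAttempts; attempt++)
        {
            Vector3 candidate = tableCenter + new Vector3(
                Random.Range(-tableExtents.x, tableExtents.x), 0f,
                Random.Range(-tableExtents.y, tableExtents.y));

            // Reject the candidate if it falls too close to an already placed object.
            bool overlaps = false;
            foreach (Vector3 p in positions)
                if (Vector3.Distance(p, candidate) < minSeparation) { overlaps = true; break; }

            if (!overlaps) positions.Add(candidate);
        }
        return positions;
    }
}
```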
Figure 2. Construction of virtual reality (VR) content development environment based on Google VR software development kit (SDK) and Unity 3D.
Figure 3. Overview of four types of gaze pointer-based interface.
Figure 4. Design of the gaze pointer-based interface's visual field range: (a) Set the entire space as the visual field range; (b) Limit the visual field range to the user's view volume.
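The two designs in Figure 4 differ only in whether a candidate spawn position is tested against the camera frustum. A minimal Unity C# sketch under that assumption (the paper does not specify its implementation, and the component name is hypothetical):

```csharp
using UnityEngine;

// Sketch of Figure 4: accept a spawn point anywhere (4a) or only inside the view volume (4b).
public class ViewVolumeFilter : MonoBehaviour
{
    public Camera userCamera;             // the HMD camera (assumed reference)
    public bool limitToViewVolume = true; // false reproduces Figure 4a (entire space)

    public bool IsSpawnAllowed(Bounds objectBounds)
    {
        if (!limitToViewVolume) return true; // Figure 4a: the whole virtual space is valid

        // Figure 4b: test the object's bounds against the camera's frustum planes.
        Plane[] frustum = GeometryUtility.CalculateFrustumPlanes(userCamera);
        return GeometryUtility.TestPlanesAABB(frustum, objectBounds);
    }
}
```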
Figure 5. The proposed four-component feedback system: (a) Sound; (b) Transparency; (c) Circular slider; (d) Button UI.
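Figure 5 combines several feedback channels; the circular slider (c) is naturally implemented as a dwell timer that fills a radial UI image while the gaze ray stays on a target and plays a sound (a) when the dwell completes. A minimal Unity C# sketch under these assumptions (field names and the "Selectable" tag are hypothetical):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch of Figure 5c: a circular slider that fills while the gaze ray rests on a target.
public class GazeDwellSlider : MonoBehaviour
{
    public Image circularSlider;    // UI Image with its fill method set to Radial360 (assumed)
    public AudioSource selectSound; // Figure 5a: auditory confirmation on completion (assumed)
    public float dwellTime = 2f;    // seconds of fixation required to confirm a selection

    private float gazeTimer;

    void Update()
    {
        // Cast the gaze ray from the center of the HMD camera.
        Ray gaze = new Ray(Camera.main.transform.position, Camera.main.transform.forward);
        if (Physics.Raycast(gaze, out RaycastHit hit) && hit.collider.CompareTag("Selectable"))
        {
            gazeTimer += Time.deltaTime;
            circularSlider.fillAmount = gazeTimer / dwellTime; // visual progress feedback
            if (gazeTimer >= dwellTime)
            {
                selectSound.Play(); // sound feedback confirming the selection
                gazeTimer = 0f;
                // ... trigger the selection of hit.collider.gameObject here
            }
        }
        else
        {
            // Reset the dwell progress as soon as the gaze leaves the target.
            gazeTimer = 0f;
            circularSlider.fillAmount = 0f;
        }
    }
}
```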
Figure 6. Design of the multi-dimensional information transfer interface: (a) 2D interface using sprites; (b) 3D interface using 3D objects.
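A minimal Unity C# sketch of the Figure 6 dichotomy, presenting the same result either as a flat sprite (2D) or as an instantiated model (3D); all field names are hypothetical and the paper does not prescribe this implementation:

```csharp
using UnityEngine;

// Sketch of Figure 6: deliver the same information as a 2D sprite or as a 3D object.
public class ResultPresenter : MonoBehaviour
{
    public Sprite resultSprite;    // 2D representation (assumed asset)
    public GameObject resultModel; // 3D representation (assumed prefab)
    public bool useThreeD;         // switches between Figure 6a and Figure 6b

    public void ShowResult(Vector3 position)
    {
        if (useThreeD)
        {
            // Figure 6b: place a full 3D object in the scene.
            Instantiate(resultModel, position, Quaternion.identity);
        }
        else
        {
            // Figure 6a: place a flat sprite and orient it toward the user.
            var go = new GameObject("Result2D");
            go.transform.position = position;
            go.AddComponent<SpriteRenderer>().sprite = resultSprite;
            go.transform.forward =
                (go.transform.position - Camera.main.transform.position).normalized;
        }
    }
}
```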
Figure 7. Four components of background color settings that affect user emotions.
Figure 8. Production result of mobile VR content including our gaze pointer-based interface: (a) Content's start screen configuration; (b) Random generation of objects in the defined visual field; (c) Object selection process based on the proposed feedback system; (d) Information transfer using the proposed multi-dimensional interface; (e) Correct-answer identification process; (f) Scene transition by background color setting.
Figure 9. Testing and experiencing environment for the proposed gaze pointer-based interface.
Figure 10. Visual field range comparison test results: (a) Distribution of positive and negative factors; (b) Score analysis results.
Figure 11. User response test results of the four components of the feedback system: (a) Distribution of positive and negative factors; (b) Score analysis results; (c) Ranking point analysis results for feedback system.
Figure 12. Results of multi-dimensional information transfer interaction comparison test: (a) Distribution of positive and negative factors; (b) Score analysis results.
Figure 13. User’s emotional change according to background color.
Figure 14. Presence questionnaire survey results regarding visual field range comparison: (a) Presence; (b) Convenience and accuracy.
Figure 15. Presence questionnaire survey results for the four components of the feedback system: (a) Presence; (b) Convenience and accuracy.
Figure 16. Presence questionnaire survey results for the multi-dimensional information transfer interaction comparison: (a) Presence; (b) Convenience and accuracy.
Figure 17. Presence questionnaire survey results for the background color comparison: (a) Presence; (b) Convenience and accuracy.
Table 1. Recognition and validation process for memory object in the entire virtual space.
Process
(1) Generate a random memory object in one hemisphere of the entire virtual space.
(2) Output a sound from the location of the generated object.
(3) The user recognizes the location of the output sound.
(4) Search for the location of the generated object.
(5) Select the generated object using the gaze pointer.
Once the current object is confirmed, repeat steps (1) to (5) as many times as there are memory objects.
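The Table 1 process maps naturally onto a coroutine that spawns each object, cues it with positional audio, and waits for a gaze hit. A minimal Unity C# sketch, assuming the upper hemisphere is used and omitting the dwell-based confirmation sketched above (the prefab and clip fields are hypothetical):

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the Table 1 process: spawn a memory object in one hemisphere of the space,
// cue it with positional sound, and wait until the user selects it by gaze.
public class MemorySequence : MonoBehaviour
{
    public GameObject[] objectPrefabs; // candidate memory objects; assumed to carry colliders
    public AudioClip spawnCue;         // step (2): sound emitted at the spawn location
    public float spawnRadius = 3f;     // assumed distance of objects from the user

    public IEnumerator RunSequence(int objectCount)
    {
        for (int i = 0; i < objectCount; i++) // repeat steps (1)-(5) once per memory object
        {
            // (1) Spawn a random object at a random point of the upper hemisphere.
            Vector3 dir = Random.onUnitSphere;
            dir.y = Mathf.Abs(dir.y);
            Vector3 pos = transform.position + dir * spawnRadius;
            GameObject obj = Instantiate(
                objectPrefabs[Random.Range(0, objectPrefabs.Length)], pos, Quaternion.identity);

            // (2) Positional audio so the user can localize the object by hearing.
            AudioSource.PlayClipAtPoint(spawnCue, pos);

            // (3)-(5) Wait until the gaze ray hits the generated object.
            while (true)
            {
                Ray gaze = new Ray(Camera.main.transform.position,
                                   Camera.main.transform.forward);
                if (Physics.Raycast(gaze, out RaycastHit hit) &&
                    hit.collider.gameObject == obj)
                    break;
                yield return null; // check again next frame
            }
            Destroy(obj);
        }
    }
}
```

A caller would drive this with StartCoroutine(RunSequence(n)), where n is the number of memory objects for the trial.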
Table 2. Users’ emotion according to background color.
Background Color: User's Emotion
Black: The color of power; classical and graceful. Provides users with serenity, quiet, and deep immersion.
White: Through cheerful softness, provides the user with emotions that stimulate confidence.
Green: Refreshing and healing. Provides a comfortable environment that enhances the user's concentration.
Blue: Quiet, comfortable, the color of calmness. Leads to comfortable emotions so the user can immerse themselves in the content.
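Applying a Table 2 color at a scene transition can be as simple as switching the camera's clear color. A minimal Unity C# sketch (the mapping to Color constants is an assumption; the paper does not give exact RGB values):

```csharp
using UnityEngine;

// Sketch of Figure 8f / Table 2: apply one of the four studied background colors.
public class BackgroundColorSetter : MonoBehaviour
{
    public enum Mood { Black, White, Green, Blue }

    public static void Apply(Camera cam, Mood mood)
    {
        cam.clearFlags = CameraClearFlags.SolidColor; // replace the skybox with a flat color
        switch (mood)
        {
            case Mood.Black: cam.backgroundColor = Color.black; break;
            case Mood.White: cam.backgroundColor = Color.white; break;
            case Mood.Green: cam.backgroundColor = Color.green; break;
            case Mood.Blue:  cam.backgroundColor = Color.blue;  break;
        }
    }
}
```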
