1 Introduction

The task monitoring interface is the pilot's main source of flight status data, task information, and threat and security state information. Pilots must grasp clear, real-time and complete information about the combat situation in order to keep the initiative on the battlefield, so the display of battlefield situation information is extremely significant. The information display of an automatic combat identification system has been studied [1]: the study examined how two groups of forms, uniforms and helmets, should be presented for the identification of combat information, and analyzed the information in combination with its reliability. After reliable simulated data were obtained, it was found that mesh-chart and integrated-data display formats are better suited to combat information identification. Researchers have studied the identification performance of the colors, positions and shapes of different symbols and texts, and have obtained a series of valuable results [2–4]. The layout design of a fighter radar situation interface has been evaluated experimentally and analyzed objectively using an eye tracker [5, 6]; a rational layout optimization scheme was selected on the basis of eye movement data indexes. Other researchers simulated the general operation sequence of an enemy-attack task in an avionics system to guide interface design [7–9], and studied the influence of color and shape coding in complex digital interfaces on identification performance under different time pressures [10].

Feature analysis research includes studies that identify and observe eye movements and fixations. Studies have found that when subjects gaze at one particular feature of a visual information interface for a relatively long time, more information is being extracted from that feature than during saccadic eye movements. Yarbus [11] proposed that the more information a feature carries, the longer the eyes rest on it. Search tasks on visual information interfaces therefore depend not only on the nature of the physical stimuli (the color, shape and location of information, etc.), but also reflect higher cognitive processes such as attention and intention. Eye tracking can thus be used to explain the cognitive processes underlying visual search.

It has been argued that the occurrence of attention capture depends mainly on the salience of one stimulus's features relative to other stimuli [12, 13]: the higher the feature salience of a stimulus, the more likely it is to capture attention. Through experimental observation, researchers have found that the first factor influencing the user's visual search is the number of icons, the second is the target boundary, and the last is the quality and resolution of the icons [14, 15]. Patrick applied the experimental paradigm of a delayed visual search task to compare the binding of colors, of positions, and of colors and positions [16]; the two experiments validated that word gap is a necessary and sufficient condition for visual interference derived from context. Researchers have also employed brightness and flashing as highlighting methods in studying symbol shapes and colors, and verified their influence on search time [17].

2 Objective

In order to analyze different information features in a task monitoring interface, such as information layout, information display, task type and potential problems in information extraction, the experimental paradigm of feature analysis and eye tracking technology are applied to study the factors involved in visual search for information. The experiment focuses on how attention is deployed when searching for information under the condition that the task monitoring interface displays complex information features. It is designed to check whether eye movement indexes differ across tasks and information areas, and to explore the relative differences between searches in different information areas.

3 Methods

3.1 Material

This experiment uses the task monitoring interface of an aircraft, which is composed of four sub-interfaces: a flight data display, a horizontal position display, a radar display and a weapons mount display. The task monitoring interface carries a great deal of information that was considered in the experiment design, including many icons with different meanings, information represented by different graphs, a variety of data expressions, symbols at various positions, various status updates and many abbreviations in capital letters. An excellent pilot must possess both professional flight skills and flight experience before he can master such extensive information and store it in long-term memory as professional knowledge. This experiment uses a real task monitoring interface as the visual search material for eye tracking, and displays the same information interface while subjects perform the different tasks.

3.2 Design

This experiment adopted a two-factor (4 tasks × 9 areas) within-subjects design. Based on the main tasks of an operator during the monitoring process, the task factor has four levels: task 1, task 2, task 3, and a condition in which no task is set for the subject. The tasks to be performed by the subjects are as follows:

  1. No specified task: random observation of the monitoring interface.

  2. Task 1: search for all symbols representing an aircraft (regardless of size) and remember their colors and positions.

  3. Task 2: search for information representing threats, both symbols and numbers, and remember their features.

  4. Task 3: view different data elements and attempt to identify how many different expressions there are, such as different colors, character sizes, emphasis formats, etc.

The task monitoring interface of complex navigation warfare information is divided into nine areas according to their information features:

  1. Information bars: the left- and right-hand upper information bars are respectively marked as INFO 1 and INFO 2;

  2. Sub-interfaces: the four sub-interfaces (flight data display, horizontal position display, radar display and weapons mount) are respectively marked as INFO 3, INFO 4, INFO 5 and INFO 6;

  3. Sub-interface status information bars: the corresponding status information bars of the flight data display, horizontal position display and weapons mount are respectively marked as INFO 7, INFO 8 and INFO 9.

3.3 Apparatus and Procedure

The experiment was conducted in the eye movement tracking laboratory of HHU (Hohai University). A Tobii X120 eye tracker (made in Sweden) with a sampling frequency of 120 Hz and a gaze location accuracy of 0.5 degrees was used. The stimuli were presented on a computer with a display resolution of 1280 × 1024 pixels and 32-bit color quality, and the tracker allowed a head movement range of 30 × 16 × 20 cm. The system's gaze location data had a latency of only 3 ms, giving a near-instantaneous gaze display. The system sampled the subjects' eye positions every 20 ms to collect their eye movement data.

Subjects were first asked to view pictures and read text on the task monitoring interface, with the background of the material introduced by a specially assigned person, to ensure that subjects were familiar with the navigation system environment. When the experiment began, subjects were asked to examine the information interface at random under the "no task" condition for 6,000 ms. The visual search tasks then commenced, and task 1, task 2 and task 3 were performed sequentially. Subjects were asked to press the space bar once they had completed all tasks, and were also requested to complete a questionnaire after each task. The entire experiment took about 30 min per subject.

4 Result and Discussion

The Tobii X120 eye tracker recorded the eye movements of subjects while they searched for information, and the Tobii Studio Version 3.1.0 software collected the relevant data, including the search path, fixation duration, saccade count, saccade latency, etc. Based on the collection rate of the eye tracking data, the data of eight subjects were exported for statistical analysis with SPSS; two subjects whose collection rates were below 70 % (42 % and 26 %) were excluded.
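
As a purely illustrative sketch (the actual screening was done with the Tobii Studio export and SPSS; the file and column names here are hypothetical), the 70 % collection-rate criterion could be applied as follows:

```python
# Minimal sketch of the subject-screening step, assuming a hypothetical
# CSV with one row per subject and a 0-100 "collection_rate" column.
import pandas as pd

subjects = pd.read_csv("subject_collection_rates.csv")  # hypothetical export

valid = subjects[subjects["collection_rate"] >= 70]    # the 8 subjects analyzed
excluded = subjects[subjects["collection_rate"] < 70]  # e.g. the 42 % and 26 % cases

print(f"analyzed: {len(valid)}, excluded: {len(excluded)}")
```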

4.1 Fixation and Saccade

The fixation and saccadic process of subjects searching for information can be illustrated using the data visualization functions of Tobii Studio Version 3.1.0. As shown in Fig. 1, random observation without any specific task produces an intense pattern of fixations and saccades. Although subjects observe at random, fixations cluster in certain areas in Fig. 1. The length of the saccadic path also shows that subjects scanned the information repeatedly, back and forth and up and down.

Fig. 1.
figure 1

Fixation and saccade of no task

The fixation and saccadic process differs markedly between tasks. As shown in Fig. 2, the areas where fixations cluster are quite different. Task 1 requires subjects to search for aircraft symbols, so they focus on icon information related to aircraft. Task 2 requires subjects to search for information representing threats, so they search colors, shapes and data to identify whether there is any threat; visual attention moves in accordance with threat correlation, and fixations gather frequently at positions where an enemy aircraft is found, indicating that subjects are undergoing a process of understanding and identification. Task 3 requires subjects to search for specific data information, which is distributed in different formats at different positions in the task monitoring interface. Saccade counts in task 3 are therefore much higher than in tasks 1 and 2 because of the different information layout.

Fig. 2.
figure 2

Fixation and saccade of three tasks

4.2 Comparison of Reaction Time and Eye Tracking Data Under Different Tasks

The mean and SD of reaction time and the eye tracking data under each task are shown in Table 1. The reaction time index represents the speed of the information search; analysis of variance (ANOVA) of the reaction times indicates that the difference between tasks (F = 19.463, P = 0.048 < 0.05) is statistically significant. Total fixation duration refers to the total fixation time (in minutes) taken to complete the task and represents the time spent on visual encoding; ANOVA shows that the difference between tasks (F = 55.687, P = 0.005 < 0.01) is statistically significant. Fixation count reflects the effectiveness of the saccades and thus the relative difficulty of target searching; ANOVA shows that the difference between tasks (F = 54.918, P = 0.005 < 0.01) is statistically significant. Saccade duration is the time spent on the search path and represents the complexity of processing the information; ANOVA shows that the difference between tasks (F = 55.907, P = 0.005 < 0.01) is statistically significant. The saccade count index reflects pre-search efficiency; ANOVA indicates that the main effect of task (F = 55.698, P = 0.005 < 0.01) is statistically significant. It can therefore be concluded that the task has a statistically significant effect on the visual cognition involved in the information search.

Table 1. Mean and SD of reaction time and eye tracking data under different tasks
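
For readers without access to SPSS, a minimal sketch of the within-subjects ANOVA reported above could look as follows; this is not the authors' analysis script, and the data file, column names and long-format layout are assumptions.

```python
# Sketch of a one-way repeated-measures ANOVA testing the main effect of
# task on one eye movement index (here fixation count); the other indexes
# (reaction time, fixation duration, saccade duration/count) would be
# analyzed the same way.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Expected long format: one row per subject x task,
# with columns: subject, task, fixation_count
data = pd.read_csv("eye_indexes_by_task.csv")  # hypothetical export

result = AnovaRM(data, depvar="fixation_count",
                 subject="subject", within=["task"]).fit()
print(result)  # prints the F value and p value for the task effect
```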

As can be seen from the comparison of the eye movement indexes, saccade duration is far longer than fixation duration, and saccade counts are also far greater than fixation counts. The search efficiency of the interface information can be compared using the fixation/saccade ratio, i.e. the ratio of the time spent on cognitive information processing to the time spent on the information search:

Ratio = time for cognitive information processing (f) / time for information search (s).

From this formula, both the ratio of total fixation duration to total saccade duration and the ratio of average fixation duration to average saccade duration can be obtained. ANOVA of the two types of fixation/saccade ratio indicated that the main effect of task (F = 792.817, P = 0.001 < 0.01) is statistically significant.
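
As an illustration of the ratio calculation, the sketch below computes both the total and the per-event fixation/saccade ratio from event-level durations; the file and column names are assumptions, not the authors' actual export format.

```python
# Sketch: fixation/saccade ratio = cognitive processing time (fixations)
# divided by search time (saccades), per subject and task.
import pandas as pd

events = pd.read_csv("eye_events.csv")  # hypothetical event-level export
# expected columns: subject, task, event_type ("fixation" or "saccade"), duration_ms

grouped = events.groupby(["subject", "task", "event_type"])["duration_ms"]
totals = grouped.sum().unstack("event_type")   # total duration per event type
means = grouped.mean().unstack("event_type")   # average duration per event type

total_ratio = totals["fixation"] / totals["saccade"]  # total f/s ratio
mean_ratio = means["fixation"] / means["saccade"]     # per-event average f/s ratio
print(pd.DataFrame({"total_ratio": total_ratio, "mean_ratio": mean_ratio}))
```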

As shown in Fig. 3, the total fixation/saccade ratio differs considerably from the average (per-event) fixation/saccade ratio. For the average ratio, the values for random observation and task 1 are greater than 1, while those of task 2 and task 3 are close to 1 (0.967 and 0.931 respectively). The total fixation/saccade ratio is below 0.13 in all conditions. It is difficult to explain the information layout of the task monitoring interface on the basis of the search efficiency value alone. For further research, it is therefore necessary to divide the task monitoring interface into different areas and analyze the eye movement data in each information area.

Fig. 3.
figure 3

Fixation/saccade ratio

4.3 Comparison of Eye Tracking Data in Different Monitor Area

Total Fixation Duration. The nine divisions of the monitor task interface are defined as AOIs (Areas of Interest) in the Tobii Studio software, i.e. the areas of interest for the subjects. Total fixation duration refers to the subjects' gazing time in each area when performing each task. ANOVA of the total fixation duration across monitor areas indicates that the main effect of area (F = 7.939, P = 0.045 < 0.05) is statistically significant. As shown in Fig. 4, monitor areas 1 and 2 are the upper information bars, and the fixation duration on the right-hand bar is significantly longer than on the left-hand bar. Monitor areas 3, 4, 5 and 6 represent the sub-interfaces of the flight data display, horizontal position display, radar situation and weapons mount respectively; the fixation duration on the radar situation interface is significantly longer than on any other area. Monitor areas 7, 8 and 9 represent the corresponding status information bars of the flight data display, horizontal position display and weapons mount respectively. Since there is less information in the horizontal position display interface, its fixation duration is relatively short. No marked difference in fixation duration is found between tasks, although Fig. 4 shows that the fixation duration in task 1 is generally shorter than in the other tasks, which indicates that the task of searching for aircraft icons is easy to perform.
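
The AOI-level indexes in this subsection, as well as the duration counts and visit counts discussed below, can all be obtained from the same grouping. The following minimal sketch assumes a hypothetical per-fixation export with AOI and visit identifiers; it is an illustration, not the authors' Tobii Studio workflow.

```python
# Sketch: aggregate fixation-level data into the AOI indexes used in Sect. 4.3.
import pandas as pd

fix = pd.read_csv("aoi_fixations.csv")  # hypothetical export
# expected columns: subject, task, aoi (INFO 1-9), visit_id, fixation_duration_ms

aoi_stats = fix.groupby(["task", "aoi"]).agg(
    total_fixation_duration=("fixation_duration_ms", "sum"),  # gazing time per AOI
    duration_count=("fixation_duration_ms", "count"),         # number of fixations
    visit_count=("visit_id", "nunique"),                      # repeated visits to the AOI
)
print(aoi_stats)
```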

Fig. 4.
figure 4

Total fixation duration in different monitor areas

Duration Counts. Duration count refers to the total number of fixations, representing the number of times that subjects searched for task information and performed cognitive processing. ANOVA across monitor areas under the different tasks indicates that the main effect of monitor area (F = 7.786, P = 0.039 < 0.05) is statistically significant. As shown in Fig. 5, the trend of duration counts across the nine monitor areas is basically consistent with that of total fixation duration.

Fig. 5.
figure 5

Duration counts in different monitor areas

Visit Counts. Visit count reflects the process of subjects repeatedly searching for a target, which relates to the complexity of the information. ANOVA across monitor areas under the different tasks indicates that the main effect of monitor area (F = 9.033, P = 0.004 < 0.01) is statistically significant. As shown in Fig. 6, the visit counts in areas INFO 3, 4, 5 and 6 are significantly higher than in the other areas, with INFO 5 (the radar situation interface) reaching the highest value. This suggests that the INFO 5 area is visited frequently during the information search, either because the information in this area is densely distributed, or because the targets in this area are weakly identifiable and require repeated searches.

Fig. 6.
figure 6

Visit counts in different monitor areas

5 Conclusion

  1. The search paths followed by subjects on the task monitoring interface show significantly different reaction times and eye movements for each task, as the search path is influenced by task-driven cognitive information processing and information search time.

  2. Fixation duration, duration count and visit count also show significant differences between monitor areas; information features distributed in the radar sub-interface are captured particularly easily, which has been shown to be related to task-driven automatic capture.

  3. Information position and features such as color, shape and size have a significant impact on visual search, as they can easily cause information omission, misreading, misjudgment and missed or ignored data when performing each task.

The paper concludes that the monitoring task and the individual information features within an interface have a great influence on visual search; these findings will guide further research on the design of information features in task monitoring interfaces.