Abstract
In Japan, both the number and proportion of elderly drivers have increased, along with the number of traffic accidents involving them, which has become a social problem. The problem could easily be solved if the elderly did not need to drive themselves; however, elderly people living in rural or suburban areas still need to drive for daily activities. Thus, it is important to support the mobility of the elderly while preventing traffic accidents at the same time. Self-driving cars are considered one possible solution to this issue, though they remain a state-of-the-art technology still under challenging research. Therefore, this paper proposes a practical approach to testing the functions and interfaces of self-driving cars for elderly drivers. In particular, this paper features the use of an immersive virtual reality environment for human-in-the-loop simulation tests.
1 Introduction
Self-driving cars are an emerging technology expected to have a huge impact on the transportation and mobility field. However, the varied and complex situations that arise while driving require self-driving technology to be highly accurate, since any kind of error can have serious consequences. Hence, implementation of this state-of-the-art technology has proceeded in a slow but steady and reliable manner. Nowadays in particular, assistive technologies that support the driver in various ways provide practical and convenient functions. On the other hand, fully autonomous functions are not yet practical enough, though some prototype and field tests have recently started. These technologies, known as Advanced Driver Assistance Systems (ADAS), are under active research and testing, although on-road studies are highly limited in order to avoid serious accidents. In particular, tests to verify or validate the system under certain circumstances, such as testing with elderly drivers, require extreme caution. Therefore, this paper proposes a practical approach for conducting safe and reliable tests under such circumstances. Specifically, this paper proposes utilizing an affordable and highly customizable immersive virtual reality environment to conduct various human-in-the-loop simulation tests.
2 Recent Approach for Advanced Driver Assistance System Tests
Recent developments in self-driving cars have reached a level where an autonomous car can drive fairly well through cities under normal traffic conditions, though the system still requires special driver operations in difficult situations. In the field of self-driving cars, the act of transferring driving authority from the driver to the system, or vice versa, starts with an action called a Take-Over Request (TOR). While the design of a TOR carefully respects the principles of the Human-Machine Interface (HMI), it is normally evaluated in a simulated environment, typically using a driving simulator. Studies use reaction time [1] to measure performance, though various other indexes are used as well, since there are various types of HMI display. Examples of HMI displays studied include visual displays [2], auditory displays [3], vibrotactile displays [4], and combinations providing multimodal displays [5, 6]. It should also be noted that TORs occur continuously while driving, depending on traffic situations or the driver’s will [7], and that reaction performance to a TOR may differ depending on various factors such as drowsiness [9], alcohol consumption [8], and distraction from non-driving-related tasks [10] or driving-related tasks [11]. Nonetheless, a driver’s driving experience and habits have traditionally been known to have meaningful relations with crash risk.
While a driver’s characteristics and crash risk tend to show some correlation [12], the correlation is typically biased, indicating other underlying fundamental issues [14]. For example, from the perspective of the aging society that Japan is facing, the increase in elderly drivers’ crash risk is attributed to age-related and disease-related cognitive deficits rather than to age itself. Therefore, crash risk may be lowered if the driver focuses on maintaining and strengthening situation awareness skills. Although one of the most effective ways to strengthen situation awareness is actual driving experience [15, 16], driving on the road without sufficient skill and experience is obviously dangerous. Hence, a driving simulator is typically used for this kind of training, and a certain degree of effectiveness can be expected, even though there are limitations depending on the configuration of the simulator [17].
Driving simulators can also be used as an environment to understand driving behaviour in general or in critical use cases, or to test HMI prototypes. Due to various resource limitations, simulators typically concentrate their resources on realizing their main purpose. Considering that the proportion of visual information that drivers rely on while driving appears to be as high as 80% or more [18], immersive virtual reality environments such as the CAVE [19] are known to be effective for such simulators. Compared to a Head-Mounted Display (HMD) environment, a CAVE can easily integrate physical interfaces into the virtual reality environment, for instance a driving seat or even a full-scale automobile. While a CAVE can realize immersive virtual reality, its configuration is relatively large and complex and comes at very high cost: the original CAVE [20] and CAVE2 [21] are estimated to have cost $2,000,000 and $926,000 respectively. Consequently, the complexity and difficulty of controlling the simulation increase considerably when making and deploying scenarios or actors within the driving simulator.
Recent research and commercial development have significantly reduced the cost of constructing a CAVE, especially on the software side. Low-cost CAVEs have been introduced using recent game technologies such as CryEngine [22] and the Unity engine [23, 24], and there are also free low-level OpenGL libraries [25, 26]. A low-level implementation has the advantage of high customizability and configurability, but the disadvantage of requiring heavy programming when developing an application. Additionally, integrating head-tracking sensors and bridging them to the application often relies on proprietary closed-source hardware and software, which consumes considerable budget and human resources for configuration. Therefore, when realizing an immersive virtual reality environment, it is typical to configure the system around its main purpose. It is also not always easy to overcome the financial and spatial constraints of constructing an immersive virtual reality environment. Consequently, a process that splits local software development from remote use of the software within the virtual environment needs to be considered.
An immersive virtual reality environment can thus be considered for use as a driving simulator, with the aim of conducting safe and reliable tests of self-driving technology for elderly drivers. Since software development and running the application are not always possible in one physical location, systems engineering methods shall be adopted to keep clear which simulation entities and functions are allocated to which software or hardware. A similar approach can be observed in the Kinect-based KAVE [24]. However, this paper adopts a slightly different approach due to the constraint of realizing a driving simulator application configured for elderly drivers. Considering the complexity of implementing ADAS functions, this paper utilizes a proprietary physics-based simulation platform for the driving simulator. In particular, we propose and demonstrate a practical method and approach to develop a CAVE driving simulator as an ADAS HMI test environment for elderly drivers.
3 Design of Human-in-the-Loop Simulation Using Immersive Virtual Reality Environment
Human consideration in the engineering process is important, especially in human-centered domains [27]. A simulation is commonly referred to as human-in-the-loop when a human can operate or interact within it in real time. The concept of using a virtual reality environment for human-in-the-loop simulation can be observed in system evaluation within the automobile field, for example in the evaluation of a steer-by-wire system [28], braking-based rollover prevention [29], and a wheel loader [30]. The virtual reality environment can also act as an enabling system when the system of interest is a different one. This paper proposes a method that enables the virtual reality environment for both occasions, with the ability to easily switch in both directions to simulate with or without the system under test.
3.1 Human-in-the-Loop Simulation for Advanced Driver Assistance System
In order to conduct practical simulation, it is important to understand the basic system architecture operated in the real world. Since this paper focuses on testing ADAS for elderly drivers, understanding the system architecture of ADAS shall be the first step. Figure 1 illustrates the basic system structure of the ADAS domain, based on the perspective of HAVEit (Highly Automated Vehicles for Intelligent Transport) [31, 32]. Described in a block definition diagram based on SysML (Systems Modeling Language) [33], the figure indicates that the automobile ADAS domain is composed of the ego vehicle driver, ego vehicle, ADAS, pedestrians, surrounding mobility, transport system, and physical environment. Since Fig. 1 indicates only the top-level hierarchy, some blocks shall be specified further when necessary; for example, the Surrounding Mobility can be specialized into Rear Vehicle, Side Vehicle, Leading Vehicle, and Potential Vehicle. Components such as roads and traffic signals are generalized as the Transport System, and other objects are generalized as the Physical Environment.
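The top-level decomposition above can be sketched as a small data structure. This is an illustrative sketch only: the block names mirror the decomposition described for Fig. 1, while the attributes (such as `alertness`) are hypothetical placeholders, not properties defined in the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EgoVehicleDriver:
    alertness: float = 1.0      # hypothetical attribute: 0.0 (asleep) .. 1.0 (alert)

@dataclass
class ADAS:
    engaged: bool = False       # hypothetical attribute: automation active or not

@dataclass
class EgoVehicle:
    driver: EgoVehicleDriver
    adas: ADAS

@dataclass
class ADASDomain:
    """Top-level blocks of the ADAS domain, mirroring the Fig. 1 hierarchy."""
    ego_vehicle: EgoVehicle
    pedestrians: List[str] = field(default_factory=list)
    surrounding_mobility: List[str] = field(default_factory=list)  # e.g. Leading Vehicle
    transport_system: List[str] = field(default_factory=list)      # roads, traffic signals
    physical_environment: List[str] = field(default_factory=list)  # other objects

# Example instantiation with the specializations named in the text:
domain = ADASDomain(
    ego_vehicle=EgoVehicle(driver=EgoVehicleDriver(), adas=ADAS()),
    surrounding_mobility=["Rear Vehicle", "Side Vehicle", "Leading Vehicle",
                          "Potential Vehicle"],
    transport_system=["Road", "Traffic Signal"],
)
```

Such a structure makes the allocation question explicit: each block must later be assigned to either a real component (the human driver) or a simulated one.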
Regarding the basic system architecture, HAVEit addresses one important key: the functional task repartition between the driver and the system, described with included functional definitions and one extending functional definition. The included functions are to ensure that the driver can react properly and to recognize the scenery, and the extending function is to perform safe motion control. Figure 2 illustrates this functionality using a use case diagram, based on early research analysis of HAVEit [34].
According to accident analysis done by HAVEit [32], 95% of all accidents are driver-related, and more than 22% are related specifically to a lack of driver alertness. Therefore, HAVEit especially focuses on including the driver as a key element when considering ADAS, since there are occasions when the ADAS cannot handle the situation and the driver needs to take over. Hence, following HAVEit’s philosophy, the driver is considered to be inside the system loop, which means both the ADAS and the driver need to be included when performing research or development.
Overall, the approach of HAVEit can be illustrated as in Fig. 3. The original overall block diagram from HAVEit [32] is described in terms of four layers within the HAVEit architecture: the Perception layer, Command layer, Execution layer, and HMI layer. These layers are only fully explainable from the function definition and requirements phases, which is too much to describe within this paper. Hence, although not a completely faithful illustration, Fig. 3 is colored and described based on the functionality shown in Fig. 2.
Based on Fig. 3, it is understandable that three main functions are required within a simulation for ADAS, while the one extending function (perform safe motion control) is not directly necessary for the main functional task repartition between the driver and the system. The block diagram also indicates the interactions between the components, which explains that the Ego Vehicle Driver is included through a loop consisting of the HMI and Driver Monitoring, derived from ensure driver can react properly. Hence, consideration of the Ego Vehicle Driver is extremely important, and it can be realized using either a driver model for automated scenario-based simulation or a real human for human-in-the-loop simulation. Meanwhile, for the components other than the driver, preparation of the simulator shall focus on the three functions, allocating each function to how it shall be virtually realized in the simulation. In particular, the issue within this paper is how the immersive virtual reality environment can be utilized.
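The HMI/Driver Monitoring loop described above can be sketched as one step of a simulation cycle. This is a minimal sketch, not HAVEit’s actual implementation: the driver-monitoring result gates whether a Take-Over Request (TOR) hands control back to the human, and the alertness threshold is a made-up illustrative value.

```python
def simulation_step(alertness, automation_on, hazard_ahead, threshold=0.5):
    """One fixed-step cycle of the driver/ADAS loop.

    Returns (automation_on, tor_issued) after this step.
    """
    tor_issued = False
    if automation_on and hazard_ahead:
        tor_issued = True                  # HMI layer requests a take-over
        if alertness >= threshold:         # "ensure driver can react properly"
            automation_on = False          # driver resumes manual control
        # else: automation keeps authority, e.g. initiating a safe stop
    return automation_on, tor_issued

# An alert driver takes over when a hazard triggers a TOR:
assert simulation_step(0.9, True, True) == (False, True)
# A drowsy driver does not; the system keeps authority:
assert simulation_step(0.2, True, True) == (True, True)
```

Replacing `alertness` with a measured value from a real participant turns this same loop into a human-in-the-loop simulation, while a driver model keeps it fully automated.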
3.2 Configuration for Immersive Virtual Reality Environment
There are several well-known definitions or principles for virtual reality environments, such as \( I^3 \) (Immersion, Interaction, Imagination) [35] and AIP (Autonomy, Interaction, Presence) [36]. Within the topic of this paper, the key concepts can be summarized as follows: interaction in real time, immersion in real time, and a realistic 3D environment. Running the simulation as a human-in-the-loop simulation in real time basically provides the real-time aspect. Hence, the key aspects to consider for an immersive virtual reality environment are interaction, immersion, and a realistic 3D environment: operative hardware or devices for interaction, large screens surrounding the driver’s field of view for immersion, and a realistic vehicle model within a realistic simulated world for the 3D environment.
For immersion, if cost is not a major issue, utilizing the CAVE [19] as an immersive driving simulator is one technical approach to consider. This approach has been taken by previous research, for the observation of drivers’ behavior on narrow roads [37] and the design of a motorcycle head-up display [38]. Although both approaches use real-time simulation with a real driver, realizing human-in-the-loop simulation, their loops do not include ADAS.
For interaction, either real controllers or virtual (game-type) controllers can be utilized within virtual reality environments. However, when the virtual reality environment is realized using a head-mounted display, dedicated controllers are normally used, since the display is attached directly to the head and makes the controllers physically invisible. Therefore, again, a CAVE-type virtual reality environment has the advantage of allowing the placement of real or virtual controllers. Some CAVE-type simulators, especially those dedicated to driving simulation, place a full-scale real automobile or a driver’s seat inside the screens. However, while the CAVE has many advantages as a virtual reality environment, it also has the disadvantage of being a large system that consumes both physical space and budget. Normally, trade-off decisions are made based on constraints, trying to keep some minimum requirements based on the purpose of using the CAVE [24].
For a realistic 3D environment, recent approaches typically adopt game engines [22,23,24]. While building the scenery of a 3D environment this way can be relatively easy compared to making the full world from scratch, the physics model of the engine, or the calculation of the automobile, is normally simplified [37]. In terms of realizing a realistic 3D environment for a driving simulator, the automobile model shall be as accurate as possible. Hence, this paper proposes adopting PLM (Product Lifecycle Management) software intended for testing and designing real automobiles. In particular, PreScan by Siemens, a dedicated simulation package for ADAS and active safety, was used to develop a realistic 3D environment [39]. PreScan uses Simulink, a well-known component of Matlab, to run the simulation model. Therefore, original software systems and/or hardware components can easily be integrated through Simulink.
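To illustrate the kind of simplification mentioned above, the following sketch shows a kinematic bicycle model, a common minimal vehicle model of the sort game-engine simulators often rely on; a dedicated tool such as PreScan replaces this with a full dynamic vehicle model. The wheelbase and time step are illustrative values, not parameters from the paper.

```python
import math

def bicycle_step(x, y, yaw, v, steer, accel, wheelbase=2.7, dt=0.01):
    """Advance a kinematic bicycle model by one time step.

    State: position (x, y) in m, heading yaw in rad, speed v in m/s.
    Inputs: front steering angle (rad) and longitudinal acceleration (m/s^2).
    """
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += v / wheelbase * math.tan(steer) * dt
    v += accel * dt
    return x, y, yaw, v

# Driving straight at 10 m/s for 1 s covers about 10 m:
state = (0.0, 0.0, 0.0, 10.0)
for _ in range(100):
    state = bicycle_step(*state, steer=0.0, accel=0.0)
```

Such a model captures trajectories well at low lateral acceleration but ignores tire slip, load transfer, and suspension, which is precisely what a physics-based platform adds.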
3.3 Considerations of Elder Drivers
There is much research addressing the situation of elderly drivers. Unless an individual lives in an urban city with sufficient public transportation, maintaining their mobility is important to support their quality of life [40]. However, it is clear that spatial cognition declines with age, meaning that at some point the individual will no longer be capable of driving by themselves. Furthermore, if no public transportation is available, the loss of the ability to drive also means the loss of connection to social activities, affecting the individual’s quality of life. Self-driving cars, or automobiles with highly advanced ADAS, are expected to contribute to solving this issue. However, unless the self-driving car can drive completely on its own, issues regarding the Take-Over Request (TOR) must take into account the driver’s ability to drive upon request. Therefore, it is important to understand the characteristics of elderly drivers and their interaction behaviours with ADAS.
Using a driving simulator is a typical approach to understanding the characteristics of elderly drivers, since it is relatively safe. However, it is known that the elderly’s acceptance of the simulator differs somewhat from that of the young [41]. Early research on elderly participation in virtual reality environments also points to the decline of spatial cognition, and many clinical studies provide clear evidence that declines in physical functions negatively affect general display recognition and 3D recognition [42,43,44]. It has also been reported that sickness can occur even with simple driving simulators using only one CRT monitor [45, 46]. Although there are arguments that driving performance can remain unaffected [48], some elderly participants feel extreme sickness, which leads to dropout from the simulation [47]. Therefore, to minimize discomfort while preserving the simulation environment, controlling the stereoscopic view and/or fixing the head tracking may be necessary when utilizing an immersive virtual environment.
4 Findings and Discussion
Based on the approach designed to test the ADAS interface for elderly drivers, the simulator realizing an immersive virtual reality environment was configured as follows. First, for designing the scenarios, a simple virtual reality environment was composed of three 65-inch LCD displays (Panasonic TH-L65WT600) and a computer (Table 1). Using two GPUs (ZOTAC GAMING GeForce RTX 2080 Blower), the computer is fully capable of rendering 4K at 60 fps on all three displays over DisplayPort 1.2a cables.
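As a back-of-the-envelope check on the configuration above, the scan-out load of three 4K displays at 60 fps can be estimated as follows; the 4-byte RGBA framebuffer format is an assumption for illustration, not a detail from the paper.

```python
# Assumed figures: 4K UHD per display, RGBA8 framebuffer.
width, height = 3840, 2160
displays, fps = 3, 60
bytes_per_pixel = 4

pixels_per_second = width * height * displays * fps
bandwidth_gb_s = pixels_per_second * bytes_per_pixel / 1e9

print(f"{pixels_per_second / 1e6:.0f} Mpix/s, ~{bandwidth_gb_s:.1f} GB/s scan-out")
```

This comes to roughly 1.5 Gpix/s, or about 6 GB/s of raw pixel data in total. Since each display sits on its own DisplayPort 1.2a link, each link only has to carry a single 4K60 stream, which is within that standard’s capability.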
Using a proprietary function within PreScan called the Remote Viewer plugin, viewports can easily be placed at a vehicle’s driver seat, or at an actor sitting or standing in the virtual world (Fig. 4). While the GPU load depends heavily on the modeled world, a basic world with a few cars was successfully rendered in real time. The top view of the modeled world is shown in Fig. 5, which also indicates how easily objects, including lighting and shadows, are realized. Note that, as observable in Fig. 4, the stereoscopic view has been disabled in this picture. Furthermore, in the case of a CAVE-type display environment for immersive virtual reality, viewports can be placed on a human model near the human eye-point for both the left eye and the right eye. Theoretically, if the display is compatible with a 3D format such as side-by-side, rendering the left eye and right eye on one display may realize a 3D view, though this will need additional hardware or software configuration.
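The left- and right-eye viewports mentioned above are derived from a single tracked head position. The following sketch shows the usual offset calculation; the 64 mm inter-pupillary distance (IPD) is an assumed typical value, not one from the paper.

```python
import math

def eye_points(head_pos, right_dir, ipd=0.064):
    """Offset the head position by half the IPD along the head's right axis."""
    n = math.sqrt(sum(c * c for c in right_dir))
    right = [c / n for c in right_dir]          # normalized right axis
    half = [0.5 * ipd * c for c in right]
    left_eye = [h - d for h, d in zip(head_pos, half)]
    right_eye = [h + d for h, d in zip(head_pos, half)]
    return left_eye, right_eye

# Head 1.6 m high, facing so that +x is "right":
left, right = eye_points([0.0, 1.6, 0.0], [1.0, 0.0, 0.0])
```

Rendering one viewport from each eye point and packing the two images side by side is what a 3D-capable display would then decode into a stereoscopic view.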
Head tracking can also be integrated through the Matlab and Simulink configuration, where head movements can be specified directly within the simulation loop. While this can be implemented easily for a head-mounted display, a CAVE-type projection requires additional consideration and adjustment calculations. The calculation itself is nothing new, considering it is already implemented in Uni-CAVE [23] and KAVE [24]. However, while this implementation is theoretically possible, it has not yet been tested, which shall be done as future work.
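The adjustment calculation in question is the standard off-axis (generalized perspective) projection used by CAVE software such as Uni-CAVE and KAVE: each fixed screen gets an asymmetric frustum computed from the tracked eye position. The sketch below, with illustrative screen coordinates, computes the near-plane frustum extents from the screen's lower-left (`pa`), lower-right (`pb`), and upper-left (`pc`) corners and the eye point (`pe`); it is not the authors' tested code.

```python
import math

def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _norm(a):
    n = math.sqrt(_dot(a, a))
    return tuple(x / n for x in a)
def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def offaxis_frustum(pa, pb, pc, pe, near=0.1):
    """Return (left, right, bottom, top) at the near plane for eye pe."""
    vr = _norm(_sub(pb, pa))            # screen right axis
    vu = _norm(_sub(pc, pa))            # screen up axis
    vn = _norm(_cross(vr, vu))          # screen normal, toward the viewer
    va, vb, vc = _sub(pa, pe), _sub(pb, pe), _sub(pc, pe)
    d = -_dot(va, vn)                   # perpendicular eye-to-screen distance
    return (_dot(vr, va) * near / d, _dot(vr, vb) * near / d,
            _dot(vu, va) * near / d, _dot(vu, vc) * near / d)

# Eye centered 1 m in front of a 2 m-wide, 1 m-high screen:
l, r, b, t = offaxis_frustum((-1, 0, -1), (1, 0, -1), (-1, 1, -1), (0, 0.5, 0))
# With the eye centered, the frustum is symmetric: l == -r and b == -t.
```

As the head moves off-center, the same formula yields an asymmetric frustum, which is exactly the per-wall adjustment a head-tracked CAVE must recompute every frame.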
5 Conclusion
Against the background of the aging society and the upcoming technology of self-driving cars, this paper proposed the use and process of an immersive virtual reality environment to test Advanced Driver Assistance Systems (ADAS). Following the HAVEit system architecture, this paper proposed a simple driving simulator that can easily be adapted to a CAVE-type immersive virtual reality environment. Moreover, consideration of the elderly was taken into account in the configuration of the virtual reality environment. Controlling the immersiveness is difficult given the elderly’s limited acceptance of virtual reality environments, which will require an easy interface to activate and deactivate the stereoscopic view and head tracking for comfort. Given the ease and low setup requirements of the simulation, and its extensibility to a CAVE-type display environment, we expect that this paper provides some solutions toward developing a safe and reliable ADAS suitable for elderly drivers.
References
Melcher, V., Rauh, S., Diederichs, F., Widlroither, H., Bauer, W.: Take-over requests for automated driving. In: 6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015) and the Affiliated Conferences, Procedia Manufacturing, vol. 3, pp. 2867–2873 (2015)
You, F., Wang, Y., Wang, J., Zhu, X., Hansen, P.: Take-over requests analysis in conditional automated driving and driver visual research under encountering road hazard of highway. In: Nunes, I. (ed.) Advances in Human Factors and Systems Interaction. AHFE 2017, pp. 230–240. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-60366-7_22
Blattner, M.M., Sumikawa, D.A., Greenberg, R.M.: Earcons and icons: their structure and common design principles. Hum.-Comput. Interact. 4(1), 11–44 (1989)
Spence, C., Ho, C.: Tactile and multisensory spatial warning signals for drivers. IEEE Trans. Haptics 1(2), 121–129 (2008)
Lee, J.H., Spence, C.: Assessing the benefits of multimodal feedback on dual-task performance under demanding conditions. In: Proceedings of the 22nd British HCI Group Annual Conference on People and Computers: Culture, Creativity, Interaction, vol. 1, pp. 185–192. British Computer Society (2008)
Liu, Y.C.: Comparative study of the effects of auditory, visual and multimodality displays on drivers’ performance in advanced traveller information systems. Ergonomics 44, 425–442 (2001)
Petermeijer, S., Bazilinskyy, P., Bengler, K., de Winter, J.: Take-over again: investigating multimodal and directional TORs to get the driver back into the loop. Appl. Ergon. 62, 204–215 (2017)
Wiedemann, K., Naujoks, F., Wörle, J., Kenntner-Mabiala, R., Kaussner, Y., Neukum, A.: Effect of different alcohol levels on take-over performance in conditionally automated driving. Accid. Anal. Prev. 115, 89–97 (2018)
Naujoks, F., Höfling, S., Purucker, C., Zeeb, K.: From partial and high automation to manual driving: relationship between non-driving related tasks, drowsiness and take-over performance. Accid. Anal. Prev. 121, 28–42 (2018)
Klauer, S.G., Guo, F., Simons-Morton, B.G., Ouimet, M.C., Lee, S.E., Dingus, T.A.: Distracted driving and risk of road crashes among novice and experienced drivers. New Engl. J. Med. 370, 54–59 (2014)
Zeeb, K., Buchner, A., Schrauf, M.: Is take-over time all that matters? The impact of visual-cognitive load on driver take-over quality after conditionally automated driving. Accid. Anal. Prev. 92, 230–239 (2016)
Maltz, M., Shinar, D.: Eye movements of younger and older drivers. Hum. Factors 41(1), 15–25 (1999)
Mihal, W.L., Barrett, G.V.: Individual differences in perceptual information processing and their relation to automobile accident involvement. J. Appl. Psychol. 61, 229–233 (1976)
Scott-Parker, B., Regt, T.D., Jones, C., Caldwell, J.: The situation awareness of young drivers, middle-aged drivers, and older drivers: same but different? Case Stud. Transp. Policy (2018). https://doi.org/10.1016/j.cstp.2018.07.004
Crundall, D., Underwood, G., Chapman, P.: Driving experience and the functional field of view. Perception 28, 1075–1087 (1999)
Wu, J., Yan, X., Radwan, E.: Discrepancy analysis of driving performance of taxi drivers and non-professional drivers for red-light running violation and crash avoidance at intersection. Accid. Anal. Prev. 91, 1–9 (2016)
Upahita, D.P., Wong, Y.D., Lum, K.M.: Effect of driving inactivity on driver’s lateral positioning control: a driving simulator study. Transp. Res. Part F Traffic Psychol. Behav. 58, 893–905 (2018)
Sivak, M.: The information that drivers use: is it indeed 90% visual? Perception 25, 1081–1089 (1996)
Cruz-Neira, C., Sandin, D.J., DeFanti, T.A., Kenyon, R.V., Hart, J.C.: The CAVE: audio visual experience automatic virtual environment. Commun. ACM 35(6), 64–72 (1992)
Cruz-Neira, C., Sandin, D.J., DeFanti, T.A., Kenyon, R.V., Hart, J.C.: Surround-screen Projection-based virtual reality: the design and implementation of the CAVE. In: Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, New York, pp. 135–142 (1993)
Febretti, A., et al.: CAVE2: a hybrid reality environment for immersive simulation and information analysis. In: The Engineering Reality of Virtual Reality 2013, Proceedings SPIE, vol. 8649, pp. 1–12 (2013)
Juarez, A., Schonenberg, W., Bartneck, C.: Implementing a low-cost CAVE system using the CryEngine2. Entertainment Comput. 1, 157–164 (2010)
Tredinnick, R., Boettcher, B., Smith, S., Solovy, S., Ponto, K.: Uni-CAVE: a Unity3D plugin for non-head mounted VR display systems. In: 2017 IEEE Virtual Reality (VR), pp. 393–394 (2017)
Gonçalves, A., Bermúdez, S.: KAVE: building Kinect based CAVE automatic virtual environments, methods for surround-screen projection management, motion parallax and full-body interaction support. In: Proceedings of the ACM on Human-Computer Interaction, vol. 2, pp. 10:1–10:15 (2018)
Tateyama, Y., Oonuki, S., Sato, S., Ogi, T.: K-Cave demonstration: seismic information visualization system using the OpenCABIN library. In: 18th International Conference on Artificial Reality and Telexistence 2008, pp. 363–364 (2008)
Tateyama, Y., Ogi, T.: Development of applications for multi-node immersive display using OpenCABIN library. In: ASIAGRAPH 2013 Forum in Hawai’i, pp. 67–70 (2013)
Booher, H.R.: Handbook of Human Systems Integration. Wiley-Interscience, Hoboken (2003)
Setlur, P., Wagner, J., Dawson, D., Powers, L.: A hardware-in-the-loop and virtual reality test environment for steer-by-wire system evaluations. In: Proceedings of the 2003 American Control Conference 2003, pp. 2584–2589 (2003)
Chen, B., Peng, H.: Differential-braking-based rollover prevention for sport utility vehicles with human-in-the-loop evaluations. Veh. Syst. Dyn. 4, 359–389 (2001)
Fales, R., Spencer, E., Chipperfield, K., Wagner, F., Kelkar, A.: Modeling and control of a wheel loader with a human-in-the-loop assessment using virtual reality. J. Dyn. Syst. Measur. Control 3, 415–423 (2005)
Hoeger, R., et al.: Highly automated vehicles for intelligent transport: HAVE-it approach. In: 15th World Congress on Intelligent Transport Systems and ITS America’s 2008 Annual Meeting (2008)
HAVEit: The future of driving. Deliverable D61.1 Final Report, September 2011. http://www.haveit-eu.org/
Friedenthal, S., Moore, A., Steiner, R.: A Practical Guide to SysML: The Systems Modeling Language, 3rd edn. Morgan Kaufmann, imprint of Elsevier (2014)
Yun, S., Nishimura, H.: Automated driving system architecture to ensure safe delegation of driving authority. J. Phys. Conf. Ser. 744, 012223 (2016)
Burdea, G.C., Coiffet, P.: Virtual Reality Technology. Wiley-Interscience, London (1994)
Zeltzer, D.: Autonomy, interaction, and presence. Presence Teleoperators Virtual Environ. 1, 127–132 (1992)
Tateyama, Y., et al.: Observation of drivers’ behavior at narrow roads using immersive car driving simulator. In: Proceedings of the 9th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry, pp. 391–496 (2010)
Ito, K., Nishimura, H., Ogi, T.: Motorcycle HUD design of presenting information for navigation system. In: 2018 IEEE International Conference on Consumer Electronics (ICCE), pp. 67–70 (2018)
PreScan Homepage. https://tass.plm.automation.siemens.com/prescan/. Accessed 28 Feb 2019
Suen, S.L., Sen, L.: Mobility options for seniors. In: Transportation in an Aging Society: A Decade of Experience. Transportation Research Board, pp. 97–113 (2004)
McGee, J.S., et al.: Issues for the assessment of visuospatial skills in older adults using virtual environment technology. CyberPsychology Behav. 3, 469–482 (2000)
Kline, D.W.: Optimizing the visibility of displays for older observers. Exp. Aging Res. 20, 11–23 (1994)
Cavanaugh, J.C.: Adult Development and Aging, 3rd edn. Brooks/Cole, Pacific Grove (1994)
Yetka, A.A., Pickwell, L.D., Jenkins, T.C.: Binocular vision: age and symptoms. Ophthalmic Physiol. Opt. 9, 115–120 (1998)
Lee, H.C.: The validity of driving simulator to measure on-road driving performance of older drivers. In: 24th Conference of Australian Institutes of Transport Research, pp. 1–14 (2002)
Mouloua, M., Rinalducci, E., Smither, J., Brill, J.C.: Effect of aging on driving performance. In: Proceedings of the Human Factors and Ergonomics Society 48th Annual Meeting, 48, pp. 253–257 (2004)
Park, G.D., Allen, R.W., Fiorentino, D., Rosenthal, T.J., Cook, M.L.: Simulator sickness scores according to symptom susceptibility, age, and gender for an older driver assessment study. In: Proceedings of the Human Factors and Ergonomics Society 50th Annual Meeting, vol. 50, pp. 2702–2706 (2006)
Allen, R.W., Park, G.D., Fiorentino, D., Rosenthal, T.J., Cook, L.M.: Analysis of simulator sickness as a function of age and gender. In: 9th Annual Driving Simulation Conference Europe (2006)
Acknowledgement
The research was partially supported by Immediate, Ltd. Conflict of Interest: The authors confirm that there is no conflict of interest related to the content of this research.
© 2019 Springer Nature Switzerland AG
Ito, K., Hirose, M. (2019). Immersive Virtual Reality Environment to Test Interface of Advanced Driver Assistance Systems for Elder Driver. In: Yamamoto, S., Mori, H. (eds) Human Interface and the Management of Information. Information in Intelligent Systems. HCII 2019. Lecture Notes in Computer Science(), vol 11570. Springer, Cham. https://doi.org/10.1007/978-3-030-22649-7_13
Print ISBN: 978-3-030-22648-0
Online ISBN: 978-3-030-22649-7