DOI: 10.1145/3382507.3418839

Job Interviewer Android with Elaborate Follow-up Question Generation

Published: 22 October 2020

ABSTRACT

A job interview is a domain that benefits from an android robot's human-like appearance and behaviors. Our goal in this work is to implement a system in which an android plays the role of the interviewer, so that users can practice for a real job interview. The proposed system generates elaborate follow-up questions based on the interviewee's responses. We conducted an interactive experiment comparing the proposed system against a baseline system that asked only fixed-form questions. The proposed system was rated significantly better than the baseline with respect to the impression of the interview and the quality of the questions, and the follow-up questions enhanced the perceived presence of the android interviewer. We observed similar results with a virtual agent interviewer, except that presence was not enhanced.
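
The abstract states that follow-up questions are generated from the interviewee's responses, but it does not describe the mechanism. As a purely illustrative sketch of the general idea, and not the authors' method, the following minimal template-based generator (the templates, names, and keyword heuristic are all hypothetical) conditions a follow-up question on a salient word extracted from the previous answer:

```python
# Purely illustrative sketch of response-conditioned follow-up question
# generation. This is NOT the paper's method (the abstract does not
# specify one); all templates and names here are hypothetical.
import re
from typing import Optional

# Hypothetical follow-up templates with a single keyword slot.
FOLLOW_UP_TEMPLATES = [
    "Could you tell me more about {kw}?",
    "What was the most difficult part of {kw}?",
    "How did {kw} prepare you for this position?",
]

STOPWORDS = {
    "i", "a", "an", "the", "my", "and", "to", "of", "in",
    "was", "it", "that", "for", "with", "on", "at", "during",
}

def extract_keyword(response: str) -> Optional[str]:
    """Crude salience heuristic: pick the longest non-stopword token."""
    tokens = [t for t in re.findall(r"[a-z']+", response.lower())
              if t not in STOPWORDS]
    return max(tokens, key=len, default=None)

def generate_follow_up(response: str, turn: int) -> str:
    """Fill a template with the extracted keyword; fall back to a
    fixed-form question (as in the baseline condition) if none is found."""
    kw = extract_keyword(response)
    if kw is None:
        return "I see. Could you elaborate on that?"
    return FOLLOW_UP_TEMPLATES[turn % len(FOLLOW_UP_TEMPLATES)].format(kw=kw)

if __name__ == "__main__":
    answer = "I led a small team developing a navigation app during my internship."
    print(generate_follow_up(answer, turn=0))
    # -> "Could you tell me more about developing?"
```

In practice the keyword heuristic would be replaced by proper language understanding; the sketch only illustrates how a follow-up can be conditioned on the previous answer rather than drawn from a fixed script.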

Supplemental Material

• 3382507.3418839.mp4 (mp4, 23 MB)
• icmi1143.mp4 (mp4, 45.3 MB)

Published in

ICMI '20: Proceedings of the 2020 International Conference on Multimodal Interaction
October 2020, 920 pages
ISBN: 9781450375818
DOI: 10.1145/3382507

Copyright © 2020 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 22 October 2020

Qualifiers

• research-article

Acceptance Rates

Overall acceptance rate: 453 of 1,080 submissions (42%).
