CHI EA '24 · Work in Progress · https://doi.org/10.1145/3613905.3651996

Exploring the Use of Large Language Model-Driven Chatbots in Virtual Reality to Train Autistic Individuals in Job Communication Skills

Published: 11 May 2024

Abstract

Autistic individuals commonly encounter challenges in communicating with others, which can lead to difficulties in obtaining and maintaining jobs. Job training programs have therefore emphasized training the communication skills of autistic individuals to improve their employability. To support such training, we developed a virtual reality application that features avatars as chatbots powered by Large Language Models (LLMs), such as GPT-3.5 Turbo, and employs speech-based interactions with users. The use of LLM-driven chatbots allows job coaches to create training scenarios for trainees using text prompts. We conducted a preliminary study with three autistic trainees and two job coaches to gather early-stage feedback on the application’s usability and user experience. In the study, the trainee participants were asked to interact with the application in two scenarios involving customer interactions. Our findings indicate that the application shows promise for job communication training. Furthermore, we discuss its user experience from the trainees’ and job coaches’ perspectives.

Figure 1: An overview of the VR chatbot application’s setup. (a) The setup of the VR chatbot application for communication skills training. (b) A screenshot of a scenario where the virtual character is interacting with a trainee.


1 INTRODUCTION

Autism is a lifelong neurodevelopmental condition that significantly impacts individuals in various aspects of life [36]. Autistic individuals commonly encounter challenges in social interactions, communication, and learning, which can manifest in distinct ways [25]. These traits and challenges associated with autism have been found to contribute to a range of educational and employment obstacles faced by individuals on the spectrum [14, 24, 28, 29].

To improve the employability of autistic people, various educational and vocational rehabilitation programs have been developed that provide job training with the help of job coaches [13]. While many of these programs focus on the hands-on skills of autistic trainees, job coaches in these programs also emphasize training their trainees’ communication skills, such as communicating with customers and confidently facing interviews [2, 14, 29]. To train a trainee’s communication skills, job coaches employ a variety of methods, primarily simulated one-on-one practice “chat” sessions covering different scenarios. However, developing new social communication scenarios and continuously conducting practice sessions can be taxing for a job coach, and a trainee always requires a partner to practice such scenarios.

Recent research has begun exploring novel approaches that integrate Artificial Intelligence (AI)-powered chatbots with Virtual Reality (VR) [1, 3, 5, 23, 31] to offer more personalized job training opportunities. In these works, AI-powered chatbots simulate communication experiences in various settings, including virtual role-play for job interview simulations [30, 32], training assistance [33, 35], and emotional support [4, 8, 21, 22]. While VR can provide an immersive and customizable environment that trainees can use and practice in independently [5], training AI models and adapting them to custom scenarios has been a challenging task [39]. The recent introduction of pre-trained Large Language Models (LLMs) has enabled more dynamic chatbot applications, such as ChatGPT [26], that simulate natural conversations based on text-based instructions called ‘prompts’. To address the above challenge, we developed a chatbot application driven by an LLM (the GPT-3.5 Turbo model) to assist autistic individuals in practicing communication skills in VR (Figure 1(b)).

Additionally, while many studies present the perspectives of autistic trainees, few have assessed the views of the job coaches who work with them on using LLM-driven chatbots in VR for job communication skills training. Inspired by previous work, our approach aims to fill this gap by gathering feedback from both autistic trainees and job coaches across different virtual service provider-customer interaction scenarios. To collect early-stage feedback on our approach and how it can assist job coaches in training autistic individuals in job-related communication skills, we conducted a preliminary study involving three autistic trainees and two job coaches. In this study, the trainee participants were asked to interact with avatars in VR environments to explore this technique’s socializing and communication experience in two job scenarios: taking orders in a cafe and handling complaints about item returns in a shop. Meanwhile, the job coach participants were asked to evaluate the trainees’ experience and discuss how this technique could be further improved and integrated into job training practices. We posed two research questions: RQ1: “What is the user experience of the VR chatbot application in job communication skills training?” and RQ2: “What are the job coaches’ perspectives on using the VR chatbot application for job communication skills training?”

The contributions of this work are:

1) We developed a VR chatbot application powered by the GPT-3.5 Turbo model that allows job coaches to create job training scenarios that can simulate an immersive service provider-customer communication experience.

2) We conducted a preliminary study to assess the user experience of our application in two job communication training scenarios from the perspectives of three autistic trainees and two job coaches who work with people with disabilities.


2 VR CHATBOT APPLICATION

To facilitate training the communication skills of autistic trainees, we developed a VR chatbot application powered by GPT-3.5 Turbo. The main goals of this application were to allow job coaches to create scenarios flexibly and efficiently, and to allow trainees to train their communication skills immersively.

2.1 Application

Figure 2: The in-app views of the two scenarios within the VR chatbot application. (a) Scenario 1: Users act as coffee shop owners and take orders from customers. (b) Scenario 2: Users act as jewelry store owners and handle customer complaints regarding item returns.

The VR chatbot application was developed using the Unity Platform version 2022.3.7f1 and Oculus Integration Package version 54.1. This application runs on a Windows laptop and is viewed on a Meta VR headset through Meta Quest Link.

A scene in the application consists of a virtual character and a virtual environment (Figure 1(a)). The virtual character was developed using the Meta Avatars SDK, which supports using a mock-up output to generate life-like facial expressions and eye movements, and aims to deliver an engaging experience to autistic individuals [6, 10, 20]. Additionally, the application uses Oculus Lipsync for Unity to generate the virtual characters’ lip movements from the input speech audio. The virtual environment is presented as a 360-degree photo captured from the real world, which simplifies scenario setup for job coaches while providing users with an immersive experience.

To offer autistic individuals a realistic communication experience with the virtual characters [20], the application accepts only voice-based interaction as input, as shown in Figure 1(a). Users initiate a speech interaction by pressing a button on the VR headset controller. Once the user’s speech is recorded, the application transcribes the voice input using the Meta Voice SDK’s Dictation feature. The transcript is then sent as a text string to the GPT model through the OpenAI API. We used the GPT model because it is a natural language generation engine that can also simulate conversations [34]. Moreover, it allows different scenarios to be implemented easily through initial text prompts (system prompts) [12, 37], where the virtual character’s dialog characteristics can be shaped by the instructions in the prompt text. In the current application version, the initial prompt that sets the scenario is stored in an editable text file within the application folder (see Sec 2.2). We used OpenAI’s “gpt-3.5-turbo-16k-0613” model for our study, selected for its availability at the time the study was conducted, its fast response time, its strong natural language understanding and generation capabilities, and its high maximum token capacity [18]. To help the model understand and remember the dialog context, previous conversation turns are sent to the GPT model along with the transcript of the user’s latest speech input.
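To make this dialog loop concrete, the following is a minimal Python sketch of the request flow. It is an illustrative approximation only (the application itself is built in Unity); the file name and function name are hypothetical, and the use of OpenAI’s Python client is our assumption.

```python
# Illustrative sketch of the dialog loop (not the application's actual
# Unity code). Assumes OpenAI's Python client and an API key in the
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# The initial prompt that sets the scenario, stored in an editable text file.
with open("scenario_prompt.txt", encoding="utf-8") as f:
    system_prompt = f.read()

# Running history: the system prompt first, then alternating user/assistant turns.
messages = [{"role": "system", "content": system_prompt}]

def chatbot_reply(user_transcript: str) -> str:
    """Send the latest transcribed speech plus all prior turns so the
    model retains the dialog context, and return the next reply."""
    messages.append({"role": "user", "content": user_transcript})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-16k-0613",
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply
```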

Next, the response message obtained from the GPT model is converted to speech using the Meta Voice SDK’s Text-to-Speech (TTS) feature, which is powered by a Wit.ai-based service. During the coaching process in particular, the dialog from both the user and the virtual character is displayed as captions on the screen. This allows job coaches who observe the application’s view on an external screen to follow the conversations more clearly, enhancing the coaching experience. For record-keeping and analysis, the conversations are saved to a local text file on the PC. Due to the 20-second maximum audio length of Wit.ai’s TTS feature, we determined through trial and error that a safe maximum text length for the virtual character’s responses is approximately 180 characters. This limitation is included in the initial prompt as a response restriction.
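The length cap and the local conversation record could be implemented as small post-processing steps on each reply. A minimal sketch, assuming a fallback truncation for replies that exceed the prompt-stated limit (the function names and file path here are hypothetical):

```python
MAX_TTS_CHARS = 180  # safe length derived from Wit.ai's 20-second audio limit

def prepare_for_tts(reply: str) -> str:
    """Truncate over-long replies at a word boundary as a fallback,
    in case the model ignores the prompt's length restriction."""
    if len(reply) <= MAX_TTS_CHARS:
        return reply
    return reply[:MAX_TTS_CHARS].rsplit(" ", 1)[0] + "..."

def log_turn(speaker: str, text: str, path: str = "conversation_log.txt") -> None:
    """Append one dialog turn to the local text record kept for analysis."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(f"{speaker}: {text}\n")
```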

2.2 Scenarios

For this preliminary study (Sec 3), we designed two scenarios that simulate communication experiences in real-world service provider-customer interactions. These scenarios were proposed and designed during prior discussions with job coaches from a local job training organization.

In Scenario 1 (see Figure 2(a)), the user assumes the role of a coffee shop owner and takes orders from a virtual character posing as a customer. The expected ending criteria for this scenario are that the participant can ask for the ordered items, accept money, and hand over the prepared order to the customer.

In Scenario 2 (see Figure 2(b)), the user takes on the role of a jewelry store owner and handles a complaint from a virtual customer who wishes to return a necklace without a receipt. In this scenario, we expected the participant to listen to the customer’s request, comfort the customer, or address the customer’s concern.

To set up these two scenarios, we built two distinct scenes in our VR chatbot application: a coffee shop and a market. In both scenes, a virtual character stands at the center, facing the user, surrounded by a 360-degree photo that serves as the environment. The two scenes differ in their initial prompts, the virtual characters’ appearance and voice, and the background image. The initial prompts for both scenarios were refined through trial and error.

Scenario 1 used a virtual character with a preset speech voice named “Prospector” from the Voice SDK’s TTS feature. Its backdrop was a 360-degree image of a coffee shop interior. We defined its initial prompt as “Act as a cafe customer. You are ordering something from a coffee shop. Make your sentence as oral as possible. Do not exceed 180 characters. Now let’s start acting.”

Scenario 2 used a virtual character with a preset character speech voice named “Rebecca” from the TTS feature. Its backdrop was a 360-degree image of a marketplace with small stalls. We set its initial prompt as “Act as a complaining customer. You are returning a necklace without a receipt. Make your sentence as oral as possible. Do not exceed 180 characters. Now let’s start acting.”
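Because each scene differs only in the initial prompt, the character’s voice preset, and the backdrop, a scenario could plausibly be captured in a single editable configuration file, in line with the editable prompt file described in Sec 2.1. A hypothetical sketch follows; the JSON structure and field names are our illustration, not the application’s actual format (only the prompt is confirmed to be stored in an editable text file).

```python
import json

# Hypothetical scenario definition combining the three per-scene settings.
scenario_1 = {
    "name": "Coffee shop order",
    "voice_preset": "Prospector",       # Voice SDK TTS preset
    "backdrop": "coffee_shop_360.jpg",  # 360-degree environment photo
    "initial_prompt": (
        "Act as a cafe customer. You are ordering something from a "
        "coffee shop. Make your sentence as oral as possible. Do not "
        "exceed 180 characters. Now let's start acting."
    ),
}

with open("scenario_1.json", "w", encoding="utf-8") as f:
    json.dump(scenario_1, f, indent=2)
```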


3 METHODOLOGY

To explore and evaluate the user experience of our approach, we conducted a preliminary study that involved autistic trainees and job coaches.

3.1 Participants

We recruited three job trainees (TP1-TP3) aged 26 to 63 (Mean = 38.67, SD = 21.08; all male) and two job coaches (JCP1, JCP2) with a mean age of 47 (SD = 8.49; 1 male, 1 female) through word of mouth from a local job training organization that works with people with disabilities. One job coach had worked with two of the trainees for six months, while the other had worked with the third trainee for the same period, providing training for their employment. All three trainee participants were diagnosed with autism, and all could communicate, read, comprehend interview questions, and perform the required training tasks using our proposed VR application. Among the trainee participants, TP3 was the only one who reported previous experience with VR, as well as prior job and job training experience. Regarding the two job coach participants, JCP1 had worked as a job coach for 15 years and had coached 15 individuals with intellectual disabilities; JCP2 had worked as a job coach for 2 years and had coached 8 individuals with intellectual disabilities. Each participant was compensated 30 USD for their participation.

3.2 Apparatus

The study was conducted in a meeting room at the job training organization. To minimize the risk of VR-related motion sickness, participants were advised to remain seated during the VR experience. The equipment used included a Meta Quest headset, a 15-inch Windows 11 laptop, and a 55-inch television. The previously described scenarios were employed throughout the study. During the sessions, the VR headset was connected to the laptop, which mirrored the headset’s view. This view was also projected onto the television, allowing job coaches to closely monitor and guide the trainee’s experience. The laptop’s screen was video-recorded and participants’ conversations were voice-recorded. Additionally, transcripts of conversations between participants and chatbots were documented and stored on the laptop.

3.3 Study Procedure

The study was approved by the ethics review board of the authors’ institution. All five participants (trainees and job coaches) participated in the same initial study session (pre-study questionnaires and VR experiences), followed by a focus group session with the trainees and an interview with the job coaches. This format was adopted to allow job coaches to assist and instruct the trainees as required.

The trainees’ study session comprised a pre-study survey, a VR experience study, and a focus group. After giving an introduction and having the participants sign the informed consent form, we asked them to complete the pre-study questionnaire, which inquired about their demographic information, professional background, and previous experience with VR. Next, the trainees sequentially experienced the two virtual customer interaction scenarios in VR, starting with Scenario 1 followed by Scenario 2. This order was chosen based on complexity, starting with the simpler scenario featuring a fixed process and progressing to the more complex one without specific handling procedures. While each trainee participant was experiencing a scenario, both job coaches stayed in the meeting room to observe and guide the trainee. The trainees were asked to “communicate” with the customers as they would in a typical practice session during their communication training. The job coaches could thus stand near their trainees, watch the trainees’ views within the application as projected on the television, and provide verbal instructions. There was a 5-minute break after each scenario. Finally, a focus group was conducted with all the trainees, in which we aimed to collect feedback on three key aspects: the qualitative user experience with the VR chatbot application, suggestions for improving it, and their expectations for using it in job communication skills training. Overall, the trainees’ study session took no more than 60 minutes to complete.

The study session for the job coaches included four components: a pre-study survey, observation during the trainees’ VR experience study, observation during the trainees’ focus group, and a post-study interview. Similar to the trainees’ session, after giving an introduction and having the job coach participants sign the informed consent form, we asked them to complete a survey collecting their demographic information and their expectations about communication skills training via chatbots in VR. Subsequently, as stated above, the job coach participants joined the experiment session to observe and guide the trainee participants. They then attended the focus group, where they listened to the trainee participants’ responses. After all the trainees finished their study sessions, we conducted a semi-structured post-study interview with the job coaches. Before the interview started, the job coaches had the option to personally experience the two virtual customer interaction scenarios in VR to better understand the simulation experience. During the interview, the job coaches were asked to provide feedback on several areas: the trainees’ job training experience, suggested improvements for the current application, and their visions for using this approach in future communication skills training programs. The interview took less than 30 minutes to complete.


4 FINDINGS

We conducted a thematic analysis [11] of the conversation transcripts between the chatbot application and the participants, the observations recorded during the study, and the feedback collected from the participants. Through this analysis, we identified several themes reflecting the participants’ experiences with this approach.

4.1 Role-Playing Simulations Using the VR Chatbot Application

According to our analysis of the transcripts, our LLM-driven chatbot application demonstrated the ability to understand the context, propose creative conversations based on the scenario, and keep the conversation going. These abilities benefited the job training-oriented role-playing simulations in our study. For example, in Scenario 1, the chatbot proposed a specific order request, “Can I get a medium latte to go with almond milk, please? Oh, and a blueberry muffin too, if you have any left!”, in response to an initial prompt that provided little context (see Sec 2.2). In Scenario 2, the chatbot showed it could adapt the scenario in response to user inputs. When TP2 tried to verify whether the item had been bought from their store, the chatbot adapted to the context to keep the conversation going: “Yes, I’m positive! I remember the salesperson recommending it to me. But now it’s broken and I want my money back, even without a receipt!”

Beyond this creativity, the chatbot can also generate emotionally charged responses, creating conversations that more closely resemble real-world interactions. This was particularly evident in Scenario 2: when TP1 mentioned that their “manager” was too busy to offer assistance, the virtual avatar responded in an agitated manner: “Of course, your manager is ‘busy’. This is just adding to the terrible service I’ve received. I will be taking my business elsewhere from now on.”

4.2 User Experience and Effectiveness of Using the Application for Communication Skills Training

In our study, the trainee participants found their overall experience with the VR chatbot application engaging and comfortable. They reported no severe motion sickness during or after using the application, which might be attributed to their interacting with the chatbots while seated. Additionally, they were impressed by the level of immersion in the scenario environments. “I felt like being in the store,” said TP1.

During the focus group, participants gave positive feedback on the clarity of the conversations with the virtual character and on how training with the application could promote thinking and enhance communication skills. The trainee participants noted that the ability to pause between conversation turns allowed them to think carefully before responding to “customers”, offering a unique training experience compared to real-world communication scenarios. As TP3 put it, “I was trying to handle the situation, in terms of what I was going to talk to her (the chatbot)”, explaining his pause before replying to the customer.

Feedback from the job coaches affirmed the effectiveness of the VR chatbot application, and both were enthusiastic about incorporating this technology into their training sessions. In particular, JCP2 noted the inherent challenges of communication skills training and the promise of the VR chatbot application in addressing them: “As every situation is very different, so it is difficult to handle these situations like diverting to a manager or what else can be done. In the real world, you have to be on your toes, so the dynamic nature of this tool is going to be very helpful.”

Additionally, JCP1 noted that the application enabled trainees to independently practice their communication skills. He emphasized that the VR environment helps autistic trainees concentrate more effectively by reducing distractions from their surroundings. Both job coaches agreed that generating transcripts and recording sessions during training was useful, as it allowed for a more effective evaluation of the trainees’ progress and outcomes.

4.3 Expectations of Job Coaches and Proposed Improvements

In discussing the application’s setup, JCP1 emphasized the need for simplification to accommodate job coaches who may not be tech-savvy. JCP1 suggested that implementing a more user-friendly scenario setup, perhaps through a PC-based integrated app, would be beneficial. This would enable coaches to develop and evaluate training scenarios without the need to interact directly with a VR device. JCP1 stated, “I hope that we, as job coaches, could develop a training scenario easily using this tool, even for those who are unfamiliar with VR and specific text-based instructions (like GPT prompts).”

Meanwhile, the job coaches highlighted that the training process with the application could be optimized. For instance, in Scenario 2, it was observed that when TP2 and TP3 were undergoing the scenario, the “customers” consistently complained about their shopping experience, leading the conversations to deviate from the intended direction until the job coaches stepped in to guide the trainees. Thus, we propose incorporating specific ending criteria when designing scenarios via initial text prompts. For example, in Scenario 2, an appropriate ending criterion could be the “customers” successfully returning their products, as sketched below.
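As a sketch of this proposal, an explicit ending criterion could simply be appended to the scenario’s initial prompt; the wording below is our illustration, not a tested prompt:

```python
# Hypothetical extension of the Scenario 2 prompt with an ending criterion.
base_prompt = (
    "Act as a complaining customer. You are returning a necklace "
    "without a receipt. Make your sentence as oral as possible. "
    "Do not exceed 180 characters."
)
ending_criterion = (
    "Once the shop owner offers an acceptable resolution, such as a "
    "store credit or an exchange, accept it, thank them, and say goodbye "
    "to end the conversation."
)
initial_prompt = f"{base_prompt} {ending_criterion} Now let's start acting."
```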


5 DISCUSSION

To answer RQ1, as indicated by the trainee participants, the VR chatbot application creates a sense of presence in the given job environment and simulates a realistic and engaging service provider-customer communication experience in the scenarios. These comments might be related to the adoption of LLM-driven chatbots and VR in the application. Additionally, the job coach participants noted that VR helps autistic trainees focus better on the training tasks, as autistic individuals are often easily distracted by their surroundings in typical training processes [7].

Meanwhile, the trainee participants also agreed on the application’s effectiveness for practicing communication skills. They found the ability to pause and consider their responses to the virtual character, who acted as a “customer,” beneficial for their communication skills training, as it prompted them to think before speaking.

To answer RQ2, according to feedback from job coaches, there are inherent difficulties in traditional communication skills training. One of the significant challenges is coaching trainees to handle the various situations that arise in a job scenario. In this context, the adoption of LLM-driven chatbots in the application could be particularly beneficial. It can dynamically generate virtual customer responses and reactions based on a trainee’s input and the given scenario design prompt, which can simplify the communication training setup and process for job coaches.

Moreover, the job coaches expressed expectations for enhancing the coaching experience with our application. In addition to the desire to simplify the setup process for VR training, also noted in previous studies in similar and other contexts [9, 17, 27], our observations and the job coaches’ feedback highlighted the importance of specific prompts in scenario design. In this preliminary study, two simple prompts were used to set up the two job scenarios. This approach aimed to leave more room for exploration in using the application for job communication training and to gather insights from autistic trainees and job coaches. However, due to the natural language generation characteristics of LLM-driven chatbots, a prompt without a specified context is more likely to generate unrealistic responses to the trainees’ input and fail to fulfill the intended purpose of a training scenario [16, 19]. This issue was particularly evident in Scenario 2, which had ambiguous requirements in its prompt design. Therefore, when using LLM-driven chatbots in job communication training, especially with autistic individuals, job coaches should use more explicit prompts [37] that include clear requirements and specific ending criteria to achieve better training outcomes.

Additionally, although this study was conducted in a controlled environment, where job coaches were present to observe and guide the trainee participants, we did not place specific restrictions on the agents’ behaviors. This was mainly because, prior to our study, the job coaches expressed interest in exploring everyday service provider-customer interactions through this simulation. However, the job coaches highlighted that when these chatbots are used in more independent training sessions, they may need to guide trainees in managing unrealistic responses (often referred to as “hallucinations”) from the LLMs [16]. Meanwhile, it is also crucial for them to protect trainees from potential ethical issues and ensure their overall well-being during the training process [15, 38].


6 LIMITATIONS AND FUTURE WORK

This preliminary study included only three trainee participants and two job coaches, as several other participants withdrew due to personal reasons. Furthermore, recruiting participants of this particular demographic (autistic trainees and job coaches who work with them) has been challenging due to similar personal constraints. To address this, we plan to collaborate with more local job training organizations, thereby enlarging our participant pool.

Meanwhile, this study was conducted in a controlled environment over a two-hour session. To further explore the effects of our approach, we plan to conduct longitudinal studies in which job coaches create scenarios tailored to their needs using our system and integrate this technique into regular communication skills training sessions. These studies will enable us to gather more comprehensive feedback.

In terms of the avatar experience, the avatars can currently converse robustly with participants, but their physical expressions are limited to lip movements and blinking. Our future research will therefore investigate integrating body language expressions and interactions with the virtual environment, among other aspects, to enable more immersive experiences. We believe these enhancements will also help trainees learn to identify others’ non-verbal expressions and interactions.

As a major future research direction, based on our observations and insights from this study, we are considering further incorporating the role of job coaches into the communication skills training program facilitated by the application. We aim to develop tools for job coaches to intervene in the training experience, beyond observing and instructing trainees during the training process. Specifically, we aim to enable job coaches to guide the chatbot’s behavior and eliminate unrealistic responses in subsequent interactions by sending assistant prompts to the GPT models, as sketched below. This proposed feature could also be beneficial in other communication skills training scenarios, such as interview training.
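A minimal sketch of how such an intervention tool might work, reusing the running message history from the dialog-loop sketch in Sec 2.1: the coach’s instruction is injected as an additional system message so that it steers the chatbot’s subsequent replies. The function name and wording are hypothetical.

```python
def coach_intervene(messages: list, instruction: str) -> None:
    """Append a mid-session steering instruction from the job coach to
    the history that is resent with every request."""
    messages.append({
        "role": "system",
        "content": f"Coach instruction: {instruction}",
    })

# Example: rein in an unrealistic or escalating virtual customer.
coach_intervene(messages, "Be calmer, and accept a store-credit offer if one is made.")
```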


7 CONCLUSION

In this work, we developed a speech-based VR application that leverages virtual characters as chatbots powered by LLMs, specifically the GPT-3.5 Turbo model. With this tool, job coaches can easily create scenarios for training the job-related communication skills of autistic individuals. To evaluate its user experience, we conducted a preliminary study involving three autistic trainees and two job coaches, in which the trainee participants interacted with the VR chatbot application in two scenarios (taking orders in a coffee shop and handling item return complaints in a market) while the job coach participants observed and guided their experience. Based on our observations and the participants’ feedback, our results suggest both the potential and the challenges of the VR chatbot application in job communication skills training. Furthermore, we identified its current utility and areas for future enhancement to optimize the job training experience.


References

  1. Ashwaq Zaini Amat, Michael Breen, Spencer Hunt, Devon Wilson, Yousaf Khaliq, Nathan Byrnes, Daniel J. Cox, Steven Czarnecki, Cameron L. Justice, Deven A. Kennedy, Tristan C. Lotivio, Hunter K. McGee, Derrick M. Reckers, Justin W. Wade, Medha Sarkar, and Nilanjan Sarkar. 2021. Collaborative Virtual Environment to Encourage Teamwork in Autistic Adults in Workplace Settings. In Universal Access in Human-Computer Interaction. Design Methods and User Experience (Lecture Notes in Computer Science), Margherita Antona and Constantine Stephanidis (Eds.). Springer International Publishing, Cham, 339–348. https://doi.org/10.1007/978-3-030-78092-0_22
  2. Pinaki Prasanna Babar, Mike Barry, and Roshan L Peiris. 2023. Understanding Job Coaches’ Perspectives on Using Virtual Reality as a Job Training Tool for Training People with Intellectual Disabilities. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (CHI EA ’23). Association for Computing Machinery, New York, NY, USA, Article 300, 7 pages. https://doi.org/10.1145/3544549.3585915
  3. Morris D. Bell and Andrea Weinstein. 2011. Simulated Job Interview Skill Training for People with Psychiatric Disability: Feasibility and Tolerability of Virtual Reality Training. Schizophrenia Bulletin 37, Suppl. 2 (2011), S91–S97. https://doi.org/10.1093/schbul/sbr061
  4. Jackylyn Beredo, Carlo Migel Bautista, Macario Cordel, and Ethel Ong. 2021. Generating Empathetic Responses with a Pre-trained Conversational Model. In Text, Speech, and Dialogue, Kamil Ekštein, František Pártl, and Miloslav Konopík (Eds.). Springer International Publishing, Cham, 147–158. https://doi.org/10.1007/978-3-030-83527-9_13
  5. Lal Bozgeyikli, Evren Bozgeyikli, Andrew Raij, Redwan Alqasemi, Srinivas Katkoori, and Rajiv Dubey. 2017. Vocational Rehabilitation of Individuals with Autism Spectrum Disorder with Virtual Reality. ACM Transactions on Accessible Computing 10, 2 (April 2017), 5:1–5:25. https://doi.org/10.1145/3046786
  6. Lal "Lila" Bozgeyikli, Evren Bozgeyikli, Srinivas Katkoori, Andrew Raij, and Redwan Alqasemi. 2018. Effects of Virtual Reality Properties on User Experience of Individuals with Autism. ACM Transactions on Accessible Computing 11, 4, Article 22 (Nov. 2018), 27 pages. https://doi.org/10.1145/3267340
  7. Federica Caruso, Sara Peretti, Vita Santa Barletta, Maria Chiara Pino, and Tania Di Mascio. 2023. Recommendations for Developing Immersive Virtual Reality Serious Game for Autism: Insights From a Systematic Literature Review. IEEE Access 11 (2023), 74898–74913. https://doi.org/10.1109/ACCESS.2023.3296882
  8. Jacky Casas, Timo Spring, Karl Daher, Elena Mugellini, Omar Abou Khaled, and Philippe Cudré-Mauroux. 2021. Enhancing Conversational Agents with Empathic Abilities. In Proceedings of the 21st ACM International Conference on Intelligent Virtual Agents (IVA ’21). Association for Computing Machinery, New York, NY, USA, 41–47. https://doi.org/10.1145/3472306.3478344
  9. Vanny Chao and Roshan Peiris. 2022. College Students’ and Campus Counselors’ Attitudes Toward Teletherapy and Adopting Virtual Reality (Preliminary Exploration) for Campus Counseling Services. In Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’22). Association for Computing Machinery, New York, NY, USA, Article 75, 4 pages. https://doi.org/10.1145/3517428.3550378
  10. Yufang Cheng and Jun Ye. 2010. Exploring the social competence of students with autism spectrum conditions in a collaborative virtual learning environment – The pilot study. Computers & Education 54, 4 (2010), 1068–1077. https://doi.org/10.1016/j.compedu.2009.10.011
  11. Victoria Clarke and Virginia Braun. 2017. Thematic analysis. The Journal of Positive Psychology 12, 3 (2017), 297–298.
  12. Yubing Gao, Wei Tong, Edmond Q. Wu, Wei Chen, GuangYu Zhu, and Fei-Yue Wang. 2023. Chat With ChatGPT on Interactive Engines for Intelligent Driving. IEEE Transactions on Intelligent Vehicles 8, 3 (March 2023), 2034–2036. https://doi.org/10.1109/TIV.2023.3252571
  13. Darren Hedley, Mirko Uljarević, Lauren Cameron, Santoshi Halder, Amanda Richdale, and Cheryl Dissanayake. 2017. Employment programmes and interventions targeting adults with autism spectrum disorder: A systematic review of the literature. Autism 21, 8 (Nov. 2017), 929–941. https://doi.org/10.1177/1362361316661855
  14. Dawn Hendricks. 2010. Employment and adults with autism spectrum disorders: Challenges and strategies for success. Journal of Vocational Rehabilitation 32, 2 (Jan. 2010), 125–134. https://doi.org/10.3233/JVR-2010-0502
  15. Madhan Jeyaraman, Swaminathan Ramasubramanian, Sangeetha Balaji, Naveen Jeyaraman, Arulkumar Nallakumarasamy, and Shilpa Sharma. 2023. ChatGPT in action: Harnessing artificial intelligence potential and addressing ethical challenges in medicine, education, and scientific research. World Journal of Methodology 13, 4 (Sept. 2023), 170–178. https://doi.org/10.5662/wjm.v13.i4.170
  16. Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of Hallucination in Natural Language Generation. ACM Computing Surveys 55, 12, Article 248 (March 2023), 38 pages. https://doi.org/10.1145/3571730
  17. Qiao Jin, Yu Liu, Svetlana Yarosh, Bo Han, and Feng Qian. 2022. How Will VR Enter University Classrooms? Multi-stakeholders Investigation of VR in Higher Education. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 563, 17 pages. https://doi.org/10.1145/3491102.3517542
  18. Katikapalli Subramanyam Kalyan. 2024. A survey of GPT-3 family large language models including ChatGPT and GPT-4. Natural Language Processing Journal 6 (2024), 100048. https://doi.org/10.1016/j.nlp.2023.100048
  19. Enkelejda Kasneci, Kathrin Sessler, Stefan Küchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, Georg Groh, Stephan Günnemann, Eyke Hüllermeier, Stephan Krusche, Gitta Kutyniok, Tilman Michaeli, Claudia Nerdel, Jürgen Pfeffer, Oleksandra Poquet, Michael Sailer, Albrecht Schmidt, Tina Seidel, Matthias Stadler, Jochen Weller, Jochen Kuhn, and Gjergji Kasneci. 2023. ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences 103 (2023), 102274. https://doi.org/10.1016/j.lindif.2023.102274
  20. Evdokimos I. Konstantinidis, Magda Hitoglou-Antoniadou, Andrej Luneski, Panagiotis D. Bamidis, and Maria M. Nikolaidou. 2009. Using affective avatars and rich multimedia content for education of children with autism. In Proceedings of the 2nd International Conference on PErvasive Technologies Related to Assistive Environments (PETRA ’09). Association for Computing Machinery, New York, NY, USA, Article 58, 6 pages. https://doi.org/10.1145/1579114.1579172
  21. Aislyn PC Lin, Charles V Trappey, Chi-Cheng Luan, Amy JC Trappey, and Kevin LK Tu. 2021. A Test Platform for Managing School Stress Using a Virtual Reality Group Chatbot Counseling System. Applied Sciences 11, 19 (2021), 9071.
  22. Atsuko Matsumoto, Takeshi Kamita, Yukari Tawaratsumida, Ayako Nakamura, Harumi Fukuchimoto, Yuko Mitamura, Hiroko Suzuki, Tsunetsugu Munakata, and Tomoo Inoue. 2021. Combined Use of Virtual Reality and a Chatbot Reduces Emotional Stress More Than Using Them Separately. JUCS - Journal of Universal Computer Science 27, 12 (2021), 1371–1389. https://doi.org/10.3897/jucs.77237
  23. Stefan Michalski, Caroline Ellison, Ancret Szpak, and Tobias Loetscher. 2021. Vocational Training in Virtual Environments for People With Neurodevelopmental Disorders: A Systematic Review. Frontiers in Psychology 12 (July 2021). https://doi.org/10.3389/fpsyg.2021.627301
  24. Christopher A. Morgan and Byron Wine. 2018. Evaluation of Behavior Skills Training for Teaching Work Skills to a Student with Autism Spectrum Disorder. Education and Treatment of Children 41, 2 (2018), 223–232. https://www.jstor.org/stable/26535265
  25. National Center on Birth Defects and Developmental Disabilities (NCBDDD) and Centers for Disease Control and Prevention (CDC). 2023. Signs & Symptoms | Autism Spectrum Disorder (ASD). https://www.cdc.gov/ncbddd/autism/signs.html
  26. OpenAI. 2022. Introducing ChatGPT. https://openai.com/blog/chatgpt
  27. Hyanghee Park, Daehwan Ahn, and Joonhwan Lee. 2023. Towards a Metaverse Workspace: Opportunities, Challenges, and Design Implications. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 503, 20 pages. https://doi.org/10.1145/3544548.3581306
  28. Anne M. Roux, Paul T. Shattuck, Jessica E. Rast, Julianna A. Rava, and Kristy A. Anderson. 2015. National Autism Indicators Report: Transition into Young Adulthood. Life Course Outcomes Research Program, A.J. Drexel Autism Institute, Philadelphia, PA.
  29. Melissa Scott, Ben Milbourn, Marita Falkmer, Melissa Black, Sven Bölte, Alycia Halladay, Matthew Lerner, Julie Lounds Taylor, and Sonya Girdler. 2019. Factors impacting employment for people with autism spectrum disorder: A scoping review. Autism 23, 4 (May 2019), 869–901. https://doi.org/10.1177/1362361318787789
  30. Matthew J Smith, Emily J Ginger, Katherine Wright, Michael A Wright, Julie Lounds Taylor, Laura Boteler Humm, Dale E Olsen, Morris D Bell, and Michael F Fleming. 2014. Virtual reality job interview training in adults with autism spectrum disorder. Journal of Autism and Developmental Disorders 44 (2014), 2450–2463.
  31. Matthew J. Smith, Emily J. Ginger, Michael Wright, Katherine Wright, Laura Boteler Humm, Dale Olsen, Morris D. Bell, and Michael F. Fleming. 2014. Virtual Reality Job Interview Training for Individuals with Psychiatric Disabilities. The Journal of Nervous and Mental Disease 202, 9 (Sept. 2014), 659–667. https://doi.org/10.1097/NMD.0000000000000187
  32. Iulia Stanica, Maria-Iuliana Dascalu, Constanta Nicoleta Bodea, and Alin Dragos Bogdan Moldoveanu. 2018. VR Job Interview Simulator: Where Virtual Reality Meets Artificial Intelligence for Education. In 2018 Zooming Innovation in Consumer Technologies Conference (ZINC). 9–12. https://doi.org/10.1109/ZINC.2018.8448645
  33. Natalia Stewart Rosenfield, Kathleen Lamkin, Jennifer Re, Kendra Day, LouAnne Boyd, and Erik Linstead. 2019. A virtual reality system for practicing conversation skills for children with autism. Multimodal Technologies and Interaction 3, 2 (2019), 28.
  34. Viriya Taecharungroj. 2023. “What Can ChatGPT Do?” Analyzing Early Reactions to the Innovative AI Chatbot on Twitter. Big Data and Cognitive Computing 7, 1 (2023), 35. https://doi.org/10.3390/bdcc7010035
  35. Cheryl Y Trepagnier, Dale E Olsen, Laura Boteler, and Corinne A Bell. 2011. Virtual conversation partner for adults with autism. Cyberpsychology, Behavior, and Social Networking 14, 1-2 (2011), 21–27.
  36. Paul Whiteley, Kevin Carr, and Paul Shattock. 2019. Is Autism Inborn And Lifelong For Everyone? Neuropsychiatric Disease and Treatment 15 (Oct. 2019), 2885–2891. https://doi.org/10.2147/NDT.S221901
  37. J.D. Zamfirescu-Pereira, Richmond Y. Wong, Bjoern Hartmann, and Qian Yang. 2023. Why Johnny Can’t Prompt: How Non-AI Experts Try (and Fail) to Design LLM Prompts. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23). Association for Computing Machinery, New York, NY, USA, 1–21. https://doi.org/10.1145/3544548.3581388
  38. Jianlong Zhou, Heimo Müller, Andreas Holzinger, and Fang Chen. 2023. Ethical ChatGPT: Concerns, Challenges, and Commandments. arXiv:2305.10646 [cs.AI]
  39. Yue-ting Zhuang, Fei Wu, Chun Chen, and Yun-he Pan. 2017. Challenges and opportunities: from big data to knowledge in AI 2.0. Frontiers of Information Technology & Electronic Engineering 18, 1 (Jan. 2017), 3–14. https://doi.org/10.1631/FITEE.1601883
