ABSTRACT
Large language models are increasingly mediating, modifying, and even generating messages for users, but the receivers of these messages may not be aware of the involvement of AI. To examine this emerging direction of AI-Mediated Communication (AI-MC), we investigate people’s perceptions of AI-written messages and analyze how such perceptions change with the interpersonal emphasis of a given message. We conducted both large-scale surveys and in-depth interviews to investigate how a diverse set of factors influences people’s trust in AI-mediated writing of emails. We found that people’s trust in email writers decreased when they were told that AI was involved in the writing process. Surprisingly, trust increased when AI was used for writing more interpersonal emails (as opposed to more transactional ones). Our study provides insights into how people perceive AI-MC and offers practical design implications for building AI-based products that aid human interlocutors in communication.
Will AI Console Me when I Lose my Pet? Understanding Perceptions of AI-Mediated Email Writing