In companies we trust: consumer adoption of artificial intelligence services and the role of trust in companies and AI autonomy

Darius-Aurel Frank (Department of Management, Aarhus BSS – School of Business and Social Sciences, Aarhus University, Aarhus, Denmark)
Lina Fogt Jacobsen (Department of Management, Aarhus BSS – School of Business and Social Sciences, Aarhus University, Aarhus, Denmark)
Helle Alsted Søndergaard (Department of Management, Aarhus BSS – School of Business and Social Sciences, Aarhus University, Aarhus, Denmark)
Tobias Otterbring (Department of Management, University of Agder, Kristiansand, Norway)

Information Technology & People

ISSN: 0959-3845

Article publication date: 30 May 2023

Issue publication date: 18 December 2023

Abstract

Purpose

Companies utilize increasingly capable artificial intelligence (AI) technologies to deliver modern services across a range of consumer service industries. AI autonomy, however, sparks skepticism among consumers, leading to a decrease in their willingness to adopt AI services. This raises the question of whether consumer trust in companies can overcome consumer reluctance in decisions to adopt high (vs low) autonomy AI services.

Design/methodology/approach

Using a representative survey (N = 503 consumers corresponding to N = 3,690 observations), this article investigated the link between consumer trust in a company and consumers' intentions to adopt high (vs low) autonomy AI services from the company across 23 consumer service companies accounting for six distinct service industries.

Findings

The results confirm a significant and positive relationship between consumer trust in a company and consumers' intentions to adopt AI services from the same company. AI autonomy, however, moderates this relationship, such that high (vs low) AI autonomy weakens the positive link between trust in a company and AI service adoption. This finding replicates across all 23 companies and the associated six industries and is robust to the inclusion of several theoretically important control variables.

Originality/value

The current research contributes to the recent stream of AI research by drawing attention to the interplay between trust in companies and adoption of high autonomy AI services, with implications for the successful deployment and marketing of AI services.

Citation

Frank, D.-A., Jacobsen, L.F., Søndergaard, H.A. and Otterbring, T. (2023), "In companies we trust: consumer adoption of artificial intelligence services and the role of trust in companies and AI autonomy", Information Technology & People, Vol. 36 No. 8, pp. 155-173. https://doi.org/10.1108/ITP-09-2022-0721

Publisher: Emerald Publishing Limited

Copyright © 2023, Darius-Aurel Frank, Lina Fogt Jacobsen, Helle Alsted Søndergaard and Tobias Otterbring

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction

Artificial intelligence (AI) is fundamentally transforming the consumer service industry, largely driven by the goal of delivering unique value for businesses, consumers and society at large (Davenport et al., 2020; Huang and Rust, 2018). The range of consumer services that can nowadays be provided by AI technology spans a wide variety of service industries, including self-driving vehicles (Huang and Qian, 2021), service robots (Frank and Otterbring, 2023) and screening systems in healthcare (Frank et al., 2021a, b). Tapping into the potential of AI services has become a top priority for many companies (Huang and Rust, 2021), with leaders looking beyond traditional applications such as personalizing customer experiences (Cabrera-Sánchez et al., 2021) and chatting with customers (Pillai et al., 2023) toward more advanced applications in which AI automates tasks and makes decisions (Sharma et al., 2022a, b). However, whereas the former type of AI service is fairly well accepted (Longoni and Cian, 2022), the latter is commonly rejected by consumers (e.g. André et al., 2018; Carmon et al., 2019; Malodia et al., 2022).

AI autonomy is defined as “the ability of the AI technology to perform tasks derived from humans without specific human interventions” (Hu et al., 2021, p. 2). This capability allows AI to adapt to its environment, enabling AI services to become proactive based on what has been learned from past interactions with customers as well as from observations of the surrounding environment (Beer et al., 2014; Wen et al., 2022). To illustrate the difference AI autonomy makes, AI services low in autonomy are capable of aiding consumer decisions through personalized recommendations (e.g. De Bruyn et al., 2020; Gursoy et al., 2019; Kim et al., 2021), such as when the online fashion retailer Zalando uses AI to recommend consumers the right size and style of clothes (Marr, 2019) and when Amazon uses AI to personalize product offerings based on shoppers' past purchase behavior (Morgan, 2018). By contrast, AI services high in autonomy are capable of making decisions for the consumer, such as when AI shopping assistants automatically reorder frequently used goods (e.g. ink cartridges; Klaus and Zaichkowsky, 2022) and when self-driving cars transport passengers autonomously (Casidy et al., 2021; Hegner et al., 2019).

When consumers face novelties as in the case of AI services, trust typically acts as a key determinant of consumers' adoption decisions (e.g. Frank et al., 2022; Gefen et al., 2003; Hasan et al., 2021). In the remainder of this article, we define trust as the willingness of trustors (e.g. consumers) to make themselves vulnerable to a trustee (e.g. a company) based on the expectation that the trustee will perform a desired action (e.g. providing a service that meets or exceeds expectations) important to the trustors (Mayer et al., 1995). In this regard, however, it is important to make a distinction between trust in technology and trust in a company, because AI services are delivered through AI technology, but they are developed, deployed and managed by companies. As such, consumers' trust in AI services is not solely influenced by the specific characteristics of the technology (McKnight et al., 2011), but also by their relationship with the company that offers the AI service. This link between consumers' trust in a company and their decisions to use services from the company is founded on commitment–trust theory (Morgan and Hunt, 1994), according to which the same AI service offered by two different companies would likely result in different outcomes in terms of adoption intentions due to differences in customers' existing relationships with the two companies (Lin et al., 2023).

Building on this notion and recent work that highlights companies' brands as a contingent source of trust in consumer preferences for autonomous vehicles (Eggers and Eggers, 2022), the present research investigates the relationship between consumers' trust in a company and their intentions to adopt high (vs low) autonomy AI services from the company. Specifically, we test two fundamental research questions: (1) “To what extent does consumer trust in companies relate to their intention to adopt AI services from the same companies?”, and (2) “To what extent is this relationship between consumer trust in companies and consumer AI service adoption affected by AI autonomy?” We address these questions in a survey of 503 Danish consumers, in which we study consumers' intentions to adopt AI services from a total of 23 consumer service companies across six industries in relation to the consumers' trust in the company and the autonomy of the AI services. The results show that consumer trust in a company significantly and positively relates to consumers' adoption intentions toward AI services from the same company. Moreover, the positive link between trust in the company and AI service adoption is weaker for AI services high in autonomy relative to those low in autonomy. These findings are robust to the inclusion of several potential confounds, such as consumers' prior use of AI and their demographic profile.

Together, this study makes three key contributions. Firstly, it demonstrates and quantifies the influence of consumer trust in companies on AI service adoption intentions. Secondly, it documents AI autonomy as an important boundary condition for the link between consumers' trust in the company and their AI service adoption intentions, suggesting that high autonomy AI services may not be adopted at conventional levels of trust in companies. Thirdly, the large and representative sample, covering a total of 23 companies across six industries, enables the detection of even small effect sizes with high statistical power, thus offering generalizable conclusions across several distinct service scenarios – a rarity in the literature, where most studies are based on convenience samples of university students or online panel participants (Otterbring et al., 2020). Overall, these contributions improve the understanding of the complex nature of trust in companies and AI autonomy in consumers' AI adoption decisions, leading to actionable advice for marketers and decision-makers who oversee the design, implementation and regulation of high and low autonomy AI services.

Theoretical background

A growing body of research shows that whereas AI technology often outperforms humans in specific tasks and controlled environments, its superiority in terms of performance, speed and capabilities does not necessarily lead to appreciation and adoption by consumers (e.g. Castelo et al., 2019; Dietvorst et al., 2015; Longoni et al., 2019). An emerging stream of research has sought to explain the determinants of AI adoption (e.g. Yu et al., 2023; for a recent review, see Mustak et al., 2021). The most often used theoretical lens to understand such AI-related decisions, as determined in a review of 412 theoretical views in this context by Mariani et al. (2021), is the Technology Acceptance Model (TAM) by Davis (1989). TAM is a widely used theory in the field of information systems to explain user acceptance and use of technology (Yousafzai et al., 2007). However, as the theory was developed for use with non-intelligent technologies, it has limitations in its applicability to the rapidly evolving field of AI technology (Butt et al., 2021; Cabrera-Sánchez et al., 2021), as shown in the case of autonomous vehicle adoption (Hegner et al., 2019; Meyer-Waarden and Cloarec, 2022).

Trust in companies and AI service adoption

The theoretical foundation of this research extends to the role of trust, which has long attracted researchers' attention in the field of marketing (Moorman et al., 1993). Importantly, trust reflects consumers' willingness to be vulnerable to the actions of a trustee (e.g. a company) despite the risks, uncertainty and potentially adverse outcomes involved (Becerra and Korgaonkar, 2011). The construct of trust is central to commitment–trust theory (Morgan and Hunt, 1994), which posits that in a seller-and-buyer relationship, trust leads to greater commitment to the relationship, increased satisfaction and a higher likelihood of positive outcomes. Importantly, commitment–trust theory has been extended from organizational actors to customer–supplier relationships (De Ruyter et al., 2001), with trust often discussed as one of the key determinants of customers' loyalty to companies and their brands in service (Aurier and N'Goala, 2010), retail (Hess and Story, 2005) and other shopping settings (Bilgihan, 2016).

Applied to consumer adoption of AI services, commitment–trust theory suggests that consumers with higher trust in a company will be more likely to adopt AI services offered by the company. The reasoning for this relationship between trust and AI adoption is that consumers who trust a company are more committed to engaging with newly introduced services from that company. This approach to understanding AI adoption is fundamentally different from earlier research on trust in the technology as a determinant of technology adoption decisions (Frank et al., 2022; Gefen et al., 2003; McKnight et al., 2011), because it does not rely on customers' perceptions of the technology. Accordingly, the current approach (i.e. focusing on trust in the company) can be perceived as a strength in the context of consumers' AI service adoption, because at this stage consumers tend to have little experience with AI technology, and therefore less trust in relying on the technology (Hasan et al., 2021). Consumers' relationships with companies, on the other hand, are typically shaped over the course of several years of company interactions (Eisingerich and Bell, 2008), and many consumers had arguably already seen several technologies introduced before AI solutions started making their way into a given company's offerings. Supporting this notion, studies on the introduction of online banking have revealed that consumers' trust in offline banks significantly and directly affected their intention to use online banking solutions from the same banks (Lee et al., 2007). Another example of the positive spillover effects of consumer trust can be found in the context of online shopping, in which consumer trust in e-commerce vendors was found to positively influence consumers' repurchase intentions (Liu and Tang, 2018). Based on this general line of logic, we hypothesize:

H1. Consumers' trust in a company is positively associated with their intentions to adopt AI services from the company.

Consumer responses to AI autonomy

The second construct of relevance for this research is AI autonomy. As stated in the introduction, AI autonomy refers to the ability of AI to make independent decisions and perform tasks without needing human input (Hu et al., 2021). Prior research on consumer resistance to autonomous vehicles suggests that there may be a threshold of novelty at which consumers are no longer willing to adopt AI services (König and Neumayr, 2017; Meyer-Waarden and Cloarec, 2022). Indeed, Eggers and Eggers (2022) found that whereas the trustworthiness of technology companies could positively impact consumers' preferences for purchasing autonomous vehicles, such companies provided a less natural fit with the concept of autonomously driven vehicles than specialized companies did. This points to a limitation of the general notion of companies' brands as a source of trust (Delgado-Ballester and Munuera-Alemán, 2001), with AI autonomy possibly moderating the influence of trust in a company.

Categorization theory postulates that consumers form expectations about new products based on the fit between the new product category and the existing brand image of the company (Rosch and Mervis, 1975; Klink and Smith, 2001). The better the fit between the original brand and the innovative product, the greater the likelihood of consumer adoption, because the new product is associated with the positive and familiar attributes of the original brand (Aaker and Keller, 1990; Bottomley and Holden, 2001). Conversely, if the novel product does not fit the image consumers hold about the brand and its associated company, consumer rejection is more likely to occur because the innovation is perceived as too far removed from the original brand (cf. Lee and Aaker, 2004; see also Graf et al., 2018; Otterbring et al., 2022a, b). Trust typically involves some sense of familiarity due to repeated exposure to the trust source in the form of, for example, multiple interactions with a given company (Ha and Perks, 2005). Therefore, as familiarity breeds preference and trust (e.g. Kwan et al., 2015; Zajonc, 1968), and considering that high (vs low) autonomy AI is currently the less (vs more) prevalent AI alternative available on the market, we posit that high (vs low) AI autonomy constitutes a worse fit with companies in general, and with highly trusted companies in particular. Applied to consumer adoption of AI services, categorization theory hence suggests that the fit between AI autonomy and companies' brands should moderate consumers' adoption decisions, such that high (vs low) AI autonomy will harm consumers' adoption intentions relatively more when they place high (vs low) trust in a given company, given the lack of fit between high AI autonomy and the image consumers hold toward a highly trusted company. Accordingly, as depicted in our conceptual model (Figure 1), we hypothesize:

H2. The positive link between consumer trust in a company and consumer AI service adoption intentions is moderated by AI autonomy and is weaker for high (vs low) autonomy AI services.

Methodology

Sample and data collection

This study uses data collected in March 2021 from a representative sample of the adult Danish population, aged 18–79 years (Mage = 46.6 years). The sample consisted of 503 participants, with characteristics summarized in Table 1. Age-wise, participants between 18 and 33 years of age were intentionally oversampled to compensate for the exclusion of consumers under 18 years of age, who were prevented from participating in the study due to data protection regulations. Our oversampling of younger adult consumers can be justified by previous research (Nöjd et al., 2020), which has discussed this age segment as playing the most significant role in shaping the future of consumer behavior and purchasing decisions.

Procedure, materials and measures

The survey, originally designed in English, was independently translated to Danish by two researchers with Danish as their mother tongue and was subsequently retranslated into English by a third researcher for internal validation purposes, in accordance with best practices (Brislin, 1970).

After participants had provided their written informed consent and entered the survey, they indicated for 23 consumer service companies whether they had used them “in the recent past.” These companies were selected as they offered consumer services in the target country, which in terms of industries (grocery, mobile operators, delivery, e-commerce, furniture and streaming) accounted for more than two-thirds of consumers' annual household spending for consumer services according to data from Statistics Denmark (2021).

Based on this selection of companies, participants proceeded to answer the main questionnaire for up to four companies (within-subjects), randomly drawn from those they had recently used and that were not yet sufficiently sampled (quota least fill). In each instance, the questionnaire was adapted to refer to the respective company name in all questions, as illustrated by the square brackets stating “company” in the following examples.
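For readers who wish to mirror this sampling step, the following is a minimal sketch of one way such a quota-least-fill draw could be implemented. The function and variable names are illustrative assumptions on our part, not the survey platform actually used.

```python
import random

def draw_companies(used_companies, fill_counts, k=4):
    """Draw up to k companies for one participant, preferring the
    least-sampled companies among those the participant reported using
    (quota least fill), with random tie-breaking."""
    candidates = list(used_companies)
    random.shuffle(candidates)                     # random tie-breaking
    candidates.sort(key=lambda c: fill_counts[c])  # stable sort: least-filled first
    drawn = candidates[:k]
    for company in drawn:
        fill_counts[company] += 1                  # update running quotas
    return drawn

# Illustrative usage with hypothetical companies and running quota counts
fill_counts = {"CompanyA": 120, "CompanyB": 95, "CompanyC": 110}
print(draw_companies({"CompanyA", "CompanyB", "CompanyC"}, fill_counts, k=2))
# -> ['CompanyB', 'CompanyC'] (the two least-filled companies)
```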

The focal dependent variable of this study was participants' AI service adoption intention, measured at the beginning of each main questionnaire using the single item, “How likely is it that you would start using the described AI service from [company]?” Such single-item measures are valid if they, as in the current case, represent clear and unambiguous constructs (e.g. Bergkvist and Rossiter, 2007; Otterbring, 2020). The item was rated on a 7-point scale from 1 = very unlikely to 7 = very likely, in response to each of the two following hypothetical AI services, adapted to each company's name:

Scenario A (low autonomy AI service): Imagine [company] is about to introduce a new artificial intelligence service that gives personalized recommendations based on your previous interactions with [company]’s offerings. Such recommendations could be, for example, for realizing savings, discovering new contents/items, or reminding you of things you otherwise would have missed.

Scenario B (high autonomy AI service): Imagine [company] is about to introduce a new artificial intelligence service that makes personalized decisions for you based on your previous interactions with [company]’s offerings. Such decisions could include, for example, renewing subscriptions or ordering/delivering items/contents based on predicted liking/needs.

Our focal independent variable, participants' trust in the company, was measured using a standard 3-item Likert scale (1 = strongly disagree, 7 = strongly agree), with the company-specific items, “I would rely on [company]”, “I would trust in [company],” and “[company] is trustworthy,” adapted from Becerra and Korgaonkar (2011). As an index of these items showed high reliability (Cronbach's alpha = 0.96), the items were averaged into a single trust-in-the-company index prior to standardization and subsequent analyses.
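As a transparency aid, this scale-to-index step can be reproduced in a few lines of code. The sketch below computes Cronbach's alpha from its standard formula and builds the averaged, standardized trust index; the column names and responses are hypothetical, not the study data.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses to the three 7-point trust items
df = pd.DataFrame({
    "rely":        [7, 6, 5, 7, 3],
    "trust":       [7, 6, 5, 6, 3],
    "trustworthy": [6, 6, 4, 7, 2],
})
items = df[["rely", "trust", "trustworthy"]]
print(cronbach_alpha(items))               # reliability of the 3-item scale

df["trust_index"] = items.mean(axis=1)     # average the items into one index
df["trust_z"] = (df["trust_index"] - df["trust_index"].mean()) / df["trust_index"].std(ddof=1)
```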

To control for potential confounds beyond age, gender and region, participants were asked to indicate whether they had used AI-based services from the company before (“Have you ever used a service from [company] that was delivered (at least in parts) through artificial intelligence?”; answer options: yes, no and maybe). Lastly, participants proceeded to answer questions related to a different project, after which they were debriefed and remunerated.

Validation study

To ensure that our conceptualization of AI autonomy was adequate in terms of classifying high (vs low) autonomy AI services as constituting a worse fit with the companies consumers did (vs did not) trust, we conducted a separate validation study (Gruijters, 2022; Otterbring et al., 2022a, b) among 81 participants (34.6% female) through Prolific Academic, drawn from the same country as in our main study (i.e. Denmark). Participants were presented with the same AI service autonomy contexts as described above and were instructed to indicate whether the described AI service was consistent with their image of the company in which they had previously indicated the most (vs least) trust, with the order randomized. Response alternatives ranged from 1 (totally disagree) to 7 (totally agree). A 2 (AI autonomy: low vs high) × 2 (trust in the company: low vs high) within-subjects ANOVA revealed the hypothesized interaction effect, F(1, 80) = 11.32, p = 0.001, ηp² = 0.12. Follow-up paired-samples t-tests revealed that consumers indeed perceived the high autonomy AI service as constituting a significantly worse fit with the company (M = 3.38, SD = 1.86) than the low autonomy AI service in the case of the company participants put most trust in (M = 5.04, SD = 1.37; t(80) = 8.16, p < 0.001, d = 0.91). The same pattern applied to the company participants put least trust in, although the effect size was substantially weaker in this latter case (M = 3.75, SD = 1.79 vs M = 4.54, SD = 1.78; t(80) = 4.60, p < 0.001, d = 0.51). Thus, our assumption about high (vs low) autonomy AI as constituting a worse fit with companies in general, and with companies that consumers trust in particular, was valid.
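As an illustration, this validation analysis maps onto a standard repeated-measures workflow. The Python sketch below assumes a hypothetical long-format file with one fit rating per participant, autonomy level and trust condition; it is not the authors' original analysis script.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: columns pid, autonomy ("low"/"high"),
# trust ("least"/"most") and fit (1-7 company-image fit rating)
df = pd.read_csv("validation_study.csv")

# 2 (AI autonomy) x 2 (trust in the company) within-subjects ANOVA
print(AnovaRM(df, depvar="fit", subject="pid", within=["autonomy", "trust"]).fit())

# Follow-up paired-samples t-test within the most-trusted company
most = df[df["trust"] == "most"].pivot(index="pid", columns="autonomy", values="fit")
t, p = stats.ttest_rel(most["low"], most["high"])
diff = most["low"] - most["high"]
d = diff.mean() / diff.std(ddof=1)  # Cohen's d for paired samples
print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```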

Analyses

To test the hypothesized relationships between AI service adoption intentions, consumer trust in the company and AI autonomy, the data were subjected to a linear mixed model analysis. The dependent variable was participants' AI service adoption intentions, and the two independent variables were trust in the company and AI autonomy (1 = high, −1 = low). The variables for participant ID, company and industry were specified as random effects to control for the repeated measurement of AI service adoption intentions, individual characteristics of participants not captured by the other variables and variation introduced by the sampling approach. Additional control variables were prior use of AI services from the company (1 = yes, 0 = maybe, −1 = no) as well as age, gender and region. Continuous independent variables were standardized and mean-centered to allow for comparability of effects across the different models (Hair, 2010).
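The article does not state which software fitted these models, but the specification translates directly into, for example, Python's statsmodels, where crossed random intercepts can be expressed as variance components over a single dummy group. The sketch below is only an illustration under assumed column names, not the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x company x scenario
df = pd.read_csv("survey_long.csv")
df["group"] = 1  # single dummy group so the variance components are crossed

model = smf.mixedlm(
    # Fixed effects: trust (standardized), autonomy (+1 high / -1 low),
    # their interaction, plus the controls from Model 5
    "adoption ~ trust_z * autonomy + prior_ai + age_z + C(gender) + C(region)",
    data=df,
    groups="group",
    re_formula="0",  # no random intercept beyond the variance components
    vc_formula={     # crossed random intercepts: participant, company, industry
        "pid": "0 + C(pid)",
        "company": "0 + C(company)",
        "industry": "0 + C(industry)",
    },
)
print(model.fit().summary())
```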

Results

Descriptive statistics

An overview of the descriptive statistics of the focal variables is shown in Table 2. As evident from the table, participants' adoption intentions were considerably higher for the low autonomy AI service (M = 3.78, SD = 1.99) than for the high autonomy AI service (M = 2.71, SD = 1.90), with this difference in reported AI service adoption intentions replicating across all six industries. The average trust in the companies was above the scale midpoint of 4 overall (M = 5.05, SD = 1.38) and for each industry when viewed in isolation (Mmin = 4.72, Mmax = 5.39). Overall, 13.7% of participants reported previous usage of AI from a company, with the majority reporting no prior usage of AI (50.6%) and a large proportion reporting uncertainty about their prior usage of AI from the company (35.8%). Prior usage of AI was lowest in the groceries industry (9.6%) and peaked in the e-commerce industry (24.5%). The industries for which participants were most uncertain about their prior AI usage were mobile operators (44.7%), delivery (42.8%) and streaming (40.7%).

Linear mixed model analysis

A series of five mixed linear regressions, summarized in Table 3, were conducted to stepwise investigate the effects of the two focal variables (trust in the company and AI autonomy; Models 1 and 2), the interaction effect of these two variables (Model 3), the influence of a relevant control variable (prior AI service usage; Model 4) and the influence of the remaining demographic variables included as controls (age, gender and region; Model 5) on the focal dependent variable of consumer AI service adoption intentions.

In support of H1, Model 1 found a significant and positive relationship between participants' trust in the company and their intentions to adopt AI services from the same company (b = 0.39, SE = 0.03, p < 0.001). This effect was found to be consistent across all subsequent models, indicating robustness to the inclusion of control variables. Model 2 showed a significant and negative main effect of high (vs low) AI autonomy on participants' AI service adoption intentions (b = −0.54, SE = 0.02, p < 0.001). As with the main effect of trust in the company, the main effect of AI autonomy also remained unchanged across all subsequently presented models.

Model 3, which in addition to Models 1 and 2 also tested for the interaction of participants' trust in the company and AI autonomy, found support for H2, given the significant and negative interaction term between trust in the company and high (vs low) AI autonomy (b = −0.14, SE = 0.02, p < 0.001). The size of this effect suggests that, under high AI autonomy, the positive effect of trust in the company on AI service adoption is roughly halved relative to the low autonomy AI service, as shown in Figure 2.
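To make the “roughly halved” reading explicit, the simple slopes of trust implied by the Model 3 coefficients, using the ±1 coding of AI autonomy described in the Analyses section, work out as follows:

$$\text{slope}_{\text{low}} = 0.39 + (-0.14)(-1) = 0.53, \qquad \text{slope}_{\text{high}} = 0.39 + (-0.14)(+1) = 0.25,$$

so the trust slope under high autonomy amounts to 0.25/0.53 ≈ 47% of its low autonomy counterpart.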

Model 4, which in addition to the variables used in the previous models also controlled for participants' prior use of AI services from the company, showed that participants' prior use of AI from a company significantly increased their adoption intentions toward AI services from said company in general. However, this effect changed neither the nature and significance of the formerly established link between consumer trust in the company and AI service adoption intentions nor the moderating effect of AI autonomy.

Lastly, Model 5 showed that participants' age had a negative effect on AI service adoption intentions, consistent with the general notion that digital natives and other younger consumer segments are more prone to adopt new technology (Gilly and Zeithaml, 1985; Laukkanen, 2016), whereas participants' gender and region did not influence the focal outcome measure. As with the effect of prior use of AI services from the company, the significant effect of age on consumers' AI service adoption intentions did not change the nature and significance of the focal findings.

Overall, the random effects of all reported models captured large amounts of individual differences between participants in their AI service adoption intentions (τmin = 1.52; τmax = 1.74), no variation was attributed to individual companies, and little variation was attributed to the industries in our sample (τmin = 0.02; τmax = 0.03). The fixed effects in Models 1 to 5 explained between 3.9% (Model 1) and 16.7% (Model 5) of the total variance in participants' AI service adoption intentions, with the random effects boosting the explained variance of the models to approximately 55.5% in the final model (Model 5).
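For reference, the reported intraclass correlation coefficient (ICC) follows directly from these variance components; for Model 5, for instance:

$$\mathrm{ICC} = \frac{\tau_{\mathrm{ID}} + \tau_{\mathrm{COMPANY}} + \tau_{\mathrm{INDUSTRY}}}{\tau_{\mathrm{ID}} + \tau_{\mathrm{COMPANY}} + \tau_{\mathrm{INDUSTRY}} + \sigma^2} = \frac{1.52 + 0.00 + 0.02}{1.52 + 0.00 + 0.02 + 1.77} \approx 0.47,$$

matching the value reported in Table 3.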

Discussion

The increasing prevalence of AI-powered services has led to a growing interest in understanding the factors that drive consumer adoption of these services. This study sought to examine the relationship between consumer trust in companies and their intentions to adopt high (vs low) autonomy AI services offered by the same companies. Our findings indicate a significant and positive association between consumer trust in a company and their intentions to adopt AI services from the same company. However, this relationship is moderated by the level of AI autonomy, with the positive link between trust in the company and AI service adoption intentions being significantly weaker for AI services high (vs low) in autonomy.

Theoretical contribution

The present findings contribute to the growing stream of literature on the role of consumer trust in AI adoption (e.g. Frank et al., 2022; Kim et al., 2021; Liu and Tang, 2018) by shifting the focus away from trust in the technology toward trust in the company. In an empirical investigation drawing on commitment–trust theory (e.g. Lin et al., 2023), the current research establishes evidence for the hypothesized link between consumers' AI service adoption intentions and their trust in the company offering this service. This relationship was replicated across all 23 consumer service companies and their associated six industries, attesting to the generalizability of the commitment-trust relationship in consumers' AI service adoption decisions. Moreover, the positive effect of trust related to the companies was not strengthened or weakened by prior experience with AI from these companies, underscoring the independence of the company as a source of trust, distinct from the characteristics of the AI technology itself (Delgado-Ballester and Munuera-Alemán, 2001).

Another contribution of this research is the examination of AI autonomy's role in consumers' AI service adoption. Our findings build on previous research on categorization theory in the context of autonomous vehicle adoption (Eggers and Eggers, 2022), showing that AI autonomy serves as a significant moderator in consumers' perception of the alignment between a company and the AI services it offers. Specifically, our results suggest that AI services high in autonomy may not be well-aligned with consumers' perceptions of trusted companies, thus undermining the commitment-trust relationship these companies have established with their customers. This interpretation is consistent with categorization theory, which posits that consumer adoption of innovative products relies on the congruence between a company's existing offerings and the level of product innovation (Klink and Smith, 2001).

Practical implications

The results reported herein offer a set of practical implications useful for the successful design, implementation and marketing of AI services across different consumer contexts and service industries. First, the demonstrated positive correlation between consumer trust in a company and the adoption of its AI services highlights the opportunity for companies to capitalize on established customer relationships when transitioning to AI-driven services. This is corroborated by the fact that customers in the current research were well-acquainted with the companies, as they reported having regular interactions with these companies. However, considering that only slightly above 10% of consumers reported experience with AI from these companies, a pragmatic approach might be preferred. Such an approach would prioritize emphasizing the company's inherent trust-building dimensions, rather than investing in trust-building elements of the AI services themselves (cf. Casidy et al., 2021). Moreover, the prospect of capitalizing on consumer trust in a company for AI service adoption should encourage institutional stakeholders to endorse investments in related trust-building initiatives, such as enhancing transparency, streamlining processes and ensuring privacy (Fox et al., 2022; Lin et al., 2023).

Second, our findings on the role of AI autonomy in consumer AI adoption raise a note of caution concerning the adverse effects of the increasing capabilities of AI services, which are expected to provide value for companies and consumers (e.g. Davenport et al., 2020; Huang and Rust, 2018, 2021). This is because high AI autonomy appears to diminish the positive effects of consumer trust in a company relative to low autonomy AI services, suggesting that the successful marketing of increasingly capable AI in consumer services will require other differentiation strategies (Carmon et al., 2019). The relative ineffectiveness of consumer trust in a company in the case of high autonomy AI services suggests that the risks of deploying such AI services may outweigh the benefits of drawing upon consumers' positive and trusting relationships with a parent company. This, in turn, paves the way for a less risky approach of spinning out highly autonomous AI services under new companies, such as when Alphabet (formerly Google) spun out its entire self-driving car business under the new company “Waymo” (Davies, 2016), arguably to minimize the adverse effects of an AI service that could take wrong turns. According to our results, this strategy may be superior to alternative approaches, as seen with Alphabet's competitor Tesla, which offers a “full self-driving capability” under its parent company's name; the latter approach tends to take substantial hits in stock valuation every time customers' use of the AI driving mode results in a car crash, with or without fatalities (Dey, 2021).

Lastly, our findings offer practical implications for policy makers tasked with assessing the impact of AI on various stakeholders in society (Hickok, 2021). Here, the need for trust in companies in the case of high autonomy AI services points toward a structural problem that extends to the entire consumer service industry. We suggest that future policy measures be tailored to help facilitate consumer trust in companies that offer AI services. This could be achieved through trust-building mechanisms, such as third-party certification and buyer feedback mechanisms (cf. Liu and Tang, 2018), which would allow the classification and transparent communication of AI services high in autonomy, ultimately informing consumers about the steps undertaken to preserve their autonomy of choice.

Limitations and future research

Although our study provides valuable insights into the role of trust in consumer adoption of high and low autonomy AI services, it has a set of limitations. To guide future research, we have made several suggestions regarding predictors, moderators, mediators and outcome variables that we believe deserve further attention when it comes to the interplay between consumer trust, AI autonomy and AI adoption (see Table 4).

First, the present study specifically highlighted the existence and extent of AI when prompting consumers to evaluate future AI offerings from the company. Although this was done intentionally to ensure participants considered the implications of AI autonomy of the described service, it may have affected consumer responses and might not precisely represent their reactions to AI-driven services that do not explicitly mention AI. Future research could explore consumer responses to AI-driven services without directly referencing AI to yield a more authentic, realistic and accurate understanding of the relationship between consumer trust in a company, AI autonomy and AI service adoption.

Another limitation of this research lies in the study design potentially not capturing the full complexity of consumer behavior in adopting AI services. While we recognize the advantages of employing a representative sample and incorporating a diverse range of companies across various industries, participants merely indicated their intentions to adopt AI services from these companies, which may not accurately reflect their actual behavior under real-world conditions (Baumeister et al., 2007; Cialdini, 2009; Otterbring, 2021). Future research could address this limitation by monitoring actual adoption behavior of high and low autonomy AI services, in order to gain insights into the complex interplay between consumer trust in companies, AI autonomy and real-world AI adoption behavior.

Finally, a limitation of the current study is that it does not capture the underlying mechanisms through which trust in companies influences the adoption of AI services at varying levels of autonomy, nor did it consider potential antecedents of consumer trust in companies. Future research would therefore benefit from experimental designs that examine how different types of trust-building mechanisms, such as transparency or brand familiarity (cf. Lin et al., 2023; Liu et al., 2021), influence consumer behavior in this context. Likewise, as the next generation of AI services might even surpass the human capabilities portrayed in our shopping scenarios, future studies should examine even higher levels of AI autonomy.

Conclusions

The overall conclusion of the current research is that consumer trust in a company is positively associated with consumers' willingness to adopt AI services from that company. This relationship appears robust and consistent across all companies and corresponding industries examined. However, the level of AI autonomy moderates this association, such that high (vs low) AI autonomy weakens the positive relationship between trust in a company and AI service adoption from that company. Taken together, these findings highlight the relevance of understanding consumer trust in companies and AI autonomy for the successful marketing of high and low autonomy AI services.

Figures

Figure 1. Conceptual model

Figure 2. A visual representation of the interaction of consumer trust in the company and AI autonomy in shaping consumer AI service adoption intentions

Sample characteristics

| | | Sample (N = 503) | Denmark % (N = 5,850,189) |
|---|---|---|---|
| Age | <18 | – | 19.68 |
| | 18–33 | 34.59% | 20.75 |
| | 34–49 | 22.66% | 19.56 |
| | 50–66 | 24.25% | 21.96 |
| | 67–82 | 17.89% | 14.90 |
| | >83 | – | 3.15 |
| Gender | Female | 47.91% | 50.25 |
| | Male | 52.09% | 49.75 |
| Region | Capital | 12.72% | 13.98 |
| | Islands (excl. Capital) | 40.95% | 40.70 |
| | Mainland | 46.32% | 45.32 |

Note(s): Population data obtained from Statistics Denmark (2021)

Source(s): Author's own creation/work

Descriptive statistics of focal dependent and independent variables

| Variable | | Groceries (n = 468) | Mobile ops (n = 244) | Delivery (n = 360) | E-commerce (n = 184) | Furniture (n = 186) | Streaming (n = 403) | Total |
|---|---|---|---|---|---|---|---|---|
| AI service adoption intentions | Low AI autonomy, M (SD) | 3.88 (1.98) | 3.32 (1.92) | 3.29 (1.95) | 4.34 (2.04) | 3.82 (1.91) | 4.12 (1.98) | 3.78 (1.99) |
| | High AI autonomy, M (SD) | 2.60 (1.86) | 2.41 (1.72) | 2.51 (1.87) | 3.01 (2.07) | 2.79 (1.91) | 3.04 (1.93) | 2.71 (1.90) |
| Trust in the company | M (SD) | 5.23 (1.22) | 4.72 (1.52) | 4.75 (1.54) | 5.18 (1.43) | 5.39 (1.23) | 5.08 (1.27) | 5.05 (1.38) |
| Prior use of AI from the company | Yes, in % | 9.6 | 10.2 | 8.9 | 24.5 | 16.7 | 18.4 | 13.7 |
| | Maybe, in % | 23.5 | 44.7 | 42.8 | 38.0 | 28.5 | 40.7 | 35.8 |
| | No, in % | 66.9 | 45.1 | 48.3 | 37.5 | 54.8 | 40.9 | 50.6 |

Source(s): Author's own creation/work

Mixed model estimates for consumer AI service adoption intentions

| Predictors | Model 1 | Model 2 | Model 3 | Model 4 | Model 5 |
|---|---|---|---|---|---|
| (Intercept) | 3.24 (0.10)*** [3.05, 3.42] | 3.23 (0.10)*** [3.05, 3.42] | 3.23 (0.10)*** [3.05, 3.42] | 3.38 (0.09)*** [3.21, 3.55] | 3.18 (0.19)*** [2.82, 3.55] |
| Trust in the company | 0.39 (0.03)*** [0.33, 0.46] | 0.39 (0.03)*** [0.33, 0.46] | 0.39 (0.03)*** [0.33, 0.45] | 0.36 (0.03)*** [0.30, 0.42] | 0.37 (0.03)*** [0.31, 0.43] |
| AI autonomy | | −0.54 (0.02)*** [−0.58, −0.49] | −0.54 (0.02)*** [−0.58, −0.49] | −0.54 (0.02)*** [−0.58, −0.49] | −0.54 (0.02)*** [−0.58, −0.49] |
| Trust in the company × AI autonomy | | | −0.14 (0.02)*** [−0.19, −0.10] | −0.14 (0.02)*** [−0.19, −0.10] | −0.14 (0.02)*** [−0.19, −0.10] |
| Prior use of AI from the company | | | | 0.40 (0.04)*** [0.32, 0.49] | 0.39 (0.04)*** [0.31, 0.48] |
| Age | | | | | −0.32 (0.06)*** [−0.44, −0.21] |
| Gender [Female] | | | | | −0.10 (0.12) [−0.34, 0.13], p = 0.391 |
| Region [Islands] | | | | | 0.36 (0.19) [−0.02, 0.74], p = 0.061 |
| Region [Mainland] | | | | | 0.22 (0.19) [−0.15, 0.59], p = 0.254 |
| Random effects | | | | | |
| σ² | 2.15 | 1.82 | 1.80 | 1.77 | 1.77 |
| τ00 ID | 1.69 | 1.73 | 1.74 | 1.61 | 1.52 |
| τ00 COMPANY | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| τ00 INDUSTRY | 0.03 | 0.03 | 0.03 | 0.02 | 0.02 |
| ICC | 0.44 | 0.49 | 0.50 | 0.48 | 0.47 |
| Marg. R²/Cond. R² | 0.039/0.465 | 0.110/0.548 | 0.115/0.554 | 0.136/0.551 | 0.167/0.555 |

Note(s): Cells show B (SE) [95% CI]; ***p < 0.001. All models comprise 3,690 observations from 503 participants (ID) across 23 companies and 6 industries

Source(s): Author's own creation/work

Potential avenues for future research

Predictors

- Individual differences: Investigate the role of individual differences factors, such as personality traits, on consumer trust in companies and adoption intentions for high and low autonomy AI services
- Antecedents of trust: Examine how different trust-building facets, such as transparency or brand familiarity, might influence consumer trust in companies and AI service adoption

Moderators

- AI autonomy: Explore the contingency of the relationship between consumer trust in companies and AI service adoption at varying levels of autonomy across different AI systems
- Industry-specific characteristics: Investigate the moderating effects of industry-specific characteristics, such as consumer risk perceptions, on the relationship between trust and AI service adoption

Mediators

- Perceptions of AI: Examine the underlying mechanisms through which trust in companies influences adoption, such as perceived benefits, quality, reliability or ease of use
- Consumer attitudes: Investigate the potential mediating effects of consumer emotions, attitudes or beliefs toward AI on the relationship between trust and adoption intentions

Outcome variables

- Actual adoption behavior: Conduct studies on consumers' actual adoption of AI services to gain insights into the interplay between trust, AI autonomy and real-world consumer behavior
- Marketing strategies: Examine the potential effects of different marketing strategies, such as persuasive messaging or incentives, on consumer trust, AI service adoption and subsequent outcomes, such as customer satisfaction, spending or loyalty

Source(s): Author's own creation/work

References

Aaker, D.A. and Keller, K.L. (1990), “Consumer evaluations of brand extensions”, Journal of Marketing, Vol. 54 No. 1, pp. 27-41, doi: 10.1177/002224299005400102.

André, Q., Carmon, Z., Wertenbroch, K., Crum, A., Frank, D., Goldstein, W., Huber, J., Van Boven, L., Weber, B. and Yang, H. (2018), “Consumer choice and autonomy in the age of artificial intelligence and big data”, Customer Needs and Solutions, Vol. 5 Nos 1-2, pp. 28-37, doi: 10.1007/s40547-017-0085-8.

Aurier, P. and N'Goala, G. (2010), “The differing and mediating roles of trust and relationship commitment in service relationship maintenance and development”, Journal of the Academy of Marketing Science, Vol. 38 No. 3, pp. 303-325, doi: 10.1007/s11747-009-0163-z.

Baumeister, R.F., Vohs, K.D. and Funder, D.C. (2007), “Psychology as the science of self-reports and finger movements: whatever happened to actual behavior?”, Perspectives on Psychological Science, Vol. 2 No. 4, pp. 396-403, doi: 10.1111/j.1745-6916.2007.00051.x.

Becerra, E.P. and Korgaonkar, P.K. (2011), “Effects of trust beliefs on consumers' online intentions”, European Journal of Marketing, Vol. 45 No. 6, pp. 936-962, doi: 10.1108/03090561111119921.

Beer, J.M., Fisk, A.D. and Rogers, W.A. (2014), “Toward a framework for levels of robot autonomy in human-robot interaction”, Journal of Human-Robot Interaction, Vol. 3 No. 2, p. 74, doi: 10.5898/JHRI.3.2.Beer.

Bergkvist, L. and Rossiter, J.R. (2007), “The predictive validity of multiple-item versus single-item measures of the same constructs”, Journal of Marketing Research, Vol. 44 No. 2, pp. 175-184, doi: 10.1509/jmkr.44.2.175.

Bilgihan, A. (2016), “Gen Y customer loyalty in online shopping: an integrated model of trust, user experience and branding”, Computers in Human Behavior, Vol. 61, pp. 103-113, doi: 10.1016/j.chb.2016.03.014.

Bottomley, P.A. and Holden, S.J.S. (2001), “Do we really know how consumers evaluate brand extensions? Empirical generalizations based on secondary analysis of eight studies”, Journal of Marketing Research, Vol. 38 No. 4, pp. 494-500.

Brislin, R.W. (1970), “Back-translation for cross-cultural research”, Journal of Cross-Cultural Psychology, Vol. 1 No. 3, pp. 185-216, doi: 10.1177/135910457000100301.

Butt, A.H., Ahmad, H., Goraya, M.A.S., Akram, M.S. and Shafique, M.N. (2021), “Let's play: me and my AI‐powered avatar as one team”, Psychology & Marketing, Vol. 38 No. 6, pp. 1014-1025, doi: 10.1002/mar.21487.

Cabrera-Sánchez, J.-P., Villarejo-Ramos, Á.F., Liébana-Cabanillas, F. and Shaikh, A.A. (2021), “Identifying relevant segments of AI applications adopters – expanding the UTAUT2's variables”, Telematics and Informatics, Vol. 58, 101529, doi: 10.1016/j.tele.2020.101529.

Carmon, Z., Schrift, R., Wertenbroch, K. and Yang, H. (2019), “Designing AI systems that customers won't hate”, MIT Sloan Management Review, available at: https://www.researchgate.net/profile/Ziv-Carmon/publication/338005754_Designing_AI_Systems_That_Customers_Won't_Hate/links/5e0209a54585159aa495e486/Designing-AI-Systems-That-Customers-Wont-Hate.pdf

Casidy, R., Claudy, M., Heidenreich, S. and Camurdan, E. (2021), “The role of brand in overcoming consumer resistance to autonomous vehicles”, Psychology & Marketing, Vol. 38 No. 7, pp. 1101-1121, doi: 10.1002/mar.21496.

Castelo, N., Bos, M.W. and Lehmann, D.R. (2019), “Task-dependent algorithm aversion”, Journal of Marketing Research, Vol. 56 No. 5, pp. 809-825, doi: 10.1177/0022243719851788.

Cialdini, R.B. (2009), “We have to break up”, Perspectives on Psychological Science, Vol. 4 No. 1, pp. 5-6, doi: 10.1111/j.1745-6924.2009.01091.x.

Davenport, T., Guha, A., Grewal, D. and Bressgott, T. (2020), “How artificial intelligence will change the future of marketing”, Journal of the Academy of Marketing Science, Vol. 48 No. 1, pp. 24-42, doi: 10.1007/s11747-019-00696-0.

Davies, A. (2016), “Meet the blind man who convinced Google its self-driving car is finally ready”, Wired, available at: https://www.wired.com/2016/12/google-self-driving-car-waymo/

Davis, F.D. (1989), “Perceived usefulness, perceived ease of use, and user acceptance of information technology”, MIS Quarterly, Vol. 13 No. 3, p. 319, doi: 10.2307/249008.

De Bruyn, A., Viswanathan, V., Beh, Y.S., Brock, J.K.-U. and Von Wangenheim, F. (2020), “Artificial intelligence and marketing: pitfalls and opportunities”, Journal of Interactive Marketing, Vol. 51, pp. 91-105, doi: 10.1016/j.intmar.2020.04.007.

De Ruyter, K., Moorman, L. and Lemmink, J. (2001), “Antecedents of commitment and trust in customer–supplier relationships in high technology markets”, Industrial Marketing Management, Vol. 30 No. 3, pp. 271-286, doi: 10.1016/S0019-8501(99)00091-7.

Delgado-Ballester, E. and Munuera-Alemán, J.L. (2001), “Brand trust in the context of consumer loyalty”, European Journal of Marketing, Vol. 35 Nos 11/12, pp. 1238-1258, doi: 10.1108/EUM0000000006475.

Dey, E. (2021), “Tesla stock under pressure after fiery, fatal Model S crash”, 19 April, available at: https://www.aljazeera.com/economy/2021/4/19/tesla-stock-under-pressure-after-fiery-fatal-model-s-crash (accessed 13 February 2023).

Dietvorst, B.J., Simmons, J.P. and Massey, C. (2015), “Algorithm aversion: people erroneously avoid algorithms after seeing them err”, Journal of Experimental Psychology: General, Vol. 144 No. 1, pp. 114-126, doi: 10.1037/xge0000033.

Eggers, F. and Eggers, F. (2022), “Drivers of autonomous vehicles—analyzing consumer preferences for self-driving car brand extensions”, Marketing Letters, Vol. 33 No. 1, pp. 89-112, doi: 10.1007/s11002-021-09571-x.

Eisingerich, A.B. and Bell, S.J. (2008), “Perceived service quality and customer trust: does enhancing customers' service knowledge matter?”, Journal of Service Research, Vol. 10 No. 3, pp. 256-268, doi: 10.1177/1094670507310769.

Fox, G., Lynn, T. and Rosati, P. (2022), “Enhancing consumer perceptions of privacy and trust: a GDPR label perspective”, Information Technology & People, Vol. 35 No. 8, pp. 181-204, doi: 10.1108/ITP-09-2021-0706.

Frank, D.-A. and Otterbring, T. (2023), “Being seen… by human or machine? Acknowledgment effects on customer responses differ between human and robotic service workers”, Technological Forecasting and Social Change, Vol. 189, 122345, doi: 10.1016/j.techfore.2023.122345.

Frank, D.-A., Elbæk, C.T., Børsting, C.K., Mitkidis, P., Otterbring, T. and Borau, S. (2021b), “Drivers and social implications of Artificial Intelligence adoption in healthcare during the COVID-19 pandemic”, PLOS ONE, Vol. 16 No. 11, e0259928, doi: 10.1371/journal.pone.0259928.

Frank, B., Herbas-Torrico, B. and Schvaneveldt, S.J. (2021a), “The AI-extended consumer: technology, consumer, country differences in the formation of demand for AI-empowered consumer products”, Technological Forecasting and Social Change, Vol. 172, 121018, doi: 10.1016/j.techfore.2021.121018.

Frank, D., Chrysochou, P. and Mitkidis, P. (2022), “The paradox of technology: negativity bias in consumer adoption of innovative technologies”, Psychology & Marketing, Vol. 40 No. 3, pp. 554-566, doi: 10.1002/mar.21740.

Gefen, D., Karahanna, E. and Straub, D.W. (2003), “Trust and TAM in online shopping: an integrated model”, MIS Quarterly, Vol. 27 No. 1, pp. 51-90, doi: 10.2307/30036519.

Gilly, M.C. and Zeithaml, V.A. (1985), “The elderly consumer and adoption of technologies”, Journal of Consumer Research, Vol. 12 No. 3, p. 353, doi: 10.1086/208521.

Graf, L.K.M., Mayer, S. and Landwehr, J.R. (2018), “Measuring processing fluency: one versus five items”, Journal of Consumer Psychology, Vol. 28 No. 3, pp. 393-411, doi: 10.1002/jcpy.1021.

Gruijters, S.L. (2022), “Making inferential leaps: manipulation checks and the road towards strong inference”, Journal of Experimental Social Psychology, Vol. 98, 104251, doi: 10.1016/j.jesp.2021.104251.

Gursoy, D., Chi, O.H., Lu, L. and Nunkoo, R. (2019), “Consumers acceptance of artificially intelligent (AI) device use in service delivery”, International Journal of Information Management, Vol. 49, pp. 157-169, doi: 10.1016/j.ijinfomgt.2019.03.008.

Ha, H.-Y. and Perks, H. (2005), “Effects of consumer perceptions of brand experience on the web: brand familiarity, satisfaction and brand trust”, Journal of Consumer Behaviour, Vol. 4 No. 6, pp. 438-452, doi: 10.1002/cb.29.

Hair, J.F. (Ed.) (2010), Multivariate Data Analysis: A Global Perspective, 7th ed., global ed., Pearson, Upper Saddle River, NJ.

Hasan, R., Shams, R. and Rahman, M. (2021), “Consumer trust and perceived risk for voice-controlled artificial intelligence: the case of Siri”, Journal of Business Research, Vol. 131, pp. 591-597, doi: 10.1016/j.jbusres.2020.12.012.

Hegner, S.M., Beldad, A.D. and Brunswick, G.J. (2019), “In automatic we trust: investigating the impact of trust, control, personality characteristics, and extrinsic and intrinsic motivations on the acceptance of autonomous vehicles”, International Journal of Human–Computer Interaction, Vol. 35 No. 19, pp. 1769-1780, doi: 10.1080/10447318.2019.1572353.

Hess, J. and Story, J. (2005), “Trust‐based commitment: multidimensional consumer‐brand relationships”, Journal of Consumer Marketing, Vol. 22 No. 6, pp. 313-322, doi: 10.1108/07363760510623902.

Hickok, M. (2021), “Lessons learned from AI ethics principles for future actions”, AI and Ethics, Vol. 1 No. 1, pp. 41-47, doi: 10.1007/s43681-020-00008-1.

Hu, Q., Lu, Y., Pan, Z., Gong, Y. and Yang, Z. (2021), “Can AI artifacts influence human cognition? The effects of artificial autonomy in intelligent personal assistants”, International Journal of Information Management, Vol. 56, 102250, doi: 10.1016/j.ijinfomgt.2020.102250.

Huang, Y. and Qian, L. (2021), “Understanding the potential adoption of autonomous vehicles in China: the perspective of behavioral reasoning theory”, Psychology and Marketing, Vol. 38 No. 4, pp. 669-690, doi: 10.1002/mar.21465.

Huang, M.-H. and Rust, R.T. (2018), “Artificial intelligence in service”, Journal of Service Research, Vol. 21 No. 2, pp. 155-172, doi: 10.1177/1094670517752459.

Huang, M.-H. and Rust, R.T. (2021), “Engaged to a robot? The role of AI in service”, Journal of Service Research, Vol. 24 No. 1, pp. 30-41, doi: 10.1177/1094670520902266.

Kim, J., Giroux, M. and Lee, J.C. (2021), “When do you trust AI? The effect of number presentation detail on consumer trust and acceptance of AI recommendations”, Psychology & Marketing, Vol. 38 No. 7, pp. 1140-1155, doi: 10.1002/mar.21498.

Klaus, P. and Zaichkowsky, J.L. (2022), “The convenience of shopping via voice AI: introducing AIDM”, Journal of Retailing and Consumer Services, Vol. 65, 102490, doi: 10.1016/j.jretconser.2021.102490.

Klink, R.R. and Smith, D.C. (2001), “Threats to the external validity of brand extension research”, Journal of Marketing Research, Vol. 38 No. 3, pp. 326-335, doi: 10.1509/jmkr.38.3.326.18864.

König, M. and Neumayr, L. (2017), “Users' resistance towards radical innovations: the case of the self-driving car”, Transportation Research Part F: Traffic Psychology and Behaviour, Vol. 44, pp. 42-52, doi: 10.1016/j.trf.2016.10.013.

Kwan, L.Y.-Y., Yap, S. and Chiu, C. (2015), “Mere exposure affects perceived descriptive norms: implications for personal preferences and trust”, Organizational Behavior and Human Decision Processes, Vol. 129, pp. 48-58, doi: 10.1016/j.obhdp.2014.12.002.

Laukkanen, T. (2016), “Consumer adoption versus rejection decisions in seemingly similar service innovations: the case of the Internet and mobile banking”, Journal of Business Research, Vol. 69 No. 7, pp. 2432-2439, doi: 10.1016/j.jbusres.2016.01.013.

Lee, A.Y. and Aaker, J.L. (2004), “Bringing the frame into focus: the influence of regulatory fit on processing fluency and persuasion”, Journal of Personality and Social Psychology, Vol. 86 No. 2, pp. 205-218, doi: 10.1037/0022-3514.86.2.205.

Lee, K.C., Kang, I. and McKnight, D.H. (2007), “Transfer from offline trust to key online perceptions: an empirical study”, IEEE Transactions on Engineering Management, Vol. 54 No. 4, pp. 729-741, doi: 10.1109/TEM.2007.906851.

Lin, S.-W., Huang, E.Y. and Cheng, K.-T. (2023), “A binding tie: why do customers stick to omnichannel retailers?”, Information Technology & People, Vol. 36 No. 3, pp. 1126-1159, doi: 10.1108/ITP-01-2021-0063.

Liu, Y. and Tang, X. (2018), “The effects of online trust-building mechanisms on trust and repurchase intentions: an empirical study on eBay”, Information Technology & People, Vol. 31 No. 3, pp. 666-687, doi: 10.1108/ITP-10-2016-0242.

Liu, T., Wang, W., Xu, J.D., Ding, D. and Deng, H. (2021), “Interactive effects of advising strength and brand familiarity on users' trust and distrust in online recommendation agents”, Information Technology & People, Vol. 34 No. 7, pp. 1920-1948, doi: 10.1108/ITP-08-2019-0448.

Longoni, C. and Cian, L. (2022), “Artificial intelligence in utilitarian vs hedonic contexts: the ‘word-of-machine’ effect”, Journal of Marketing, Vol. 86 No. 1, pp. 91-108, doi: 10.1177/0022242920957347.

Longoni, C., Bonezzi, A. and Morewedge, C.K. (2019), “Resistance to medical artificial intelligence”, Journal of Consumer Research, Vol. 46 No. 4, pp. 629-650, doi: 10.1093/jcr/ucz013.

Malodia, S., Kaur, P., Ractham, P., Sakashita, M. and Dhir, A. (2022), “Why do people avoid and postpone the use of voice assistants for transactional purposes? A perspective from decision avoidance theory”, Journal of Business Research, Vol. 146, pp. 605-618, doi: 10.1016/j.jbusres.2022.03.045.

Mariani, M.M., Perez‐Vega, R. and Wirtz, J. (2021), “AI in marketing, consumer research and psychology: a systematic literature review and research agenda”, Psychology and Marketing, Vol. 39 No. 4, pp. 755-776, doi: 10.1002/mar.21619.

Marr, B. (2019), “The amazing ways retail giant Zalando is using artificial intelligence”, Forbes, available at: https://www.forbes.com/sites/bernardmarr/2019/09/20/the-amazing-ways-retail-giant-zalando-is-using-artificial-intelligence/ (accessed 16 February 2023).

Mayer, R.C., Davis, J.H. and Schoorman, F.D. (1995), “An integrative model of organizational trust”, The Academy of Management Review, Vol. 20 No. 3, p. 709, doi: 10.2307/258792.

McKnight, D.H., Carter, M., Thatcher, J.B. and Clay, P.F. (2011), “Trust in a specific technology: an investigation of its components and measures”, ACM Transactions on Management Information Systems, Vol. 2 No. 2, pp. 1-25, doi: 10.1145/1985347.1985353.

Meyer-Waarden, L. and Cloarec, J. (2022), “‘Baby, you can drive my car’: psychological antecedents that drive consumers' adoption of AI-powered autonomous vehicles”, Technovation, Vol. 109, 102348, doi: 10.1016/j.technovation.2021.102348.

Moorman, C., Deshpandé, R. and Zaltman, G. (1993), “Factors affecting trust in market research relationships”, Journal of Marketing, Vol. 57 No. 1, pp. 81-101, doi: 10.1177/002224299305700106.

Morgan, B. (2018), “How Amazon has reorganized around artificial intelligence and machine learning”, Forbes, available at: https://www.forbes.com/sites/blakemorgan/2018/07/16/how-amazon-has-re-organized-around-artificial-intelligence-and-machine-learning/ (accessed 16 February 2023).

Morgan, R.M. and Hunt, S.D. (1994), “The commitment-trust theory of relationship marketing”, Journal of Marketing, Vol. 58 No. 3, pp. 20-38, doi: 10.1177/002224299405800302.

Mustak, M., Salminen, J., Plé, L. and Wirtz, J. (2021), “Artificial intelligence in marketing: topic modeling, scientometric analysis, and research agenda”, Journal of Business Research, Vol. 124, pp. 389-404, doi: 10.1016/j.jbusres.2020.10.044.

Nöjd, S., Trischler, J.W., Otterbring, T., Andersson, P.K. and Wästlund, E. (2020), “Bridging the valuescape with digital technology: a mixed methods study on customers' value creation process in the physical retail space”, Journal of Retailing and Consumer Services, Vol. 56, 102161, doi: 10.1016/j.jretconser.2020.102161.

Otterbring, T. (2020), “Appetite for destruction: counterintuitive effects of attractive faces on people's food choices”, Psychology & Marketing, Vol. 37 No. 11, pp. 1451-1464, doi: 10.1002/mar.21257.

Otterbring, T. (2021), “Peer presence promotes popular choices: a ‘Spicy’ field study on social influence and brand choice”, Journal of Retailing and Consumer Services, Vol. 61, 102594, doi: 10.1016/j.jretconser.2021.102594.

Otterbring, T., Rolschau, K., Furrebøe, E.F. and Nyhus, E.K. (2022b), “Crossmodal correspondences between typefaces and food preferences drive congruent choices but not among young consumers”, Food Quality and Preference, Vol. 96, 104376, doi: 10.1016/j.foodqual.2021.104376.

Otterbring, T., Samuelsson, P., Arsenovic, J., Elbæk, C.T. and Folwarczny, M. (2022a), “Shortsighted sales or long-lasting loyalty? The impact of salesperson-customer proximity on consumer responses and the beauty of bodily boundaries”, European Journal of Marketing. doi: 10.1108/EJM-04-2022-0250.

Otterbring, T., Sundie, J., Jessica Li, Y. and Hill, S. (2020), “Evolutionary psychological consumer research: bold, bright, but better with behavior”, Journal of Business Research, Vol. 120, pp. 473-484, doi: 10.1016/j.jbusres.2020.07.010.

Pillai, R., Ghanghorkar, Y., Sivathanu, B., Algharabat, R. and Rana, N.P. (2023), “Adoption of artificial intelligence (AI) based employee experience (EEX) chatbots”, Information Technology & People, Vol. ahead-of-print, doi: 10.1108/ITP-04-2022-0287.

Rosch, E. and Mervis, C.B. (1975), “Family resemblances: studies in the internal structure of categories”, Cognitive Psychology, Vol. 7 No. 4, pp. 573-605, doi: 10.1016/0010-0285(75)90024-9.

Sharma, S., Islam, N., Singh, G. and Dhir, A. (2022a), “Why do retail customers adopt artificial intelligence (AI) based autonomous decision-making systems?”, IEEE Transactions on Engineering Management, pp. 1-17, doi: 10.1109/TEM.2022.3157976.

Sharma, S., Singh, G., Islam, N. and Dhir, A. (2022b), “Why do SMEs adopt artificial intelligence-based chatbots?”, IEEE Transactions on Engineering Management, pp. 1-14, doi: 10.1109/TEM.2022.3203469.

Statistics Denmark (2021), available at: https://www.dst.dk/en (accessed 11 February 2023).

Wen, H., Zhang, L., Sheng, A., Li, M. and Guo, B. (2022), “From ‘human-to-human’ to ‘human-to-non-human’ – influence factors of artificial intelligence-enabled consumer value co-creation behavior”, Frontiers in Psychology, Vol. 13, 863313, doi: 10.3389/fpsyg.2022.863313.

Yousafzai, S.Y., Foxall, G.R. and Pallister, J.G. (2007), “Technology acceptance: a meta-analysis of the TAM: part 1”, Journal of Modelling in Management, Vol. 2 No. 3, pp. 251-280, doi: 10.1108/17465660710834453.

Yu, X., Xu, S. and Ashton, M. (2023), “Antecedents and outcomes of artificial intelligence adoption and application in the workplace: the socio-technical system theory perspective”, Information Technology & People, Vol. 36 No. 1, pp. 454-474, doi: 10.1108/ITP-04-2021-0254.

Zajonc, R.B. (1968), “Attitudinal effects of mere exposure”, Journal of Personality and Social Psychology, Vol. 9 No. 2, Pt.2, pp. 1-27, doi: 10.1037/h0025848.

Acknowledgements

This research received support from the Carlsberg Foundation through a research infrastructure grant (No. CF21_0225).

Corresponding author

Darius-Aurel Frank can be contacted at: contact@dariusfrank.com
