Introduction

Concerns about the large-scale impacts of artificial intelligence (AI) on both the individual and societal levels are increasing (Calo, 2023; Fast & Horvitz, 2017; Gnambs & Appel, 2019; Leffer, 2023; Roose, 2023; Vanian, 2023). Extreme negative attitudes have recently been dubbed “p(doom),” shorthand for the probability that AI will take over the world (Roose, 2023). Predominant forms of AI that are the topic of concern include social and generative AI tools such as ChatGPT, Alexa and Google Home, healthcare and education robots, companion chatbots such as Replika, and algorithmic AI used in healthcare, education, policing, and other industries. The launch of ChatGPT in late 2022, alongside the rapid advancement of new generative AI tools like Sora, has sparked an upsurge of new worries. For example, DALL-E, an image generation tool (Ramesh et al., 2021), stirred controversy among creative communities regarding the ownership, commodification, and commercialization of art by non-artists generally and, more specifically, by the algorithms themselves and the industries that created them (Roose, 2022; Wong, 2023).

Although positive accounts of AI exist, the media is increasingly filled with articles on people’s fears about AI, perhaps because negativity is more effective at capturing attention. According to these news sources, there is a widespread fear that AI will replace people and even take over the world, resulting in catastrophe at both the individual and societal levels (Leffer, 2023; Roose, 2023). People do generally perceive that AI will have a significant impact on the world (Kelley et al., 2021), but does the public really hold extreme negative views? And to the extent that people do hold negative, p(doom) opinions, do those opinions reflect negative information gathered from media sources, direct experiences with new AI tools, or people’s individual differences, such as personality and social health?

Prior to 2017, most public opinion research on AI focused on user experience of robots. Much of this research was conducted to understand people’s intentions to use robotic technologies and their fears about automation leading to job displacement (Attig et al., 2017; Broadbent et al., 2009; Buchner et al., 2013; Gnambs, 2019). Data from this research indicated that perceptions of robots became more negative over time (Gnambs, 2019; Gnambs & Appel, 2019). For example, an analysis of European Union survey data collected from 2012 to 2017 (Gnambs & Appel, 2019) showed an increase in negative attitudes toward robots over those five years. The Pew Research Center collected data about people’s perceptions of new technologies such as AI-driven automated vehicles. Their report (Smith & Anderson, 2017) indicated that people expressed more concern than enthusiasm for technologies that might displace people’s jobs.

Human-AI interaction is a new, growing, and socially important field. Research on perceptions of AI thus far has yielded conflicting results, and there are several hypotheses about why. One of these hypotheses in particular motivated the present study: the importance of individual differences in human users. Krämer and Bente (2021) reviewed the last two decades of research on human-AI interaction. They underscored the relevance of individual differences like personality in interaction outcomes and suggested that the field has neglected to account for these variables in empirical studies. Research on the uncanny valley effect, a subfield of human-AI interaction, exhibits this trend. People can experience significant eeriness or discomfort when interacting with an artificial agent. On the scale from machinelike to humanlike, there is a point at which an artificial agent is too humanlike to register as a machine yet not humanlike enough to pass as human. At this point, people experience a decrease in positive affect, which is called the uncanny valley effect (Appel et al., 2020; Ciechanowski et al., 2019; Ho & MacDorman, 2017; Mori, 2012; Stein & Ohler, 2017). Much of the research on the uncanny valley has not accounted for individual differences, despite evidence for their importance. Instead, studies have focused on how altering various features of an AI agent, including its facial characteristics, voice, and movement, relates to uncanny valley effects (Rosenthal-Von Der Pütten & Krämer, 2014). However, Lischetzke et al. (2017) and MacDorman and Entezari (2015) found that individual differences account for a significant amount of variance in uncanny valley effects and can be more informative than manipulations of AI features. A 2024 review of 23 empirical studies on trust in AI systems (Bach et al., 2024), for example, suggested that the relationship between trust in AI systems and individual differences in user characteristics was stronger than the relationship between trust and the features of the AI systems themselves. The uncanny valley is just one area within human-AI interaction that showcases this trend.

Here, we explain four major individual difference factors that relate to people’s attitudes toward and perceptions of AI based on prior research in the field. The first of these is personality. Most often, personality is measured by the Big Five Inventory (BFI) and its truncated versions such as the Ten Item Personality Inventory (TIPI), which measure the traits of Agreeableness, Extraversion, Conscientiousness, Neuroticism, and Openness to Experience (John, 2021; Soto & John, 2017a, 2017b). Other assessments of personality include the Dark Triad, which measures narcissism, psychopathy, and Machiavellianism (Stein et al., 2024; Wissing & Reinhard, 2018). Kraus et al. (2021) found a positive relationship between Extraversion and trust in automated vehicles, as well as a negative one between Neuroticism and trust. Schepman and Rodway (2022) found that lower Extraversion related to positive perceptions of AI. In the same study, they found that Agreeableness and Conscientiousness related to more conciliatory attitudes toward AI’s negative impacts, and research by Chien et al. (2016) indicated that these two traits also related to higher trust in automation.

A second type of individual difference that relates to people’s attitudes toward AI is culture of origin (Haring et al., 2014; Kachouie et al., 2014; MacDorman et al., 2009). Kaplan et al. (2021) reviewed 65 studies and suggested that trust in AI was significantly related to culture as well as other factors. Bartneck et al. (2007) found that participants’ cultural background influenced their negative attitudes toward the robot Aibo and had a larger effect than factors such as whether people owned the robot. Haring et al. (2014) compared Japanese and Australian participants’ perceptions of a humanoid robot and found that Japanese participants held more negative views of the AI agent.

Personality and culture of origin can interact (Chien et al., 2016; Leshner & Johnson, 2024; MacDorman & Entezari, 2015; Park & Woo, 2022; Waytz et al., 2010). For example, the classic measure of personality, the BFI, may apply differently in different cultures (Gurven et al., 2013; Triandis & Suh, 2002), despite the BFI being considered universal at the turn of the twenty-first century (McCrae & Costa, 1997). Sindermann et al. (2022) compared German and Chinese samples and found that the relationship between BFI personality traits and fear of AI depended on the participants’ country of residence: Agreeableness negatively correlated with fears about AI in the Chinese sample but not in the German sample. Not all cultural comparisons show such differences. Stein et al. (2024) used both the BFI and the Dark Triad to investigate attitudes in the USA and Germany. They developed a 12-item scale (ATTARI-12) that captures cognitive, affective, and behavioral components of general attitudes toward AI. They found that Agreeableness related to positive attitudes toward AI and that non-BFI personality measures such as conspiracy mentality related to negative attitudes in both samples.

A third attribute that varies among people and may shape their positive or negative attitudes toward AI is the tendency to anthropomorphize AI and attribute mind-like characteristics to it. The extent to which someone perceives AI as having a mind may be particularly related to the personality of the individual, rather than to the features of the AI agent in the study. Kaplan et al. (2019) found that Extraversion, for example, related to the tendency to anthropomorphize, or ascribe humanlike traits such as mind to, AI. Waytz et al. (2010) found that Extraversion and Neuroticism both correlated with whether people ascribed experience (the capacity to have subjective experiences and feel emotions such as love or pain) or agency (the ability to act of one’s own accord and make decisions) to robots. The impact of mind perception, and its relation to personality, on attitudes toward AI remains unclear. Some studies indicated that perceiving mind in AI related to acceptance of AI, whereas others suggested that it related to fear or discomfort (Broadbent et al., 2013; Guingrich & Graziano, 2024, 2025; Küster et al., 2021; Stafford et al., 2014; Stein et al., 2020). Research on both culture and mind perception thus suggests that personality relates to people’s attitudes toward AI.

The fourth source of individual differences that might affect people’s attitudes toward AI is familiarity with and affinity for technology. Familiarity with AI can come in many forms, including direct exposure through engaging with AI tools or indirect exposure through media coverage and science fiction (Horstmann & Krämer, 2019). Evidence suggests that media exposure to science fiction, which often contains emotionally charged narratives of disruptive AI technology, strongly predicts fearful attitudes toward autonomous AI and overshadows demographic factors such as age, gender, and income (Liang & Lee, 2017). In contrast, direct familiarity with AI can reduce the uncanny valley effect and even reverse it, producing a “happy valley” effect (Cheetham et al., 2014). One way to measure affinity for technology is the Affinity for Technology Interaction (ATI) scale (Franke et al., 2019; Lezhnina & Kismih, 2020). The ATI has been found to correlate positively with trust in automated vehicles in a US sample (Kraus et al., 2021) and with positive attitudes toward AI-generated deepfakes in a German sample (Kleine, 2022), but it remains understudied.

AI technology is advancing quickly, and attitudes will continue to change as new AI develops. It is therefore of great importance, for scientific understanding and for public policy, to study these changing attitudes. How prevalent p(doom)-level concerns are in the USA, and who holds them, is currently not well understood. In this study, we sought to provide a broad description of current concerns about AI’s impact on the self and on society. We asked whether people in general held p(doom) views, and in addition asked a range of questions about AI attitudes, including people’s perceptions of AI’s impact on themselves and on society. Because evidence in the literature suggests that individual differences play an important role in perceptions of AI, we also aimed to uncover which of a broad range of individual difference factors might influence these sentiments within a nationally representative US sample. Given the known influence of country of origin, we chose to focus on a US sample for consistency. We asked participants to complete scales that measure personality traits, social traits, and attitudes toward AI technology, and to answer questions about mental health and diagnoses as well as demographics. We also tested whether brief interactions with chatbots, a popular application of AI based on large language models (LLMs), immediately before answering the questions would affect participants’ views. Our main goal was not to test any specific hypothesis or theory, which may be premature in this newly developing field. Instead, we aimed to provide a basic, necessary picture of how a range of individual difference variables may relate to attitudes toward AI technology at this moment in time, and in particular to assess p(doom) attitudes in order to determine whether the public really does suffer from a widespread crisis of existential fear toward AI.

Methods

All materials including anonymized data, survey, scales, and code are publicly available on our project’s OSF page.

Subjects

Full ethical approval was granted by Princeton University’s Institutional Review Board, and all research was performed in accordance with regulations for human subjects research. We collected a nationally representative sample of the US population through Prolific (N = 402) in June 2023. We screened participants using three requirements: we excluded anyone younger than 18, anyone who had participated in any of our prior studies, and anyone whose approval rate across all previous studies was less than 95%. All participants gave informed consent and were paid at a rate of 12 USD per hour. Participant ages ranged from 18 to over 65 (18–24: 19.2%, N = 77; 25–34: 37.4%, N = 150; 35–44: 22.4%, N = 90; 45–54: 14.2%, N = 57; 55–64: 5.5%, N = 22; 65+: 1.2%, N = 5). Participant gender identity was representative (men: 48.8%, N = 196; women: 48%, N = 193; non-binary/third gender: 2.7%, N = 11; prefer not to say: 0.5%, N = 2), as was race (White: 70%, N = 282; Black: 12%, N = 48; Asian: 8.7%, N = 35; Multiracial: 5.7%, N = 23; Other racial identities: 3%, N = 13; Native Hawaiian and Other Pacific Islander: 0.25%, N = 1). Level of education also fit a representative distribution, ranging from those without a high school degree to those with an advanced post-college degree. Representativeness was based on the US Census Bureau Race and Hispanic Origin data for 2023 (United States Census Bureau, 2023).

Exposure to AI Technology

We randomly assigned participants to one of two conditions: chatbot and control. Participants in the chatbot condition (47%, N = 187) were randomly assigned to interact with one of three popular chatbots (ChatGPT, Replika, or Anima) for at least 10 minutes immediately prior to answering the survey questions. They validated their engagement in the task by capturing and submitting screenshots with a visible time stamp at the start and end of their interaction with the chatbot. Participants in the control condition (53%, N = 215) had no interaction with a chatbot prior to the survey.

Survey

First, participants completed a demographics questionnaire. Next, in randomized order, they answered a series of questions and scales, which are listed below.

Our study design involved measuring characteristics of people that we thought might correlate with, or help explain, attitudes toward AI. We measured 29 individual difference factors, used as independent variables in regression analyses. We also measured 12 dependent variables that could serve as indicators of attitudes toward AI. We then performed regression analyses to determine which regressors could explain variance in the dependent variables.

Independent Variables (Regressors of Interest) in Regression Analysis

First, we describe the 29 independent variables, which were organized into three types.

Affinity for Technology Interaction (ATI)

We used the Affinity for Technology Interaction (ATI) scale to measure people’s level of interest in and experience with new technologies (Franke et al., 2019; Lezhnina & Kismih, 2020). The original scale contains nine items that measure people’s affinity for interacting with technical systems by having them rate their level of agreement with each item. We included two additional items related to AI-specific technologies: “I like to try out new artificial intelligence technologies like chatbots (e.g. ChatGPT)” and “I don’t really care how artificial intelligence works.” This second item was reverse-scored. Subjects rated whether they agreed with each item on a 7-point Likert scale, and we averaged the scores of all items to produce a single “ATI” score. Higher scores indicated greater affinity for technology.
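To make the scoring concrete, the following sketch illustrates the reverse-coding-and-averaging pattern used for the ATI and, with different items, for the TIPI and social trait scales described below. It is a minimal illustration with hypothetical data-frame column names, not our analysis code (which is available on the OSF page).

```python
import pandas as pd

def score_scale(df: pd.DataFrame, items: list, reverse: list,
                scale_max: int = 7) -> pd.Series:
    """Average Likert items into a single scale score, reverse-coding
    where needed. On a 1..scale_max scale, a reverse-scored item
    becomes (scale_max + 1) - x."""
    scored = df[items].copy()
    for col in reverse:
        scored[col] = (scale_max + 1) - scored[col]
    return scored.mean(axis=1)

# Hypothetical column names for the 11 ATI items; item 11 stands in for
# the reverse-scored AI item ("I don't really care how artificial
# intelligence works").
ati_items = [f"ati_{i}" for i in range(1, 12)]
# df["ATI"] = score_scale(df, ati_items, reverse=["ati_11"])
```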

The ATI scale was not used here as a measure of attitudes toward technology or AI, but rather as a measure of how intensively people use technology and how familiar they are with it. It could be that people with more intensive exposure to technology have a more optimistic view of AI. But it could also be that people with more intensive exposure to technology have more first-hand knowledge of the risks and harms of AI and thus might express more of a pessimistic or p(doom) attitude. Thus, the ATI was appropriate as an independent variable.

Personality

We used the Ten Item Personality Inventory (TIPI) to measure the Big Five Inventory (BFI) personality traits of Agreeableness, Extraversion, Conscientiousness, Neuroticism, and Openness to Experience. This truncated version of the BFI scale has temporal stability and strongly correlates with BFI scales that have more items (Furnham, 2008; Gosling et al., 2003; Nunes et al., 2018). Each personality trait in the TIPI is measured by averaging the scores of two items. For example, Agreeableness is measured by the following questions: “I see myself as: sympathetic, warm” and “I see myself as: critical, quarrelsome” (reverse-scored) rated on a 7-point Likert scale. Higher scores indicated higher levels of each trait.

Social Traits and Mental Health

We measured Sociability using the relevant items of the Self-Perception Profile for Adults (items 2, 14, 27, and 39) (Messer & Harter, 2012). Each item contained two parts, which we separated into two questions, one of each pair reverse-scored, for a total of eight questions such as “I feel that I am enjoyable to be with,” rated on a 7-point Likert scale from “Strongly disagree” to “Strongly agree.” We averaged all eight ratings to produce a single “Sociability” score for each subject. Higher scores indicated higher sociability.

We measured Social competence using the Perceived Social Competence Scale (Anderson-Butcher et al., 2008), which contains six items such as “I am good at making friends” that are rated on a 7-point Likert scale from “Strongly disagree” to “Strongly agree.” We averaged these six ratings to produce a single “Social competence” score for each subject. Higher scores indicated higher perceived social competence.

We measured Self-esteem by the Rosenberg Self-Esteem Scale (RSE) (Rosenberg, 1965), which contains 10 items such as “On the whole, I am satisfied with myself,” rated on a 7-point Likert scale from “Strongly disagree” to “Strongly agree.” Items 2, 5, 6, 8, and 9 are reverse-scored, and we averaged all 10 ratings to produce a single “Self-esteem” score for each subject. Higher scores indicated higher self-esteem.

We measured Loneliness by the UCLA Loneliness Scale (Russell et al., 1978), which contains 20 items “designed to measure one’s subjective feeling of loneliness as well as feelings of social isolation.” This scale contains items such as “I am unhappy doing so many things alone,” rated on a 4-point scale of: “I often feel this way” (4), “I sometimes feel this way” (3), “I rarely feel this way” (2), and “I never feel this way” (1). We added all 20 ratings together to produce a single “Loneliness” score for each subject. Higher scores indicated higher levels of loneliness.

We measured mental health and other diagnoses by two items: “Which of the following have you experienced” and “Of those selected, which have you been diagnosed with by a physician?” with the option to select as many of the 13 listed diagnoses as applied, such as “Depression,” “Anxiety,” and “Autism.”

We also measured six demographic variables: gender identity, age, race, education level, income, and relationship status.

Dependent Variables in Regression Analysis

We measured 12 variables to assess people’s attitudes toward AI. These dependent variables are listed below.

AI Sentiments and p(doom)

We created five questions to capture participants’ sentiments about AI’s large-scale impact. These questions were based on colloquial, popularized opinions about AI, reflecting p(doom) as well as views about relationships with AI and the moral status of AI (Roose, 2023). The five items were: “I worry that AI is very bad,” “I worry that AI will take over the world,” “I worry that AI will replace people,” “AI agents like chatbots, digital voice assistants, and robots would make good social companions,” and “I think AI should have moral rights.” For each item, participants indicated how much they agreed with it on a 7-point Likert scale from “Strongly disagree” to “Strongly agree.” Higher scores indicated higher agreement with the sentiment.

In addition to analyzing the five AI sentiment scores individually, we also combined the first three items (“I worry that AI is very bad,” “I worry that AI will take over the world,” and “I worry that AI will replace people”), averaging their ratings for each participant to create a single p(doom) score. Higher scores indicated more agreement with p(doom) views.

General Attitudes Toward Robots Scale (GAToRS)

We used a modified version of the General Attitudes Toward Robots Scale (GAToRS) (Koverola et al., 2022) to measure participants’ attitudes toward AI’s impact on their personal lives and on society as a whole. Composed of four subscales, GAToRS “differentiates (a) comfort and enjoyment around robots, (b) unease and anxiety around robots, (c) rational hopes about robots in general (at societal level) and (d) rational worries about robots in general (at societal level).” We adapted the scale to reflect attitudes toward artificial intelligence more generally. We did not use the General Attitudes toward Artificial Intelligence Scale (GAAIS) (Schepman & Rodway, 2020, 2022), because it groups attitudes into only two categories, positive and negative, whereas we were also interested in distinguishing two further categories, personal and societal. In our adaptation of GAToRS, we replaced the word “robots” with “artificial intelligence” for most items. If a scale item referred to entities that one could interact with generally, we replaced “robot” with “artificial intelligence agent such as a chatbot, digital voice assistant, or robot,” and we kept “robot” if the interaction mentioned was specifically physical. The GAToRS Personal Level Positive Attitude (P+) subscale reflects people’s perceptions about how AI would impact them positively on a personal or individual level, with items such as “I would feel relaxed talking with an artificial intelligence agent like a chatbot, digital voice assistant, or robot.” The Personal Level Negative Attitude (P−) subscale measures people’s perceptions about how AI would impact them negatively on a personal level.

The other two subscales, the Societal Level Positive Attitude (S+) and Societal Level Negative Attitude (S−) subscales, measure people’s perceptions about how AI would impact society, including items such as “Artificial intelligence is a good thing for society, because it helps people” (S+) and “I am afraid that artificial intelligence agents like chatbots, digital voice assistants, or robots will encourage less interaction between humans” (S−). All items were rated on a 7-point Likert scale from “Strongly disagree” to “Strongly agree.” We averaged the scores of the items pertaining to each subscale to create four GAToRS scores.

Social Interest

We also measured people’s social interest in engaging with a chatbot or a human using two questions: “I am interested in talking to a chatbot right now” and “I am interested in talking to another person right now,” rated on a 7-point Likert scale from “Strongly disagree” to “Strongly agree.” Higher scores indicated greater interest in engaging with the specified agent. Desire to engage with a chatbot was interpreted as a sign of a positive attitude toward chatbots, and desire to engage with a person was interpreted as a control measure, or baseline of social interest, for comparison.

Results

Internal Consistency

We first tested the internal consistency of each of our scales using Cronbach’s alpha (α). The first three AI sentiment variables, which were used to measure p(doom) attitudes, had good internal consistency (α > 0.8). This result justifies our procedure of combining the first three AI sentiments into a single p(doom) score. When all five of the AI sentiments were included, with reverse-scoring for items 4 and 5, the statistic fell below the 0.8 threshold we adopted (α = 0.71), suggesting differences between p(doom) attitudes and other attitudes toward AI and justifying our treatment of items 4 and 5 as separate measures.
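For reference, Cronbach’s α can be computed directly from item-level responses. The sketch below is a generic implementation of the standard formula, with hypothetical column names; it is an illustration, not our OSF analysis code.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances /
    variance of the summed scale), for k items (columns)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Alpha for the three p(doom) items (hypothetical column names):
# cronbach_alpha(df[["ai_bad", "ai_takeover", "ai_replace"]])
```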

We examined whether our adaptations to the ATI scale and the four GAToRS subscales impacted their internal consistency, and they did not (α > 0.8 in each case). Each of the remaining scales (Sociability, Social competence, Loneliness, and Self-esteem) also had good internal consistency (α > 0.8 for each scale).

Do People Have an Overall Positive or Negative View of AI?

Our first, central question was whether people generally hold positive or negative views of AI, and especially whether cataclysmic, p(doom) attitudes are prevalent. We examined each of the 12 dependent variables and asked whether, on each measure, participants tended to hold positive or negative views of AI.

Figure 1A shows the distribution of responses to the sentiment, “I worry that AI is very bad” (M = 3.29, SD = 1.88, Md = 3.00). The most frequent response is “Strongly disagree,” the mean corresponds to a response between “Slightly disagree” and “Neither agree nor disagree,” and the median corresponds to the response “Slightly disagree.” Because the distribution is bounded and ordinal and did not satisfy the Shapiro–Wilk normality test, we evaluated it with the non-parametric Wilcoxon signed-rank test (WSR). We found that the distribution of responses is significantly shifted away from the neutral answer of 4, in the direction of disagreement (WSR: V = 17,184, p < 0.0001). The result from this question suggests that people tended not to worry that AI is very bad.
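The same test logic applies to each panel of Fig. 1. The reported V statistics follow the naming convention of R’s wilcox.test; the sketch below uses scipy, whose test statistic is defined differently but whose two-sided p-value serves the same purpose. It is an illustration under those assumptions, not our OSF code.

```python
from scipy.stats import shapiro, wilcoxon

NEUTRAL = 4  # midpoint of the 7-point Likert scale

def test_shift_from_neutral(responses):
    """Check normality, then test for a shift away from the midpoint.

    scipy's one-sample Wilcoxon tests symmetry about 0, so we test the
    differences (response - NEUTRAL); responses exactly at the midpoint
    are dropped under the default zero_method.
    """
    _, p_normal = shapiro(responses)
    diffs = [r - NEUTRAL for r in responses]
    stat, p = wilcoxon(diffs, alternative="two-sided")
    return p_normal, stat, p

# The matched-pairs comparisons later in this section (e.g., GAToRS P+
# vs. P-) use the paired form of the same test: wilcoxon(p_plus, p_minus)
```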

Fig. 1

Density histograms of responses to the 12 dependent variables. A Density distribution of responses to AI sentiment 1, “I worry that AI is very bad.” The x-axis shows Likert-scale responses from “Strongly disagree” (1) to “Strongly agree” (7). The y-axis shows the proportion of responses. B Results for sentiment 2, “I worry that AI will take over the world.” C Results for sentiment 3, “I worry that AI will replace people.” D Results for p(doom) score (each subject’s score was an average of responses to sentiments 1, 2, and 3). The y-axis shows the proportion of responses and a smoothed density curve. E Results for sentiment 4, “AI agents like chatbots, digital voice assistants, or robots would make good social companions.” F Results for sentiment 5, “I think AI should have moral rights.” G Results for GAToRS P+ score, indicating perception that AI can benefit one’s personal life. H Results for GAToRS P− score, indicating perception that AI can harm one’s personal life. I Results for GAToRS S+ score, indicating perception that AI can benefit society. J Results for GAToRS S− score, indicating perception that AI can harm society. K Results for “I am interested in talking to a chatbot right now.” L Results for “I am interested in talking to another person right now”

Figure 1B shows the distribution of responses to the sentiment, “I worry that AI will take over the world.” The distribution is again significantly shifted toward disagreement (M = 3.39, SD = 1.81, Md = 3.00; V = 17,882, p < 0.0001). Figure 1C shows the distribution of responses to the sentiment, “I worry that AI will replace people.” The distribution is also significantly shifted toward disagreement (M = 3.50, SD = 1.93, Md = 3.00; V = 22,766, p < 0.0001).

Figure 1D shows the distribution of the composite p(doom) score, an average of the first three scores (A–C). As expected, given the previous three results, people’s opinions significantly disagreed with a p(doom) attitude. The distribution is shifted significantly below the neutral score of 4 (M = 3.40, SD = 1.64, Md = 3.33; V = 20,164, p < 0.0001). Overall, the distributions in Fig. 1A–D suggest that people’s views of AI are not strongly negative or indicative of fear and doom. The distributions are significantly shifted away from overt worries. Public perception of AI may be less fearful than is often portrayed in media.

Figure 1E shows the distribution of responses to the sentiment, “AI agents like chatbots, digital voice assistants, or robots would make good social companions.” Here, the responses are more evenly distributed but are significantly shifted toward disagreement (M = 3.65, SD = 1.66, Md = 4.00; V = 19,776, p < 0.0001). Figure 1F shows the distribution of responses to the sentiment, “I think AI should have moral rights.” The most frequent response is “Neither agree nor disagree,” but the distribution is significantly shifted toward disagreement (M = 3.03, SD = 1.70, Md = 3.00; V = 5,719.5, p < 0.0001). These results therefore show ambivalence toward AI: though negative sentiments are not generally held (Fig. 1A–D), there is still a general attitude that AI should not join the human social or moral family (Fig. 1E and F).

Figure 1G shows the distribution of GAToRS P+ scores, which indicate positive opinions about how AI will affect one’s personal life. The GAToRS P+ score is a composite (a mean of five 7-point items) and can therefore take on more values than the discrete 1–7 responses. However, the distribution did not satisfy the Shapiro–Wilk test for normality. Therefore, we analyzed the scores using the non-parametric WSR test to determine whether the distribution was significantly different from the neutral rating of 4. The distribution was significantly shifted toward agreement (M = 4.20, SD = 1.23, Md = 4.20; V = 42,912, p < 0.001). People tended to agree that AI can have a positive impact on their personal lives. Figure 1H shows the distribution of GAToRS P− scores, which indicate negative opinions about how AI will affect one’s personal life. The distribution is significantly shifted toward disagreement (M = 3.15, SD = 1.25, Md = 3.00; V = 12,280, p < 0.0001). People tended to disagree that AI can have a negative impact on their personal lives. Based on a Wilcoxon matched-pairs test for non-parametric variables, the GAToRS P+ score was significantly higher than the GAToRS P− score (V = 57,400, p < 0.0001). People were more likely to indicate that AI can have a positive impact on their personal lives than a negative one.

Figure 1I shows the distribution of GAToRS S+ scores, which indicate positive opinions about how AI will affect society generally. The distribution is significantly shifted toward agreement (M = 5.29, SD = 1.07, Md = 5.40; V = 69,469, p < 0.0001). People tended to indicate that AI can have a positive impact on society. Figure 1J shows the distribution of GAToRS S− scores, which indicate negative opinions about how AI will affect society generally. The distribution is also significantly shifted toward agreement (M = 5.06, SD = 1.05, Md = 5.20; V = 68,523, p < 0.0001). Therefore, participants on average agreed that AI can both positively (Fig. 1I) and negatively (Fig. 1J) impact society. Based on a Wilcoxon matched-pairs test, the GAToRS S+ score was significantly higher than the GAToRS S− score (V = 42,806, p < 0.005). People were more likely to indicate that AI can have a positive impact on society than a negative one.

Finally, Fig. 1 also shows the distributions of responses to the social interest variables. Figure 1K shows the distribution of responses to the statement, “I am interested in talking to a chatbot right now.” The distribution is significantly shifted toward agreement (M = 4.34, SD = 1.88, Md = 5.00; V = 30,612, p < 0.01). For comparison, Fig. 1L shows the distribution of responses to the statement, “I am interested in talking to another person right now.” Again, the distribution is significantly shifted toward agreement (M = 4.59, SD = 1.67, Md = 5.00; V = 37,782, p < 0.0001). The interest in talking to another person was significantly greater than the interest in talking to a chatbot based on a Wilcoxon matched-pairs test (V = 15,743, p < 0.05). These results suggest that people are generally positive toward AI and on average interested in talking to an AI, though slightly more interested in talking to a real person. The findings shown in Fig. 1, encompassing 12 different measures, provide a general picture of attitudes toward AI. Within our sample, perception of AI is mainly positive, not negative, and people do not in general hold strong p(doom) attitudes. Yet their positive attitudes are not uniform. For example, opinions were more evenly divided on whether AI can make good social companions, and most people indicated that AI should not have moral rights.

Does a Brief Interaction with a Chatbot Affect Opinions of AI?

We randomly assigned participants to two groups: those who spent an initial 10 minutes interacting with a chatbot before taking the inventories, and those who did not. We evaluated whether the chatbot condition had a significant effect on any of the 12 dependent variables shown in Fig. 1. Because our variables were not normally distributed and our conditions had unequal numbers of participants, we performed Mann–Whitney U tests to compare the distributions for those who interacted with a chatbot and those who did not. The results were Bonferroni-corrected for 12 comparisons (adjusted significance level: p = 0.004).
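In outline, each of the 12 comparisons reduces to a two-sided Mann–Whitney U test against the adjusted threshold; a minimal sketch (variable names hypothetical, not our OSF code):

```python
from scipy.stats import mannwhitneyu

ALPHA = 0.05 / 12  # Bonferroni adjustment for 12 dependent variables

def compare_conditions(chatbot_scores, control_scores):
    """Two-sided Mann-Whitney U test; robust to non-normal
    distributions and unequal group sizes."""
    u_stat, p = mannwhitneyu(chatbot_scores, control_scores,
                             alternative="two-sided")
    return u_stat, p, p < ALPHA
```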

Only one significant effect of the chatbot manipulation emerged. Participants in the chatbot interaction condition had significantly less interest in talking to a chatbot afterward than participants who did not interact with a chatbot (chatbot condition: M = 3.55, SD = 1.83, Md = 4.00; control condition: M = 5.02, SD = 1.64, Md = 5.00; Mann–Whitney U test: W = 29,030, p < 0.00001). Presumably, participants in the chatbot condition, having just interacted with a chatbot, had satisfied their curiosity and were therefore less interested in talking to another chatbot. In comparison, the chatbot manipulation had no significant effect on participants’ ratings of their interest in talking to another person (chatbot condition: M = 4.71, SD = 1.57, Md = 5.00; control condition: M = 4.49, SD = 1.74, Md = 5.00; Mann–Whitney U test: W = 18,818, p = 0.26).

Which Individual Difference Factors Predict p(doom)?

A primary goal of this study was to better understand the origin of p(doom) attitudes, or the belief that AI poses a significant risk to human civilization. We therefore tested whether any of the 29 measured individual difference factors significantly predicted the p(doom) score. The individual difference factors included the ATI score, five BFI personality traits (Agreeableness, Extraversion, Conscientiousness, Neuroticism, and Openness to Experience), four social traits (Social competence, Sociability, Self-esteem, and Loneliness), thirteen mental and other health diagnoses, and six demographic variables.

We standardized all continuous variables prior to analysis, as not all variables were measured on the same scale; standardization placed the resulting regression coefficients in comparable units. We then performed separate, univariate regressions to test whether each continuous independent variable was correlated with the p(doom) score, and applied ANOVA for categorical variables such as gender and race. We used Bonferroni correction for 29 comparisons (adjusted significance level: p = 0.0017). We evaluated each significant linear regression shown in Table 1 to determine whether a linear or quadratic model best explained it, using the Akaike Information Criterion (AIC) and standard AIC delta (δ) thresholds (0 ≤ δ ≤ 2, no clear preference; 2 < δ ≤ 10, moderate preference for the quadratic model; δ > 10, strong preference for the quadratic model). The linear model was retained if δ ≤ 2 and replaced by the quadratic model if δ > 2. The appropriate model for each relationship is listed in the Model column. For the relationship between Agreeableness and p(doom), for example, δ = 11.45, signifying that the quadratic model was more appropriate, and the statistics shown in the table are from the quadratic fit.
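The model-selection step can be sketched as follows, assuming δ is the linear model’s AIC minus the quadratic model’s AIC (the convention implied by the thresholds above); this is an illustration, not our OSF code.

```python
import numpy as np
import statsmodels.api as sm

def standardize(v: np.ndarray) -> np.ndarray:
    return (v - v.mean()) / v.std(ddof=1)

def fit_and_select(x: np.ndarray, y: np.ndarray):
    """Fit linear and quadratic OLS models on standardized scores and
    choose between them by AIC delta (retain linear unless delta > 2)."""
    x, y = standardize(x), standardize(y)
    linear = sm.OLS(y, sm.add_constant(x)).fit()
    quadratic = sm.OLS(y, sm.add_constant(np.column_stack([x, x ** 2]))).fit()
    delta = linear.aic - quadratic.aic
    return ("quadratic" if delta > 2 else "linear"), delta, linear, quadratic
```

Under this rule, the Agreeableness relationship (δ = 11.45) falls in the strong-preference band, so the quadratic fit is reported.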

Table 1 Significant relationships between independent variables and the p(doom) score at a Bonferroni-corrected threshold of p = 0.0017. All scores were standardized, including the p(doom) score. The Independent column refers to the independent variable. The Dependent column refers to the dependent variable. The Model column indicates the appropriate model for the relationship: quadratic, linear, or ANOVA. The p column indicates significance. The Effect column indicates the quadratic term estimate for quadratic models (shown in bold), the correlation coefficient r for linear models, and the difference between means for ANOVA models. The SE column indicates the standard error, R2 indicates the variance explained, F indicates the F-statistic, t indicates the t-value, and CI lower and CI upper indicate the 95% confidence interval

Six significant relationships were found and are listed in Table 1. First, the ATI score had a significant negative relationship with the p(doom) score. People who had greater affinity for technology tended to score lower on the p(doom) measure; they were not as worried about the negative impacts of AI. Figure 2A and B show two ways of visualizing that relationship. In Fig. 2A, for each subject, the standardized ATI score was plotted on the x-axis and the standardized p(doom) score on the y-axis. With 402 subjects, the resulting display is crowded and the underlying trend is not apparent. In contrast, Fig. 2B uses a common method of visualizing trends in large samples: all subjects who shared the same score on the x-axis were grouped, their Y scores were averaged, and that average was plotted. Comparing Fig. 2A to B shows the benefit of this grouping; the relationship between the variables becomes visually apparent. The method was used only for graphical purposes. All analyses were performed on ungrouped data.
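The grouping used in Fig. 2B amounts to averaging the y values within each distinct x value; a minimal sketch (function and column names hypothetical):

```python
import pandas as pd
import matplotlib.pyplot as plt

def plot_grouped_trend(x: pd.Series, y: pd.Series, xlab: str, ylab: str):
    """Average y within each distinct x value and plot the group means.

    Used only for visualization; all statistics were computed on the
    ungrouped, subject-level data.
    """
    means = pd.DataFrame({"x": x, "y": y}).groupby("x")["y"].mean()
    plt.scatter(means.index, means.values)
    plt.xlabel(xlab)
    plt.ylabel(ylab)
    plt.show()

# e.g., plot_grouped_trend(df["ati_z"], df["pdoom_z"],
#                          "ATI (SD units)", "p(doom) (SD units)")
```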

Fig. 2

Two ways to visualize the regression data. A p(doom) score (y-axis) plotted against ATI score (x-axis) with jitter for overlapping points. Standardized scores were plotted in units of SD from the mean. Every subject contributed one data point, for 402 points displayed. B Same as A but using a grouping method to better visualize the underlying trend. All subjects who shared the same standardized X value were grouped, and the Y values within each group were averaged and plotted

P(doom) attitudes were also significantly correlated with personality and social traits. These relationships are shown in Fig. 3. As shown in Fig. 3A, the personality trait of Agreeableness had a significant quadratic relationship with the p(doom) score. At the low end of the Agreeableness range, the data showed a rising trend, in which more Agreeableness was associated with a more negative attitude toward AI. Through the middle and high end of the Agreeableness range, the data showed a falling trend, in which more Agreeableness was associated with a less negative attitude toward AI. As shown in Fig. 3B, the personality trait of Neuroticism was significantly positively correlated with the p(doom) score; as neuroticism increased, people were more likely to hold negative views of AI. Figure 3C shows that the social trait of Self-esteem was significantly negatively correlated with the p(doom) score; as self-esteem increased, people were less likely to hold negative views of AI. Figure 3D shows that the social trait of Loneliness was significantly positively correlated with the p(doom) score; as loneliness increased, people were more likely to hold negative views of AI. These four correlations form a consistent pattern: the personality traits (Strickhouser et al., 2017) and social traits (Holt-Lunstad et al., 2015; Zell & Johansson, 2024) associated with poorer health are the ones associated with negative, p(doom) attitudes.

Fig. 3

Five significant relationships between regressors of interest and the p(doom) score. A Relationship between p(doom) and Agreeableness. Standardized scores were plotted in units of SD from the mean. All data sharing the same X value were grouped, and the mean of their Y values was plotted. A quadratic model was found to be appropriate and the best-fit quadratic curve is shown. B Relationship between p(doom) and Neuroticism. C Relationship between p(doom) and Self-esteem. D Relationship between p(doom) and Loneliness. E Violin plot for ANOVA results of p(doom) differences by gender, with mean and confidence intervals plotted in red. The groups that were significantly different from one another are designated with an asterisk

As shown in Fig. 3E, we obtained only one significant effect of a demographic, categorical variable on p(doom) attitudes. Women had significantly higher p(doom) scores than men.

Correlational Relationships Among Remaining Variables

Finally, we used regression analyses to explore whether any of the 29 individual difference factors was significantly correlated with the 11 remaining dependent variables: the five AI sentiments, the four GAToRS scores, people’s rated interest in talking to a chatbot, and people’s rated interest in talking to another person. We performed separate, univariate regressions to test whether each independent variable was correlated with each dependent variable. With 29 independent variables and 11 dependent variables, we tested 319 relationships and therefore used the corresponding Bonferroni correction (adjusted significance level: p = 0.00015). For each linear regression that passed significance in this initial test, we evaluated whether a linear or quadratic model best explained the relationship using the AIC, as described above.
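In outline, this screening is a loop over the 29 × 11 grid with the Bonferroni-adjusted threshold, with the AIC comparison from the earlier sketch applied to the survivors (column names hypothetical, not our OSF code):

```python
import itertools
import statsmodels.api as sm

ALPHA_GRID = 0.05 / 319  # Bonferroni adjustment for 29 x 11 = 319 tests

def screen_relationships(df, iv_columns, dv_columns):
    """Univariate OLS for each IV-DV pair; keep Bonferroni survivors."""
    hits = []
    for iv, dv in itertools.product(iv_columns, dv_columns):
        model = sm.OLS(df[dv], sm.add_constant(df[iv])).fit()
        if model.pvalues[iv] < ALPHA_GRID:
            hits.append((iv, dv, model))
    return hits
```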

Twenty-seven relationships between independent variables and dependent variables passed the Bonferroni-corrected threshold for significance. They are listed in Table 2.

Table 2 Significant correlations from univariate regressions between the remaining regressors of interest (29) and dependent variables (11) at a Bonferroni-corrected threshold of p = 0.00015. All variables present were treated as continuous and standardized. The Independent column refers to the independent variable. The Dependent column refers to the dependent variable. The Model column indicates the appropriate model for the relationship: quadratic or linear. The p column indicates significance. The Effect column indicates the quadratic term estimate for quadratic models (shown in bold) and the correlation coefficient r for linear models. The SE column indicates the standard error, R2 indicates the variance explained, F indicates the F-statistic, t indicates the t-value, and CI lower and CI upper indicate the 95% confidence interval

As shown in Fig. 4, the ATI score was significantly correlated with six dependent variables. ATI was significantly negatively correlated with the sentiment “I worry that AI will take over the world,” significantly positively correlated with the sentiment “AI agents would make good social companions,” significantly positively correlated with the GAToRS P+ score, had a significant inverse quadratic relationship with the GAToRS P− score, was significantly positively correlated with the GAToRS S+ score, and was significantly positively correlated with interest in talking to a chatbot. These relationships show that as people’s exposure to and familiarity with technology increased, their positive views of AI tended to increase and their negative views tended to decrease.

As shown in Fig. 5, personality traits significantly predicted people’s attitudes toward AI. The personality trait of Agreeableness significantly predicted seven dependent variables: the sentiment “I worry that AI is very bad,” the sentiment “I worry that AI will take over the world,” the sentiment “I worry that AI will replace people,” the GAToRS P+ score, the GAToRS P− score, the GAToRS S+ score, and interest in talking to a person. As shown in Fig. 5, all of these relationships better fit a quadratic than a linear model, such that negative views of AI tended to be greatest for people who scored in the middle of the Agreeableness scale, and positive views of AI were more common for people who scored at the low or high end of Agreeableness.

Fig. 4

Six significant relationships between ATI and dependent variables. A Relationship between ATI and the sentiment “I worry that AI will take over the world.” Standardized scores were plotted in units of SD from the mean. All data sharing the same X value were grouped, and the mean of their Y values was plotted. B Relationship between ATI and the sentiment “AI agents such as chatbots, digital voice assistants, and social robots would make good social companions.” C Relationship between ATI and GAToRS P+. D Relationship between ATI and GAToRS P−. A quadratic model was found to be more appropriate than a linear model, and the best-fit quadratic curve is shown. E Relationship between ATI and GAToRS S+. F Relationship between ATI and interest in talking to a chatbot

Fig. 5

Eleven significant relationships between personality traits and dependent variables. A Relationship between Agreeableness and the sentiment “I worry that AI will take over the world.” Standardized scores were plotted in units of SD from the mean. All data sharing the same X value were grouped, and the mean of their Y values was plotted. A quadratic model was found to be more appropriate than a linear model, and the best-fit quadratic curve is shown. B Relationship between Agreeableness and the sentiment “I worry that AI will replace people.” C Relationship between Agreeableness and the sentiment “I worry that AI is very bad.” D Relationship between Agreeableness and GAToRS P+. E Relationship between Agreeableness and GAToRS P−. F Relationship between Agreeableness and GAToRS S+. G Relationship between Agreeableness and interest in talking to a person. H Relationship between Neuroticism and the sentiment “I worry that AI will take over the world.” I Relationship between Neuroticism and GAToRS P+. J Relationship between Neuroticism and GAToRS P−. K Relationship between Extraversion and interest in talking to a person

Figure 5 also shows that the personality trait of Neuroticism was significantly positively correlated with the sentiment “I worry that AI will take over the world,” significantly negatively correlated with the GAToRS P+ score, and significantly positively correlated with the GAToRS P− score. These results suggest that people with greater neuroticism are both more likely to hold negative opinions about AI and less likely to hold positive views about AI.

Figure 5 also shows that the personality trait of Extraversion was significantly positively correlated with interest in talking to a person.

As shown in Fig. 6, social traits also significantly predicted people’s attitudes toward AI. Social competence was positively correlated with the GAToRS P+ score, negatively correlated with the GAToRS P− score, and positively correlated with interest in talking to a chatbot. Sociability was negatively correlated with the sentiment “I worry that AI will take over the world,” positively correlated with the GAToRS P+ score, and negatively related to the GAToRS P− score. Self-esteem was negatively correlated with the sentiment “I worry that AI will take over the world” and negatively correlated with the GAToRS P− score. All of these relationships involving social traits point to a common trend: the more socially healthy people report that they are, the more likely they are to hold positive views of AI, and the less likely they are to hold negative views.

Fig. 6

Eight significant relationships between social traits and dependent variables. A Relationship between Social competence and GAToRS P+. Standardized scores were plotted in units of SD from the mean. All data sharing the same X value were grouped, and the mean of their Y values was plotted. B Relationship between Social competence and GAToRS P−. C Relationship between Social competence and interest in talking to a chatbot. D Relationship between Sociability and the sentiment, “I worry that AI will take over the world.” E Relationship between Sociability and GAToRS P+. F Relationship between Sociability and GAToRS P−. A quadratic model was found to be more appropriate than a linear model, and the best-fit quadratic curve is shown. G Relationship between Self-esteem and the sentiment, “I worry that AI will take over the world.” H Relationship between Self-esteem and GAToRS P−

Only one demographic factor emerged as a significant predictor. Age was significantly negatively correlated with the GAToRS P− score, suggesting that older people were less likely to think that AI can have a negative impact on their personal lives.

Discussion

In this study, we aimed to provide a broad description of public attitudes toward AI within the USA and to identify relationships between these attitudes and some of the variables that define individual differences. Our main goal was not to test any specific hypothesis or theory, which may be premature in this newly developing field, but instead to provide a basic picture of how a range of individual difference variables may relate to attitudes toward AI technology. We found that worries about AI’s negative large-scale impact, or p(doom) attitudes, were not as widely held as expected. On average, people disagreed with the sentiments that AI is very bad, that AI will take over the world, and that AI will replace people. At the same time, on average, people did not think AI agents would make good social companions, nor did they think that AI should have moral rights. People also indicated agreement that AI could positively impact their personal lives and society as a whole. It is interesting to note that these positive views were more pronounced for perceptions of impacts on the participants’ personal life, whereas the positive bias was less pronounced for perceptions of impacts on society in general.

We tested whether requiring people to engage with a chatbot prior to testing would alter their attitudes toward AI. We reasoned that a brief direct experience with popular AI agents, either educating those who had no prior experience or reminding those who did, might affect attitudes toward the technology. However, we found no significant effect of this manipulation on attitudes toward AI, except in one minor, unsurprising respect: interacting with a chatbot beforehand reduced people’s interest in having another conversation with a chatbot immediately afterward. The exposure seemed merely to satisfy their curiosity, such that they were less likely to want another session with a chatbot. The largely null results also suggest that adding an extra 10 minutes to the start of the study did not significantly affect responses to the remaining questions and measures.

We also tested whether individual differences significantly predicted people’s attitudes toward AI. Of the 29 individual difference factors tested, nine (ATI, Agreeableness, Neuroticism, Loneliness, Social competence, Sociability, Self-esteem, Gender, and Age) significantly predicted at least some aspect of people’s attitudes toward AI. Two individual difference factors predicted the largest number of attitude measures: affinity for technology, as measured by the ATI scale, and the personality trait of Agreeableness, as measured by the TIPI. People with greater affinity for technology were more likely to endorse the belief that AI has a positive impact on the self and on society and less likely to think AI has a negative impact. The relationship with Agreeableness was quadratic and therefore more complex. At the very low end of the range, subjects tended to have more positive views toward AI. Over the middle range of Agreeableness scores, subjects tended to have more negative views. Then, as Agreeableness increased further, views of AI became more positive. Through the majority of the range, excluding the lower end, it appears that as Agreeableness increases, positive views of AI increase and negative views decrease.

A larger story emerged from considering all individual difference factors. The more familiar people were with technology according to the ATI scale, the more positive their attitude toward AI. The more agreeable a person was, the more positive their attitude. The less neurotic they were, the more positive their attitude. The more socially healthy participants were based on the social trait scales, the more positive their attitude. All of these trends suggest that negative attitudes toward AI stem partly from lack of experience and partly from personal traits that might incline one toward a generally negative stance. Our results provide evidence that negative or positive opinions toward AI might come partly from personal characteristics, rather than from any particular dangers or benefits of AI.

This pattern of results does not mean AI is not dangerous. It means that public opinions, including the very negative attitudes such as p(doom), are being shaped partly by individual psychological traits. In that sense, AI is similar to any other new technology. There is an undercurrent of social resistance to it, but that resistance is driven partly by psychological factors and by lack of familiarity.

As important as it is to note the variables that predicted attitudes toward AI, it may be just as important to note the large range of variables that appeared to have no relationship to people’s attitudes toward AI. These variables included the personality traits of Extraversion, Conscientiousness, and Openness to Experience; all demographic variables except for gender and age; and all mental health issues and other diagnoses such as Depression, Anxiety, and Autism.

Comparison to Prior Research on Individual Differences and AI Attitudes

Prior research in human-AI interaction has shown null or conflicting results when manipulating the characteristics of AI and assessing participants’ positive or negative views. The uncanny valley effect is one example of this trend, as previously discussed. Another example is anthropomorphism research. To understand how people perceive and behave toward AI, studies investigating anthropomorphism have, for example, manipulated the number of humanlike physical features an AI agent has, the apparent mental capacities of AI agents, and other characteristics of the AI in human-AI interactions (Bach et al., 2024; Ciechanowski et al., 2019; Guingrich & Graziano, 2024; Quadflieg et al., 2016; Stein et al., 2020; Waytz et al., 2010). These studies show conflicting effects of experimental condition. Yet these same studies, among others, have found significant relationships between a psychological trait (the tendency to anthropomorphize non-human agents) and perceptions of AI (Krämer & Bente, 2021). These previous findings, together with our present findings, suggest that individual difference factors play a role in people’s attitudes toward AI, possibly even a stronger or more consistent role than the characteristics of the AI itself.

Of the Big Five personality traits, we found that Agreeableness was by far the most predictive of opinions toward AI. We also found that Neuroticism predicted negative opinions toward AI. These findings are broadly consistent with previous research, in which personality traits were found to relate to attitudes toward technology (Dikaya et al., 2021; Fuglsang, 2024; Sindermann et al., 2020). In the literature, evidence generally favors Agreeableness over the other BFI traits as a predictor of attitudes toward AI. Agreeableness has been shown to positively correlate with optimism, while those who are more neurotic tend to be less optimistic (Sharpe et al., 2011), which may explain in part why we observed that Agreeableness related to more positive attitudes toward AI and Neuroticism related to more negative attitudes.

We found that social traits such as social competence and loneliness significantly predicted attitudes toward AI. Higher reported social ability and social health were associated with more positive opinions of AI. Our present findings are, in one way, potentially in conflict with recent research. At least some research suggests that AI, especially chatbot technology, is increasingly used to alleviate loneliness and perhaps also used as a safe social space for those with less social capability or opportunity (Guingrich & Graziano, 2024, 2025; Maples et al., 2024; Xie & Pentina, 2022). Given that relationship, one might expect lower social health to be associated with greater reliance on and positive attitudes toward AI. One possibility is that the attraction to AI by those with social needs is such a new phenomenon, affecting a smaller subset of people, that it is not reflected in the broader sample that we tested here. Another possibility is that the specific instance of AI used by those with lower social health, i.e., companion chatbots such as Replika, is perceived differently than AI technology in general, such as predictive algorithms in social media, healthcare, and policing.

In this study, we found that two demographic variables significantly predicted attitudes toward AI. First, there was a significant difference between men and women with respect to p(doom): participants who identified as women endorsed p(doom) views more than men did. Second, age was significantly negatively correlated with the GAToRS P− score; older participants were less likely to indicate that AI can have a negative impact on their personal life. Previous research on the relationship between gender, age, and attitudes toward AI is somewhat conflicting. Some studies found that gender, but not necessarily age, related to negative attitudes and trust toward robots and other AI agents (Albarrán Lozano et al., 2021; Grassini & Ree, 2023; Kaplan et al., 2021; Nomura et al., 2006). In contrast, Lee et al. (2017) found that age correlated with more negative attitudes toward automated vehicles. Individual difference analyses using data from the 2012 Eurobarometer Survey on Public Attitudes towards Robots (Loffredo & Tavakkoli, 2016) suggested that both age and gender related to attitudes toward technology.

Limitations

The present study was limited to a snapshot in time (June 2023), a single country (the USA), and a limited set of measured variables (29 independent variables and 12 dependent variables). It provides only a glimpse of the factors that may influence opinions toward AI, and those opinions are in constant social flux. We therefore encourage future research to chart the changing public perception and to examine other countries and other potential predictive factors.

Our study is limited by the same concerns that limit other self-report studies that use Likert-scale questions. We report a general positive bias toward AI technology, yet the absolute values on the rating scales could be affected by social desirability biases. It is unlikely that an overall bias toward low or high numbers can explain the results, since for some questions, a low value corresponded to the more positive attitude and for other questions, a high value corresponded to the more positive attitude. It is also unlikely that an overall bias toward providing positive attitudes can explain the results, since some questions yielded a more negative answer, and other questions yielded a more positive answer. This mixture of directions of response is an important part of study design for such questions. It is still important, however, to be cautious when interpreting self-report data because of the possibility of hidden inaccuracies or biases.

Our study sample was limited by the potential biases of using an online platform to recruit participants. It could be that the sample was biased toward those with more comfort with technology. This concern is legitimate: a segment of society unwilling or unable to use an online site was undoubtedly missing from our sample. However, we suggest that the present findings are meaningful nonetheless, since an estimated 95% of people in the USA have broadband connections and are comfortable routinely engaging in monetary transactions online. Our sample ranged over the full scale of educational status. The ATI score, which measures familiarity with technology, also ranged over a broad spectrum and resembled the distributions published by the creators of the scale (Franke et al., 2019). There is no evidence that Prolific samples people who are any more technologically sophisticated than, for example, the people who use online shopping sites or any other online site. Moreover, it is not clear how to obtain a more inclusive, or less technologically biased, sample; almost all sampling methods, including those aimed at testing people in person, begin with an online ad or announcement. Prolific is well-vetted and commonly used in psychological science because it captures approximately representative samples. Therefore, we urge caution and recognize that our sample is likely missing the least technologically literate segments of US society, yet we also suggest that it is probably representative of the US online population and therefore still a meaningful assessment of public opinion.

Lastly, our analyses were limited by the correlational nature of the relationships we evaluated. Causal inferences cannot be drawn.

Our research suggests that at this moment in time, in a representative online US sample, extreme, negative p(doom) attitudes are not the norm. The public is generally positive toward AI, in terms of its impact on their personal lives and on society. People with more experience and knowledge about technology, and people who are more socially healthy and agreeable, tend to be more positive toward AI technology. In these ways, the evolving public reaction to AI is similar to the societal reaction to any fundamental new technology.