Introduction

The outbreak of the COVID-19 pandemic saw scientists recommending preventive measures such as physical distancing and mask wearing. From the start, these measures were controversial, with some sectors of the public questioning their necessity and efficacy (DeMora et al., 2021; Simonov et al., 2020). ‘COVID-19 has shown us in the starkest terms—life and death—what happens when we don’t trust science and defy the advice of experts’ (Oreskes, 2021, p. x).

Policy-makers and scientific institutions have made protecting or rebuilding trust in science a priority: the US President’s Chief Medical Adviser stated that ‘Biden’s real COVID-19 challenge is restoring a nation’s trust in science’ (Fauci, 2020) and the chief executive officer of the American Association for the Advancement of Science echoed similar priorities under the headline ‘Why we must rebuild trust in science’ (Parikh, 2021).

What, specifically, did trust in science achieve as people faced the pandemic? Trust in science correlated positively with people’s adherence to pandemic measures (Bicchieri et al., 2021; Dohle et al., 2020; Mohammed et al., 2020; Pagliaro et al., 2021; Petersen et al., 2021; Plohl and Musil, 2021; Rothmund et al., 2020; Sailer et al., 2021; Stosic et al., 2021). Trust thus seems to be a good way to protect society from major public health hazards by encouraging adherence to official guidelines. However, we argue that this conclusion is premature: reported associations between trust and adherence to guidelines are not enough to identify the specific role of trust in science in the pandemic, let alone justify that role.

How can trust in science be conceptualised?

Trust in science is a complex topic. It is thus worth first identifying what aspect is most relevant for understanding whether trust in science enhances adherence to pandemic prevention measures, and if not, what its role in the pandemic is.

In general terms, one individual trusts another (or an institution or a system) when the individual is vulnerable to or dependent on that other in some way and accepts the risks entailed in this dependency because the other shows features such as competence or benevolence, or because doing so reduces the complexity of the individual’s decision-making (Hendriks et al., 2016; Larson et al., 2018; Siegrist, 2021). In more specific terms, we are concerned with epistemic trust: trust in the knowledge produced by scientists (Hendriks et al., 2016; Irzik and Kurtulmus, 2019).

Two complementary aspects of epistemic trust are commonly studied: a normative aspect and a more pragmatic one. The normative question is: why should people trust in science? Answers to this question tend to spell out philosophical conditions under which trust is warranted (Irzik and Kurtulmus, 2019), and may focus on the reliability of science as a process, including the interplay between criticism and consensus among diverse scientists (Oreskes, 2021), or the hallmarks of trustworthiness that people use in judging who to trust (Hendriks et al., 2015).

The pragmatic question is: do people actually trust science? This focuses more on socio-psychological factors (Irzik and Kurtulmus, 2019), and this is what researchers are more concerned with when, for instance, they want to know if trust in science has been stable during the pandemic (Agley, 2020; Sibley et al., 2020). In probing whether trust in science encourages adherence to pandemic measures, we are mainly concerned with this pragmatic question: to what extent do people trust what scientists say, and is this trust associated with better adherence to science-based policies?

This is still a matter of trust because lay-people lack access to the data and expertise that support scientists’ claims. We note, though, that such trust is closely related to a number of similar concepts, such as credibility (Hartman et al., 2017) and confidence (Siegrist, 2021), and it is not always clear just how these are to be distinguished.

We note that in some work on trust in science, distrust is not merely the mirror image of trust because there may be multiple species of distrust (Lewicki et al., 1998; Tranter and Booth, 2015). However, in the claims we are scrutinising, the main concern derives from reported associations between higher (vs. lower) trust in science and better (vs. worse) adherence. Our immediate focus, then, is on degrees of trust rather than types of (dis)trust.

Can trust in science explain adherence to pandemic rules?

Multiple studies report that trust in science is associated with better adherence to prevention measures (Bicchieri et al., 2021; Dohle et al., 2020; Mohammed et al., 2020; Pagliaro et al., 2021; Petersen et al., 2021; Plohl and Musil, 2021; Rothmund et al., 2020; Sailer et al., 2021; Stosic et al., 2021). A common conclusion is that trust in science is important precisely because it promotes adherence (Bicchieri et al., 2021; Dohle et al., 2020; Mohammed et al., 2020; Pagliaro et al., 2021; Plohl and Musil, 2021; Sailer et al., 2021; Stosic et al., 2021). Correspondingly, as lower trust is associated with lower adherence to prevention measures, this feeds into calls for trust to be restored (Fauci, 2020; Parikh, 2021).

However, trust in science has been fairly stable in the pandemic in some countries (Agley, 2020; Sibley et al., 2020); in others it even increased early in the pandemic (Wissenschaft im Dialog, 2020), which is the period covered by our data. Perhaps, then, ‘trust in science is not the problem’ (Leshner, 2021). But in that case, why is the belief that it must be restored for better adherence so prevalent? An early view of the public’s understanding of science, the ‘deficit model’, explained negative attitudes towards science as being due to a deficit of knowledge. Although more recent work has highlighted the limitations of the deficit model (for reviews, see Ahteensuu, 2012; Brossard and Lewenstein, 2009; Gregory and Lock, 2008; Sturgis and Allum, 2004), scientists communicating with the public or with policy-makers may still rely heavily on the deficit view (Simis et al., 2016). Perhaps, then, calls by prominent scientists to rebuild trust in science merely reflect the persistence of a deficit model, one that has shifted the blame from a lack of knowledge to a lack of trust.

In moving beyond the limitations of deficit models, we should not immediately blame a lack of trust for low adherence to pandemic measures, but should rather consider what factors might contribute to low trust, or what factors apart from trust explain behaviour in the pandemic (Leshner, 2021). Situating trust in science in this wider context can help identify its specific role in the pandemic, and also in responding to future threats.

What other factors might explain adherence to pandemic rules?

A primary issue is whether trust in science influenced approval of prevention measures in addition to adherence to those measures. Research on social norm change has shown that approval (positive attitudes to norms) and adherence (behaviour in line with norms) are two distinct mechanisms (Bicchieri, 2016). A distinction between these mechanisms has already been observed in the pandemic (Betsch et al., 2020; Dohle et al., 2020). The worry is that people who do not approve of new norms may nonetheless adhere to them because of coercion, fear or propaganda, and in these cases adherence is often fragile or short-lived (Mercier, 2017). In contrast, we would hope that any effect of trust in science is robust and long-lived, in which case it should change minds, not just coerce behaviour. Indeed, it should affect behaviour precisely because it has changed minds.

A second issue is whether trust in science still matters for behaviour change once the effects of social conformity are accounted for. People often trust and conform to others around them (Cialdini and Goldstein, 2004). If people prefer to associate with like-minded others, their adherence may be misattributed to trust in science when it actually stems from social conformity. Indeed, one’s social circle strongly influenced people’s adherence to COVID-19 rules (Chevallier et al., 2021; Moehring et al., 2021).

Finally, the role that trust in science played in the public’s adherence to COVID-19 measures, even before the divisive issue of vaccination was at play, is unlikely to have been consistent from group to group. Worldview- or value-based factors such as political ideology vary across groups, and are important components of attitudes towards science (Brossard and Lewenstein, 2009; Gauchat, 2012; Hornsey and Fielding, 2017; Rutjens et al., 2018a; Sturgis and Allum, 2004). Conservatives typically trust science less (Gauchat, 2012), but they are more likely to follow COVID-19 rules when they trust science more (Koetke et al., 2021; Plohl and Musil, 2021). It is thus important to consider how ideology impacts the role of trust in science.

How does trust in science vary across countries?

Much work on trust in science has focused on the USA (Diehl et al., 2021; Engels et al., 2013). However, levels of trust in science vary across countries (Borgonovi and Pokropek, 2020; Sturgis et al., 2021), as do associations between trust and other factors, such as political ideology (Pechar et al., 2018; Pennycook et al., 2020). Thus, a crucial part of understanding the importance of trust in science for adherence behaviour is testing the extent to which patterns are consistent internationally.

We aimed to recruit not only from well-studied populations such as the USA, UK or Germany, but also from understudied, non-Western countries, and consequently made our survey available in several languages: Arabic, Bangla, Chinese, English, Farsi, French, German, Hindi, Italian, Spanish, Swedish and Turkish.

Summary of the present study

The main hypothesis tested here is that trust in science predicts better adherence to pandemic measures (pre-registration at https://osf.io/ke5yn/). However, the issues raised above prompt us to go beyond simply testing for an association between trust in science and reported adherence to pandemic social distancing guidelines.

Consequently, we also test whether this association holds after accounting for approval and social conformity (Research Question 1). We further examine whether trust in science acts more on minds (approval of prevention measures) or on behaviour (adherence to those measures), and whether the same holds after accounting for political ideology (Research Question 2).

Finally, we check whether the role of trust in science is consistent internationally, or whether some countries deviate from global patterns (Research Question 3).

Methods

Participants

This data was collected as part of a larger project on the normative and social aspects of COVID-19 (Tuncgenc et al., 2021). A convenience sample was recruited in April and May 2020 via social media, university mailing lists, press releases and blog posts. Participants were not compensated. Overall, 6675 participants completed the survey. However, participants were able to opt out of certain personal questions (e.g., on political ideology). Further, the operationalisation of “close social circle” (see below) meant that some participants responded that they had no close circle, in which case there is no data on whether they thought their close circle was adhering to COVID-19 measures (our social conformity measure). These two sources of missing data mean that there are 4341 complete responses for the variables reported here.

Participants’ countries of residence with samples larger than 100 were: UK (1612); Turkey (630); USA (459); Peru (216); Germany (189); France (188); and Australia (109). More country information is available in the supplementary materials.

The study received ethical approval through the University of Nottingham, and all participants provided informed consent. Data from incomplete surveys (i.e., those abandoned before the final debrief) was not analysed.

Procedure

The survey was delivered via a custom web app (desktop and mobile) built with jsPsych (De Leeuw, 2015).

Participants first selected which language they would like to do the survey in. After providing informed consent, participants indicated their close social circle using an established method (Dunbar and Spoors, 1995). First, participants listed the first names of all those people with whom they had had a conversation in the previous 7 days (ultimately, these names are not retained in the data). Second, those names were presented on the screen, and participants selected which names (if any) they would turn to for comfort or advice, using checkboxes. Their close social circle is operationalised as the subset of names that they selected at this second stage.

Participants were reminded of the general guidelines at the time (April–May 2020): to keep physical distance from others. They used sliders to indicate whether they were adhering to this advice (labels: 0 = ‘Not been following the advice at all’; 50 = ‘Been following the advice exactly’; 100 = ‘Been doing more than what is advised’) and to indicate their approval of the guideline (0 = ‘Not following the advice is completely ok’; 100 = ‘Not following the advice is completely wrong’). They were then reminded of the names of those in their close social circle, and indicated whether they thought their close social circle was adhering to the same guidelines (using the same slider response format).

To measure trust in science, we selected three items from the six-item Credibility of Science scale (Hartman et al., 2017) for reasons of brevity, given the length and voluntary nature of our study. This scale measures ‘generalised perceptions about the credibility of science (PCoS)—that is, the extent to which one’s default tendency is to trust in the methods and findings of science, hold positive attitudes toward the scientific enterprise, view scientists as credible, and so forth’ (Hartman et al., 2017, p. 358, emphasis ours).

The items used here were:

1. People trust scientists a lot more than they should
2. A lot of scientific theories are dead wrong
3. Our society places too much emphasis on science

Participants rated their agreement with these statements using a slider (0 = ‘completely disagree’; 100 = ‘completely agree’). The ‘trust in science’ score is the average of these three responses, reverse-scored for ease of interpretation such that a high score reflects high trust (reliability ωt = 0.75, α = 0.73, Revelle and Condon, 2019).
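For concreteness, this scoring can be sketched in R (the language of our full analysis script). The data frame `d` and the item column names below are hypothetical placeholders, not the names used in the released script.

```r
# Minimal sketch of the trust-in-science scoring described above.
# Assumes a data frame `d` with hypothetical columns item1-item3
# holding the 0-100 agreement ratings.
items <- c("item1", "item2", "item3")

# Reverse-score each item so that high values reflect high trust,
# then average the three reversed items into one composite score.
d$trust <- rowMeans(100 - d[, items])

# Internal consistency of the three-item composite, using the psych
# package (the text reports omega-total = 0.75 and alpha = 0.73).
library(psych)
alpha(100 - d[, items])
```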

We considered whether these items may reflect broader reservations about scientific expertise rather than trust; whether our selection of these three items could bias results; and whether the negative phrasing of all three items may reflect distrust more than trust. To allay these concerns, we conducted a follow-up validation study, described under ‘Supplementary analyses’ below.

Participants described their political ideology, again using a slider (0 = ‘very liberal’; 100 = ‘very conservative’). They could opt out in two ways, with one checkbox indicating that this continuum did not describe their beliefs, and another checkbox indicating that they did not wish to respond.

Finally, participants provided demographic information, including age, gender and education level (which are included as control variables in all models reported here). For other questions asked in the survey as part of the larger project on the normative and social aspects of COVID-19, see Tuncgenc et al. (2021).

Open practices statement

A full demonstration of the survey can be found at the Open Science Framework (OSF) repository for the broader project (https://osf.io/ke5yn/). The OSF repository for this specific study (https://osf.io/s5mdh/) contains the data and analyses.

The survey design was preregistered at the above project repository. The same registration included the hypothesis that adherence to official guidelines would be predicted by trust in science. For other hypotheses in the broader project (not relating to trust), see Tuncgenc et al. (2021).

The Bayesian models reported below were not pre-registered, but the full R analysis script is available at the above study repository. This includes full details of model priors, random effects structures, and control variables such as gender, age and education, as well as various supplementary analyses briefly described below.

Results

Overview of sample

Of the 6675 participants who finished the survey, 1577 opted out of the question on political ideology and 1199 indicated that they had no close circle (in the specific sense of ‘close circle’ as operationalised here: see the “Methods” section). This leaves 4341 completed responses, as 442 had missing data on both counts.

The final sample included 1293 men, 2985 women, 39 non-binary people, and 24 who chose not to answer the gender question. Table 1 summarises the main variables. The categories for education ranged from 0 = ‘No schooling completed’ to 4 = ‘Postgraduate degree’, so the point nearest the mean value corresponds to ‘3 = University undergraduate degree/professional equivalent’. The demographic variables (gender, age, education) were included as covariates in all analyses reported below, though the model coefficients for these covariates are reported only in the full analysis at https://osf.io/s5mdh/, which also gives details of how education was modelled as a monotonic (not continuous, linear) effect. For further details about recruitment and demographics, see Tuncgenc et al. (2021).
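To illustrate what a monotonic (rather than continuous, linear) specification of education looks like, here is a minimal brms sketch using the `mo()` term. All variable names are hypothetical placeholders and the model is illustrative, not a reproduction of the released script.

```r
# Sketch: education entered as a monotonic ordinal predictor via mo(),
# so adjacent education levels need not be equally spaced in their
# effect on adherence. Names are hypothetical; `education` is assumed
# to be an integer or ordered factor coded 0-4.
library(brms)

fit_demo <- brm(
  adherence ~ trust + mo(education) + age + gender + (1 | country),
  data   = d,
  family = gaussian()
)
summary(fit_demo)
```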

Table 1 Descriptive statistics.

We explore the effects of missing data in more detail at https://osf.io/s5mdh/. As an initial check that these gaps do not bias our conclusions, there was no significant difference in the main outcome variable, adherence to physical distancing guidelines, between the 4341 participants who answered all questions (mean adherence 63.8%) and the 2334 participants who had some missing data (mean adherence 62.9%, a difference of less than one percentage point: 0.89 [0.17, 1.97]).

To gauge how well our convenience sample compares with more representative samples, we scaled our trust in science variable and regressed it on published national averages of trust in science (Borgonovi and Pokropek, 2020), which were derived from a global survey (Wellcome Global Monitor, 2018). Trust in science was moderately well predicted by these national average indexes (β = 0.4 [0.38, 0.43]). Indeed, this is a stronger relationship than any that trust in science has in our data (see for instance Fig. 2). We stress that these national norms reflect different survey items, different response scales and different survey delivery methods than our data, and that a comparison between national averages and individual responses will necessarily be noisy, so we consider this an encouraging result. Further, we check in a supplementary analysis that our conclusions still hold, controlling for these national norms (https://osf.io/s5mdh/).
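As an illustration of this benchmarking step, the following minimal sketch uses ordinary least squares as a stand-in for the reported model; the column `national_trust`, mapping each participant’s country to its published index, is a hypothetical name.

```r
# Sketch: regress scaled individual trust scores on scaled national
# averages of trust in science (Borgonovi and Pokropek, 2020).
d$trust_z    <- as.numeric(scale(d$trust))
d$national_z <- as.numeric(scale(d$national_trust))

benchmark <- lm(trust_z ~ national_z, data = d)
summary(benchmark)  # slope comparable to the reported beta = 0.4 [0.38, 0.43]
```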

Does trust in science predict unique variance in adherence behaviour?

Figure 1 shows coefficients from four separate Bayesian linear models in which adherence was regressed on trust in science alone, or on trust in science plus various combinations of approval and social conformity. Standardised regression coefficients are reported with 95% credible intervals (CIs), as well as Bayes factors (BFs) where we want to assess the evidence in favour of there being no relationship. These models included country as a random effect (see https://osf.io/s5mdh/ for random effects structures, model priors, calculation of Bayes factors, and the control variables age, gender and education).

Fig. 1: Standardised effects (linear regression betas) with 95% credible intervals (CIs). These show the effects of trust in science, individual approval, and social conformity on adherence behaviour, according to which predictors were included in each model.
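To make the structure of these four models concrete, a minimal brms sketch follows. Variable names and the prior are illustrative placeholders rather than the preregistered specification (the actual priors and demographic covariates are at the OSF repository).

```r
# Sketch of the four Bayesian linear models behind Fig. 1.
# Demographic covariates (age, gender, education) are omitted for
# brevity; sample_prior = "yes" enables Savage-Dickey Bayes factors
# via hypothesis().
library(brms)
bprior <- prior(normal(0, 1), class = "b")  # illustrative prior only

m1 <- brm(adherence ~ trust + (1 | country),
          data = d, prior = bprior, sample_prior = "yes")
m2 <- brm(adherence ~ trust + conformity + (1 | country),
          data = d, prior = bprior, sample_prior = "yes")
m3 <- brm(adherence ~ trust + approval + (1 | country),
          data = d, prior = bprior, sample_prior = "yes")
m4 <- brm(adherence ~ trust + approval + conformity + (1 | country),
          data = d, prior = bprior, sample_prior = "yes")

fixef(m4)                    # standardised betas with 95% CIs
hypothesis(m4, "trust = 0")  # evidence ratio for the null (~ BF01)
```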

The effect of trust in science on adherence behaviour varied depending on which covariates were included. When trust in science was the only predictor, it predicted adherence (β = 0.08 [0.06, 0.11]). When social conformity was included, the effect of trust in science was reduced (β = 0.06 [0.03, 0.09]). When approval of COVID-19 measures was included, the effect of trust in science dropped out completely (with just approval as covariate, trust in science β = 0.02 [−0.01, 0.04], BF01 = 34; with approval and social conformity as covariates, trust in science β = 0 [−0.03, 0.02], BF01 = 70.6).

At best, trust in science played a small role in predicting adherence; at worst, it had no effect whatsoever. When it comes to direct predictors of adherence, then, it is inadvisable to place much weight on people’s trust in science considered independently of these other critical factors.

Does trust in science predict approval of the rules over adherence to the rules?

A second aim was to see whether trust in science predicts approval of the rules, adherence to the rules, or both. In particular, we argued that, if trust in science is to play a robust role in the pandemic, it should affect behaviour by first changing minds. This aim can be addressed with a path analysis, comprising simultaneous Bayesian linear regressions.

The model pathways are illustrated in Fig. 2a. In the Supplementary Material we justify the inclusion of each pathway, but briefly: in addition to the critical pathways connecting trust in science, approval and adherence, the model included a pathway from social conformity to adherence (Bicchieri et al., 2021; Moehring et al., 2021). Furthermore, as previous research has shown that political ideology predicts trust in science (Gauchat, 2012; Rutjens et al., 2018b), approval (Collins et al., 2021, their ‘support for restrictions’ variable), and adherence (Pennycook et al., 2020), and that trust in science may mediate the latter relationship (Plohl and Musil, 2021), pathways for these relationships were included. All pathways include random intercepts and slopes for country. See https://osf.io/s5mdh/ for further details, including demographic control variables (age, gender and education). Figure 2b plots standardised regression coefficients and CIs for the fixed effects from the simultaneous Bayesian regressions. The model R2 for adherence was 0.31 [0.29, 0.33].
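A minimal sketch of how such simultaneous regressions can be specified in brms is given below; all names are hypothetical, and the demographic covariates are again omitted for brevity.

```r
# Sketch of the path model in Fig. 2a: one bf() formula per response,
# combined into a single multivariate model, with by-country random
# intercepts and slopes on each pathway.
library(brms)

path_formula <-
  bf(trust     ~ ideology + (ideology | country)) +
  bf(approval  ~ trust + ideology + (trust + ideology | country)) +
  bf(adherence ~ approval + trust + ideology + conformity +
                 (approval + trust + ideology + conformity | country)) +
  set_rescor(FALSE)  # no residual correlations between responses

fit <- brm(path_formula, data = d)
```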

Fig. 2: Pathways and posterior samples for path analysis. See Table S1 for justifications for each pathway. a Model pathway standardised coefficients, including 95% CIs for the direct and total effects of science and conservative ideology. b Posterior samples for model fixed effects, with whiskers showing 89% (thick) and 95% (thin) CIs.

In line with previous research, a more conservative ideology predicted lower trust in science (β = −0.23 [−0.29, −0.17]). There was no direct effect of trust in science on adherence (β = 0 [−0.06, 0.07], BF01 = 31.22). However, trust in science predicted approval (β = 0.25 [0.19, 0.32]), and had an indirect association with adherence via approval (β = 0.08 [0.06, 0.11]). Thus, trust in science had a moderate effect on whether people thought they should adhere, but only a small, indirect effect on adherence behaviour.

Conservative ideology had no direct effect on approval (β = 0.01 [−0.04, 0.06], BF01 = 38.48), though it had an indirect association with approval via trust in science (β = −0.06 [−0.08, −0.04]). Conservative ideology likewise had no direct effect on adherence (β = −0.04 [−0.09, 0.01], BF01 = 12.77), but had an indirect effect via the trust-in-science and approval pathway (β = −0.02 [−0.03, −0.01]), which contributed to a total effect of β = −0.05 [−0.11, −0.01].
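For readers unfamiliar with how such indirect effects are derived, the following sketch computes them from the posterior draws of the path model above, as products of path coefficients. The coefficient names are assumptions based on brms’s `b_<response>_<predictor>` convention for multivariate models, not verified against our released script.

```r
# Sketch: indirect and total effects of trust on adherence, computed
# draw-by-draw from the posterior of the multivariate model `fit`.
draws <- as_draws_df(fit)

indirect_trust <- draws$b_approval_trust * draws$b_adherence_approval
total_trust    <- indirect_trust + draws$b_adherence_trust

quantile(indirect_trust, c(0.025, 0.500, 0.975))  # cf. 0.08 [0.06, 0.11]
quantile(total_trust,    c(0.025, 0.500, 0.975))
```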

How do the key relationships vary across countries?

As the strength of the effects of ideology and trust varies across countries (Czarnek et al., 2021; Pennycook et al., 2020; Siegrist, 2021), the model represented in Fig. 2a included by-country random slopes. The variation in these relationships can be explored using the posterior samples for the random slopes (here, for the top 10 participating countries by sample size). Figure 3 plots these posterior samples for the pathways leading to and from trust in science (for the other pathways, see https://osf.io/s5mdh/).

Fig. 3: Posterior samples for random slopes for the top 10 countries by sample size. a The negative effect of conservative ideology on trust in science and b the positive effect of trust in science on individual approval. Fixed effects shown with dashed blue lines and 0 shown with dotted red lines. AUS: Australia; BGD: Bangladesh; DEU: Germany; FRA: France; GBR: United Kingdom; ITA: Italy; PER: Peru; SWE: Sweden; TUR: Turkey; USA: United States of America.

Despite some between-country variation, the effects of conservative ideology on trust in science (Fig. 3a) and of science on approval (Fig. 3b) were consistently in the same direction (relative to 0, shown with a dotted red line).

However, compared with the population-level effects, in the USA conservative ideology was more negatively linked to trust in science (consistent with previous findings, Pennycook et al., 2020), and trust in science was more positively linked to people’s approval of COVID-19 measures. Italy showed a similar, though weaker, pattern to the USA, whereas other countries were less consistent. For instance, Turkey showed a fairly typical relationship between ideology and trust in science, but only a weak relationship between trust in science and approval.

Supplementary analyses

We check that our findings do not depend on narrow modelling assumptions by running a range of alternative analyses. The full analyses are at https://osf.io/s5mdh/, and we briefly summarise them in the Supplementary Material. In particular, we discuss:

1. the reasons for our model pathways based on current literature (Figs. S1 and S2; Table S1);
2. alternative pathways (e.g., where social conformity is not just a covariate, independent of the other predictors, Figs. S3–S6);
3. alternative regression families instead of Gaussian regression (e.g., generalised linear regression with a zero-one-inflated beta family, Fig. S7; see the sketch after this list);
4. imputed missing data (Fig. S8);
5. controlling for published national norms, such as national levels of trust in science and the stringency of the prevention measures in each participant’s country of residence at the time of their participation (Fig. S9, using norms from Borgonovi and Pokropek, 2020; Hale et al., 2020);
6. simulation of potential unmeasured confounds (Fig. S10);
7. and measurement error (Fig. S11).
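As an example of item 3, a zero-one-inflated beta specification might look like the following minimal brms sketch (hypothetical names; illustrative only).

```r
# Sketch: slider responses rescaled from 0-100 to the closed unit
# interval, then modelled with a zero-one-inflated beta family, which
# accommodates responses piling up exactly at 0 and 1.
library(brms)
d$adherence01 <- d$adherence / 100

fit_zoib <- brm(
  adherence01 ~ trust + approval + conformity + (1 | country),
  data   = d,
  family = zero_one_inflated_beta()
)
```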

Our claims about the role of trust in science are robust against all of these alternative analysis strategies. The only conclusion that changes slightly is that there is sometimes evidence for a direct effect of ideology on adherence, depending on such modelling decisions. However, as our focus here is on trust in science rather than ideology, we simply conclude that there might be a direct effect of ideology on adherence, and that future work should explore this possibility.

In the “Methods” section, we mentioned several caveats about our measure of trust in science: it could reflect broader attitudes to science rather than trust specifically; we used three items from a six-item scale; and all items were negatively valenced, potentially indexing distrust rather than trust.

To assess whether these caveats affect our conclusions, we conducted a follow-up study where we recruited 1002 participants from Amazon’s Mechanical Turk and presented them with the above three items, as well as an item explicitly asking about trust in science (either positive “I trust science” or negative “I don’t trust science”, on the same response scale, with a virtual coin-flip determining which one of these two options each participant saw). At the same time, we re-analysed existing data sets (Sulik and McKay, 2021; Sulik et al., 2020) that include all six items along with variables known to correlate with trust in science (e.g., political ideology and science denial).

For details of the follow-up study and analysis, see https://osf.io/s5mdh/. Briefly, though: (1) We found that the above items correlated strongly with the explicit measure of trust in science (r = 0.76, p < 0.001). (2) We generated all possible combinations of three items from the six-item scale, and correlated each combination with variables known to be associated with trust in science (ideology and science denial). The correlations were very consistent across possible combinations, so our choice of these three items is unlikely to bias our results substantially. (3) We also generated alternative scores of trust in science. These were weighted averages (whereas the Results above report unweighted averages) where the weights were either factor loadings from an exploratory factor analysis, or regression coefficients from when the items in the follow-up study are used to predict either the positive or negative item about explicit trust. Our conclusions are robust against these different scoring methods. We found a difference between the positive and negative items, for which one interpretation is that our measure might more accurately be described as ‘distrust’ rather than ‘trust’. Crucially, though, we also found that this makes no difference to our conclusions above. Thus, the distinction between trust and distrust, though theoretically important, does not alter our conclusions.
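Point (2) can be illustrated with a short sketch; `items6` (a matrix with one column per scale item) and the criterion variable are hypothetical names standing in for the re-analysed data sets.

```r
# Sketch of the item-combination robustness check: correlate every
# possible three-item subset of the six-item scale with a criterion
# known to relate to trust in science (here, political ideology).
combos <- combn(6, 3)  # the 20 ways of choosing 3 items from 6

subset_cors <- apply(combos, 2, function(idx) {
  subscore <- rowMeans(100 - items6[, idx])  # reverse-score, average
  cor(subscore, d$ideology, use = "complete.obs")
})

range(subset_cors)  # how much the correlation varies across subsets
```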

Discussion

This study helps tackle the question of what difference trust in science could make when it comes to the adoption of new norms, such as those required by global threats. The results show that, in the face of the COVID-19 pandemic, trust in science had only a small and indirect effect on whether people reported following distancing guidelines. Greater trust in science is thus unlikely to have yielded a major increase in adherence. To illustrate, suppose that a wildly successful messaging campaign led to a 20% increase in trust in science. Multiplying this increase by the total effect in Fig. 2a would yield only a 2% increase in adherence.

Trust in science could nonetheless be credited for changing minds, if not directly affecting behaviour, in the sense that it was moderately associated with approval of new social distancing rules. One important implication is that the role of trust in science in the pandemic is unlike those of propaganda or threat, which focus on compelling behaviour (Mercier, 2017). This coheres with recent findings that trust in science is associated with support for pandemic measures (Algan et al., 2021; Dohle et al., 2020). However, it goes beyond such studies (which report an association between trust in science and adherence to pandemic measures) in showing that the latter association drops out because approval is itself associated with adherence.

The role of approval here is consistent with meta-analyses showing that in many areas—from adopting climate-friendly behaviours to sunscreen use, to exercise, healthy eating or condom use—having a positive attitude towards the behaviour in question and intending to do it is significantly predictive of how people actually behave (Chevance et al., 2019; Cologna and Siegrist, 2020; McDermott et al., 2016; Webb and Sheeran, 2006).

Attitudes toward science are part of a complex belief system. Extending previous research on associations between science and political ideology (Gauchat, 2012; Rutjens et al., 2018a), our results show that trust in science is a linchpin linking political ideology to approval of science-based guidelines: once trust in science was accounted for, ideology had no direct effect on approval of the rules, and its effect on adherence to the rules was small and fragile (e.g., depending on modelling decisions discussed in the Supplementary Material). Previous research on climate change denial has shown that pro-science recommendations are more effective when they appeal to people’s values, and when they are consistent with their ideology (DeMora et al., 2021; Dixon et al., 2017; Hornsey and Fielding, 2017; Wolsko et al., 2016).

Based on these findings of a moderate and indirect effect of trust in science on behaviour, and of associations between trust and ideology, we propose a ‘Bridge Model’ of science for enacting behavioural change. According to this model, trust in science affects behaviours (e.g., adherence to COVID-19 rules) by improving people’s attitudes (here, approval) towards the behaviour in question. In turn, trust in science serves as a bridge between political ideology and these pandemic-relevant attitudes and behaviours. This model contrasts with the widespread assumption in the existing literature that trust in science is important because it has a direct effect on behaviour change.

Trust in science generates other epistemic benefits, too: it makes people less susceptible to misinformation (Roozenbeek et al., 2020) and influences the formation of opinion-networks (Maher et al., 2020). It is a relatively stable trait (Agley, 2020), and is resistant to erosion from ideological opponents (Kreps and Kriner, 2020). In that sense, these findings may be helpful for policy-based interventions as they suggest that trust in science could serve as a ‘boost’ for behavioural change. Unlike ‘nudges’ that focus on behaviour and are usually easily reversible, ‘boosts’ focus on people’s decision-making processes and can thereby achieve sustained behavioural change (Hertwig and Grüne-Yanoff, 2017).

Notes on generalisability

Our study only considered social distancing, which was the dominant concern at the time of data collection, but which also required an abrupt change of behaviour and social norms. Future research should examine whether the link between changes in approval and changes in behaviour will generalise to other COVID-19 measures such as mask wearing and vaccination uptake, or generalise beyond the pandemic context to other cases where behaviour change is necessary, such as climate change. Vaccination has been a major theme of more recent stages of the pandemic, and higher vaccination rates are associated with higher trust in science (Hromatko et al., 2021; Lindholt et al., 2021; Soveri et al., 2021; Sturgis et al., 2021). The relationships between trust and vaccination intentions reported in these studies (e.g., r = 0.58 in Soveri et al., 2021; r = 0.37 in Hromatko et al., 2021) are larger than the association between trust in science and adherence to social distancing measures reported here. This leaves open the possibility that trust in science may matter more for vaccines than it seems to matter for social distancing. Nonetheless, as we have shown that such pairwise relationships are not enough to identify how trust in science matters for behaviour, we recommend that future work apply a framework such as our proposed Bridge Model to better understand that role when it comes to vaccines.

Political ideology is an established correlate of trust in science (Gauchat, 2012; Rutjens et al., 2018a). Here we measured political ideology using a common liberal-to-conservative response scale. However, recent research has shown that other aspects or facets of political ideology might matter more than this general spectrum for science-related attitudes. These include populism (Jylhä and Hellmer, 2020; Mede and Schäfer, 2020), reactance (Hornsey et al., 2018a) and social dominance orientation (Häkkinen and Akrami, 2014; Jylhä et al., 2016; Kerr and Wilson, 2021). As several hundred participants chose to opt out of our liberal-to-conservative item, a question for future research is whether other, more nuanced conceptions of ideology might increase response rates, or alter our conception of how ideology, trust in science, attitudes to policy and adherence to prosocial measures are related.

One of our research questions aimed to examine how general the patterns in the data would be across countries. Our findings indicate that relationships between ideology, trust in science and approval of pandemic measures followed the same pattern in the top 10 countries in our dataset. Still, considerable variation was observed among countries, with the USA appearing to be an outlier in both relationships. On the one hand, these findings support previous work showing that conservative ideology is linked to less trust in science in predominantly Western countries (Gauchat, 2012; Pennycook et al., 2020). On the other hand, the high variability of responses provides strong reason to examine the links of trust in science with individual behaviour in diverse populations. Another important question for future research is how cross-country differences in political culture and ideology (i.e., going beyond the liberal/conservative distinction as discussed above) might affect these findings.

A limitation of our study, though not unique to it, is that our social-media recruitment process did not produce a representative sample. Specifically, there was a high proportion of educated women (see ‘Overview of sample’ in the Results). However, all analyses included demographic variables (such as age, gender and education) as covariates, and included country as a random effect to account for the imbalances in our sample distributions. An important indication that our recruitment procedure did not seriously bias results is that the levels of the main phenomenon of interest—trust in science—are strikingly similar to levels reported in previous studies. The average level of trust in science reported here—measured on a percentage scale with three items—was 75.6% (SD = 20%). This compares with levels previously reported during the pandemic, such as 82% (4.12 on a 5-point scale, using 14 items, with a sample recruited via social media, Plohl and Musil, 2021), 77% (5.39 on a 7-point scale, using just two items drawn from the same instrument used here, with a representative sample of New Zealanders, Sibley et al., 2020), or 76% (3.81 on a 5-point scale, using 21 items, with a sample of US residents recruited via Amazon’s Mechanical Turk, Agley, 2020). As these studies varied in the number of items (ranging from 2 to 21), as well as in their recruitment strategy and representativeness, this suggests that measurement of trust in science is somewhat robust to such methodological differences. Further, our finding that these relationships are unusually strong in the USA is consistent with previous work (Allum et al., 2008; Hornsey et al., 2018b).

Another limitation is that we measured people’s self-reported adherence rather than actual social distancing behaviour. However, the same patterns are observed whether social distancing is measured via self-report or via mobile-phone movement tracking (Petherick et al., 2021), and a recent survey showed that responses regarding COVID-19 compliance do not suffer from social-desirability effects (Larsen et al., 2020). Given how well our results cohere with so many findings about how trust relates to ideology, approval and adherence, distortions due to self-report are unlikely to be entirely responsible for our findings.

The anonymous, online, cross-sectional nature of our survey, where participants self-selected into the sample, might also conceivably limit the generalisability of our findings. As we only saved responses at the end of the survey (and only used complete responses in our analysis), we do not know how the attitudes of those who chose to quit the survey before finishing might have differed from our reported findings. The same goes for people who clicked on the link to our survey, but decided not to take part. Future work might also consider any effects of motivation: not only whether any participants who complete the survey may nonetheless lack the motivation to provide honest, sincere responses, but also whether such a tendency is associated with any of the factors analysed here. The cross-sectional design also limits our ability to draw causal inferences.

Finally, as noted briefly in the Methods (and in more detail in the supplementary analyses), our measure of “trust in science” might be called a measure of “distrust in science”, “credibility” or “negative attitudes towards science”. However, the relationships between trust, trustworthiness and credibility are not yet agreed theoretically: some researchers view trust as one aspect of credibility (Hartman et al., 2017); others seem to treat credibility and trustworthiness as aspects of trust (Nadelson et al., 2014); and still others see trustworthiness and credibility as related but distinct (Hendriks et al., 2016). Further, the precise nature of these interrelationships is a crucial avenue for future work. Based on our findings, we suggest that such work would benefit from studying trust, trustworthiness or credibility in the context of the ‘Bridge Model’ proposed here.

Conclusions

We have probed the mechanisms and limits of trust in science in achieving behavioural change during the current crisis, with implications for the handling of future crises. Trust in science can promote people’s approval of new rules, but has only a small, indirect effect on adherence to those rules. Science performs best not at changing behaviour, but at convincing minds. We also show that trust in science acts as a pivotal link between political ideology and attitudes to science-based measures. This bridging role makes it a vital component in depolarising political and public debates when social changes are required.