Introduction

Repetition typically increases truth judgments of statements regardless of their actual truth (for a meta-analysis, see Dechêne et al., 2010; see also Brashier & Marsh, 2020; Pillai & Fazio, 2021; Unkelbach et al., 2019). This "truth effect" is commonly explained by processing fluency (e.g., Reber & Schwarz, 1999; Unkelbach & Greifeneder, 2013) and familiarity (e.g., Begg et al., 1992): repetition makes statements easier to process and more familiar than new ones, and both fluency and familiarity serve as cues for truth (e.g., Brashier & Marsh, 2020; Ecker et al., 2022; see Unkelbach & Rom, 2017, for a referential account).

The bulk of studies on the truth effect have used factual statements of uncertain truth (Henderson et al., 2022), often assuming that this truth ambiguity is necessary to observe the truth effect (e.g., Dechêne et al., 2010; Unkelbach & Stahl, 2009). Recent studies have used more diverse statements, and some of these findings challenge this assumption. For instance, the truth effect has been observed with true COVID-19 statements (Unkelbach & Speckmann, 2021), political opinions (Arkes et al., 1989), rumors (DiFonzo et al., 2016), fake news (Pennycook et al., 2018), emotional statements (Moritz et al., 2012), and statements that contradict prior knowledge (Fazio, 2020; Fazio et al., 2015), sometimes blatantly so (Fazio et al., 2019; Lacassagne et al., 2022).

In the present study, we investigated whether repetition increases belief in conspiracy theories (hereafter, conspiracism). For the present purpose, we define conspiracism as "a belief that two or more actors have coordinated in secret to achieve an outcome and that their conspiracy is of public interest but not public knowledge" (Douglas & Sutton, 2023, p. 282; see also, e.g., Douglas et al., 2019; Keeley, 1999; Nera & Schöpfer, 2023). This definition is agnostic as to the truth of conspiracy theories (some may be true, and others may be false). However, conspiracy theories are "epistemically risky" (Douglas & Sutton, 2023), meaning that they are typically implausible and prone to falsity; as a result, conspiracy theories are often considered a form of false and misleading information (Pennycook & Rand, 2021).

With the Internet, conspiracy theories can spread broadly, raising questions about the antecedents and consequences of conspiracism (e.g., van Prooijen & van Vugt, 2018). Conspiracism is often assumed to be rooted in individual differences and predispositions. For instance, intuitive (analytic) thinking has been associated with increased (decreased) conspiracism (e.g., Swami et al., 2014; van Prooijen, 2017). Other individual differences, such as motivations to believe (Biddlestone et al., 2022; Douglas et al., 2017, 2019), belief in finalism (Wagner-Egger et al., 2018), paranoia (Brotherton & Eser, 2015), other personality traits (Goreis & Voracek, 2019), and demographic factors (e.g., Freeman & Bentall, 2017), have also been associated with conspiracism (see, e.g., Douglas & Sutton, 2023, for an overview).

Research has also investigated the consequences of exposure to conspiracy theories on behavior, behavioral intentions, and prejudice (Jolley & Douglas, 2014a, 2014b; Jolley, Meleady, & Douglas, 2020a; van der Linden, 2015; for a review, see Jolley, Mari, & Douglas, 2020b). The findings are consistent with the possibility that merely exposing participants to a conspiracy theory increases belief in it. Importantly, however, these studies did not collect measures of belief in the presented conspiracy statements (e.g., endorsement, truth judgments), or these measures were not collected as a function of repeated exposure. In addition, such studies typically displayed only one overarching conspiracy theory, that is, a set of thematically related conspiracy statements (e.g., conspiracy theories about Princess Diana's death; Douglas & Sutton, 2008; Jolley & Douglas, 2014a). To test the effect of prior exposure on belief, one needs to measure belief in conspiracy theories both when they have been presented before and when they are new – in other words, when exposure to the conspiracy theories is repeated or not.

To our knowledge, no experiment has investigated the effects of (repeated) exposure to conspiracy theories on their believability. As endorsing conspiracy theories may be key to influencing behavior, it is critical to directly address the causal role of repetition in truth judgments of conspiracy theories. Relatedly, Muirhead and Rosenblum (2019) introduced the concept of "new conspiracism," which refers to the phenomenon that repetition, not evidence, is commonly used to validate conspiracy theories. Such conspiratorial thinking, Muirhead and Rosenblum reasoned, dispenses with the burden of explanation (which is necessary to uncover real conspiracies, e.g., through journalistic investigation) and imposes its reality through repetition (exemplified by the catch-phrase "a lot of people are saying"), amplified by social media. Although this account from political science assigns repetition a major role, this role has yet to be tested experimentally.

Here, we ask whether the truth effect extends to conspiracy statements.

In an earlier investigation, Béna et al. (2019) found initial evidence in line with the hypothesis that repetition might increase the perceived truth of conspiracy statements. Béna et al. reanalyzed large-scale surveys that used representative samples of the French population (Institut Français d'Opinion Publique (IFOP), 2017, 2019). In these surveys, respondents indicated whether they had already seen, and to what extent they agreed with, ten conspiracy statements corresponding to popular conspiracy theories (e.g., NASA faked the moon landing). The re-analyses showed that participants agreed more with conspiracy statements they recognized than with those they did not recognize. Although Béna et al. could analyze agreement only as a function of perceived, not actual, repetition, their results align with studies finding that recognized statements are believed more than statements deemed new, whether the statements were actually old or not (Bacon, 1979).

In the present high-powered preregistered experiment, we manipulated repeated exposure to conspiracy statements and uncertain factual statements (trivia statements). Based on the range of statements with which the truth effect has been found and on the initial results from Béna et al. (2019), we hypothesized that repeated exposure would increase truth judgments of conspiracy statements. We included trivia statements as a reference point, allowing us to compare the magnitude of the truth effect for conspiracy statements with that for trivia statements. Finding the truth effect with conspiracy statements would be informative, as we would learn that repeated exposure is a possible antecedent of conspiracism.

By experimentally repeating statements only once, manipulating materials within participants, and administering a true/false truth judgment task, we conducted a conservative test of the truth effect with conspiracy statements. For instance, the truth effect was not found with highly implausible statements (e.g., "Elephants run faster than cheetahs") when only one repetition and scales with few response points were used (Pennycook et al., 2018), but occurred when more repetitions and a more sensitive scale were involved (Lacassagne et al., 2022).

In addition to assessing the causal effect of repetition on truth judgments of conspiracy and trivia statements, we also probed participants' cognitive style and conspiracy mentality, two widely studied individual differences in the context of conspiracism. As mentioned above, conspiracism is associated with several individual differences, including cognitive style. In contrast, truth effect research has found little evidence for correlations between the truth effect and individual differences, including cognitive style (de Keersmaecker et al., 2020; but see Newman et al., 2020, for a correlation with need for cognition). If we find a truth effect with conspiracy statements, we can ask whether it depends on individual differences such as cognitive style and conspiracy mentality. On this matter, no straightforward prediction can be derived from the null results involving individual differences in the truth effect literature, nor from results involving individual differences in conspiracism research. As a result, these analyses were exploratory.

In addition, as we included trivia statements, we tested whether the size of the truth effect with trivia statements depends on cognitive style (conceptually replicating previous research, e.g., de Keersmaecker et al., 2020; conspiracy mentality is less relevant on this matter).

Methods

We report how we determined our sample size, all data exclusion criteria, all manipulations, and all measures in the study. The preregistration, experiment program, data, and analyses are publicly available at https://osf.io/edzac.

Participants and design

We used a 2 (Repetition: repeated vs. new) × 2 (Materials: conspiracy vs. trivia) design, with both factors manipulated within participants. Trivia statements were either factually true or false, a manipulation nested within the trivia condition.

We collected complete data from a total of 374 participants online. After data exclusion, there were 299 participants in the final sample (Mage = 28.59 years, SDage = 11.43; 82.6% women; 57.53% students). An a priori power analysis (conducted with G*Power 3.1.9.7; Faul et al., 2007) showed that we needed 282 participants to detect an effect of Repetition on proportions of "true" judgments in the conspiracy statements condition (the critical effect of interest) as small as Cohen's d = 0.2 (two-tailed paired-samples t-test; α = .05/4 = .0125; 1 − β = .8).
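For readers without G*Power, a comparable calculation can be reproduced in base R (the software used for the analyses below). This is a sketch of the calculation, not the authors' original G*Power protocol.

```r
# Approximate reproduction of the a priori power analysis in base R
# (the original was run in G*Power 3.1.9.7). With sd = 1, delta equals
# Cohen's d; the alpha level is Bonferroni-corrected (.05 / 4).
power.t.test(delta = 0.2, sd = 1, sig.level = 0.05 / 4, power = 0.80,
             type = "paired", alternative = "two.sided")
# Returns n of approximately 282 pairs, matching the reported target sample.
```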

Materials

Conspiracy statement selection

To operationalize conspiracy theories, we used 20 existing and widespread conspiracy statements (e.g., NASA faked the moon landing; Lady Diana's accident was a disguised murder). We used 18 conspiracy statements from IFOP surveys (2017, 2019, 2020; see also Wagner-Egger et al., 2018) and created two additional conspiracy statements (one on hydroxychloroquine, the other on climate change). The 20 conspiracy statements we used are available in French at https://osf.io/dtn9q.

Trivia statement selection

To use statements whose truth was, on average, uncertain, we selected 20 factual statements (e.g., "There are no domestic snakes in Scotland and Greenland") about a variety of topics (science, arts, history) from a larger pool of statements selected to be uncertain (including French translations of statements from Unkelbach & Rom, 2017, and Silva, 2014). Ten statements were factually true, and ten statements were factually false. The 20 trivia statements we used are available in French at https://osf.io/dtn9q.

Statement presentation

For each participant, 40 statements (20 conspiracy statements; ten true factual uncertain statements; ten false factual uncertain statements) were randomly allocated to either the repeated or new condition. In each Repetition condition, there were 20 statements (half conspiracy statements, half trivia statements).
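The allocation itself was implemented in JavaScript within Qualtrics (see the Procedure section). As an illustration only, the following R sketch shows the allocation logic for one participant; the statement identifiers and column names are hypothetical.

```r
# Hypothetical illustration of the per-participant allocation: within each
# statement type, half of the statements are randomly assigned to the
# "repeated" condition and the other half to the "new" condition.
statements <- data.frame(
  id   = 1:40,
  type = rep(c("conspiracy", "trivia_true", "trivia_false"),
             times = c(20, 10, 10))
)
statements$condition <- NA
for (tp in unique(statements$type)) {
  rows <- which(statements$type == tp)
  repeated <- sample(rows, length(rows) / 2)
  statements$condition[rows] <- ifelse(rows %in% repeated, "repeated", "new")
}
table(statements$type, statements$condition)  # 10/10 conspiracy, 5/5 per trivia type
```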

Cognitive style

We used a French version of the original three-item Cognitive Reflection Test (CRT; Frederick, 2005) to probe participants' cognitive style. The CRT is intended to probe individual differences in the tendency to override intuitive but incorrect responses (e.g., "In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?" [French translation in the current study: “Un lac est recouvert de nénuphars dont l'étendue double chaque jour. Si les nénuphars mettent 48 jours à couvrir toute la surface du lac, en combien de temps en couvriraient-ils la moitié ?”]). We computed the number of problems correctly solved (M = 1.4; SD = 1.15). Solving no or few problems is associated with intuitive thinking, whereas solving more problems is associated with analytic thinking.

Conspiracy mentality

We administered the Conspiracy Mentality Questionnaire (CMQ; Bruder et al., 2013, translated into French by Lantian et al., 2016). The CMQ consists of five items aimed at probing individuals’ general susceptibility to conspiracy explanations (e.g., “I think that events which superficially seem to lack a connection are often the result of secret activities” [French translation: “Je pense que des événements qui, en apparence, ne semblent pas avoir de lien sont souvent le résultat d’activités secrètes”]). Participants indicated how likely they thought the five statements were on a 5-point Likert scale ("Certainly not, 0%", "25%", "50%", "75%", "Certainly, 100%"). For each participant, we computed the mean response (Cronbach's α = .82; M = 3.16; SD = 0.85), with higher scores indicating a higher conspiracy mentality.
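As an illustration, the CMQ score and reliability estimate can be computed as follows. This is a sketch assuming a data frame `cmq` with one column per item scored 1 to 5; the `psych` package is an assumption, not necessarily the tool used by the authors.

```r
library(psych)
# Sketch of the CMQ scoring, assuming a data frame `cmq` with columns
# cmq1..cmq5 holding responses on the 5-point scale (names are hypothetical).
psych::alpha(cmq)$total$raw_alpha   # Cronbach's alpha (reported: .82)
cmq_score <- rowMeans(cmq)          # per-participant conspiracy mentality score
```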

Procedure

After ethics committee approval, we ran the study online with the Qualtrics survey tool (Qualtrics, Provo, UT) between 5 October 2020 and 12 January 2021. We wrote JavaScript code to randomize, for each participant, the allocation of statements to each repetition condition and the order of presentation. One of the co-authors distributed the study to various French-speaking Facebook groups related (e.g., undergraduate student groups from several majors) and unrelated (e.g., news groups from several French cities) to our university. As a result, the researcher and his interest in the truth effect were unlikely to be known by participants, and the final sample is unlikely to mainly reflect the researcher's own network. The post indicated that the study was about the evaluation of information, without further details. Participants were strongly encouraged to complete the study on a computer in a quiet room.

The study was conducted online in French. After reading the consent form and giving their agreement, participants provided demographic information (sex, age, occupational status, mother tongue, and level on the Common European Framework of Reference scale for French if their mother tongue was not French).

Instructions then indicated that statements, some true and some false, would be displayed without a time limit, and that the task was to rate how interesting they were (as frequently done in truth effect studies; see, e.g., Henderson et al., 2022) on a 5-point Likert scale (1 – "Not interesting at all"; 5 – "Extremely interesting"). Participants then rated the interest of 20 statements (ten conspiracy statements; five false trivia statements; five true trivia statements), displayed one by one in a random order in the center of the screen.

Immediately after this task, participants were introduced to the true/false truth judgment task. In this task, the 20 statements from the interest judgment task were mixed with 20 new ones (ten conspiracy statements; five false trivia statements; five true trivia statements) and displayed one by one in a random order in the center of the screen, without a time limit. The instructions stressed that it was important to answer even if some statements seemed odd or if participants were uncertain. Participants were additionally asked not to look for information about the statements during the task.

Once the truth judgment task was completed, we administered the three-item CRT and the CMQ, with their order counterbalanced between participants. In the CRT, participants were asked to solve three short problems, displayed individually in a random order without a time limit, and to give their responses in an open numerical format. In the CMQ, we told participants that we were interested in their personal opinion and that they would indicate the extent to which they thought the five items, displayed on the same screen, were true.

Finally, we asked participants (1) whether they had looked for information about the statements or the problems during the study (yes/no), (2) whether they had answered without reading the displayed statements (yes/no), and (3) after reading the study objectives, whether they allowed us to use their data in our analyses (yes/no). We used responses to these three questions as exclusion criteria (see the Participants and design section above). Participants were then thanked and debriefed in a concluding text.

Results

To conduct the statistical analyses, we used R (R Core Team, 2021) and the packages afex (Singmann et al., 2021, version 1.0-1), emmeans (Lenth, 2020, version 1.5.2-1), and stats (in base R). We calculated Cohen’s d with effsize (Torchiano, 2020, version 0.8.1). We made the raincloud plots (Allen et al., 2021) with scripts from Allen et al. and ggplot2 (Wickham, 2016, version 3.3.5); we made the regression plots with interactions (Long, 2019, version 1.1.0) and ggpubr (Kassambara, 2020, version 0.4.0).

A truth effect with trivia and conspiracy statements

We conducted the preregistered 2 (Repetition: repeated or new) × 2 (Materials: conspiracy or trivia statements) repeated-measures ANOVA on the proportions of "true" responses (see Fig. 1). The main effect of Repetition was statistically significant, F(1, 298) = 119.45, p < .001, η²G = .041. Overall, repeated statements were more often judged as true (M = .51; SD = .13) than new ones (M = .42; SD = .13). The main effect of Materials was also significant, F(1, 298) = 877.19, p < .001, η²G = .599. Trivia statements were more often judged as true (M = .72; SD = .19) than conspiracy statements (M = .21; SD = .19). Critically, these main effects were qualified by a two-way interaction, F(1, 298) = 42.7, p < .001, η²G = .015 (see Fig. 1).
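For illustration, an ANOVA of this kind can be run with afex (the package reported in the analysis section). The data frame and column names below are assumptions about the data layout, not the authors' script.

```r
library(afex)
# Sketch of the 2 (Repetition) x 2 (Materials) repeated-measures ANOVA,
# assuming a long-format data frame `d` with one row per participant x cell
# and a column `prop_true` holding the proportion of "true" responses.
fit <- aov_ez(id = "participant", dv = "prop_true", data = d,
              within = c("Repetition", "Materials"))
fit  # afex reports generalized eta-squared (ges) by default
```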

Fig. 1. Proportions of “true” responses as a function of Materials and Repetition. The dots are participants’ scores (horizontally jittered). The error bars are the 95% confidence intervals of the means, with the mean marked in between. The distributions are the kernel probability densities of the data in each Materials × Repetition condition (trimmed to remain within the range of possible values, between 0 and 1). The dashed horizontal line indicates no bias toward a “true” or “false” response.

To interpret the two-way interaction between Repetition and Materials, we conducted pairwise comparisons based on the full model in each Materials condition. For trivia statements, "true" responses were more frequent when the statements were repeated (M = .79; SD = .21) than when they were new (M = .65; SD = .22) – the typical truth effect, t(298) = 11.43, p < .0001, Cohen’s d = 0.649, 95% CId = [0.526; 0.772]. This effect of repetition was also significant for conspiracy statements: "true" responses were more frequent for repeated (M = .22; SD = .22) than new statements (M = .19; SD = .19), t(298) = 3.45, p = .0006, d = 0.169, 95% CId = [0.072; 0.266]. The truth effect was significant for both trivia and conspiracy statements, but larger for trivia statements (as indicated by the non-overlapping 95% CIs of the Cohen's ds and the significant interaction between Repetition and Materials in the ANOVA).
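Follow-up comparisons and effect sizes of this kind can be obtained with emmeans and effsize (both reported in the analysis section). The object `fit` refers to the ANOVA sketch above, and the vectors in the effsize call are assumptions about the data layout.

```r
library(emmeans)
# Pairwise repeated vs. new comparisons within each Materials condition,
# based on the afex model `fit` from the sketch above.
emm <- emmeans(fit, ~ Repetition | Materials)
pairs(emm)

library(effsize)
# Paired Cohen's d for, e.g., conspiracy statements, assuming two vectors of
# per-participant proportions ordered identically (hypothetical names):
# cohen.d(prop_repeated_conspiracy, prop_new_conspiracy, paired = TRUE)
```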

The truth effect is not moderated by CMQ and CRT scores

We fitted the preregistered regression model with proportions of "true" responses as the dependent variable and Repetition, Materials (both dummy-coded), CMQ scores, and the number of correct responses in the CRT (both standardized) as predictors, with participants as a random factor.
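One way to fit a model of this kind is with afex's mixed() wrapper around lme4. This is a sketch under assumptions about the data layout and variable coding, not necessarily the authors' exact specification.

```r
library(afex)
# Sketch of a linear mixed model with a by-participant random intercept,
# assuming a long-format data frame `d_long` with standardized CMQ (cmq_z)
# and CRT (crt_z) scores (column names are hypothetical).
m <- mixed(prop_true ~ Repetition * Materials * cmq_z * crt_z +
             (1 | participant),
           data = d_long, method = "KR")  # Kenward-Roger F tests
m
```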

Similar to the ANOVA reported above, we found a main effect of Repetition, F(1, 885) = 66.84, p < .001, a main effect of Materials, F(1, 885) = 2211.9, p < .001, and a significant two-way interaction between Repetition and Materials, F(1, 885) = 22.29, p < .001. No other interaction involving Repetition was statistically significant, indicating that the size of the truth effect was not significantly moderated by CMQ or CRT scores, for either trivia or conspiracy statements.

We found a main effect of CMQ scores on the proportions of "true" responses, F(1, 295) = 68.91, p < .001: higher CMQ scores were associated with larger proportions of "true" responses. This effect was qualified by a significant two-way interaction between CMQ scores and Materials, F(1, 885) = 139.32, p < .001. To decompose it, we conducted non-preregistered regressions similar to the analysis reported above, except that we removed the Materials factor and restricted the analyses to the trivia or the conspiracy statements. For trivia statements, proportions of "true" responses did not vary significantly as a function of CMQ scores, F(1, 295) = 2.12, p = .146 (see Fig. 2a). In contrast, for conspiracy statements, higher CMQ scores were associated with larger proportions of "true" responses, F(1, 295) = 192.98, p < .001. The latter result aligns with the notion that CMQ scores capture a general tendency to believe in various conspiracy theories.

Fig. 2. Proportions of “true” responses as a function of Materials and mean Conspiracy Mentality Questionnaire (CMQ) scores (a) and the number of Cognitive Reflection Test (CRT) problems correctly solved (b). The shaded areas around the regression lines are the 95% confidence intervals. Mean CMQ scores and the number of CRT problems correctly solved were standardized in the regression analyses.

Returning to the full model, the other statistically significant effect was the two-way interaction between Materials and CRT scores, F(1, 885) = 35.43, p < .001 (see Fig. 2b). As with the non-preregistered analyses used to decompose the interaction involving CMQ scores, we decomposed the interaction between Materials and CRT scores. For trivia statements, higher CRT scores were associated with larger proportions of "true" responses, F(1, 295) = 9.76, p = .002. In contrast, for conspiracy statements, higher CRT scores were associated with smaller proportions of "true" responses, F(1, 295) = 14.82, p < .001.

Discussion

Repetition increases truth judgments of false, implausible, and misleading information. Although conspiracy theories can be seen as such statements, whether repetition increases truth judgments of conspiracy theories had yet to be investigated. It has recently been noted that exposure to conspiracy theories is rarely experimentally manipulated (Douglas & Sutton, 2023), despite the relevance of such manipulations for both truth effect and conspiracism research (see below). In the present experiment, we manipulated repeated exposure to conspiracy and trivia statements before asking participants to judge the truth of repeated and new statements. We also assessed participants' conspiracy mentality and cognitive style (intuitive vs. analytic thinking).

We found that repetition increased truth judgments of trivia statements (replicating the truth effect with typical materials, e.g., Dechêne et al., 2010; Unkelbach et al., 2019) and of conspiracy statements (extending the demonstration of the truth effect to another category of statements). This extension dovetails with findings that repetition increases the perceived truth of statements, even implausible and misleading ones (Fazio et al., 2019; Pillai & Fazio, 2021; see below). Although this was not our main goal, the present study also addresses one limitation of truth effect research, namely the need for more diverse materials, particularly those related to health and politics (Henderson et al., 2022). Regarding conspiracism, we provide empirical support for a causal effect of repetition on conspiracism, whereas (repeated) exposure is rarely manipulated in conspiracism research (Douglas & Sutton, 2023) and its effect on conspiracy beliefs has thus not been assessed. Finding the truth effect with conspiracy statements suggests that situational factors, in addition to individual factors (e.g., personality, motivation), are central to explaining conspiracism (Brashier, 2023; Douglas et al., 2017).

Of note, we did not find associations between conspiracy mentality or cognitive style and the size of the truth effect, whether with trivia or conspiracy statements. Failing to find a relationship between cognitive style and the truth effect with trivia statements aligns with previous research that also failed to do so (de Keersmaecker et al., 2020) and with the general difficulty in finding associations between quantitative individual differences and the truth effect (for an exception, see, e.g., Newman et al., 2020). Turning to conspiracy statements, the absence of associations between the truth effect and conspiracy mentality or cognitive style may be surprising if one assumes that beliefs in conspiracy theories are mainly rooted in individual differences such as conspiratorial or intuitive thinking (Bago et al., 2022). Beyond suggesting that situational factors may prove important for understanding conspiracism, these null results suggest that such situational factors operate independently of some individual ones.

Consistent with Swami et al. (2014), we found that analytic thinking was negatively associated with conspiracy statements' overall level of truth judgments. We also found results consistent with conspiracy mentality capturing a general propensity towards conspiratorial thinking (e.g., Imhoff & Bruder, 2014): truth judgments of conspiracy (but not trivia) statements were positively associated with conspiracy mentality, regardless of repetition.

Overall, the present study suggests that repeated exposure may be a simple way to increase conspiracism. Although the effect size we found was relatively small (d = 0.169; 95% CId = [0.072; 0.266]) and smaller than the truth effect with trivia statements (d = 0.649; 95% CId = [0.526; 0.772]), the present study provided a rather conservative test: conspiracy statements were experimentally repeated only once, and we used a binary truth judgment task. Our results suggest that a single repetition was enough to increase belief in some conspiracy statements to the point of their being judged true rather than false. As more repetitions have been shown to increase the size of the truth effect (e.g., Fazio et al., 2022; Hassan & Barber, 2021), real-world settings – in which the same information may be repeated more than once – may lead to even larger effects of repetition on conspiracism.

One interesting question is why the truth effect was smaller for conspiracy than for trivia statements. Two possible explanations are implausibility and exposure rates. Conspiracy theories are less likely than trivia statements to be perceived as true regardless of repetition (as found here), presumably because they are epistemically risky (see the Introduction and Douglas & Sutton, 2023). This relative implausibility makes it likely that statements initially perceived as false remain perceived as false even if repetition increases their perceived truth. As a result, the truth effect is less likely to be observed for implausible (including conspiracy) statements than for relatively plausible statements, even if repetition increases perceived truth regardless of a statement’s plausibility (see Fazio et al., 2019, for a model and empirical support; see Lacassagne et al., 2022, for small increases in truth judgments of implausible statements).

The second explanation we consider is exposure rates. We used widespread conspiracy theories (such as "The Americans have never been to the Moon and NASA faked evidence and images of the Apollo mission's landing on the Moon," which 63% of a representative sample of the French population declared having already heard before participating in a survey; IFOP, 2019). As a result, it is possible that, for conspiracy statements, our manipulation added one more exposure to already familiar statements, whereas for trivia statements it constituted a single, first exposure. If so, and because there is evidence for a logarithmically shaped effect of repetition on truth judgments (Fazio et al., 2022; Hassan & Barber, 2021; the repetition-induced increase in truth judgments is larger for initial than for subsequent repetitions), the repetition-induced increase in truth judgments for already-heard conspiracy theories is likely to be smaller than for unknown trivia statements. Future research may build on the present design to orthogonally manipulate factors of interest beyond materials, such as statements' plausibility or experimental exposure rates.

Through analyses of two large-scale surveys (IFOP, 2017, 2019), Béna et al. (2019) found that perceived prior exposure could increase conspiracism. However, even if perceived exposure is associated with actual exposure, evidence for a causal effect of repeated exposure on conspiracy beliefs has been lacking. The present experiment provides such evidence by showing that repetition increases the perceived truth of conspiracy statements.

We recommend exercising caution regarding the generalizability of the current findings to richer, real-world contexts. To determine the causal role of repetition on conspiracism, we used a truth effect paradigm, which is particularly suited to studying how truth judgments depend on repeated exposure to statements. In the present experiment, statements were displayed without context or source information. In real-world contexts, statements come with various additional information, such as a source that may be more or less credible, familiar or unknown, or belong to one's own social group or another, to name a few. On social media, pictures often accompany the titles of news articles, and comments and reactions appear next to the statements. Whether valid or not, many cues can inform truth judgments, and repeated exposure is only one of them. Whether repeated exposure increases conspiracism in natural settings is thus an open empirical question. Of interest, Nadarevic et al. (2020) found that participants rely on multiple cues to judge the truth of statements related to education, health, and politics in simulated social media posts. Testing whether repetition increases conspiracism in such settings would help identify when repetition delivers cues for truth judgments.

If repetition increases conspiracism beyond the procedure we used, a challenge is to reduce this effect. The truth effect with trivia statements is robust, and reducing it to non-significance is difficult. For instance, asking participants to avoid the truth effect reduced it but did not eliminate it (Calio et al., 2020; Nadarevic & Aßfalg, 2016). This result suggests that repetition-induced conspiracism may be difficult to eliminate, too, although empirical evidence is still lacking.

Interestingly, research has found that repetition increases "has been used as fake news on social media" judgments – a "fakeness-by-repetition" effect (Corneille et al., 2020; see also Béna et al., 2022). This effect suggests that repetition may sometimes help fight misinformation effects rather than consistently being an issue to overcome. More research on the fakeness-by-repetition effect with consequential statements such as conspiracy theories and other types of misinformation would help identify judgment contexts where repetition can be used to fight belief in misinformation. Other interventions, such as orienting information processing on statements' truth right from the exposure phase, may help reduce the truth effect (e.g., Brashier et al., 2020; Nadarevic & Erdfelder, 2014; Smelter & Calvillo, 2020; see the "accuracy focus" to reduce the spread of misinformation, e.g., Pennycook et al., 2020, 2021; Roozenbeek et al., 2021). Whether such manipulations limit the effect of repetition on conspiracism is an important question for future research.

Conclusion

Repetition may be a simple way to increase conspiracism. The present experiment showed that the effect of repetition on truth judgments extends to conspiracy statements, regardless of cognitive style and conspiracy mentality. As we were interested in the causal role of repetition on conspiracism, we relied on a truth effect paradigm with minimal contextual information. Future research may test whether repetition increases conspiracism when other and possibly more diagnostic information is available. If this is the case, identifying ways to reduce repetition-induced conspiracism may contribute to fighting conspiracism as a whole.