
Survey mode and satisfaction with democracy

Published online by Cambridge University Press:  10 August 2023

Hamad Ejaz
Affiliation:
Department of Political Science, Georgia State University, United States of America
Judd R. Thornton*
Affiliation:
Department of Political Science, Georgia State University, United States of America
*Corresponding author. Email: jrthornton@gsu.edu

Abstract

A growing body of evidence indicates the public is less committed to democracy than conventional wisdom long held. One possibility is that many in the public have internalized the norm that democracy is “good” but that such support is not firmly held. An implication of this reasoning is that because there is an expectation to express support for democracy, responses will be influenced by the presence of an interviewer due to social desirability effects. In this note, we examine the 2012 and 2016 American National Election Studies—in each year, a portion of respondents were interviewed via the internet while others were interviewed face-to-face. We identify a politically relevant difference between the two survey modes: those interviewed face-to-face express greater satisfaction with democracy. Indeed, the difference we identify is similar in magnitude to the difference typically observed between electoral winners and electoral losers. Our result is robust to different measurement and estimation strategies. While levels of satisfaction are influenced by the presence of an interviewer, a follow-up analysis indicates that the relationship between satisfaction and winner–loser status is similar across modes.

Type
Research Note
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press on behalf of the European Political Science Association

As evidence has accumulated regarding democratic backsliding, scholars of both comparative and American politics have turned their attention to scrutinizing the public's commitment to democracy and democratic norms. While it has been argued that the public's support for democracy is declining (Mounk, 2018), expressed support for democracy in public surveys remains high, including in the US (Voeten, 2018). However, a growing body of evidence indicates these positive attitudes may be held less firmly than they appear. Along these lines, in this note we examine if a social norm to support democracy has led to an overestimate of positive attitudes toward democracy.

Although expressed support for democracy remains high, a substantial portion of the public expresses openness to “at least one nondemocratic approach—expert rule, autocracy, or military rule—as a good way to govern...” including “many in economically advanced nations”—this includes 53 percent in the United States (Wike and Fetterolf, 2018, 140). That is, there exists a portion of the public who could be termed “democrats in name only” in that they express support for democracy but also endorse processes incompatible with liberal democracy (Wuttke et al., 2022, 426).

These observational findings are consistent with experimental evidence demonstrating the limits of the public's commitment to democratic norms. For example, partisans fail to punish their leaders for espousing undemocratic positions (Graham and Svolik, 2020). This failure to punish co-partisans in power is particularly notable given that voters’ knowledge of democratic norms was pre-tested. In short, partisans support their preferred parties despite knowing that their policies would undermine democracy (Simonovits et al., 2022) and may even be willing to compromise on the constitutional processes underpinning a democracy when their preferred leaders deem it politically expedient (Kingzette et al., 2021). It has become increasingly clear that partisanship can trump support for democracy—a key component of which is accepting electoral outcomes, particularly when preferred parties or candidates face defeat. Elite rhetoric may play an important role in this regard: exposure to elite allegations of electoral irregularities may lead to public disillusionment with the transition of power in a democracy (Clayton et al., 2021).

Taken together, both survey and experimental findings suggest that public support for democracy is less firmly held than the existing literature supposed. An assumption that underlies much of this literature is that there exists a norm around expressing support for democracy (Svolik, 2019). If such a norm exists, this implies that responses to questions about democracy will be influenced by the presence of an interviewer as there is an expectation to express support for democracy.

It has long been recognized that characteristics of the survey process can influence responses (Deming, 1944). Among these characteristics, the presence or absence of an interviewer has been identified as a source of variation in survey responses (Wiseman, 1972). As such, the potential for survey mode effects has been recognized since the advent of surveys administered over the internet (Joinson, 1999) and has been confirmed by more recent evidence (e.g., Chang and Krosnick, 2009).

If a particular response to a question is perceived as socially undesirable, a respondent may avoid being entirely honest when answering (Locander et al., 1976), as individuals engage, consciously or not, in a process of image management whereby they answer questions so as to present themselves in the most positive light (Millham and Kellogg, 1980; Paulhus, 1984). If respondents are less susceptible to the pressures that drive social desirability when surveys are self-administered, those interviewed via the internet will be more honest, as the perceived cost of social sanctions is diminished. Social desirability effects are pervasive (Tourangeau et al., 2000, ch. 9) and their relevance is well established for both political attitudes (e.g., Krysan, 1998) and behavior (e.g., Karp and Brockington, 2005). Different patterns of responses between internet and face-to-face surveys are similarly well documented—for example, levels of partisan acrimony are higher among respondents interviewed via the internet (Iyengar and Krupenkin, 2018).

We combine insights from the literatures on social desirability and support for democracy. We argue that if (1) there is a norm to support democracy and (2) respondents feel freer to be honest in internet surveys, then differences should emerge in reported satisfaction across survey modes. Specifically, we hypothesize that those interviewed in person will express greater satisfaction with democracy. We test this hypothesis with surveys that are nearly identical with the exception of the mode of administration. We find support for our hypothesis and identify a politically relevant difference in self-reported satisfaction. We conclude that, absent the possibility of an immediate social sanction, respondents report less satisfaction with democracy, indicating a lower level of support than was often appreciated. Our result is robust to different measures and estimation strategies. Reassuringly, we demonstrate that substantive relationships—we examine winner–loser status and satisfaction—are not necessarily altered by survey mode.

Our focus on satisfaction with democracy warrants some discussion, as the ubiquitous satisfaction item does not directly tap the sort of illiberal attitudes associated with deconsolidation. While scholars disagree as to what precisely the item measures, there is some degree of consensus: it is thought to be an intermediate-level measure of political support, lying between diffuse support variables such as regime-type preferences and specific support variables such as approval of an incumbent political leader (Foa et al., 2020). Nevertheless, the lack of specificity in the question wording is an issue (Canache et al., 2001). Literal interpretations of the measure run into problems arising from subjective evaluations of what democracy is, and from the continual evolution of what democracy is understood to be at the time of research (Ferrin, 2016). However, as Foa et al. (2020) indicate, satisfaction with democracy reliably measures individuals' assessments of democratic performance in their country. In this sense, satisfaction with democracy can be interpreted as an individual's assessment of the country's political system as they experience it, rather than a generalized evaluation of democracy (Linde and Ekman, 2003), if not quite “an expression of approval of the democratic process” (Singh et al., 2012, 205). As a practical matter, the item is related to a host of relevant indicators, including satisfaction with democratic institutions (Lundmark et al., 2020) and the performance of democracy, including the rule of law and corruption (Wagner et al., 2009). Thus, even if dissatisfaction is not synonymous with illiberalism, the item can be utilized to test our hypothesis of a norm to express support for democracy.

1. Data

We utilize the 2012 and 2016 American National Election Studies (ANES) conducted by the Center for Political Studies (CPS). In each year, the CPS obtained two separate nationally representative samples—one conducted with face-to-face (FTF) interviews and the other over the internet. In 2012, 3567 respondents (65 percent of the total) who completed both the pre- and post-election waves were interviewed via the internet, compared to 1880 interviewed FTF. In 2016, 2566 respondents (71 percent) were interviewed over the internet compared to 1040 FTF respondents. While different sampling techniques were used to obtain the samples (e.g., cluster sampling was used for the FTF interviews), this does not necessarily pose an issue for our purposes. We acknowledge that a research design in which mode was randomly assigned after respondents agreed to be interviewed would be necessary to obtain a clean causal estimate of mode on satisfaction (Gooch and Vavreck, 2019). While we cannot treat survey mode as truly random, we may be able to treat mode “as if” randomly assigned (Dunning, 2008), allowing us to estimate the impact of mode—and thus, social desirability—on self-reported satisfaction.

We scrutinize the appropriateness of the assumption of “as if” random assignment by examining if demographic differences exist across modes, perhaps resulting from different patterns of response rates. This is essential, as it has been argued that studies of mode effects may be conflated with other aspects of the survey process such as “sampling method, response rates, or sampling frame” (Gooch and Vavreck, 2019, 144). Indeed, in both years there are slight sampling frame differences, as the FTF sample only includes respondents from the 48 contiguous states and Washington, D.C. However, only 0.23 percent of respondents are from Hawaii and Alaska, and as such the decision to include them does not alter the results we present. A larger issue is that the 2012 FTF sample included an over-sample of Black and Latino Americans.¹ We address this issue by utilizing survey weights and, as we discuss in more detail momentarily, by adjusting for a set of demographic covariates, including race, potentially related to survey mode. While we utilize the survey weights provided by the CPS in each of our analyses, this decision does not drive our results.

To examine if there are any demographic imbalances across the two samples, we estimate a model where the dependent variable is survey mode. We include age, income, sex, marital status, education, race, and Census Bureau region; coding instructions for each variable are included in the supplementary material (Appendix A). None of the demographic variables are associated with survey mode at the 5 percent significance level in either year.² Full results of the balance tests are presented in the supplementary material (Appendix B). While we observe balance between the samples, we estimate the relationship between mode and satisfaction while adjusting for demographic covariates, as we cannot treat mode as truly random; moreover, doing so increases the precision of the estimated relationship. We avoid including any variables that may themselves be influenced by mode.
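To illustrate, the following is a minimal sketch of such a balance test in Python using statsmodels; the file name, variable names, and codings are hypothetical stand-ins for those described in Appendix A.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names; mode is coded 1 = internet, 0 = FTF.
anes = pd.read_csv("anes_2012.csv")

# Regress survey mode on the demographic covariates. Coefficients that are
# indistinguishable from zero are consistent with "as if" random assignment.
balance = smf.logit(
    "mode ~ age + income + female + married + education + C(race) + C(region)",
    data=anes,
).fit()
print(balance.summary())
```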

As noted, we utilize the standard satisfaction item in which respondents are asked, “On the whole, are you very satisfied, fairly satisfied, not very satisfied, or not at all satisfied with the way democracy works in the United States?” While it has been argued that the measure “gauges people's responses to the process of democratic governance” (Anderson, 1998, 584) and can “be thought of as more concrete than measures that tap citizens’ views of democratic principles but as more diffuse than evaluations about the government in place” (Nadeau et al., 2021, 6), we investigate the robustness of our results using two related attitudes—outlined later—given the conceptual ambiguity discussed earlier. We initially examine the effect of mode by dichotomizing the satisfaction item, coding all those who reported being at least somewhat satisfied as one and all others as zero. We test the robustness of our results by utilizing the original four-category coding (coded so that larger values represent greater satisfaction).
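As a sketch of the recoding, assuming the raw item arrives coded 1 (“very satisfied”) through 4 (“not at all satisfied”)—the column names, again, are hypothetical:

```python
import pandas as pd

anes = pd.read_csv("anes_2012.csv")  # hypothetical file name

# Dichotomous version: one if at least somewhat satisfied, zero otherwise
# (assumed raw coding: 1 = very satisfied ... 4 = not at all satisfied).
anes["satisfied_binary"] = (anes["satisfaction_raw"] <= 2).astype(int)

# Four-category version, reversed so larger values indicate greater satisfaction.
anes["satisfaction_4pt"] = 5 - anes["satisfaction_raw"]
```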

2. Results

We begin by presenting the distribution of satisfaction conditioned on survey mode in Figure 1, where the satisfaction measure is coded dichotomously.³ The columns represent survey mode, while each row presents the data for a particular year. The gap between the percent satisfied and the percent dissatisfied is noticeably smaller in the internet sample in both years. For example, in 2016 the gap is 35 percentage points in the FTF sample compared to 28 percentage points in the internet sample.

Figure 1. Satisfaction with democracy conditioned on survey mode. Satisfaction is measured dichotomously—the percent satisfied represents those who are at least “somewhat satisfied” with democracy.
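For reference, a minimal sketch of the Figure 1 quantities, continuing the hypothetical example above (unweighted here for brevity; the paper's figures apply the CPS survey weights):

```python
import pandas as pd

anes = pd.read_csv("anes_2016.csv")  # hypothetical file name
anes["satisfied_binary"] = (anes["satisfaction_raw"] <= 2).astype(int)

# The satisfied-dissatisfied gap within each survey mode.
for mode_label, grp in anes.groupby("mode"):
    pct_satisfied = grp["satisfied_binary"].mean() * 100
    gap = pct_satisfied - (100 - pct_satisfied)
    print(f"mode={mode_label}: gap = {gap:.1f} percentage points")
```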

We estimate a linear regression model with a dummy variable representing survey mode to assess the effect of mode on self-reported satisfaction. We code those interviewed via the internet as one and those interviewed FTF as zero. A negative coefficient therefore indicates a lower likelihood of being satisfied among internet respondents. We present results for each year separately as well as for a pooled model. The result of interest, the coefficient for mode, is presented in Table 1. Full results are presented in the supplementary material (Appendix B). In 2012, being interviewed in person increases the probability of being at least somewhat satisfied by 16.5 percentage points (p < 0.001) and in 2016 by 7.5 percentage points (p < 0.001). This shift is similar in magnitude to known sources of satisfaction, including winner–loser status (e.g., Bernauer and Vatter, 2012; Halliez and Thornton, 2022). Models estimated with logistic regression—presented in the supplementary material (Appendix C)—return identical results.
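A minimal sketch of this model, using weighted least squares as a simple stand-in for full design-based survey estimation; the column names, including the weight variable, are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

anes = pd.read_csv("anes_2012.csv")  # hypothetical file name
anes["satisfied_binary"] = (anes["satisfaction_raw"] <= 2).astype(int)

# Linear probability model: mode = 1 for internet, 0 for FTF, adjusting for
# demographics and applying the CPS-provided survey weights.
lpm = smf.wls(
    "satisfied_binary ~ mode + age + income + female + married"
    " + education + C(race) + C(region)",
    data=anes,
    weights=anes["survey_weight"],
).fit(cov_type="HC1")  # heteroskedasticity-robust SEs, standard for LPMs

# A negative mode coefficient indicates internet respondents are less likely
# to report being at least somewhat satisfied.
print(lpm.params["mode"], lpm.pvalues["mode"])
```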

Table 1. The relationship between survey mode and satisfaction with democracy

The different sampling frames used for the two samples in each year do not influence our results. Models excluding respondents from Alaska and Hawaii return identical coefficients to those in Table 1. Full results are presented in the supplementary material (Appendix D).

Given the oversampling in the 2012 FTF sample discussed earlier, we assess if the decision to utilize survey weights is driving our results. Unsurprisingly, we identify demographic differences across the two modes when we examine the 2012 data without weights. When we estimate the relationship between survey mode and satisfaction while adjusting for demographic characteristics without weights, the estimated coefficient is quite similar to that reported in the first column of Table 1: −0.153 (p < 0.001). A model without weights using the 2016 data returns a coefficient of −0.071 (p < 0.001). Full results of these analyses are presented in the supplementary material (Appendix E).

We further examine the robustness of our main result by retaining the original four-category coding of satisfaction. We again observe a meaningful shift as a result of survey mode: being interviewed via the internet leads to a decline of 0.244 (p < 0.001) in 2012 and 0.130 (p < 0.001) in 2016. Further, our substantive conclusions remain the same when we explicitly take the ordered nature of the satisfaction item into account. Full results of each of these analyses are presented in the supplementary material (Appendix F).
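One way to respect that ordering is an ordered logit; a sketch via statsmodels' OrderedModel (shown unweighted, as OrderedModel does not accept survey weights; variable names remain hypothetical):

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

anes = pd.read_csv("anes_2012.csv")  # hypothetical file name
anes["satisfaction_4pt"] = 5 - anes["satisfaction_raw"]

# Ordered logit of the four-category item on survey mode plus demographics.
# The "0 +" drops the intercept, which OrderedModel absorbs into its
# estimated threshold (cut-point) parameters.
ordered = OrderedModel.from_formula(
    "satisfaction_4pt ~ 0 + mode + age + income + female + married + education",
    data=anes,
    distr="logit",
).fit(method="bfgs", disp=False)
print(ordered.summary())
```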

Finally, our main analyses did not adjust for partisanship, ideological self-identification, or vote choice, as each variable might itself be driven by survey mode. Reassuringly, we observe balance for each of these three variables across the samples in both years. Further, models that adjust for all three return similar results: the coefficient for mode is −0.147 (p < 0.001) in 2012, −0.084 (p = 0.001) in 2016, and −0.120 (p < 0.001) in a pooled model. Full results are presented in the supplementary material (Appendix G).

3. Extensions

Here we extend our analysis in two ways. First, we estimate models with alternative measures of attitudes about democracy. Second, we examine if the relationship between winner–loser status and satisfaction varies by mode.

3.1 Alternative measures of democratic attitudes

Given the ambiguity as to what precisely the satisfaction item measures, we examine if our result extends to other indicators tapping democratic goodwill. Lamentably, the ANES does not include measures of illiberal attitudes in either year. We are, however, able to examine political trust and efficacy. Political trust is a dichotomous measure of whether the respondent thinks the federal government is run for the benefit of a few or for the benefit of all; efficacy is measured using a two-item scale.⁴ Coding details are included in the supplementary material. As before, we estimate the model in each year as well as in a pooled model. Survey mode is related to both attitudes: those interviewed via the internet report lower levels of trust and less belief that their vote matters. We present results of these analyses in the supplementary material (Appendix I). While this analysis increases confidence that our results are not limited to the quirks of the satisfaction question, future research should more thoroughly examine mode effects on items more directly linked to illiberal attitudes—for example, support for military takeover or a preference for a less constrained executive.

3.2 Does the winner–loser satisfaction gap vary by mode?

If survey mode influences satisfaction, it is possible that it also influences the substantive relationship between satisfaction and known predictors. To assess this possibility, we examine if the well-established relationship between winner–loser status and satisfaction varies by mode. To do so, we estimate a standard model of satisfaction in which winner–loser status (coded by presidential vote) is interacted with mode. We control for interest, perceptions of the economy, ideology, income, education, and gender (coding instructions are presented in the supplementary material).
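A sketch of this interaction specification, under the same hypothetical naming assumptions as the earlier snippets (shown with the four-category coding):

```python
import pandas as pd
import statsmodels.formula.api as smf

anes = pd.read_csv("anes_2012.csv")  # hypothetical file name
anes["satisfaction_4pt"] = 5 - anes["satisfaction_raw"]

# Winner-loser status interacted with survey mode; the winner:mode term
# tests whether the winner-loser satisfaction gap differs across modes.
interaction = smf.wls(
    "satisfaction_4pt ~ winner * mode + interest + econ_perception"
    " + ideology + income + education + female",
    data=anes,
    weights=anes["survey_weight"],
).fit(cov_type="HC1")
print(interaction.pvalues["winner:mode"])
```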

A significant coefficient for the interaction term would indicate that the relationship varies by mode. Reassuringly, we fail to reject the null hypothesis in both years (p = 0.724 in 2012; p = 0.430 in 2016) and in a pooled model (p = 0.663). Full results are presented in the supplementary material (Appendix H). While it is worthwhile to further investigate if well-established relationships vary by mode, these results suggest scholars are on reasonably firm ground when examining substantive determinants of satisfaction no matter the mode.

4. Conclusion

Across different measurement and estimation strategies, we identify a meaningful difference in self-reported satisfaction with democracy between those interviewed face-to-face and those interviewed over the internet. Respondents’ expressed attitudes about democracy are influenced by the presence of an in-person interviewer, suggesting there exists a norm to support democracy. Consequently, existing estimates of the public's satisfaction with democracy may be biased upward when relying on in-person samples. With that said, we stress that even internet respondents are, on average, satisfied with democracy. We also demonstrated that our result extends to two other items tapping attitudes toward democracy: political trust and efficacy. While survey mode influences levels of satisfaction, a subsequent analysis demonstrated that, reassuringly, substantive relationships are not necessarily altered by mode—the relationship between winner–loser status and satisfaction is similar across the FTF and internet samples.

As scholars have charted the global retreat of democracy over the last several decades, attention has also turned to the public's commitment to democracy. While our results do not speak directly to issues of democratic consolidation or backsliding, they indicate that expressed support for democratic institutions is influenced by the manner in which such attitudes are measured. In particular, measures of public support for democracy obtained through in-person interviews may be prone to overestimation due to perceived social sanctions and “social image” concerns. That is, a portion of the public has internalized the norm that democracy is “good,” but it is less clear what precisely their expressed support entails.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/psrm.2023.32. Replication material is available at https://doi.org/10.7910/DVN/HNYJTA.

Acknowledgement

The authors wish to thank Shane Singh for helpful feedback.

Footnotes

1 Another difference in 2012 is that the internet sample was recruited from GfK's KnowledgePanel, thus including respondents more familiar with taking surveys. Reassuringly, “panel conditioning” appears to be unrelated to social desirability (Struminskaya, 2016). This different recruitment strategy does, however, make our assumption of “as if” random assignment more tenuous. Fortunately, the 2016 recruitment is similar across modes.

2 This result holds when we examine the bivariate relationship between each of these covariates and survey mode.

3 We display the distribution of the four-point scale conditional on mode in the supplementary material (Appendix F).

4 Respondents are asked if it matters who is in power and if their vote makes a difference.

References

Anderson, CJ (1998) Parties, party systems, and satisfaction with democratic performance in the new Europe. Political Studies 46, 572–588.
Bernauer, J and Vatter, A (2012) Can't get no satisfaction with the Westminster model? Winners, losers and the effects of consensual and direct democratic institutions on satisfaction with democracy. European Journal of Political Research 51, 435–468.
Canache, D, Mondak, JJ and Seligson, MA (2001) Meaning and measurement in cross-national research on satisfaction with democracy. Public Opinion Quarterly 65, 506–528.
Chang, L and Krosnick, JA (2009) National surveys via RDD telephone interviewing versus the internet: comparing sample representativeness and response quality. Public Opinion Quarterly 73, 641–678.
Clayton, K, Davis, NT, Nyhan, B, Porter, E, Ryan, TJ and Wood, TJ (2021) Elite rhetoric can undermine democratic norms. Proceedings of the National Academy of Sciences 118, e2024125118.
Deming, WE (1944) On errors in surveys. American Sociological Review 9, 359–369.
Dunning, T (2008) Improving causal inference: strengths and limitations of natural experiments. Political Research Quarterly 61, 282–293.
Ferrin, M (2016) An empirical assessment of satisfaction with democracy. In How Europeans View and Evaluate Democracy. Oxford: Oxford University Press, pp. 283–306.
Foa, RS, Klassen, A, Slade, M, Rand, A and Collins, R (2020) The Global Satisfaction with Democracy Report 2020.
Gooch, A and Vavreck, L (2019) How face-to-face interviews and cognitive skill affect item non-response: a randomized experiment assigning mode of interview. Political Science Research and Methods 7, 143–162.
Graham, MH and Svolik, MW (2020) Democracy in America? Partisanship, polarization, and the robustness of support for democracy in the United States. American Political Science Review 114, 392–409.
Halliez, AA and Thornton, JR (2022) The winner-loser satisfaction gap in the absence of a clear outcome. Party Politics, 13540688211058111.
Iyengar, S and Krupenkin, M (2018) The strengthening of partisan affect. Political Psychology 39, 201–218.
Joinson, A (1999) Social desirability, anonymity, and internet-based questionnaires. Behavior Research Methods, Instruments, & Computers 31, 433–438.
Karp, JA and Brockington, D (2005) Social desirability and response validity: a comparative analysis of overreporting voter turnout in five countries. The Journal of Politics 67, 825–840.
Kingzette, J, Druckman, JN, Klar, S, Krupnikov, Y, Levendusky, M and Ryan, JB (2021) How affective polarization undermines support for democratic norms. Public Opinion Quarterly 85, 663–677.
Krysan, M (1998) Privacy and the expression of white racial attitudes: a comparison across three contexts. Public Opinion Quarterly 62, 506–544.
Linde, J and Ekman, J (2003) Satisfaction with democracy: a note on a frequently used indicator in comparative politics. European Journal of Political Research 42, 391–408.
Locander, W, Sudman, S and Bradburn, N (1976) An investigation of interview method, threat and response distortion. Journal of the American Statistical Association 71, 269–275.
Lundmark, S, Oscarsson, H and Weissenbilder, M (2020) Confidence in an election authority and satisfaction with democracy: evidence from a quasi-natural experiment of a failed election in Sweden. Electoral Studies 67, 102216.
Millham, J and Kellogg, RW (1980) Need for social approval: impression management or self-deception? Journal of Research in Personality 14, 445–457.
Mounk, Y (2018) The People vs. Democracy. Cambridge, MA: Harvard University Press.
Nadeau, R, Daoust, J-F and Dassonneville, R (2021) Winning, losing, and the quality of democracy. Political Studies 71, 00323217211026189.
Paulhus, DL (1984) Two-component models of socially desirable responding. Journal of Personality and Social Psychology 46, 598–609.
Simonovits, G, McCoy, J and Littvay, L (2022) Democratic hypocrisy and out-group threat: explaining citizen support for democratic erosion. The Journal of Politics 84.
Singh, S, Karakoç, E and Blais, A (2012) Differentiating winners: how elections affect satisfaction with democracy. Electoral Studies 31, 201–211.
Struminskaya, B (2016) Respondent conditioning in online panel surveys: results of two field experiments. Social Science Computer Review 34, 95–115.
Svolik, MW (2019) Polarization versus democracy. Journal of Democracy 30, 20–32.
Tourangeau, R, Rips, LJ and Rasinski, K (2000) The Psychology of Survey Response. Cambridge: Cambridge University Press.
Voeten, E (2018) Are people really turning away from democracy? Journal of Democracy, Web Exchange. https://www.journalofdemocracy.org/wp-content/uploads/2018/12/Journal-of-Democracy-Web-Exchange-Voeten_0.pdf
Wagner, AF, Schneider, F and Halla, M (2009) The quality of institutions and satisfaction with democracy in Western Europe—a panel analysis. European Journal of Political Economy 25, 30–41.
Wike, R and Fetterolf, J (2018) Liberal democracy's crisis of confidence. Journal of Democracy 29, 136–150.
Wiseman, F (1972) Methodological bias in public opinion surveys. The Public Opinion Quarterly 36, 105–108.
Wuttke, A, Gavras, K and Schoen, H (2022) Have Europeans grown tired of democracy? New evidence from eighteen consolidated democracies, 1981–2018. British Journal of Political Science 52, 416–428.