Introduction

One of the ways that journal editors attempt to manage challenges to the responsible conduct of research is to require ethical assurance statements from authors submitting papers for review. Over and above standard publication agreements, journals may require that published research reporting data from human subjects document evidence of research ethics approval. Many journals also require authors to offer similar assurances that they have a data sharing plan and that their analyses of public opinion data conform to professional standards for reporting and interpretation. Arguments in the research integrity literature detail the need for these assurances as a safeguard against research misconduct and questionable research practices.

That these requirements are ethically desirable does not mean that they are a normal part of publication practices in all journals that publish relevant research. We therefore set out to investigate the prevalence of such ethical assurance statement requirements in political science journals—a set of journals that routinely publish work involving human subjects research and data-intensive studies and, less frequently, sponsored research.

To investigate the prevalence of ethical assurance statement requirements in political science and related journals, and to assess the overall level of concern with ethical authorship and editorship, we surveyed the editors of political and social science journals. We modeled our survey of political science editors' concerns with publication ethics and other ethical authorship matters on a previous survey on these topics conducted by Wager et al. (2009). We asked political science journal editors how frequently they perceived incidents of unethical conduct in their publications, how confident they were in addressing issues of unethical publication, and whether managerial issues pertinent to running a journal impeded their ability to attend to problems of ethics management at their publications.

Related specifically to the issue of ethical assurance statements, in our survey, we asked the question,

Which of the following requirements does your journal impose for submitted or accepted manuscripts (check those that you require in some form):

  1. Documentation of human subjects protections approval as appropriate,

  2. Documentation of conformance to the standards for reporting and interpretation of public opinion data promulgated by the American Association for Public Opinion Research (AAPOR 2011) or similar organizations,

  3. A data sharing plan, including contact information and use rights, whereby the data used in accepted papers can be acquired for replication and related purposes.

Excluding those journals for which these three requirements are irrelevant, such as political philosophy journals, we found that very few of the responding editors indicated that their journal requires a human subjects assurance statement or conformity to reporting standards for public opinion data. A substantial minority of the journals, however, did require a data-sharing plan.Footnote 1

Background of the Problem

Ethical lapses in research are a matter of increasing public and professional concern. Public concern over abuses of human subjects in research and over research misconduct is fueled by numerous stories in major news outlets’ science pages. Professional documentation of breaches of ethical standards, such as duplicate publication and research misconduct, routinely appears in high-impact professional outlets, like Nature and Science, as well as in smaller disciplinary journals. Much of this negative press is directed at the biomedical and physical sciences. Whether the political and social sciences are prone to such ethical lapses has not yet been studied.

While we would like to believe that the political and social sciences are above such ethical lapses, our aim here is to test whether this belief is warranted. We surmised, based on our experience and on evidence of the growth of political science publications, that political science journals could be at risk of comparable ethical issues. For example, the increasing number of submissions to these journals and a (reported) increase in the publication pressures faced by young faculty suggest a greater probability that some manuscripts may include falsified, fabricated, or plagiarized information, or report data gathered from human subjects research that was not approved by the necessary regulatory authorities. Given conservative estimates by research integrity experts that research misconduct (e.g., falsification of data) may be published in 1 in 10,000 works and that questionable research practices (e.g., insufficiently reviewed human subjects research) may be present in as many as 1 in 100 works, we estimated that our disciplines could not escape this scourge entirely (Steneck 2006).

One method that journals may use to prevent incidences of misconduct from marring their pages is to require authors to submit assurance statements. In the biomedical and physical sciences, these assurances can include human subjects research ethics permissions, conflict of interest reporting, data sharing plans, and acknowledgements of assistance from external consultants to the project, such as biostatisticians. In the social and political sciences, issues of financial conflict of interest due to study sponsorship and lack of acknowledgement of external statistical help do not present such prominent issues. For political and social sciences, however, issues of human subjects ethics clearance, data-sharing plans, and data reporting standards figure prominently as methods for assuring ethicality and non-misconduct in research.

While the focus of the present paper is on the prevalence of ethical assurance statements in this sample of journals, our survey asked editors about many aspects of ethical authorship and publication. Other topics we queried editors about include their perception of the severity of ethical problems in manuscripts submitted to their journal, such as the prevalence of research misconduct; potential reviewer misconduct and problems with peer reviewer recruitment, management, and retention; duplicate submission problems; and the value of reputational rankings, such as those we used to select our journal sample, for assessing the quality of academic work. While this is not an exhaustive list, it matches the concerns of our predecessors—Wager et al. (2009)—and reflects many of the concerns in the current publication ethics literature (Bebeau and Monson 2011; Berquist 2010; Brice and Bligh 2005; Callaham 2003; Carlson and Ross 2010; Christakis and Rivara 2006; Fanelli 2009; Graf et al. 2007; Laflin et al. 2005; Marcovitch 2009; Marusic 2010; Norman and Griffiths 2008; Pitak-Arnnop et al. 2011; Wager 2007).

Method

Through a two-step process, we assembled a set of what can be called the reputationally most important journals in the opinion of political science scholars. We first adopted the set of 90 journals in political and social science that Garand and Giles (2003) used to compare reputational rankings of journals among political scientists in the United States with citation rankings from the ISI Journal Citation Reports. Garand and Giles created this set from the intersection of journals used in prominent earlier studies in political science, journals suggested by informally polling other United States political scientists about the most important journals in their sub-fields, and journals for which ISI Journal Citation Reports were available. Second, we informally polled another set of scholars about important journals because we were concerned that the Garand and Giles list did not include many appropriate journals in normative political theory and policy studies. The result of this process is a set of 112 journals that arguably constitutes the reputationally most prominent universe of journals that political scientists read and seek to publish in.

We collected the email addresses of the editors from the websites of the 112 individual journals and emailed surveys to the editors of these journals.Footnote 2 We obtained 48 usable responses, a high response rate (43 %) compared to similar studies such as Wager et al. (2009) and Borkowski and Welsh (1998, 2000). Among these responses were replies from editors of journals considered some of the best in the general discipline as well as the best in individual sub-fields (e.g., international relations, comparative politics). For example, editors from five of the six top journals according to Garand and Giles’s reputational ranking responded to our survey.

The responding editors also gave us good representation across all research subfields, as shown by the distribution of journals across sub-fields in Table 1. The response rates by journal subfield mirror the proportions in the full sample of 112 reasonably closely, except for a relatively high response among American politics journals and a relatively low response among comparative politics journals. At the same time, a conservative interpretation is that this cannot be considered a truly random sample because the response rate falls below the most conventionally desired levels (Weisberg 2005, 190–191). Thus we cannot estimate population parameters with confidence from our sample. Yet the data allow us to characterize the concerns of a wide and prominent range of journal editors.

Table 1 Distribution of sample and responses

Our interest here was primarily descriptive: to survey the landscape of ethical compliance practices in political and social science journals, not to assess the causes of either ethical lapses in such journals or editors’ adoption of ethical assurance statements for their venues. Thus we approached these data descriptively, reporting the percentages of journals, whether general or field-specific, that responded positively to our query on their requirements in these three areas.

Results

The responses to our query to editors on their requirements for human subjects ethics review, public opinion data response rate and interpretation, and data sharing plan assurances reveal notable differences. The results show a stronger concern with data sharing than with either other assurance. Of the 48 journal editors who responded to the question (posed above in the introduction), 17, or 35 % (standard deviation 0.48), reported that they required an assurance of a data sharing plan, including contact information and use rights, from authors. In contrast, only 6 of 48, or 12 % (standard deviation 0.33), reported requiring assurance statements about human subjects or public opinion data reporting. There were some differences in the subfield orientation of journals that reported requiring these assurances, as shown in Table 2. Yet the relatively small number of cases in most of the categories of journals makes it hazardous to draw firm conclusions about subfield differences.
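The figures just reported are simple sample proportions, and the parenthetical standard deviations are those of a 0/1 indicator variable, sqrt(p(1 − p)). A minimal Python sketch reproduces them from the reported counts (the function name is ours, for illustration only):

```python
from math import sqrt

def proportion_and_sd(successes: int, n: int) -> tuple[float, float]:
    """Sample proportion p and the standard deviation of a yes/no
    indicator, sqrt(p * (1 - p)), as reported in the text."""
    p = successes / n
    return p, sqrt(p * (1 - p))

# Counts reported above (n = 48 responding editors).
p_data, sd_data = proportion_and_sd(17, 48)  # data sharing plan required
p_hs, sd_hs = proportion_and_sd(6, 48)       # human subjects / AAPOR assurances

print(f"data sharing: {p_data:.0%} (SD {sd_data:.2f})")    # 35% (SD 0.48)
print(f"human subjects: {p_hs:.0%} (SD {sd_hs:.2f})")      # 12% (SD 0.33)
```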

Table 2 Distribution of requirements by subfield

We suspected, however, that some subsets of journals would be especially likely to require ethical and other assurances from authors. One of our expectations was that journals that publish work in all sub-fields (so-called general journals) and those ranked higher in prestige would be more likely to require ethical assurance statements. More generally, we expected that prominent journals would have higher standards in this respect. This expectation is based on our belief that the visibility of a journal would correlate with higher levels of readership, including particularly astute readers who may be keen to replicate studies (e.g., requesting shared data), to assess their results critically (e.g., needing AAPOR reporting requirements), or to replicate a study with human subjects (e.g., needing to know the circumstances of the previous research to use the data for secondary analysis).

To test this belief, we first examined the responses only from the editors of the top three journals by reputation in political science and the top three journals in each subfield.Footnote 3 Yet we found that the results for this subset mirrored those of the full sample: of the 19 journals in this subset, only 11 % require human subjects assurances and 11 % require assurances regarding the reporting of public opinion data, while 58 % require a data-sharing plan.Footnote 4 The same pattern did not hold, however, for the journals that claim a general, or discipline-wide, scope on their information web pages. Those journals with a general scope are more likely to require ethical assurance statements than those with a more specialized remit. Of the 11 general journals whose editors responded to our queries, 36 % report requiring evidence of human subjects compliance, 9 % report requiring assurances related to public opinion reporting, and 64 % require a data sharing plan. Thus journals with the broadest professional audiences are the most progressive in these respects.
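At these small sample sizes each whole-percent figure corresponds to an integer count of journals. The sketch below back-infers counts consistent with the percentages reported above (the counts themselves are our assumption, not figures taken from the survey tables) and shows that they round to the reported values:

```python
# Back-inferred counts (an assumption) that reproduce the whole-percent
# figures reported in the text for the two subsets of journals.
subsets = {
    "top-ranked (n=19)":    (19, {"human subjects": 2, "AAPOR reporting": 2, "data sharing": 11}),
    "general scope (n=11)": (11, {"human subjects": 4, "AAPOR reporting": 1, "data sharing": 7}),
}

for label, (n, counts) in subsets.items():
    for item, k in counts.items():
        # Python's .0% format rounds to the nearest whole percent.
        print(f"{label} {item}: {k}/{n} = {k/n:.0%}")
```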

Finally, we also anticipated that journals affiliated with the Committee on Publication Ethics (COPE) would be especially likely to require all three kinds of assurances under consideration here. The editors of nineteen journals that are members of COPE responded to the survey. Among these nineteen journals, however, only 2 (11 %) require documentation of human subjects assurances, only 2 require assurances on the professional use of survey research data, and six (32 %) require a data sharing plan. Thus even the COPE-affiliated journals, which one would plausibly expect to be especially concerned about ethical matters, do not stand out on these policies.

Discussion

What might explain the divergences found in our survey regarding requirements for assurance statements in political and social science journals? Although our survey was not designed to offer a systematic answer to this question, we suspect that the knowledge of individual journal editors and disciplinary debates may be driving the requirement of these assurances. Each editor’s knowledge of authorship ethics cannot be established from these data; however, we suspect that editors who felt comfortable responding to our queries have at least an elementary knowledge of the topic.

Other factors that may drive requirements for ethical assurance statements may include disciplinary norms and debates. Data access and data sharing policies, for the purposes of replicating previous work, were the topic of major disciplinary debate in political science during the mid-1990s. Participants in this debate argued that a robust, but flexible, data sharing policy would help to advance replication studies, verification analyses, and secondary use of data for political science (King 1995; Herrnson 1995; Box-Steffensmeier and Tate 1995). Although considerable attention has been given to the role of Institutional Review Board (IRB) review of human subjects research in the political and social sciences, this debate is often colored by arguments that IRB review is ill-suited to or irrelevant for political and social sciences (Levine and Skedsvold 2008; Schrag 2010; Yanow and Schwartz-Shea 2008). Such arguments may minimize perceptions, widespread in other sciences, that a human subjects ethics approval assurance statement is a necessary component of ethical publications.

It is remarkable that many journals in the social sciences encourage authors to share their data, but that assurances of the ethical provenance of that shared data are not as frequently required. The editors of journals in our sample do not deal with publications from “high risk” human subjects research, such as clinical trials or genome-wide association studies, but many of these journals publish results based on other types of human subjects research, such as surveys, interviews, and experiments. Although an assurance statement of compliance with human subjects research ethics requirements does not guarantee that the data were gained under ethical circumstances, we are concerned that human subjects data shared under these journals’ data sharing policies might violate standards for the ethical use of secondary data.

Conclusion

Our goal in this survey was not to answer the question whether the political and social sciences escape the unethical research problems known to occur in journals in other disciplines. Instead, we sought to learn how well prepared the journals in this field are to minimize such problems. And our results suggest that the journals in this field are not collectively well prepared for these challenges. Yet we surmise that these inconsistencies demonstrate only that the social and political sciences may be at an earlier stage of development of such standards, but not that the social and political sciences “lag behind” the biomedical or physical sciences.

Determining how journal editors use these assurance statements may provide further evidence to show the evolution of ethical publication practices in the political and social sciences. Future research on this topic might explore the patterns of publication ethics policy adoption in political and social sciences to determine if there is a phenomenon of policy learning or policy diffusion between biomedical, physical, social and political sciences. Such research may support efforts to establish cross-disciplinary and interdisciplinary norms for publication ethics specifically and research integrity broadly.