In their Comment on our previous paper (Mariethoz et al., 2021), Heyard et al. state that our “claims are not supported by the data and analyses reported in the article” and that our “analysis is not reproducible for other researchers using the same data source”. They question both our methodology and our conclusions, comparing them to their own previous work and conclusions, which differ from ours. We contest these statements and explain below why they are unfounded.

A central criticism of our analysis is the fact that we use time-integrated data. We agree that our study does not consider the temporality of funding and research outcomes, i.e. we do not differentiate pre-grant and post-grant outputs. First, attributing a publication to a specific grant is difficult, if not highly artificial; such temporality is therefore hard to establish. Second, and more importantly, the temporality of funding and publications does not matter for our conclusions, which focus on establishing whether or not a correlation between funding and outcomes exists. More concretely, we can envision two interpretations for the observed lack of correlation: (1) funding does not result in more research outcomes, or (2) research findings and the associated publications do not result in increased funding success. Our analysis cannot and does not intend to distinguish between these two interpretations, but in either case there is cause for questioning, which is the aim of our article.

Regarding reproducibility, our sources are public data clearly indicated in our paper (the SNSF P3 and Scopus databases). The criteria for selecting researchers, grants and the funding period are clearly stated, such that anyone can replicate our findings and figures. The only information we did not disclose is the names and affiliations of the researchers, but even those could be recovered by replicating our methodology. Heyard et al. argue that “the 317 selected researchers were observed over ten years, but the calendar period is unclear”. Our article text, as well as the caption of Figure 1, mentions “computed over the last 10 years”, which corresponds to the period 2010–2019 (the ten years before our paper was submitted). While we could have indicated the period more precisely (i.e. from 1.1.2010 to 31.12.2019), we do not believe that this level of accuracy would have any impact on the conclusions, given that funding decisions are issued twice a year and the publication date of a paper is considered at a 1-year granularity.

In their comment, Heyard et al. write that “to be included, researchers had to have obtained more than CHF1000/year on average over these ten years, but this is not justified”. We take the opportunity to do so here. We made the deliberate choice to only consider academics that have been active in research during the studied period (as opposed to people who have left academia, moved outside Switzerland, retired, etc.). Most grant amounts in our study range from CHF 150′000 to over CHF 1′500′000. In practice, to fall under the CHF 1000/year limit, a researcher must have held only one grant that ended a few months after the beginning of the studied period. This rule removed only a single researcher and is therefore unlikely to affect our conclusions. This point is further discussed later in this reply, as it constitutes a central criticism of the work of Heyard and Hottenrott (2020).
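To make the arithmetic behind this threshold explicit (assuming, as in our analysis, that funding is averaged over the full ten-year window):

```latex
\[
1\,000~\text{CHF/yr} \times 10~\text{yr} = \text{CHF}~10\,000,
\qquad
\frac{10\,000}{150\,000} \approx 6.7\,\%.
\]
```

In other words, a researcher falls below the threshold only if the total funding attributed to them over the window is less than about 7% of even the smallest grants in our dataset, which in practice requires a single grant that barely overlaps the studied period.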

Heyard et al. request additional details on how we selected researchers for our analysis. We only considered as grant holders people listed as “Responsible applicant” of SNSF Division II grants in the P3 database (i.e. the Principal Investigator, as stated in our paper). This excludes co-PIs and collaborators, without further adjustment of the amounts, ensuring that all researchers are considered uniformly.
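For transparency, a minimal sketch of this selection logic, assuming a tabular export of the P3 database; the file name and all column names below are illustrative placeholders, not the actual P3 export schema:

```python
# Sketch of the researcher selection, assuming a CSV export of the SNSF P3
# grant database. Column names are illustrative placeholders.
import pandas as pd

grants = pd.read_csv("p3_grants.csv", parse_dates=["start_date", "end_date"])

period_start = pd.Timestamp("2010-01-01")
period_end = pd.Timestamp("2019-12-31")

# Keep Division II grants overlapping the 2010-2019 study period.
div2 = grants[
    (grants["division"] == "Division II")
    & (grants["start_date"] <= period_end)
    & (grants["end_date"] >= period_start)
]

# One record per "Responsible applicant" (the PI); co-PIs and collaborators
# are not counted, and grant amounts are not split or adjusted.
total_funding = div2.groupby("responsible_applicant")["approved_amount"].sum()

# Activity filter: keep researchers averaging more than CHF 1000/year over
# the ten-year window (see the threshold arithmetic above).
active = total_funding[total_funding / 10 > 1000]
```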

Heyard et al. also qualify our study as “simplistic” because the data are longitudinal averages and the statistical methods we use involve standard correlations. Our analysis is indeed simple and straightforward, owing to the nature of our data and our aim to present clear, reproducible and interpretable results. Heyard et al. use additional SNSF data on unsuccessful applicants, which are not publicly available. In our situation, it is not possible to compare granted and non-granted researchers, and hence difficult to define a control group. Because we restrict our analysis to a small and controlled dataset from a specific scientific domain (317 samples), a complex model with multiple covariates is not required to test for correlation and would not be statistically robust. We understand that the adjective “simplistic” is to be taken in comparison with the model used in the study of Heyard and Hottenrott (2020), which would be overly complex in our case. For instance, they pair funded researchers with unfunded ones of similar characteristics using nearest-neighbor matching, which requires a very large number of samples, as well as data on rejected grants that are unavailable to us. We argue that such a complex model lacks interpretability, in addition to the conceptual flaws and problematic simplifications pointed out later in this reply.
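As an illustration of what such a standard correlation test looks like on per-researcher averages, a minimal sketch; the data generated here are synthetic placeholders, and the choice of Pearson and Spearman coefficients via scipy is our illustrative assumption, not a verbatim excerpt of our analysis code:

```python
# Sketch of a standard correlation test between time-integrated funding and
# research outputs for 317 researchers. All data below are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)  # placeholder data for demonstration only
funding_per_year = rng.lognormal(mean=11.0, sigma=1.0, size=317)  # CHF/year
papers_per_year = rng.poisson(lam=4.0, size=317).astype(float)    # outputs/year

# Pearson captures linear association; Spearman is rank-based and more
# robust to the skewed distribution of grant amounts.
r, p_r = stats.pearsonr(funding_per_year, papers_per_year)
rho, p_rho = stats.spearmanr(funding_per_year, papers_per_year)

print(f"Pearson r = {r:.2f} (p = {p_r:.3f})")
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
```

With only a correlation to establish, such a test is fully interpretable and requires no covariates, matching or control group.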

The next criticism of our work concerns other possible sources of funding available to researchers, although Heyard et al. recognize the significant methodological difficulties that accounting for these would involve. We acknowledge this but maintain that most researchers would still apply for SNSF Division II projects, for the reasons mentioned in our paper: “the broad eligibility requirements, the absence of restrictions on resubmission and the high success rates mean that there is little rationale for a researcher not to apply for funding over a 10-year period. Furthermore, we only consider researchers that were funded at least once with this scheme, and who are therefore fully aware of the opportunities offered”. This is supported by our empirical experience that most researchers in the Swiss geoscience landscape use SNSF Division II projects as a central source of funding.

The authors regret that our paper does not discuss the “fertilizer” effect of grant writing and point to the study by Ayoubi et al. (2019) focusing on SNSF Sinergia grants. Notwithstanding the fact that the Sinergia scheme is inherently collaborative, unlike Division II grants, and the important body of literature pointing at a potential waste of resources related to grant writing (e.g. Kaplan et al., 2008; Fortin & Currie, 2013; Herbert et al., 2013; Pier et al., 2018), this is not the discussion we are having in our paper. As mentioned in the first and last paragraphs of our paper (and reiterated in the first paragraphs of this Reply), we focus on the weight given to grant income by hiring committees when they shortlist and appoint applicants. In our opinion, there are deep misunderstandings in the community of Swiss geoscientists regarding an expected link between funding success and research excellence, which we find important to demystify.

The authors then claim that we “ignore a larger stream of research that finds positive correlations between funding and research outputs”, but they mostly cite their own work. It was not our intention to exclude a particular school of thought, especially since our article was discussed with the SNSF (the authors’ employer) prior to submission, and suggested peer-reviewed references were included.

The section of the Comment titled “SNSF funding, productivity and dissemination: analysis of 8′527 researchers” discusses the authors’ own unpublished work (Heyard & Hottenrott, 2020), pointing out that their conclusions, based on a different model and a different dataset, contradict ours. While their study differs from ours in several respects, we wish to point out a number of shortcomings that struck us:

1. The study focuses excessively on the number of papers published by a researcher, which is not a reliable indicator of research quality. As we mention in our study, considering only the number of papers has been shown to be misleading, and a multitude of career-integrated metrics is a more reliable measure of a researcher’s excellence (if such metrics can be considered reliable at all, which can be debated). Furthermore, increasing the number of published papers is not, in our opinion, the goal of a taxpayer-supported funding agency, especially in the context of the current unsustainable increase in the number of published papers and the emergence of predatory publishers (Butler, 2013).

2. One main shortcoming of Heyard and Hottenrott (2020) is the possibility of bias in the data used. It is mentioned on page 10 that 12% of the researchers were removed from the dataset because they did not have a unique ID in the Dimensions database, creating a potential sampling bias. Furthermore, it is indicated on page 14 that, of the 8′527 remaining researchers, 1′583 did not publish any peer-reviewed papers in the preceding five years. This represents about 18.6% of the individuals considered in the study, who may not be research-active or may have left academia (see the arithmetic recap after this list). Such potential biases could be sufficient to explain the main finding of one additional yearly publication for grant holders, which corresponds to a roughly 20% increase, as researchers publish on average 4.9 papers per year.

3. Considering only a binary measure of whether researchers are funded or not can be seen as simplistic, because it hides large disparities in funding levels. As mentioned above, grant amounts vary by more than one order of magnitude. Pooling together individuals who receive grants of CHF 150′000 with those receiving over CHF 1′500′000 is a drastic transformation of the data that, in our opinion, makes the rest of the analysis dubious.

4. In light of the above, the evidence supporting the conclusions of Heyard and Hottenrott (2020) appears questionable, and it is their study rather than ours that is not supported by the data. Moreover, the fact that their analysis relies on undisclosed data means that it is not reproducible.
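For point 2 above, the two percentages follow directly from the figures reported by Heyard and Hottenrott (2020):

```latex
\[
\frac{1\,583}{8\,527} \approx 18.6\,\%,
\qquad
\frac{1}{4.9} \approx 20.4\,\%.
\]
```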

The last section of the Comment (“A call for more research on research”) provides some general perspectives on future research needs which, in our opinion, are not directly related to the main focus of our paper. While we agree on the general need for more research, we refer the reader to a recent and much more comprehensive review of these questions by De Peuter and Conix (2021).

To conclude, we also want to point out that the first two authors of the Comment are directly employed by the SNSF. We wish to clarify that our intention is not to be critical of the SNSF, but to rectify perceptions that can lead to biases in academic hiring and promotion committees in Earth and Environmental Sciences.