1 Science in the Social Arena

Expertise and scientific policy advice form an essential resource for modern government. Understanding what is the case in a certain area is widely considered indispensable for taking political action wisely. Virologists and epidemiologists counsel governments on public health policy, independent central banks direct monetary policy, and climate scientists frantically struggle to coax governments into taking action. This widespread practice of scientific policy advice and expert guidance has prompted two kinds of worries among a wider audience. Parts of the general public harbor doubts about the relevance and reliability of the underlying scientific basis, while others are suspicious of the value judgments passed by experts and afraid of technocratic rule. In particular, various surveys have revealed that although science is widely respected in general terms, matters change when science affects daily life. A considerable fraction of the population suspects that economic and political powers stand behind research in fields of practical impact such as nutrition, health, the environment, or climate change. Science is taken to be subject to Big Money and Big Politics and not to be trustworthy for this reason. Studies in the fields mentioned are sometimes supposed to be unreliable because they are designed in a biased way, or assumed to be irrelevant because they rely on oversimplified conditions. Moreover, regarding value judgments, respondents attributed to experts a narrow science-and-technology perspective that was believed to disregard the broader human viewpoint (European Commission 2010; Scientific American 2010; Wissenschaft im Dialog/Kantar Emnid 2017; Carrier 2017).

The worry underlying such complaints is that social, value-laden influences on science compromise the epistemic integrity of scientific knowledge. In economically important fields, science is assumed to be at the mercy of the companies sponsoring studies and is feared to produce shaky and biased results that do not merit public trust. Among the fields featuring in the “replication crisis” is biomedicine, in which many alleged breakthroughs turn out to be non-reproducible (Harris 2017). Not infrequently, one-sided interests and evaluations affect the study design and guide the interpretation such that results of the desired kind and impact are more likely to emerge. Studies set up and performed in a slanted way yield insufficiently verified recommendations (Biddle 2007; Michaels 2008; Reiss 2010, 231–237; Hicks 2014, 3278–3279; de Melo-Martín & Intemann 2018, 117).

In addition to commercial interests, a political mission may also contribute to skewing a study. Gilles-Eric Séralini fed rats low doses of genetically modified maize for two years and found an elevated rate of cancer (Séralini et al. 2012). However, critics pointed out that the sample of rats used in the experiment was too small to obtain significant results and that the strain of rat was likely to contract cancer sooner or later anyway (de Souza & Oda 2013). The standard protocol for cancer studies would have demanded a group size five times larger and a strain of rat less liable to contract cancer. In other words, the design of the study made it unsuited to assessing cancer risks: the study would have indicated carcinogenic effects even if there were none. Séralini has a long-term track record as an anti-GMO activist and has accused the relevant industry of abusing the public as their guinea pigs. It might not be too far-fetched to assume that the design and interpretation of the study were supposed to facilitate the emergence of data in support of Séralini’s political views (Carrier 2018, 162–163).
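To see why a small sample combined with a high spontaneous cancer rate undermines significance, consider a back-of-the-envelope calculation with purely hypothetical numbers (not the actual figures of the Séralini study). Suppose the spontaneous tumor rate is around 50% and 8 of 10 treated rats develop tumors as against 5 of 10 controls. A standard two-proportion z-test gives

\[ z \;=\; \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}(1-\hat{p})\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}} \;=\; \frac{0.8 - 0.5}{\sqrt{0.65 \cdot 0.35 \cdot \left(\frac{1}{10}+\frac{1}{10}\right)}} \;\approx\; 1.4, \]

well below the conventional threshold of 1.96 for significance at the 5% level. With groups five times larger (n = 50) and the same observed proportions, the denominator shrinks and z rises to about 3.1, a clearly significant result. The same observed difference is thus uninterpretable in small groups of tumor-prone animals but evidentially meaningful in adequately sized ones.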

The prima facie conclusion is that influences originating in the social arena and imposed on science may spoil the trustworthiness of research. Given this unfortunate repercussion, I turn to the opposite view advocating value-free science. In this approach, scientific authority is limited to epistemic matters, while social, political, and economic value judgments (or nonepistemic value judgments) are the privilege of social bodies. This position is referred to as the value-free ideal (made more precise in Sect. 2). However, nonepistemic values give direction and relevance to scientific policy advice. Dropping such value judgments from policy advice would make it mostly insignificant and useless. For this reason, many scholars have relinquished the value-free ideal, thereby running the risk of compromising the epistemic authority of science and making policy advice a partisan endeavor. The argument I develop seeks to uphold an adjusted version of the value-free ideal.

In Sect. 2, I present the value-free ideal in its widely adopted form. I argue in Sect. 3 that this ideal fails since it is neither possible nor recommendable to dismiss nonepistemic values from scientific policy advice. Thus, we seem to be caught in a dilemma: policy advice should stay aloof from value judgments while being inextricably shot through with values at the same time. I discuss two options for coming to grips with the challenge, namely, transparency (Sect. 4) and privileging nonepistemic values by epistemic means (Sect. 5), but eventually recommend a two-pronged strategy (Sect. 6): value-relevant, but still (approximately) value-free scientific policy advice can be given by adding social goals as separate premises (conditionalization) or by taking up such goals as political commissions. This means presupposing rather than promoting certain nonepistemic values. Engaging with values is legitimate in a value-free framework as long as no particular stance is advertised as being distinguished by science. This scientific restraint in judging social values is further emphasized by supplying a plurality of value-laden policy packages so that policy makers are provided with a spectrum of alternatives. Legitimate scientific policy advice may expound a diversity of policy packages, each laden with different values, and leave the choice to politics. In this way, scientists can respect crucial features of the value-free ideal and still give useful advice.

2 The Traditional Picture: Value-Free and Objective Science

The introductory considerations suggest that the classical picture of value-free science might look attractive. According to this view, science provides the facts and politics or society make value-based decisions. For instance, Noretta Koertge distinguishes between the contexts of discovery, justification, and application and grants social values (or nonepistemic values or contextual values) access to discovery and application. Such values may provide fruitful heuristic hints and should guide the practical use to which science is put. However, nonepistemic values should not be part of the context of justification. When it comes to judgment, politics and religion should be kept out of the lab (Koertge 2000). The conception underlying the value-free ideal is a division of labor between science and social forces. Science is an epistemic authority only; economic, political, and moral value judgments, or social value judgments for short, fall outside scientists’ area of competence (Weber 1917, 499, 511, 526). The business of scientists is to provide adequate explanation and understanding of nature, while social considerations are fed in by the people. It is the prerogative of democratic bodies to make choices regarding good society, human flourishing, and economic aspirations. Nonepistemic considerations are legitimate in selecting research problems and guiding the search for useful procedures and devices. Yet, such considerations should not encroach on decisions about which explanations are empirically supported and which understanding is sufficiently checked and verified.

The present-day version of this value-free ideal recognizes that value-laden choices enter into the context of justification. But such values are epistemic (or cognitive): they concern features like scope or precision, predictive force or explanatory power, testability or coherence (Kuhn 1977, 321–322; McMullin 1983, 6–8, 18–20; Mitchell 2004, 249–251; Carrier 2008, 274–275; Betz 2013, 207; Hudson 2016; Gundersen 2020, 92). Epistemic values delineate what kind of knowledge science is supposed to strive for or what sort of knowledge is worth knowing.Footnote 1 By contrast, nonepistemic values aim at social utility. The concepts of epistemic and nonepistemic values have fuzzy boundaries, but there are clear examples and counterexamples. Tracking down the Higgs boson was driven by epistemic values, while building light-emitting diodes (LEDs) of all colors was governed by nonepistemic values. It is worth emphasizing that there is no dichotomy between these kinds of values.Footnote 2 A given research undertaking can pursue epistemic and nonepistemic ends at the same time, as the achievements of Louis Pasteur famously reveal (Stokes 1997, 12–17, 71–74).Footnote 3

As a result, the prevailing version of the value-free ideal admits epistemic values to the context of justification and only insists on keeping nonepistemic values out of assessing the cognitive merits of theories. Furthermore, it is considered unproblematic that nonepistemic values are invoked for choosing research topics and for applying scientific findings. The research agenda may legitimately be shaped by the desire to solve practical problems (Dorato 2004, 52–57; Büter 2015, 20; de Melo-Martín & Intemann 2016, 501–502; ChoGlueck 2018, 705). By contrast, the concerns tied up with abandoning the value-free ideal are epistemic and political (de Melo-Martín & Intemann 2016, 502–503). The legitimate authority of science is limited to epistemic matters. Scientists should not infringe on the prerogative of the people to set social values, and the people should not encroach on the epistemic integrity of science. Overstepping these bounds would be detrimental to both science and democratic rule. Science would become part of social strife and lose its authority.

This account suggests a division of labor for scientific policy advice. Science supplies the knowledge from which policy advice proceeds but abstains from advocating social or political values. Such values are fed in by social forces, and policy-relevant results are inferred by combining epistemic and nonepistemic considerations. This scheme is in harmony with the value-free ideal.

This idea of giving science-based policy advice in a value-free manner has rallied practitioners. For example, Robert T. Lackey, an ecologist by profession, has argued that scientists should provide accurate information but stay neutral with respect to particular policies. For instance, the decline of a population of birds or fish in a specific area is a fact. Whether this fact warrants a certain ecological policy is an ought-question that goes beyond the legitimate purview of science. As a result, value-laden terms such as ‘degrading’ or ‘improving’ ecosystems or ‘good’ or ‘poor’ environmental conditions should be avoided in scientific policy advice. Appropriate terms are policy-neutral ones such as ‘change,’ ‘increase,’ or ‘decrease.’ Current ecological policy advice is replete with evaluations of the sort: human-caused extinctions are bad, ecosystems left unaffected by humans are good, diversity is to be appreciated, or indigenous species are preferable to invasive species. However, none of these value judgments has any basis in scientific knowledge (Lackey 2007).

This attitude bears a striking resemblance to Roger Pielke’s figure of the “science arbiter.” Pielke (2007, 2–6, 16) distinguishes a variety of ideal types of scientific policy advice, among them the science arbiter, who views herself as a mere repository of information about matters of fact. She responds to factual questions of politicians and decision-makers but says nothing about which political pathways are to be preferred. Any reference to nonepistemic values is eschewed.

The same approach characterizes the self-understanding of the German Radiation Protection Commission, which is expected to give science-based advice on risks produced by ionizing and non-ionizing radiation. In its understanding, this advice needs to rely on scientific standards alone. In a debate in 2001 on the potential hazards associated with the long-term use of cell phones, the commission was anxious to demarcate its work from politics. It saw its duty in examining whether known causal processes or epidemiological results suggested any detrimental impact. This meant, conversely, that the commission refused to recommend a reduction of the maximum permissible radiation intensity on the basis of the precautionary principle. The general thrust of this principle is that if activities are likely to pose a significant risk, precautionary measures should be taken even if the relevant effects are not established scientifically. The commission argued that adopting the precautionary principle and applying it such that concrete threshold values ensue is tantamount to a political decision about the desirable level of protection. Yet, proposing any such values would mean overstepping the limits of legitimate scientific advice (Carrier & Krohn 2018, 58).

The question is whether good scientific policy advice can be given on a value-free basis. What speaks in favor of this approach is the separation between epistemic and nonepistemic considerations and the concomitant division of labor between the epistemic contributions of scientists and the social evaluations provided by society. The complaints about commercialization and politicization show that it is the intrusion of social forces into science that makes people suspicious of its epistemic authority. Conversely, people also complain about the opposite phenomenon of presumptuous scientists who violate the normative prerogative of democratic institutions. Experts dissimulate the social decisions involved in choosing a policy and thus infringe on democratic rule. Such technocratic and expertocratic attitudes are widely rejected among scientists and philosophers of science alike (Hacker et al. 2019; Reiss 2019). It follows that the separation of kingdoms sounds like a good maxim for policy advice.

3 The Impact of Nonepistemic Values on the System of Knowledge

The question I pursue in this section is whether it is a sensible maxim to take science free of nonepistemic values as the sole basis of legitimate policy advice. Two major objections raised in the literature state that following this maxim is neither possible nor sensible (ChoGlueck 2018). The first objection concerns inductive risks, that is, the way science is supposed to deal with uncertainty. Taking up and expanding Richard Rudner’s (1953) classic argument, Heather Douglas has pointed out that adopting assumptions at various levels incurs inductive risks. Any such adoption may turn out to be mistaken in light of future evidence. Accepting or rejecting an assumption should take into account the practical damage done by being wrong. The two relevant kinds of mistakes are false positives and false negatives, i.e., the risk of falsely adopting an erroneous hypothesis and the risk of falsely discarding a correct hypothesis. Douglas claims that the threshold of acceptance should be chosen by comparing and weighing the negative practical impact of the two kinds of potential errors involved.
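One schematic way to render this weighing, in my own illustrative notation rather than Douglas’ own formalism, is as a decision-theoretic threshold. Let $L_{FP}$ be the practical loss incurred by acting on a false positive and $L_{FN}$ the loss incurred by a false negative. Minimizing expected loss then recommends

\[ \text{accept } H \quad \Longleftrightarrow \quad p(H \mid e) \;\geq\; \frac{L_{FP}}{L_{FP} + L_{FN}}, \]

where $p(H \mid e)$ is the credibility of the hypothesis in light of the evidence. The evidential bar for acceptance thus rises with the damage a false alarm would do and falls with the damage an overlooked hazard would do; setting it requires estimating nonepistemic losses, not merely epistemic probabilities.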

Douglas’ important contribution to the debate consisted in showing that inductive risk emerges at a large number of stages in the research process, not only at the final stage of accepting a hypothesis. Relevant decisions are needed when setting up and conducting an experiment, classifying samples, assessing the significance of sources of error, and interpreting the results. Douglas’ example is a study on the effect of dioxin on the emergence of cancer in rodents. Judging whether particular rat liver slides exhibit cancerous lesions needs to take into account the consequences of potential errors. Since the data did not clearly distinguish between different interpretations, adopting any one of them required comparing and weighing the damage done by mistaken choices. This is why there is a legitimate role for nonepistemic values in the context of justification (Douglas 2000) and, consequently, in scientific policy advice.

Douglas’ argument rightly throws into relief that different risks need to be weighed by nonepistemic standards in order to reach politically relevant advice. Take the example of a new and hitherto unknown virus hitting the population. Applying Douglas’ recipe demands that any decision about the health risks associated with the virus include a comparison of the various adverse effects of being wrong. We might overestimate the health risks, lock down the economy prematurely, and create financial damage without justification. Alternatively, we might underestimate the threat posed by the virus, let social life proceed as usual, and thereby cause unnecessary fatalities. As the Covid pandemic of 2020/21 has made obvious, scientific knowledge by itself fails to yield any unambiguous conclusion regarding policy-making. The same lesson emerges from the example of the radiation protection commission (see Sect. 2): the commission’s refusal to take social factors into account makes its advice politically barren.

Second, the claim that concepts are inherently value-laden (also called the “gap argument”) is advanced in favor of value-ladenness. The argument says that linking up pieces of evidence with theoretical statements relies on value-laden background knowledge (Longino 1990, ch. 3; Elliott 2011, 62–66; Hicks 2014, 3274–3275, 3283; ChoGlueck 2018, 705–711). It should be clear, but is not always acknowledged (Hudson 2016, 169), that research differs with regard to value-ladenness. The search for a novel pain-reliever is guided and assessed by nonepistemic values, while the exploration of the astrophysical mechanism behind gamma-ray bursts is not. But this proviso carries no weight in matters of policy advice: questions of social concern are inherently laden with nonepistemic values.

For instance, ethical considerations are intrinsically implicated in studies on health and disease. Consider women’s health research. Between 1970 and 2000, the prevalent approach to addressing menopausal disorders took hormonal balance to be chiefly important and shifted social factors to the margins. Lack of estrogen was considered an illness in need of treatment. The generally accepted view at the time connected femininity and fertility so that the preservation of fertility became the criterion of success of any approach to dealing with menopausal disorders. This criterion boosted hormone replacement therapy. The situation changed after a rival sociopolitical approach suggested that menopausal states should be regarded as a normal biographical transition like puberty (Büter 2015, 23–25).

This example shows that research areas may be intrinsically value-laden so that it is impossible to leave value commitments to the social and political sphere. Differences in sociopolitical evaluation affect how the field is conceptually structured, how relevant and credible certain studies are taken to be, what looks like promising avenues of research, and how research endeavors are to be assessed. Had the sociopolitical picture of womanhood been different, menopause would perhaps not have been regarded as a disease in the first place and hormone replacement not as a cure (Büter 2015, 19–22).

Considerations like these illuminate the essential and constructive role of nonepistemic values in such research: they serve to determine relations of significance. The image of femininity is one such instance, in which criteria for judging the relevance of data and standards for assessing the success of certain approaches are provided by nonepistemic values. Such values create the distinction between what is important and what is negligible (Steele 2012, 900; Intemann 2015, 223–224). This is an in-principle feature because the demand to consider all possible evidence is self-defeating. This is why it would be ill-conceived to complain that the bias in health research was simply erroneous and the conclusion premature. Scientists cannot help approaching a field from a certain angle, and the data are always incomplete. In addition, the decision whether to double-check results further or to use the existing outcome for advising policymakers is made by appealing to the relevance of this outcome for the practical question at hand (Steele 2012, 903).

Against such a value-laden backdrop, certain assumptions appear more plausible than others and are accepted on patchier empirical grounds than alternative views. Since nonepistemic values shape the approaches toward a research field, they exert their influence well before the choice between theories needs to be made. Such values affect which theories are developed in the first place and which data are gathered. Even if only epistemic values were employed in the subsequent theory assessment, the choice available would still be imbued with nonepistemic evaluations. As a result, nonepistemic values exert a heuristic influence on research endeavors. They may supply a research undertaking with a direction and hence play a constructive and fruitful role.

Accordingly, the deeper feature undermining the value-free scheme of policy advice is the interpenetration of the contexts of discovery, application, and justification. In particular, the contexts of discovery and application spill over into that of justification. Since nonepistemic values are legitimate and productive in the two former contexts, they inevitably, and rightly, affect the latter. The assessment of theoretical advances will always, and rightly, depend on the maturity of the relevant account and the degree of elaboration of alternatives. Hence, nonepistemic values used in selecting and elaborating research topics cannot help but influence the adoption of hypotheses (Okruhlik 1994, 201–203; Elliott & McKaughan 2009, 604–609). Think of the research on menopause mentioned above: if attention is restricted to hormone levels, epistemic merit will inevitably be judged in these terms. In sum, it is the lack of a sharp boundary between the legitimate domains of epistemic and nonepistemic values that rules out excluding nonepistemic values from the context of justification altogether.

The upshot is that there is a legitimate role for nonepistemic values to play in the context of justification, and that, owing to the influence of such values at multiple levels and in various respects, it is not feasible to purge science of nonepistemic values entirely. And if the scientific basis of policy advice is interspersed with nonepistemic values, this applies all the more to the advice itself. Yet, the preceding considerations have led us into a predicament: nonepistemic values are likely to prompt partisan judgment and bias, on the one hand, but at the same time play a positive role in structuring research and supplying relevance relations, on the other. Let us explore possible ways out of this predicament.

4 Pinpointing Rather Than Expelling Value-Judgments

Given that it is neither possible nor recommendable to dismiss nonepistemic values from scientific policy advice, what, then, distinguishes the acceptable and wholesome impact of such values from their misleading and illicit influence? An important precondition of any such answer is to identify the values at work and to distinguish them from facts. The suggestion springing to mind instantly is transparency. If nonepistemic values rightly enter the scientific basis of policy advice, then these values should be laid on the table explicitly. This suggestion is supported by a large number of authors (Douglas 2009, 155; Kitcher 2011, 151–155; Elliott 2013, 382; de Melo-Martín & Intemann 2018, 14–15, 126–128), but sounds rather banal. In fact, it is not, and in order to realize its impact, it helps to envisage a concrete example, namely, so-called “Integrated Assessment Models,” which play an important role in giving advice on climate policy. Integrated Assessment Models combine climate models with economic models of the damage done by climate change. Changes in temperature and precipitation are assumed to generate costs and suffering. This impact is distributed unevenly across different regions of the earth. However, such models proceed by maximizing a utility function that indicates time-aggregated societal wealth. That is, a global situation is considered superior to another one if its comprehensive utility value is higher. This looks like a principle taken from economics but is, in fact, a value decision to leave the unequal distribution of wealth out of consideration: only average quantities are taken into account. Nor do such Integrated Assessment Models pay attention to the different levels of prosperity among the people affected by climate change. For instance, the position of “prioritarianism” demands that additional weight be granted to the benefits or damages done to people who are worse off. A given damage hurts the affluent less than it does the impoverished. The choice of focusing exclusively on averages for assessing utility brushes aside all such considerations, and this certainly marks a value-laden decision (Schienke et al. 2011, 509–513; Frisch 2018, Sect. 3).
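The contrast can be made vivid in schematic notation (my illustrative rendering, not the formalism of any particular model). A purely aggregative objective ranks global situations by the summed utility of average consumption, whereas a prioritarian objective first transforms individual well-being by a strictly concave function $g$:

\[ W_{\text{agg}} \;=\; \sum_{t} U(\bar{c}_t) \qquad \text{vs.} \qquad W_{\text{prio}} \;=\; \sum_{t} \sum_{i} g(u_{i,t}), \]

where $\bar{c}_t$ is average consumption at time $t$ and $u_{i,t}$ is the well-being of individual $i$. Under $W_{\text{agg}}$, a given loss counts the same whoever bears it; under $W_{\text{prio}}$, the concavity of $g$ makes losses to the worse off weigh more heavily. Choosing between the two objectives is plainly a moral, not a factual, decision.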

The point is that such models incorporate nonepistemic values of social and moral import while passing the relevant assumptions off as descriptive and factual. It is an important goal of philosophical analysis to point out that moral values have been smuggled in at this juncture, to pinpoint these values, and thereby to illuminate the argumentative structure of the evaluation. This does not mean dismissing all such nonepistemic choices but making them explicit.

Here is another example. In fighting climate change, costs are to be borne in the present, while benefits occur only in the future. The question is whether postponing countermeasures and the expenses going along with them improves or worsens the cost–benefit ratio. The critical issue is how to factor in the time delay between spending money now and reaping the benefit much later. Such a time delay is captured in economics by discounting. It is usual practice to introduce discounts on future utility. The idea behind discounting is that an amount of money now in your pocket is more valuable than money you receive in the future. Conversely, a bill you need to pay today hurts more than a future invoice. Discounting makes present benefits more valuable than future benefits and future costs more acceptable than present costs.Footnote 4
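In its simplest textbook form (a standard formula, not one specific to the models discussed here), a benefit $B$ accruing $t$ years from now is assigned the present value

\[ PV \;=\; \frac{B}{(1+r)^{t}}, \]

where $r$ is the annual discount rate. The higher $r$ and the longer the delay $t$, the less a future benefit counts today, and hence the less present spending it can justify.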

In such a framework, the economic analysis of taking action against climate change depends decisively on the discount rate chosen. Contrasting choices emerged in the controversy between the economists Nicholas Stern and William Nordhaus in 2006/07. In his analysis, Nordhaus assumed a discount rate of 6% and concluded that present expenses to protect the climate will not be profitable. Assuming high long-term discount rates has a discouraging effect on taking action now because such present endeavors will never pay for themselves. By contrast, Stern set the discount rate at 1.4% and inferred that combating climate change now would pay off economically and should be launched sooner rather than later (Broome 2008). As a result, Nordhaus adjusted his Integrated Assessment Model and transformed the discount rate into a parameter that allows for various choices. A low parameter value was labeled “Stern-discounting” by Nordhaus (2007).
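To see how decisively this parameter matters, compare the two rates over a century (illustrative arithmetic, not taken from either analysis):

\[ \frac{1}{(1.06)^{100}} \approx 0.003, \qquad \frac{1}{(1.014)^{100}} \approx 0.25. \]

At Nordhaus’ 6%, avoiding one million euros of climate damage a hundred years hence justifies spending only about 3,000 euros today; at Stern’s 1.4%, it justifies about 250,000 euros, roughly eighty times as much. The entire practical disagreement about whether climate protection “pays” can thus hinge on this single parameter.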

The chief lesson to be learned from this example is, again, the tendency of scientists to treat as a fact what is in reality a value judgment. Nordhaus picked his value of the discount rate from market interest rates at the time, which made this value appear to be a matter of fact. However, as Stern rightly pointed out, setting the value of the discount rate is of high ethical import and therefore needs to be done by appeal to moral values. The reason is that this choice has a bearing on how the burden is shared among various generations and thus affects intergenerational justice. Such factors cannot be left to the ephemeral oscillations of the financial markets (Stern 2007, 41–48; Broome 2008).

This consideration suggests that one of the problems associated with the appeal to political and economic values in scientific policy advice is the intrusion of such values in the guise of facts. It is their hidden influence that is pernicious and makes recommendations illicitly one-sided and misleading. The objective should not be to drive all nonepistemic values out of science-based guidance but to keep facts distinct from values and to make value-judgments explicit.Footnote 5

As suggested in Sect. 3, nonepistemic values serve a constructive goal; they determine relevance relations in many fields and thus contribute to structuring models conceptually. For instance, whereas climate models had earlier been shaped by the objective of understanding small-scale effects such as cloud formation, the emphasis has shifted toward the large-scale effects of clouds on temperature and precipitation. The reason is that the latter quantities are more important for taking action (Hillerbrand 2014, 20–21). There is nothing objectionable about having climate models shaped by the goal of fighting climate change. Nonepistemic values thereby establish significance relations between hypotheses and the data and consequently affect the empirical assessment of these hypotheses. Nothing is wrong here provided that the underlying value choices have been made explicit and laid open.

These considerations suggest that it is an important virtue of scientific policy advice to make values visible and subject to explicit judgment. The non-trivial gain of transparency is respect for the fact-value distinction.Footnote 6 In contrast to Pielke’s science arbiter, I take it that good scientific policy advice is rife with nonepistemic values. Any such advice presupposes certain social values, assumes certain means, and explores whether the latter are suited to promote the former. However, good scientific policy advice minds the distinction between facts and values.

5 Privileging Nonepistemic Values for Epistemic Reasons

Given that nonepistemic values form an essential part of policy advice, how can we avoid the problems of bias and presumptuous impositions on the part of scientists that taint the invocation of values in scientific policy advice? While transparency is a worthwhile pragmatic maxim, it fails to address the underlying challenge of the legitimacy of admitting nonepistemic values to the judgment of policy-relevant claims. Thus, we need to dig deeper at this juncture.

Daniel Hicks (2014, 3290) has argued that nonepistemic values can be distinguished by assessing whether they promote or frustrate epistemic practices. As he claims, feminist values have advanced epistemic standards in archeology, while commercial values have degraded them in drug development. Feminist archeologists have replaced the “epistemological criteria of mainstream archaeology” with feminist values and thereby developed specifically non-androcentric “epistemological criteria” for assessing hypotheses. Feminist values are “synergistic” with epistemic standards in making sense of the fossil record (Hicks 2014, 3275–3277). Hicks’ contrary case is that commercial values have a detrimental impact on epistemic judgment in pharmacological research. Indeed, such values have led to biased study design, misleading data interpretation, and the suppression of unwelcome evidence. Commercial values in pharmaceutical research are “antagonistic” to “scientific values” (2014, 3277–3279, 3290–3291).

The trouble with this argument is that the recognized epistemic standards have been left unchanged by the pursuit of feminist and commercial projects alike. In Hicks’ judgment, feminist archeology exposed unfounded assumptions in the then-received theoretical framework and had the potential to be more empirically adequate (2014, 3277, 3291). Indeed, feminist philosophers of archeology agree and regard their approach as superior in light of conventional standards of judgment. Assigning women a more active role in the development of agriculture is assumed to produce better accord with the fossil data. Feminist archeology is claimed to be more coherent and comprehensive than the competing traditional account (Longino 1990, 128–130; Wylie 1996, 323, 329, 333). Such considerations are gender-neutral and not based on specifically feminist standards. Rather, androcentric archeology is said to lag behind in epistemic achievement as assessed by the conventional standards. No feminist “epistemological criteria” are employed.

In a similar vein, the epic dimensions of the complaints about the methodological flaws of commercially driven research testify that the practices cited by Hicks are generally viewed as violations of the received epistemic criteria. Biomedical research has been placed under a strict methodological regime of guidelines and protocols in recent years, which demonstrates that the traditional epistemic demands are widely endorsed. The vigorous response to their violation shows that they are still in force.

As a result, feminist archeology comes out superior on conventional epistemic values such as empirical adequacy and coherence, which have no specifically feminist ring, while the defects of commercially driven research are decried universally and thereby reinforce the demand to proceed by the book. What we do find in both cases is a heuristic influence of nonepistemic values. That is, certain directions of research are privileged by certain nonepistemic values. But the value-free ideal acknowledges that the topics addressed and the content of the accounts at hand may well be shaped by nonepistemic values (see Sect. 2). However, in neither case do nonepistemic values affect the epistemic standards considered legitimate in the pertinent scientific community.

Still, Hicks is right in diagnosing a difference between the archeological and the pharmaceutical case. The influence of nonepistemic values was beneficial in the former case but harmful in the latter. As I will argue in greater detail in the next section, underlying the synergistic effect in the archeological example was the fact that feminist values served to break up a narrow and confined framework and created a broader approach to the topic. Conversely, an antagonistic effect of commercial values consists in limiting research endeavors to a small range of patentable drugs. Thus, what actually lurks behind judgments about the synergistic or antagonistic effects of values is the appreciation of plurality. What was epistemically worthwhile about nonepistemic values was their ability to prompt researchers to attack problems from various angles.

6 What Does Good Scientific Policy Advice Look Like?

Recall the predicament we are in. On the one hand, science is an epistemic endeavor and has no authority to make nonepistemic value-judgments. Moreover, making such judgments is taken by many people as infringing on the democratic privilege to choose how society and the economy should be organized. However, keeping nonepistemic values out of expert advice tends to invalidate such advice. Experts need to include policy-relevant values so as to structure the conceptual field at hand and give politicians a handle on the choices open to them. This means that good scientific policy advice can hardly avoid value judgments such as: “Preserving a variety of indigenous species is a good thing” or “Climate models should focus on global surface temperature and precipitation.” Research conducted in policy-relevant areas needs to rely on relations of significance that are established in part by social values (see Sect. 3).

Yet, there is a path in between. The first step is to distinguish between promoting and presupposing a nonepistemic goal. It is one thing to commit oneself to a social objective and a completely different thing to set such a goal as a hypothetical condition. Presupposing evaluations means stating them explicitly as separate premises.Footnote 7 Expert advice is conditionalized such that if certain goals were set, they could be accomplished by certain measures. For instance, a science-based recommendation can be prefixed by the normative premise that overestimating adverse side-effects is better than underestimating them. That is, choices à la Douglas are compatible with the value-free ideal if they are flagged as a separate condition. Presupposing a value is not tantamount to committing oneself to this value. A different and additional consideration is that even normative commitments are compatible with the value-free ideal if they are adopted by commission. Scientists are commissioned by policymakers to explore ways to achieve certain social goals. In such schemes, the values at hand are set from outside of science, and scientists are authorized to make the pertinent value judgments. Regarding climate change, researchers have been appointed by politics to conceive measures to save the planet (or rather ways allowing for the survival of humankind). Scientists did not choose between studying ways to preserve humanity and ways to kick it over the cliff; they act on behalf of politics (or democratic choice). Policy advice along such lines does not promote any nonepistemic values of its own. Evaluation is suspended and advice is elaborated “as if” the values were embraced. No conflict with the value-free ideal arises.

Such a concept might appear unrealistic in view of Douglas’ insight that value-laden choices need to be made at various levels (see Sect. 3). They concern methodological details and classificatory subtleties; making such choices requires a deeper familiarity with the subject matter in question. Consequently, handing them over to politics seems neither appropriate nor feasible. However, the general principles underlying such choices could well be explained by appealing to conditionalization or political commission. These general principles include the precautionary principle, as compared with the principle of sound science, or the weight conferred on different risks. While the precautionary principle accepts likely risks as a reason for taking preventive action (see Sect. 3), the competing principle of sound science considers regulatory action justified only if positive evidence for risks is available (Hansson 2007, 265). Advisors could signpost such nonepistemic principles inherent in their studies as separate premises or as political commissions.

The noncommittal stance regarding nonepistemic values suggests drawing up alternative value-laden policy packages which combine facts, scientific accounts, and nonepistemic premises.Footnote 8 Along these lines, advisors could elaborate a plurality of policy packages and tag them with provisos that express a noncommittal attitude toward their nonepistemic objectives. That is, such packages may each employ a broad range of different political, economic, or moral preferences. A set of different means-end scenarios may be conceived, each of which is structured by nonepistemic values. This job is not unlike another of Pielke’s types of policy advice, namely, the honest broker. This figure seeks to integrate scientific knowledge with the concerns of decision-makers such that alternative possible courses of action ensue. The honest broker clarifies and expands the scope of choices available to decision-makers and thus broadens the range of options open to them (Pielke 2007, 2–3, 17–19). The present account highlights the role of nonepistemic values in this endeavor. Alternative courses of action do not only respond differently to uncertainty but may also be directed at different values. In combating an aggressive virus, emphasis can be placed on preserving public health, keeping the economy strong, or protecting individual liberty. Since nonepistemic value judgments can be interpreted in a noncommittal way, the honest broker can be integrated into the value-free ideal.Footnote 9

In addition, this combination of conditionalization and plurality allows us to account for the independence of policy advice. “Independence” here means illuminating options and risks that may conflict with current social aspirations. Science-based advice should be more than pressing science into the service of popular social goals. Such independence is hard to square with commissioning, that is, direct guidance by politics or public participation. Instead, independent advice could be achieved by unfolding a variety of policy packages, each premised on a different set of social goals. Scientists should take courage to conceive alternative courses of action, which means, in particular, resisting the pressure from politicians to receive unambiguous advice. The expert ambition should rather be to enable politicians to make good fact- and value-based choices. Moreover, the spectrum of policy packages could be crafted such that it represents an even-handed approach on the whole (Lacey 2013, 79–81). Elaborating a range of diverse options is suited to making scientific policy advice nonpartisan and to avoiding biased adherence to powerful social forces. Policy advice given along these conceptual lines could proceed in agreement with the value-free ideal.

One of the reasons why scholars have largely given up the value-free ideal is that the procedure of simply listing facts (embodied in Pielke’s science arbiter) looks utterly unpromising. Consider Kevin Elliott’s (2011, 66–80) account of the value-ladenness of scientific policy advice. Elliott rightly rejects the idea that scientists simply hand over uninterpreted data to policymakers and rightly denies that the evidence suggests one recommendation unambiguously. The conclusion Elliott draws is also sound: scientists should “consider the major societal ramifications of their work” (2011, 80). However, this reasoning does not imply that scientists are “forced to make practical decisions” or are “forced to recommend” such decisions (2011, 77). Rejecting the figure of the science arbiter does not invalidate the value-free ideal. Presupposing values and engaging with values is perfectly admissible within this framework. Conversely, the trouble with giving up value-freedom is that committing science to certain nonepistemic goals is liable to produce a politicized and biased science that would cease to be the ecumenical source of knowledge on which all contending parties can rely. This would further fuel public suspicion that experts are hired guns who can be rented to fight for a political cause. Instead, recommendations could be part of alternative conditionalized policy packages, each characterized by different goals. As a result, adversarial societal factions will see themselves represented somewhere in this range of suggestions, and I expect that this broad representation of conflicting opinions strengthens public trust in the overarching and impartial character of scientific policy advice. In this setting, the selection from among this menu is made by democratic bodies. No value-laden decisions need to be made by experts, and the value-free ideal could be upheld (or almost, see Sects. 3 and 7).

As an afterthought, there is an additional mode of engaging with values while respecting value-freedom. Scientists may set out to check the coherence of values and assess their prospects of being implemented. Checking consistency and feasibility are value-relevant considerations that do not lay claim to any scientific authority over the values analyzed. An example is the judgment of climate scientists that, given the atmospheric conditions humankind has produced over the past 150 years, it is not possible to reconcile the temperature goals laid down in the Paris accord of 2015 with continuing the fossil-carbon-intensive ways of life that are socially cherished. Science can reveal, and even urge, that we cannot have it both ways. Value-free science or policy advice is not required to stay aloof from values completely and to swallow indiscriminately all value commitments from the social realm.

7 The Scope and Boundary of the Value-Free Ideal

To summarize, the value-free ideal in its prevalent understanding does not require that science shun any invocation of nonepistemic values, let alone epistemic values. It only says that nonepistemic values should not play a role in the justification or acceptance of scientific hypotheses. Science is not in a position to privilege certain social goals by letting them influence what counts as scientific knowledge. But it is in accordance with the value-free ideal that research undertakings are conceptually shaped and driven in their goals by social ambitions. My claim is that scientific policy advice can be faithful to this ideal to a significant extent and still be useful to politics.

In order to make this perhaps counterintuitive claim more plausible, let me recap the central steps of the argument. Giving good advice is inevitably interwoven with taking up and processing social aspirations and fears. Still, scientific policy advice need not subscribe to any such social goal and can rather treat it as a proviso or as a commission. In addition, good advice is characterized by opening up room for choice, which is realized by developing a variety of policy packages. Each of these packages combines factual claims, risk assessments, and social ambitions: each package should be coherent in the value judgments it includes, reliable in its assumptions about matters of fact, relevant in that the pathways envisaged are feasible, dependable in seriously considering side-effects, and distinct in the projected achievements. In particular, each such package should be transparent about the values entering the account, respect the fact-value distinction, and use it as a critical tool. And the spectrum of such policy packages should delineate different choices. Good scientific policy advice should not lend support to the creed that there is no alternative. Yet, in the end the choice is up to democratic bodies, not to scientists. In granting the decision to social bodies, scientific policy advice avoids the risk of appearing to impose values illegitimately on the public. I expect that respecting such scientific restraint makes the advice given more trustworthy.

I argued in Sect. 3 that the interpenetration of contexts is a limitation of the value-free ideal. Nonepistemic evaluations legitimately affect the research agenda and thus bear on the claims entertained and the evidence produced. The contexts of discovery and application creep into the context of justification and rightly wield influence over the epistemic judgments made. As a result, the body of scientific knowledge owes its composition in part to nonepistemic value judgments. The claim I make is that policy advice can be given without infringing on the value-free ideal in a way categorically different from the limitations this ideal faces for scientific knowledge in general. The remaining difference is that for scientific knowledge in general, breaches of value-freedom cannot always be avoided, while in scientific policy advice the guidance through nonepistemic values is essential.

What does infringe on this ideal is scientists standing up for nonepistemic values. In this case, scientists do not respond to questions posed to them by society or politics; they raise questions themselves. However, while such endeavors no longer qualify as scientific policy advice, they may still be legitimate as a different kind of undertaking, namely, as assuming the responsibility of science. The traditional justification of such responsibility is epistemic superiority. Scientists look further in scientific matters of social relevance than ordinary people do, and these deeper insights place an additional accountability on their shoulders. Karl Popper labeled this doctrine “sagesse oblige”: she who looks further is to be held accountable for the repercussions of these additional insights (Koertge 2000, 48–49).

A frequent way of assuming and implementing such responsibility is to ring warning bells early. Scientists alert the public to certain risks inherent in substances or processes that were unknown or not sufficiently heeded. Examples are scientists warning against the use of DDT, the depletion of the ozone layer, plastic garbage in the oceans, and, of course, climate change. In this role of calling attention to certain risks or, as the case may be, opportunities, scientists do not act as counselors but as public expert intellectuals. They do not lay out a set of alternatives but decry certain states of affairs or actively promote certain goals.

Couched in Pielke’s conceptual framework, this figure of the expert intellectual comes close to his “issue advocate,” who tries to convince society and politics of one particular choice and seeks to compel a particular decision (Pielke 2007, 2–3, 10–11). There is nothing objectionable about such behavior, provided that the issue advocacy is acknowledged (or, equivalently, Pielke’s “stealth issue advocacy” is eschewed) and not labeled as scientific policy advice. A scientist who assumes an active political role and stands up as a citizen, while still drawing on scientific knowledge, is not to be criticized. Scientists in this role are public intellectuals who intend to educate society and politics. If such engagement is performed transparently and without creating the misleading appearance of scientific policy advice, it is a praiseworthy endeavor. The chief point in this connection is that acting as an expert intellectual, and thus campaigning for certain nonepistemic values, is an endeavor different from giving good scientific policy advice. It is worthwhile all the same.