Received 17 July 2018; Accepted 29 July 2018

Abstract

This response addresses the excellent responses to my book provided by Heather Douglas, Janet Kourany, and Matt Brown. First, I provide some comments and clarifications concerning a few of the highlights from their essays. Second, in response to the worries of my critics, I provide more detail than I was able to provide in my book regarding my three conditions for incorporating values in science. Third, I identify some of the most promising avenues for further research that flow out of this interchange.

Part of an author-meets-critics book symposium on A Tapestry of Values: An Introduction to Values in Science by Kevin C. Elliott (Oxford University Press, 2017), with Douglas 2018, Kourany 2018, Brown 2018, and Elliott 2018


1 Introduction

I am very grateful to my critics for engaging with my book so thoughtfully. They have done a superb job of identifying my book’s major strengths and weaknesses, as well as the avenues it suggests for future scholarship. In this response, I will begin by providing some comments and clarifications in response to a few of my critics’ thoughts. Next, I will provide a more extensive discussion of an issue that deserves more careful attention, namely, my three conditions for incorporating values into science in an appropriate fashion. Finally, I will highlight some of the most promising avenues for further research that flow out of this interchange with my critics.

2 Comments and Clarifications

Let me begin by highlighting the metaphors that Heather employed in her assessment of my book; I thought they were delightful. She contrasts the view of science as an enterprise in bricklaying (with the values baked out of the bricks) with the view that scientists are engaged in weaving a tapestry (with value-laden threads permeating the tapestry). I heartily agree with her that the tapestry metaphor is valuable not only because it suggests that values are an integral component of science but also because it invites us to reflect on the institutional and social structures (the loom) that guide the weaving process. The last section of this response will return to these institutional structures.

I also agree with Heather’s assessment that the boundaries between the different kinds of value influences discussed throughout the book are fuzzy. I thought it was helpful to distinguish five different kinds of value influences for pedagogical and practical purposes, but I am not wedded to maintaining sharp distinctions between them. My primary goal was merely to encourage scientists and science-studies scholars to reflect about the many different ways in which values can influence science and to scrutinize whether particular types of value influences are appropriate in specific contexts.

Heather also suggested some better ways to formulate the distinction between direct and indirect roles for values in science. I am inclined to defer to her preferences, and I am especially keen on her suggestion that the distinction could be formulated in a way that accords with my preferred aims-based approach to values in science (Elliott and McKaughan 2014; Hicks 2014; Intemann 2015). She suggests that when values determine whether a choice is good, they play a direct role, whereas when they determine only whether a choice is good enough, they play an indirect role. I think this definition of the direct and indirect roles for values helps to clarify exactly how her view (at least in the past) has differed from my own (see e.g., Elliott 2013). She has argued that values should not play a direct role in central aspects of scientific reasoning, such as assessing the quality of models or hypotheses (Douglas 2009). I have argued, however, that scientists can legitimately take non-epistemic considerations as well as epistemic considerations into account when deciding what counts as a good model or hypothesis (Elliott and McKaughan 2014); thus, I am saying that values can play a direct role in assessing the quality of models or hypotheses. In contrast, Heather has resisted this conclusion and limited values to the indirect role of deciding whether there is a good enough fit between a model or hypothesis and the purely epistemic standard to which it is being held.

Let me turn now to a perceptive point made by Matt. Based on his admirably close reading of the book, he noted that I appeared to waffle back and forth between a stronger and a weaker theory of values and value judgments. For example, my definition of a value is “something that is desirable or worthy of pursuit” (Elliott 2017, 11). This suggests a strong theory of values, such that one can reason about whether a value really is desirable. He notes, however, that I immediately turn to examples of what scientists actually value, and I frequently talk about “our values,” which suggests the weak view that values are just what people do in fact happen to desire. Matt thinks it is important to push for a strong theory of values, and he worries that my book is at best ambiguous and at worst falls into a weak theory, according to which we cannot reason about our ultimate values or aims.

The ambiguities that Matt has identified in my language reflect genuine tensions in my thinking. On one hand, I agree with Matt’s and Elizabeth Anderson’s perspective that we can reason about what is valuable (Anderson 2004; Brown 2013), and I am drawn to his Deweyan view that we can inquire not only about science but also about social and ethical value judgments (Brown in preparation). On the other hand, I think we need to take very seriously my book’s argument that science is permeated with contingent choices that affect the outcomes of inquiry (Elliott 2017, 13). This serves as one of my justifications for deliberately incorporating values in scientific reasoning, and Matt calls it the “contingency” argument in his own book (Brown in preparation). In my view, the same choices or contingencies that inevitably arise in scientific inquiries also permeate our inquiries about value judgments, but they do so in even more significant ways. Inquiry about value judgments depends on numerous background assumptions, and while empirical evidence is often relevant for assessing these assumptions, in most cases it is not decisive. Thus, just as we often encounter transient underdetermination when we engage in scientific inquiry, we face the same underdetermination with a vengeance when inquiring about value judgments.

Given this pervasive underdetermination in our reasoning about value judgments, I think it is indeed relevant to examine what people do in fact regard as desirable and what values are in fact widely held in our society. One way to evaluate the quality of our reasoning about value judgments is to examine how our assumptions and conclusions compare to those of others. Thus, while it would undoubtedly be a mistake to accept the preferred values of our society without subjecting them to critical scrutiny, it would also be a mistake to summarily dismiss our society’s values just because they differ from the conclusions of one’s own theoretical inquiry. In fact, I think one of the best ways to start an inquiry about what is in fact desirable is to look broadly at the range of things that people actually do desire. Nevertheless, I agree with Matt that a weakness of my book is its failure to develop a more complete account of how to reason about value judgments. The closest I came was in my account of the three conditions for determining whether value influences are legitimate, to which I turn in the next section.

3 Transparency, Representativeness, and Engagement

Both Matt and Janet understandably worry that my book does not provide enough guidance about how to assess the value judgments that permeate science. Janet especially challenges me to elaborate on my three conditions: transparency, representativeness, and engagement. As I noted in my précis of the book, I avoided providing extensive philosophical discussions of these conditions because I wanted to keep the book accessible and interesting for a wide readership. Nonetheless, Janet is correct that my approach runs the risk of leaving readers without adequate tools for critically assessing the values that influence science. Let me take this opportunity to gesture toward how I would develop my conditions in more detail.

First, it is important to recognize that these conditions are not, in my view, strictly necessary or sufficient for incorporating values in science appropriately. Thus, these conditions operate more like “rules of thumb.” In some cases, it might be adequate for scientists to make the influences of values on their work as transparent as possible, without worrying about the additional conditions of representativeness and engagement. In other cases, transparency about values might be much more difficult to achieve, but the influences of values could be handled appropriately by achieving representativeness (i.e., employing value judgments that are widely recognized as being acceptable). Sometimes, it might be important to pursue a combination of these conditions, perhaps in conjunction with other conditions that are unique to specific situations.

Another important clarification is that these conditions can be achieved to varying degrees. As Heather rightly points out in her comments on the book, it is impossible to achieve perfect transparency about all the value judgments involved in scientific research; transparency is always achieved to a greater or lesser extent. Similarly, there can be more or less social agreement about the value judgments that inform scientific research, and the ethical principles underpinning particular value judgments can be more or less justified. Finally, the quality of engagement also comes in degrees; engagement exercises can be more or less open to all interested and affected parties, and the quality of the interaction between participants can vary as well. Thus, in order to use these conditions to evaluate whether value judgments are being handled appropriately, one must consider not only which conditions are most important but also the extent to which they need to be achieved in particular cases.

Let me turn next to how the conditions relate to one another. On Janet’s reading of my book, I make engagement primary, but she cautions that representativeness is arguably more fundamental insofar as it determines how engagement exercises should be structured. I would like to resist the call to make one condition more fundamental than the others; in my view, they are all important and they all have strengths that address weaknesses in the others.

Consider representativeness. I describe it in a fairly cursory way in the book:

[S]cientists and policymakers should strive to incorporate values that are representative of major social and ethical priorities. When clear, widely recognized ethical principles are available, they should be used to guide the values that influence science. When ethical principles are less settled, science should be influenced as much as possible by values that represent broad societal priorities. (Elliott 2017, 14–15, italics in original)

This condition is important because it reflects Matt’s concern with getting our values right or, when that goal is elusive, at least making our values democratically responsive. This condition works well when there are strong ethical reasons counting for or against particular value judgments, such as in the cases in my book where it was important to avoid sexism or racism. As I intimated in the previous section, however, I am skeptical that the ethical reasons in support of particular value judgments will be decisive in most cases. In other words, we will typically find either that different ethical theories support different conclusions about how to handle complex value judgments, or there will be uncertainty about what conclusions to draw from the available ethical theories. In my description of the representativeness condition, I suggest that we can respond to this problem by appealing to values that represent broad social priorities, but there are two reasons that this response will typically not be entirely satisfactory. First, there is such polarization in most contemporary societies that it will often be difficult to find values that are widely supported across the population. Second, we have seen throughout history that even in cases where values are widely accepted, we may later come to the conclusion that those values were ethically dubious.

Given the weaknesses of representativeness, the importance of engagement becomes obvious. In most cases we will face disagreement about which value judgments are most ethically responsible; thus, we need to find productive ways to work through these disagreements. Faced with this situation, engagement can be helpful in two ways. First, as Matt would emphasize, it can help promote productive inquiry about which value judgments are most justifiable. Second, it can provide fair ways of working through disagreements and deciding which value judgments should be employed even when different stakeholders cannot agree on which judgments are best. I am obviously a serious fan of engagement, insofar as I spent a chapter of my book illustrating the many ways in which it can be pursued. However, I also recognize that engagement has weaknesses, and Janet highlighted them perfectly in her comments on my book. Specifically, engagement exercises can generate very different results, depending on who participates in them and what procedures they follow (Kourany 2010). Thus, as Janet pointed out, efforts to use engagement to escape the ethical quandaries associated with representativeness may be doomed to fail. I would like to think that the situation is not as grim as it initially appears, insofar as there might be more potential for coming to agreement about what an ethically appropriate form of engagement would look like than for coming to agreement about how other value judgments throughout science should be made. Nonetheless, my book does not take up this task of scrutinizing which forms of engagement are most appropriate under particular circumstances.

Transparency provides a potential avenue for overcoming the weaknesses of representativeness and engagement. The goal of transparency is for scientists to be as clear as possible about their “data, methods, models, and assumptions so that others can identify the ways in which their work supports or is influenced by particular values” (Elliott 2017, 14). The beauty of this approach is that it provides recourse for those who think that the conditions of representativeness or engagement have been used inappropriately. Even if someone disagrees with the value judgments that end up influencing science, transparency ideally enables them to recognize those influences and respond accordingly. At minimum, the awareness that a problematic value judgment has affected a particular piece of scientific research would enable one to dismiss it as something with which one does not agree. In response, one could pursue an alternative line of research that is informed by different value judgments. In some cases, one might not even have to develop an entirely new research project; one could use data provided by the original researchers to reconstruct the alternative conclusions one would draw by using one’s own preferred value judgments.[1] Thus, transparency enables people to “opt out” of science that is informed by value judgments with which they disagree.

I have devoted a good bit of effort in my previous work to defending the importance of transparency about value judgments and exploring how scientists can best achieve it (Elliott 2006, 2010; Elliott and Resnik 2014; McKaughan and Elliott 2013; McKaughan and Elliott 2018). Nevertheless, I must acknowledge that this condition has at least two major weaknesses. First, as Heather points out in her comments, transparency is always imperfect. (For this reason, she suggests that it would be better to use a term like ‘openness’ or ‘explicitness’ rather than ‘transparency’. I am open to these alternatives (no pun intended), but to avoid confusion I have continued to use the word ‘transparency’ throughout this response.) Science is permeated with too many value judgments for researchers to successfully disclose all of them, and it is doubtful that they will even recognize many of the judgments associated with their work. A second weakness of transparency is that it does not always enable critics to determine what alternative conclusions they would draw based on their own preferred value judgments. In some cases, one cannot use the information provided by the original researchers to reconstruct conclusions based on one’s own preferred value judgments, and it might not be practical or financially feasible to pursue a new research project (Schroeder forthcoming; see also the discussion of “one-shot science” in Brown 2017). Because of these weaknesses, the conditions of engagement and representativeness take on new significance. Engagement can alleviate the first weakness (lack of openness) by helping to identify value judgments that need to be disclosed and communicating them effectively to stakeholders who care about them. Both representativeness and engagement can help address the second weakness (difficulty reconstructing conclusions with alternative values) by ensuring that we use our scarce resources to fund research that is informed by the most justifiable values.

One of the central lessons that I would draw from this discussion is that the conditions of representativeness, engagement, and transparency are all important; we cannot focus solely on one of them to the exclusion of the others. They are all imperfect, and it is often necessary to shore up the weaknesses of one condition by appealing to the strengths of the other two. We also need to think more carefully about how high our standards should be for implementing these conditions. Given their significant weaknesses, it is important to reflect on how well we can realistically expect them to be implemented and whether there are additional conditions to which we can appeal.

4 Avenues for Further Scholarship

The preceding section has made it clear that my three conditions for incorporating values in science all deserve more reflection. With respect to representativeness, it is noteworthy that even though both Matt and Janet are calling for more careful reflection about which values count as “good,” their suggestions pull in somewhat different directions. Janet encourages more attention to the ways that traditional ethical theories can be used to assess difficult value judgments, while Matt encourages a more pragmatic approach to inquiring about values. While these suggestions are not mutually exclusive, one gets the sense from Matt that he is dubious about the potential for ethical theories to resolve our questions about values unless we test and prioritize them through a deeply contextual and deliberative process of inquiry. In contrast, Janet seems likely to worry that the outcomes of this deliberative process cannot be trusted unless it is itself guided by ethical principles from the very beginning. We can look forward to seeing how Matt’s own book, Science and Moral Imagination, will move this discussion forward (Brown in preparation).

As the tensions between Matt’s and Janet’s perspectives suggest, an ongoing question is how to balance representativeness with engagement. Kristina Rolin (2017) and Kristen Intemann (2017) recently engaged in an exchange that highlighted the issues at stake. Rolin argued that Helen Longino’s “social value management” ideal (according to which value judgments should be scrutinized by a scientific community that satisfies conditions for acceptable engagement) provides an adequate alternative to the value-free ideal (see Longino 1990, 2002). In contrast, Intemann argued that Longino’s ideal is inadequate in part because its approach to engagement does not go far enough to exclude ethically problematic values; on Intemann’s view, some social values should be endorsed independently of deliberative engagement (see also Kourany 2010). Thus, the extent to which particular values should be regarded as fundamental to scientific activity (as opposed to merely bubbling up out of engagement among stakeholders with many different perspectives) is a crucial topic for future research on values and science.

Transparency or openness about values also promises to be a fruitful research topic. Critics of the value-free ideal frequently acknowledge that if values are to be incorporated into science in a legitimate fashion, it is important to strive for openness about those value influences (Douglas 2008; Elliott 2009, 2011; Elliott and Resnik 2014; Stanev 2017). This emphasis dovetails with the “open science” movement that has recently attracted a great deal of attention (Royal Society 2012; Soranno et al. 2015). This movement promotes a wide range of initiatives, such as efforts to publish in open-access journals, to make data and computer code publicly available, to report research methods clearly enough to facilitate replication, to make the peer review process more transparent, and even to promote peer review of the entire research process rather than just reported results.

These are exciting developments, but they raise a number of questions. For example, it is not enough just to release data to the public; steps need to be taken to make this information usable and to facilitate critical evaluation of it. Especially if one of our ultimate goals is to facilitate the identification and analysis of values that inform research, it is important to consider what forms of openness will best achieve this goal. In addition, there can be significant disadvantages to transparency, such as when the recipients of information misinterpret it or when those with dubious motives try to use it as an excuse to make scientists spend unreasonable amounts of time fulfilling data requests so that they cannot effectively proceed with their research. Moreover, the US Environmental Protection Agency under the Trump administration appears to be using the incomplete availability of data from some studies as a highly questionable justification for excluding those studies from its decision making (Friedman 2018). Thus, steps to promote more open science need to be implemented in ways that mitigate the accompanying dangers.

My three conditions are not the only topics that deserve further attention from philosophers of science. I heartily agree with Heather that it is time for those studying values and science to place increasing emphasis on the institutional factors that embed values in science. For example, over the past fifty years there has been a very significant shift in research funding from the public sector to the private sector. Whereas roughly two-thirds of research funding came from the US federal government in the 1960s, now roughly two-thirds of research funding in the US (and throughout the world) comes from private industry (Krimsky 2003; National Science Board 2016; OECD 2016). This shift is at least partly the consequence of deliberate policy decisions that merit critical reflection (Biddle 2014; Krimsky 2003).

In addition to reflecting on the decisions that have led to an increasing proportion of private research funding, philosophers of science would do well to scrutinize the values embedded in privately-funded research (Holman and Elliott 2018). Many philosophers have pointed out the ways in which this research can express problematic value influences, such as a focus on profit to the exclusion of concerns about distributive justice (Reiss and Kitcher 2009) or epistemic quality (Goldacre 2012; Holman and Elliott 2018). Nevertheless, privately funded science can also express beneficial values such as a concern for reproducibility (Edwards 2016) and for the translation of basic research results into phenomena that have practical utility (Carrier 2011; Wilholt 2006). Much work remains to be done to determine the conditions under which privately-funded science is most useful and to identify ways to steer this research so that it reflects beneficial values rather than problematic ones.

Another important institutional trend in scientific research is the rise of citizen science (Elliott and Rosenberg forthcoming). Citizen involvement in research can take a wide variety of forms, ranging from cases where professional scientists maintain most of the control and merely include citizens as data collectors to cases where citizen groups guide the research and deliberately challenge traditional scientific norms (Cooper and Lewenstein 2016; Ottinger 2017). All these new structures for doing science provide fascinating opportunities for highlighting implicit value judgments and introducing new values into science (Wylie 2014). It would be a mistake, however, to uncritically accept all forms of citizen science as valuable in all cases. In some instances, the level of rigor needed in order to make good decisions may require following methodologies that professional scientists are in the best position to implement. Thus, we need more normative reflection about how best to structure and implement citizen science in particular contexts. My own inclination is to return to my aims-based approach and to consider how particular approaches help us or hinder us from achieving our epistemic and social goals (Elliott 2013; Elliott and McKaughan 2014; see also Hicks 2014; Intemann 2015).

5 Conclusion

The field of science and values is a vibrant research area, and my critics have highlighted an array of important issues that merit further discussion. Many of these relate in some way to my three conditions for legitimately incorporating values into science, but there are also important questions to ask about the institutional structures that play a powerful role embedding particular values in science. My book has only scratched the surface of the work that remains to be done.

Literature cited

  • Anderson, Elizabeth. 2004. “Uses of Value Judgments in Science: A General Argument, with Lessons from a Case Study of Feminist Research on Divorce.” Hypatia 19: 1–24.
  • Biddle, Justin. 2014. “Intellectual Property in the Biomedical Sciences.” In Routledge Companion to Bioethics, edited by J. Arras, E. Fenton, and R. Kukla, 149–161. London: Routledge.
  • Brown, James Robert. 2017. “Socializing Medical Research.” In Current Controversies in Values and Science, edited by Kevin Elliott and Daniel Steel, 147–160. New York: Routledge.
  • Brown, Matthew. 2013. “Values in Science Beyond Underdetermination and Inductive Risk.” Philosophy of Science 80: 829–839.
  • Brown, Matthew J. 2018. “Weaving Value Judgment into the Tapestry of Science.” Philosophy, Theory, and Practice in Biology 10 (10). https://doi.org/10.3998/ptpbio.16039257.0010.010.
  • Brown, Matthew. In preparation. Science and Moral Imagination. Draft available at: http://www.matthewjbrown.net/research/science-and-moral-imagination/.
  • Carrier, Martin. 2011. “Knowledge, Politics, and Commerce: Science under the Pressure of Practice.” In Science in the Context of Application: Methodological Change, Conceptual Transformation, Cultural Reorientation, edited by Martin Carrier and Alfred Nordmann, 11–30. Dordrecht: Springer.
  • Cooper, Caren, and Bruce Lewenstein. 2016. “Two Meanings of Citizen Science.” In The Rightful Place of Science: Citizen Science, edited by D. Cavalier and E. Kennedy, 51–62. Tempe, AZ: Arizona State University Press.
  • Douglas, Heather. 2007. “Inserting the Public into Science.” In Democratization of Expertise? Exploring Novel Forms of Scientific Advice in Political Decision-Making, edited by Sabine Maasen and Peter Weingart, 153–169. New York: Springer.
  • Douglas, Heather. 2008. “The Role of Values in Expert Reasoning.” Public Affairs Quarterly 22: 1–18.
  • Douglas, Heather. 2009. Science, Policy, and the Value-Free Ideal. Pittsburgh: University of Pittsburgh Press.
  • Douglas, Heather. 2018. “From Tapestry to Loom: Broadening the Perspective on Values in Science.” Philosophy, Theory, and Practice in Biology 10 (8). https://doi.org/10.3998/ptpbio.16039257.0010.008.
  • Edwards, Aled. 2016. “Team Up with Industry: Combining Commercial and Academic Incentives and Resources Can Improve Science.” Nature 531: 299–302.
  • Elliott, Kevin. 2006. “An Ethics of Expertise Based on Informed Consent.” Science and Engineering Ethics 12: 637–661.
  • Elliott, Kevin. 2010. “Hydrogen Fuel-Cell Vehicles, Energy Policy, and the Ethics of Expertise.” Journal of Applied Philosophy 27: 376–393.
  • Elliott, Kevin. 2013. “Douglas on Values: From Indirect Roles to Multiple Goals.” Studies in History and Philosophy of Science 44: 375–383.
  • Elliott, Kevin. 2017. A Tapestry of Values: An Introduction to Values in Science. New York: Oxford University Press.
  • Elliott, Kevin C. 2018a. “Précis of A Tapestry of Values: An Introduction to Values in Science.” Philosophy, Theory, and Practice in Biology 10 (7). https://doi.org/10.3998/ptpbio.16039257.0010.007.
  • Elliott, Kevin, and Daniel McKaughan. 2014. “Non-Epistemic Values and the Multiple Goals of Science.” Philosophy of Science 81: 1–21.
  • Elliott, Kevin, and David Resnik. 2014. “Science, Policy, and the Transparency of Values.” Environmental Health Perspectives 122: 647–650.
  • Elliott, Kevin, and Jon Rosenberg. Forthcoming. “Philosophical Foundations for Citizen Science.” Citizen Science.
  • Friedman, Lisa. 2018. “The EPA Says It Wants Research Transparency. Scientists See an Attack on Science.” New York Times (March 26).
  • Goldacre, Ben. 2012. Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients. New York: Faber and Faber.
  • Hicks, Daniel. 2014. “A New Direction for Science and Values.” Synthese 191: 3271–3295.
  • Holman, Bennett, and Kevin Elliott. 2018. “The Promise and Perils of Industry-Funded Science.” Philosophy Compass: e12544.
  • Intemann, Kristen. 2015. “Distinguishing between Legitimate and Illegitimate Values in Climate Modeling.” European Journal for Philosophy of Science 5: 217–232.
  • Intemann, Kristen. 2017. “Feminism, Values, and the Bias Paradox: Why Value Management Is Not Sufficient.” In Current Controversies in Values and Science, edited by Kevin Elliott and Daniel Steel, 130–144. New York: Routledge.
  • Kourany, Janet. 2010. Philosophy of Science after Feminism. New York: Oxford University Press.
  • Kourany, Janet A. 2018. “Adding to the Tapestry.” Philosophy, Theory, and Practice in Biology 10 (9). https://doi.org/10.3998/ptpbio.16039257.0010.009.
  • Krimsky, Sheldon. 2003. Science in the Private Interest: Has the Lure of Profits Corrupted Biomedical Research? Lanham, MD: Rowman and Littlefield.
  • Longino, Helen. 1990. Science as Social Knowledge. Princeton: Princeton University Press.
  • Longino, Helen. 2002. The Fate of Knowledge. Princeton: Princeton University Press.
  • McKaughan, Daniel, and Kevin Elliott. 2013. “Backtracking and the Ethics of Framing: Lessons from Voles and Vasopressin.” Accountability in Research 20: 206–226.
  • McKaughan, Daniel, and Kevin Elliott. 2018. “Just the Facts or Expert Opinion? The Backtracking Approach to Socially Responsible Science Communication.” In Ethics and Practice in Science Communication, edited by Susanna Priest, Jean Goodwin, and Michael Dahlstrom, 197–213. Chicago: University of Chicago Press.
  • National Science Board. 2016. Science and Engineering Indicators 2016. Arlington, VA: National Science Foundation (NSB-2016-01).
  • OECD. 2016. OECD Science, Technology, and Innovation Outlook 2016. Paris: OECD Publishing. https://doi.org/10.1787/sti_in_outlook-2016-en.
  • Ottinger, Gwen. 2017. “Reconstructing or Reproducing? Scientific Authority and Models of Change in Two Traditions of Citizen Science.” In The Routledge Handbook of the Political Economy of Science, edited by D. Tyfield, R. Lave, S. Randalls, and C. Thorpe, 351–364. London: Routledge.
  • Pielke, Roger. 2007. The Honest Broker: Making Sense of Science in Policy and Politics. Cambridge: Cambridge University Press.
  • Reiss, Julian, and Philip Kitcher. 2009. “Biomedical Research, Neglected Diseases, and Well-Ordered Science.” Theoria 24: 263–282.
  • Rolin, Kristina. 2017. “Can Social Diversity Be Best Incorporated into Science by Adopting the Social Value Management Ideal?” In Current Controversies in Values and Science, edited by Kevin Elliott and Dan Steel, 113–129. New York: Routledge.
  • Royal Society. 2012. Science as an Open Enterprise. London: Royal Society.
  • Schroeder, Andrew. Forthcoming. “Which Values Should We Build into Economic Measures?” Economics and Philosophy.
  • Soranno, Patricia, Kendra Spence Cheruvelil, Kevin Elliott, and Georgina Montgomery. 2015. “It’s Good to Share: Why Environmental Scientists’ Ethics Are out of Date.” BioScience 65: 69–73.
  • Stanev, Roger. 2017. “Inductive Risk in Composite Outcome Measures.” In Exploring Inductive Risk: Case Studies of Values in Science, edited by Kevin C. Elliott and Ted Richards, 171–191. New York: Oxford University Press.
  • Wilholt, Torsten. 2006. “Design Rules: Industrial Research and Epistemic Merit.” Philosophy of Science 73: 66–89.
  • Wylie, Alison. 2014. “Community Based Collaborative Archaeology.” In Philosophy of Social Science: A New Introduction, edited by Nancy Cartwright and Eleonora Montuschi, 68–82. Oxford: Oxford University Press.

Notes

1. Andrew Schroeder (forthcoming) helpfully discusses the difference between what he calls fixed values and user-assessable values. In the case of fixed values, the recipients of scientific information cannot easily determine how the results would differ based on alternative value judgments; in the case of user-assessable values, one can discern how the results would differ.

Acknowledgments

I’m very thankful that Elyse Purcell initiated the idea of holding an Author Meets Critics session for my book at the 2018 Central APA Meeting, and I’m grateful to Matt Brown for running with the idea and organizing the session. I also want to thank Heather, Janet, and Matt for taking the time to engage with my scholarship; it’s a privilege to have such an impressive and thoughtful set of critics. Finally, I appreciate the willingness of the editors at PTPBio to publish our exchange; it was a pleasure to work with them.


© 2018 Author(s)

This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license, which permits anyone to download, copy, distribute, or display the full text without asking for permission, provided that the creator(s) are given full credit, no derivative works are created, and the work is not used for commercial purposes.

ISSN 2475-3025