General Commentary

Front. Psychol., 19 February 2015
Sec. Cognitive Science
This article is part of the Research Topic "From is to ought: The place of normative models in the study of human thought."

Alleviating the concerns with the SDT approach to reasoning: reply to Singmann and Kellen (2014)

Dries Trippas1*, Michael F. Verde2 and Simon J. Handley2

  • 1Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany
  • 2School of Psychology, Cognition Institute, Plymouth University, Plymouth, UK

A commentary on
Concerns with the SDT approach to causal conditional reasoning: a comment on Trippas, Verde, Handley, Roser, McNair, and Evans (2014).

by Singmann, H., and Kellen, D. (2014). Front. Psychol. 5:402. doi: 10.3389/fpsyg.2014.00402

In their comment on our article (Trippas et al., 2014a), Singmann and Kellen (2014; henceforth SK) suggest that our use of signal detection theory (SDT) provides an uninformative characterization of the data, that our application of SDT to causal conditional reasoning is unnecessary and misguided, and that the model does not provide a good fit to the data. We address each of these points in turn.

SK's concern that our use of SDT is uninformative rests on a single issue: how to interpret the shift in the location of the confidence points along the ROC when comparing the believable and unbelievable conditions (Trippas et al., 2014a; Figure 1), a shift that in real terms represents a greater tendency to accept believable arguments as “valid.” We find SK's focus on this aspect of the data surprising because it has no direct bearing on the main points of the article, which have to do with the changes in the shape and separation of the ROCs when we segregate the data in different ways (Trippas et al., 2014a, Figure 1, comparing the top and bottom panels). For this reason, we mentioned the confidence point shift only once, saying that it fits a pattern previously interpreted by Dube et al. (2010) as a shift in response bias, but which might also be due to a symmetric shift in the evidence distributions. This is a succinct way of stating what SK describe in great detail in their toy model. We have no problem with the fact that the confidence point shift has alternative interpretations. Although it is not integral to the thrust of the article, this aspect of the data is worth noting because the pattern is observed in other reasoning tasks and represents a point of continuity despite the apparent discontinuity in other aspects of the ROC data.

We gather that SK's focus on the unidentifiability issue is meant as a critique of the SDT model in general. In our view, it is not a compelling critique. Following convention, we use "accuracy" to denote sensitivity, the ability to discriminate between classes of items (valid from invalid). Accuracy depends on the relative distance between the valid and invalid evidence distributions. If some factor were to increase the argument strength of valid and invalid arguments by exactly the same amount, accuracy would remain constant. This is a specific circumstance that the SDT model cannot distinguish from a shift in response bias. The model is, however, unambiguous in distinguishing changes in accuracy from changes that might be ascribed solely to response bias. This is where the theoretical power of the model lies (e.g., Trippas et al., 2013).
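
To make the unidentifiability point concrete, here is a minimal sketch (ours, for illustration only; it assumes the equal-variance normal evidence distributions of the standard SDT model, with illustrative parameter values). Shifting both distributions up by the same amount yields exactly the same ROC point as lowering the criterion by that amount, whereas increasing the separation of the distributions changes accuracy:

```python
# Minimal sketch: an equal shift of both evidence distributions is
# indistinguishable from a criterion shift of the same size.
# Equal-variance normal distributions; all values are illustrative.
from scipy.stats import norm

def roc_point(mu_invalid, mu_valid, criterion):
    """Return (false-alarm rate, hit rate) for a given criterion."""
    fa = norm.sf(criterion, loc=mu_invalid)   # P("valid" | invalid)
    hit = norm.sf(criterion, loc=mu_valid)    # P("valid" | valid)
    return fa, hit

print(roc_point(0.0, 1.0, 0.5))   # baseline: (0.309, 0.691)
print(roc_point(0.3, 1.3, 0.5))   # both distributions shifted up by 0.3
print(roc_point(0.0, 1.0, 0.2))   # criterion lowered by 0.3: identical point
print(roc_point(0.0, 1.5, 0.5))   # greater separation: accuracy changes
```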

As for the question of response bias, the SDT model is widely used in domains like memory where criterion placement is an issue because theorists have a range of other tools at their disposal to deal with ambiguity (they can, for example, use manipulations that plausibly only affect response bias). In their final point, SK cite work in recognition memory (Morrell et al., 2002) to argue that trial-by-trial criterion shifts are implausible. These findings describe the specific case in which test stimuli are indistinguishable save for an internal signal of mnemonic strength. When the stimuli are overtly distinguishable on other dimensions, people seem quite capable of shifting their criterion from one trial to the next (Dobbins and Kroll, 2005; Aminoff et al., 2012). Whether people do use different response criteria when judging believable and unbelievable arguments remains an open question, but the memory literature provides ample reason to believe that it is plausible.

SK argue against the application of SDT to conditional reasoning because one can reach the same conclusions by examining raw acceptance rates. This misses the point of using a model like SDT, which is to view the data within a consistent, theoretically justified framework. The problems that can arise when raw acceptance rates are used to measure accuracy are well documented (Klauer et al., 2000; Dube et al., 2010; Heit and Rotello, 2014) and certainly apply here.
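
As a brief illustration of this documented problem (hypothetical numbers, again assuming an equal-variance normal SDT model), the sketch below holds sensitivity perfectly constant and shows that the raw difference between acceptance rates for valid and invalid arguments nevertheless varies with the placement of the response criterion:

```python
# With d' fixed, the raw acceptance-rate difference (H - F) still varies
# with response bias. Equal-variance normal model; illustrative values only.
from scipy.stats import norm

d_prime = 1.0                           # sensitivity, held constant
for criterion in (0.0, 0.5, 1.0):       # three response-bias settings
    hit = norm.sf(criterion - d_prime)  # P("valid" | valid)
    fa = norm.sf(criterion)             # P("valid" | invalid)
    print(f"c = {criterion:.1f}: H - F = {hit - fa:.3f}")
# Prints 0.341, 0.383, 0.341: the raw "accuracy" score moves with bias alone.
```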

SK make a good point in observing that the fit of the model to Roser and colleagues' ROC curves is poor. The problem may lie in the application of the model to aggregated data. It is well known that G² depends on sample size, such that aggregate model fits very often lead to violations of absolute fit. One alternative is to evaluate model fit for each participant individually (Cohen et al., 2008). To demonstrate how problematic it is to assess the fit of aggregated data in terms of G² when sample sizes are large, we combined data from 131 participants from previously published work on belief bias (Trippas et al., 2013, 2014b). We fit the believable and unbelievable ROCs separately, both aggregated and on a per-participant basis. The aggregate fit to the believable ROC was borderline acceptable, G²(3) = 6.97, p = 0.07. For the unbelievable ROC, the fit was unacceptable, G²(3) = 50.6, p < 0.001. The individual fits paint a far more favorable picture: for the believable problems, only 9 of 131 participants (fewer than 7%) showed a violation of fit (p < 0.05). The unbelievable problems fare even better, with only 4 of 131 participants (about 3%) producing ill-fitting data patterns. How can such drastically different patterns of fit emerge? Individual differences potentially play a large role: unbelievable problems elicit different reasoning strategies in different people (Trippas et al., 2013, 2014b,c), and aggregating across such heterogeneous data patterns will suggest that the model is inappropriate. We suspect that similar factors contributed to the poor fits reported by SK.
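
The sample-size dependence of G² is easy to verify. In the sketch below (hypothetical proportions, not the data reported above), the same proportional discrepancy between observed and predicted responses yields a G² that grows linearly with the number of observations:

```python
# G2 = 2 * sum(observed_count * ln(observed_count / expected_count)).
# With response *proportions* held fixed, G2 scales linearly with total N,
# so a negligible misfit becomes "significant" in a large aggregate sample.
# The proportions below are hypothetical, not the data reported in the text.
import numpy as np

observed = np.array([0.10, 0.20, 0.30, 0.25, 0.15])    # observed proportions
predicted = np.array([0.12, 0.18, 0.31, 0.24, 0.15])   # model predictions

for n in (100, 1000, 10000):                           # total responses
    g2 = 2 * np.sum(n * observed * np.log(observed / predicted))
    print(f"N = {n:>5}: G2 = {g2:.2f}")
# N =   100: G2 = 0.63;  N =  1000: G2 = 6.34;  N = 10000: G2 = 63.38
```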

SK's comments speak to a number of interesting issues that deserve to be raised in the wider discussion surrounding the SDT approach to modeling human reasoning. It is useful, however, to reiterate the point of our original article, which seems to have been lost in the discussion of side issues. A strict adherence to "normativism" often leads investigators to biased or misleading interpretations of phenomena (Elqayam and Evans, 2011). The default normative approach to the application of SDT to reasoning illustrates precisely this problem in the case of causal conditionals.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Aminoff, E. M., Clewett, D., Freeman, S., Frithsen, A., Tipper, C., Johnson, A., et al. (2012). Individual differences in shifting decision criterion: a recognition memory study. Mem. Cogn. 40, 1016–1030. doi: 10.3758/s13421-012-0204-6

Cohen, A. L., Sanborn, A. N., and Shiffrin, R. M. (2008). Model evaluation using grouped or individual data. Psychon. Bull. Rev. 15, 692–712. doi: 10.3758/PBR.15.4.692

Dobbins, I. G., and Kroll, N. E. A. (2005). Distinctiveness and the recognition mirror effect: evidence for an item-based criterion placement heuristic. J. Exp. Psychol. Learn. Mem. Cogn. 31, 1186–1198. doi: 10.1037/0278-7393.31.6.1186

Dube, C., Rotello, C. M., and Heit, E. (2010). Assessing the belief bias effect with ROCs: it's a response bias effect. Psychol. Rev. 117, 831–863. doi: 10.1037/a0019634

Elqayam, S., and Evans, J. S. (2011). Subtracting 'ought' from 'is': descriptivism versus normativism in the study of human thinking. Behav. Brain Sci. 34, 233–248. doi: 10.1017/S0140525X1100001X

Heit, E., and Rotello, C. M. (2014). Traditional difference-score analyses of reasoning are flawed. Cognition 131, 75–91. doi: 10.1016/j.cognition.2013.12.003

Klauer, K. C., Musch, J., and Naumer, B. (2000). On belief bias in syllogistic reasoning. Psychol. Rev. 107, 852–884. doi: 10.1037/0033-295X.107.4.852

Morrell, H. E. R., Gaitan, S., and Wixted, J. T. (2002). On the nature of the decision axis in signal-detection-based models of recognition memory. J. Exp. Psychol. Learn. Mem. Cogn. 28, 1095–1110. doi: 10.1037/0278-7393.28.6.1095

Singmann, H., and Kellen, D. (2014). Concerns with the SDT approach to causal conditional reasoning: a comment on Trippas, Verde, Handley, Roser, McNair, and Evans (2014). Front. Psychol. 5:402. doi: 10.3389/fpsyg.2014.00402

Trippas, D., Handley, S. J., and Verde, M. F. (2013). The SDT model of belief bias: complexity, time, and cognitive ability mediate the effects of believability. J. Exp. Psychol. Learn. Mem. Cogn. 39, 1393–1402. doi: 10.1037/a0032398

Trippas, D., Handley, S. J., and Verde, M. F. (2014b). Fluency and belief bias in deductive reasoning: new indices for old effects. Front. Psychol. 5:631. doi: 10.3389/fpsyg.2014.00631

Trippas, D., Verde, M. F., and Handley, S. J. (2014c). Using forced choice to test belief bias in syllogistic reasoning. Cognition 133, 586–600. doi: 10.1016/j.cognition.2014.08.009

Trippas, D., Verde, M. F., Handley, S. J., Roser, M. E., McNair, N. A., and Evans, J. S. (2014a). Modeling causal conditional reasoning data using SDT: caveats and new insights. Front. Psychol. 5:217. doi: 10.3389/fpsyg.2014.00217

Keywords: reasoning, signal detection theory, model fitting, model identifiability, belief bias, response bias

Citation: Trippas D, Verde MF and Handley SJ (2015) Alleviating the concerns with the SDT approach to reasoning: reply to Singmann and Kellen (2014). Front. Psychol. 6:184. doi: 10.3389/fpsyg.2015.00184

Received: 08 October 2014; Accepted: 05 February 2015;
Published online: 19 February 2015.

Edited by: David E. Over, Durham University, UK

Reviewed by: Shira Elqayam, De Montfort University, UK

Copyright © 2015 Trippas, Verde and Handley. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Dries Trippas, trippas@mpib-berlin.mpg.de
