Explanation … demands a theory … that predicts effects of the manipulated variables on performance of each task. Crude distinctions between “systems” are seldom sufficient for this purpose. Further, once a sufficiently elaborate process model is in hand, it is not clear that the notion of a system is any longer of much use. Once the model has been spelled out, it makes little difference whether its components are called systems, modules, processes, or something else; the explanatory burden is carried by the nature of the proposed mechanisms and their interactions, not by what they are called. (Hintzman 1990, p. 121)

Evans and Over (2009) provided comments on our article (Marewski et al. 2009). Here, we respond to their major points.

Models of heuristics specify the proportion of correct and false judgments

The first point in Evans and Over’s (2009) critique of our article is that “heuristics can often lead to biases as well as effective responding” (p. 2) and that we write “as if heuristics were invariably rational and error free” (p. 5). This is a most surprising conjecture.

Tversky and Kahneman (1974) correctly argued that heuristics are in general quite effective but sometimes lead to severe errors. Because they had no computational models of availability, representativeness, and anchoring, however, they could not spell out the “sometimes.” The fast and frugal heuristics framework has done exactly that, by developing computational models of heuristics that allow quantitative predictions about how many errors heuristics make and how their performance compares to that of more complex models. Here are three examples.

First, Goldstein and Gigerenzer (2002) showed that when the recognition validity is 0.80 and a person recognizes half of the objects without having any further knowledge, then by relying on the recognition heuristic, this person would get it right in 65% of the cases. This means this person would get it wrong in 35% of the cases. This is an example of an analytical result about how many errors the use of a certain heuristic implies given a certain knowledge state of the person.
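The arithmetic behind the 65% can be sketched as follows. For a pairwise comparison task with $N$ objects of which $n$ are recognized, recognition validity $\alpha$, and knowledge validity $\beta$, Goldstein and Gigerenzer (2002) give the expected proportion of correct inferences as

\[
f(n) = \frac{2n(N-n)}{N(N-1)}\,\alpha \;+\; \frac{n(n-1)}{N(N-1)}\,\beta \;+\; \frac{(N-n)(N-n-1)}{N(N-1)}\cdot\frac{1}{2},
\]

where the three terms cover pairs in which exactly one, both, or neither object is recognized. With $\alpha = 0.80$, $n = N/2$, and no further knowledge ($\beta = 1/2$), these pair types occur with probabilities approaching $1/2$, $1/4$, and $1/4$ as $N$ grows, so $f(n) \approx 0.5 \times 0.80 + 0.25 \times 0.50 + 0.25 \times 0.50 = 0.65$.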

Second, across 20 different studies on predicting psychological, demographic, economic, and other criteria, take-the-best, tallying, multiple regression, and minimalist made correct predictions, on average, in 71, 69, 68, and 65% of the cases (Czerlinski et al. 1999, p. 105). This means that, on average, the strategies made errors 29, 31, 32, and 35% of the time, respectively. This is a simulation result that specifies the proportion of prediction errors made by the heuristics and by the more complex strategy, multiple regression.
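To make these strategies concrete, here is a minimal sketch of how take-the-best and tallying decide which of two objects scores higher on a criterion, given binary cue profiles. The cue profiles and the assumption that cues arrive already ordered by validity are illustrative; this is not the original simulation code of Czerlinski et al. (1999).

```python
# Sketch of two fast and frugal strategies for a paired comparison:
# which of two objects scores higher on a criterion? Cue profiles are
# tuples of binary cue values; for take-the-best they are assumed to
# be ordered by cue validity. Data here are hypothetical.

def take_the_best(cues_a, cues_b):
    """Search cues in order of validity; decide on the first cue that
    discriminates between the objects; guess if none does."""
    for a, b in zip(cues_a, cues_b):
        if a != b:
            return "A" if a > b else "B"
    return "guess"

def tallying(cues_a, cues_b):
    """Ignore cue validities; count positive cues for each object and
    pick the one with the higher tally; guess on a tie."""
    sum_a, sum_b = sum(cues_a), sum(cues_b)
    if sum_a == sum_b:
        return "guess"
    return "A" if sum_a > sum_b else "B"

# Object A is positive on the most valid cue only; object B on the
# two less valid cues.
a, b = (1, 0, 0), (0, 1, 1)
print(take_the_best(a, b))  # "A": the single most valid cue decides
print(tallying(a, b))       # "B": two positive cues beat one
```

The example also shows where the two heuristics can disagree: take-the-best lets the single most valid cue settle the comparison, whereas tallying weights all cues equally.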

Third, consider the question faced by managers of how to tell whether a customer is still active or has become inactive in a large customer database. Wübben and Wangenheim (2008) reported that managers rely on one-reason decision making—specifically, the hiatus heuristic: If customers have not purchased anything for 9 (in one case, 6) months, conclude that they are inactive, otherwise active. In three different companies, this heuristic overall correctly classified 83, 77, and 77% of the customers, respectively, compared to a Pareto/NBD (negative binomial distribution) model, a standard optimization technique in this field, which classified 75, 77, and 74% of the customers correctly. This means the heuristic got it wrong in 17, 23, and 23% of the cases, while the optimization model got it wrong slightly more often. This was an empirical study that compared actual experts’ heuristics with an optimization model.
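The hiatus heuristic is simple enough to state in a few lines of code. The following sketch assumes the nine-month hiatus reported by Wübben and Wangenheim (2008); the customer dates and the 30-day month approximation are illustrative assumptions, not the companies’ data.

```python
from datetime import date, timedelta

def hiatus_heuristic(last_purchase, today, hiatus_months=9):
    """One-reason classification: a customer is inactive if the last
    purchase lies more than the hiatus in the past, otherwise active."""
    hiatus = timedelta(days=30 * hiatus_months)  # rough month length
    return "inactive" if today - last_purchase > hiatus else "active"

today = date(2009, 6, 1)
print(hiatus_heuristic(date(2008, 7, 15), today))  # "inactive": > 9 months
print(hiatus_heuristic(date(2009, 2, 1), today))   # "active": 4 months ago
```

Note that the heuristic consults a single cue, the recency of the last purchase, and ignores everything else in the customer database, which is precisely what makes it fast and frugal.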

These examples illustrate that heuristics are not error free and that formal models allow us to quantify these errors and compare them to the errors other models make. It is therefore hard to understand how Evans and Over (2009) can interpret our writing as claiming that heuristics are error free. Note, moreover, that the second and third results illustrate that, in the real world, it is not infrequent for one-reason decision-making heuristics to be faster, more frugal, and more accurate at the same time. This leads to a second misunderstanding.

Heuristics do not always imply effort-accuracy trade-offs

According to Evans and Over (2009), heuristics are “short-cut methods of solving problems that pay a cost in accuracy for what they gain in speed” (p. 5). With this they have repeated the standard account of heuristics, which we and others have shown to be incorrect. As the examples above illustrate and as we pointed out throughout our paper (Marewski et al. 2009), heuristics do not always imply effort-accuracy trade-offs. Computer simulations and experiments have shown that fast and frugal decision-making strategies can often lead to more accurate inferences than strategies that use more information and computation. The analysis of the situations in which this occurs is part of the study of ecological rationality (Gigerenzer and Brighton 2009). An organism (or system or organization, etc.) that is faced with uncertainty sometimes needs to ignore information to make good decisions, and therefore simplicity can pay in ways beyond allowing for faster decisions.

Logic can be easy

Evans and Over’s (2009) second major conjecture is that “the rules of logic and probability theory can sometimes be easy to apply” (p. 4). Sometimes—no problem. But if we remember correctly, research on reasoning has also emphasized that people systematically violate these rules. Prominent examples include the conjunction fallacy, the Wason selection task, base rate neglect, and the belief bias research of Jonathan Evans (e.g., Evans 2007).

Toward formal models instead of vague labels

Evans and Over’s (2009) final point is that the fast and frugal heuristics framework ignores research on dual-process theory, and that “the role of their fast and frugal heuristics can only [be] correctly understood within such a framework” (p. 4). We disagree. From our point of view, the dual-process framework that Evans and Over refer to is too vague to be useful. In the absence of formal models, none of the important results in the three examples given above could have been derived: neither the analysis of when the recognition heuristic, take-the-best, or tallying lead to less-is-more effects, nor the test of the hiatus heuristic against an optimizing model. Gigerenzer and Regier (1996) and others (Keren and Schul 2009) have criticized in detail the jumbling together of various theories into a loose list of opposing labels. Indeed, much of the data and theory on a general dual-process framework seems mired in debates about jargon, and the use of jargon to redescribe jargon is a hallmark of theoretical stagnation (Kuhn 1962).

Conclusion

If we are to make scientific progress, we must move beyond naming and renaming vague ideas. General verbal distinctions such as “rule-based versus instance-based” do not represent progress beyond what was well documented nearly 40 years ago, in the 1970s, unless these labels can be substantiated in terms of formal models that precisely define what they mean and what they predict. As stressed by Hintzman (1990), what really matters is the precision with which psychological models are defined and not how they are labeled. In contrast to Evans (2008), who proposed replacing the labels System 1 and System 2 with the terms Type 1 and Type 2 processes, we therefore suggest that it is instead the dual-process framework with its two “black boxes” that should be replaced by computationally precise formal models. Scientific progress is not found in the accumulation of marketable labels. Instead, it requires the development of precise theories of psychological processes that lead to clear, testable, quantitative predictions.