Concepts and Definitions

This chapter is devoted to clarifying terminology and concepts that have been regularly cited and used in the clinical reasoning literature over the past decades. Thus, this chapter represents a conceptual overview.

Success in clinical reasoning is essential to a physician’s performance. Clinical reasoning is both a process and an outcome (with the latter often being referred to as decision-making). While these decisions must be evidence based as much as possible, clearly decisions also involve patient perspectives, the relationship between the physician and the patient, and the system or environment where care is rendered. Definitions of clinical reasoning therefore must include these aspects. While definitions of clinical reasoning vary, they typically share the features that clinical reasoning entails: (i) the cognitive operations allowing physicians to observe, collect, and analyze information and (ii) the resulting decisions for actions that take into account a patient’s specific circumstances and preferences (Eva et al. 2007; Durning and Artino 2011).

The variety of definitions of clinical reasoning and the heterogeneity in research are likely in part due to the number of fields that have informed our understanding of clinical reasoning. In this chapter, a number of concepts from a broad spectrum of fields are presented to help the reader understand clinical reasoning and to assist the instruction of preclinical medical students. Many of these concepts reflect difficulties inherent to understanding how doctors think and how this type of thinking can be acquired by learners over time. Some provide hypotheses with more or less firm theoretical grounding, but a broad understanding of clinical reasoning requires an ongoing process of investigation.

Learning to Solve Problems in New Areas: Expanding the Learner Domain Space

Klahr and Dunbar proposed a model for scientific discovery (Klahr and Dunbar 1988) that may help explain how learners solve problems in unknown territory, for instance when a medical student starts learning to solve medical problems. The student has a learner domain space of knowledge that overlaps only partly, or not at all, with the expert domain space of knowledge, the space that contains all possible hypotheses a learner can generate about a problem. Knowledge building during inquiry learning can be considered as expanding the learner domain space to increase that overlap (Lazonder et al. 2008).
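
The overlap idea can be pictured very simply as set overlap. The sketch below (in Python, with made-up placeholder "hypotheses") is only an illustration of the conceptual model, not part of Klahr and Dunbar's or Lazonder's work:

```python
# Illustrative sketch of the learner/expert domain-space idea as set overlap.
# The 'hypotheses' listed here are placeholders; the model itself is conceptual.

expert_space = {"pneumonia", "pulmonary embolism", "heart failure", "COPD exacerbation"}
learner_space = {"pneumonia", "asthma"}

overlap = learner_space & expert_space                          # what the learner already shares
print(len(overlap) / len(expert_space))                         # 0.25

# Inquiry learning expands the learner domain space, increasing the overlap.
learner_space |= {"heart failure"}
print(len(learner_space & expert_space) / len(expert_space))    # 0.5
```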

Early Thinking About Clinical Reasoning: The Computer Analogy

Building on the cognitive psychology work of Newell and Simon on problem-solving in the 1970s (Newell and Simon 1972), artificial intelligence (AI) computer models were created to resemble the clinical reasoning process, with programs like MYCIN and INTERNIST (Pauker et al. 1976). Analogies between cognitive functioning and emerging computer capacities led to the assumption that both use algorithmic processes in working memory, viewed as the central processing unit of the brain. Many predicted that, as in chess, computer programs for medical diagnosis would quickly be developed and would outperform the diagnostic accuracy of even the best physicians. Four decades later, however, this has not yet happened and may be impossible. The emergence of self-driving cars shows, by analogy, that humans can build highly complex machines, yet the corresponding development in clinical reasoning has been much slower than many had expected (Wachter 2015; Clancey 1983). Robert Wachter, in a recent book about technology in health care, argues that experienced physicians remain better than computers at distinguishing between patients with similar signs and symptoms, judging that "that guy is sick, and the other is okay" with the "eyeball test" or intuition, which computers have so far been unable to capture (page 95), just as a computer cannot currently analyze the nonverbal information that is so critical to communication in health care. Clinical decision support systems (CDSS, containing a large knowledge base and if-then rules for inferences) have been used with some success at the point of care to support clinicians in decision-making, particularly in medication decisions, but, when integrated with electronic health records, they have not yet been shown to improve clinical outcome parameters (Moja et al. 2014).
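
The if-then architecture of such systems can be illustrated with a minimal sketch. The rules, thresholds, and patient data below are invented for illustration and do not reflect the actual MYCIN or INTERNIST knowledge bases:

```python
# Minimal sketch of a rule-based clinical decision support system.
# The rules, thresholds, and patient data are illustrative, not a validated knowledge base.

PATIENT = {"creatinine_umol_l": 210, "on_metformin": True,
           "penicillin_allergy": True, "prescribed": ["amoxicillin"]}

RULES = [
    {"name": "metformin_renal_warning",
     "condition": lambda p: p["on_metformin"] and p["creatinine_umol_l"] > 150,
     "advice": "Reduced renal function: reconsider metformin dose."},
    {"name": "allergy_check",
     "condition": lambda p: p["penicillin_allergy"] and "amoxicillin" in p["prescribed"],
     "advice": "Documented penicillin allergy: amoxicillin is contraindicated."},
]

def run_cdss(patient, rules):
    """Fire every if-then rule whose condition holds for this patient."""
    return [r["advice"] for r in rules if r["condition"](patient)]

for alert in run_cdss(PATIENT, RULES):
    print(alert)
```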

Abandoning Clinical Reasoning as a General Problem-Solving Ability

Expertise in clinical reasoning was initially viewed as being synonymous with acquiring general problem-solving procedures (Newell and Simon 1972). However, in a groundbreaking study, published as a book in 1978 (Medical Problem Solving), Elstein and colleagues found few differences between expert (attending physicians) and novice diagnosticians (medical students) in the way they solve diagnostic problems (Elstein et al. 1978). The primary difference appeared to be in their knowledge and, in particular, the way it is structured as a consequence of experience. Thus, while medical students and practicing physicians generated differential diagnoses of similar length, practicing physicians were far more likely to list the correct diagnosis. This insight ended an era marked by the belief that clinical reasoning could be measured as a distinct skill that would result in superior performance regardless of the specifics of a patient's presentation. Content knowledge was shown to be very important, yet it still does not guarantee success in clinical reasoning. Variation in clinical performance is a product of the expert's integration of his or her knowledge of the signs and symptoms of disease with contextual factors in order to arrive at an adaptive solution.

Deconstructing the Reasoning Process

In an overview in 2005, Patel and colleagues summarized the process of clinical reasoning in four stages: abstraction, abduction, deduction, and induction (Patel et al. 2005).

  • Abstraction can be viewed as generalization from a finding to a conclusion (hemoglobin <12 g/dL in an adult male is labeled as “anemia”).

  • Abduction is a backward reasoning process to explain why this adult male should have anemia. The term “abductive reasoning” was first coined by the logician C.S. Peirce in the nineteenth century to signify the common process in which a surprising observation leads to a hypothesis (“The lawn is wet! Ergo, it has probably rained.”); such a hypothesis is based on knowledge of possible causes and must be tested (“but it could also be the neighbor’s sprinkler”). Abduction is considered to be a primary means of acquiring new ideas in clinical reasoning (Bolton 2015).

  • Deduction is the process of testing the hypothesis (e.g., of anemia) through confirmation by other expected diagnostic findings: if conditions X and Y are met, inference Z must be true.

  • Induction is the process of generalization from multiple cases and is more applicable in research than in individual patient care: if multiple patients show similar signs and symptoms, general rules may be created to explain new cases.

Part of this process is forward-driven reasoning (hypothesis generation through data), and another part is backward-driven reasoning (hypothesis testing) (Patel et al. 2005).
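
As a rough illustration of the four stages and of the forward- and backward-driven parts, the anemia example can be cast as a small sketch, one function per stage; the thresholds and causal rules are deliberately simplified assumptions:

```python
# Illustrative sketch of abstraction, abduction, deduction, and induction,
# using the anemia example. Thresholds and causal rules are simplified.

def abstraction(hb_g_dl, sex="male"):
    """Generalize a raw finding into a clinical label."""
    return "anemia" if (sex == "male" and hb_g_dl < 12) else None

def abduction(finding):
    """Reason backward from the finding to candidate explanations."""
    causes = {"anemia": ["iron deficiency", "chronic disease", "blood loss"]}
    return causes.get(finding, [])

def deduction(hypothesis, tests):
    """Test a hypothesis against other expected findings (if X and Y, then Z)."""
    if hypothesis == "iron deficiency":
        return bool(tests.get("ferritin_low")) and bool(tests.get("mcv_low"))
    return False

def induction(confirmed_cases):
    """Generalize a rule from multiple similar cases (research rather than care)."""
    return f"In {len(confirmed_cases)} similar cases, low ferritin accompanied anemia."

label = abstraction(hb_g_dl=10.5)                          # forward-driven: data -> label
hypotheses = abduction(label)                              # backward-driven: explain it
confirmed = deduction("iron deficiency",
                      {"ferritin_low": True, "mcv_low": True})
rule = induction(["case A", "case B", "case C"])
print(label, hypotheses, confirmed)
print(rule)
```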

Knowledge Representations to Support Reasoning

In a 1996 review, Custers and colleagues categorized the thinking about how physicians’ clinical knowledge is cognitively organized into three alternative frameworks and provided critical notes (Custers et al. 1996). These mental representations could take the form of prototypes, instances, or semantic networks. All three of these models have assets and drawbacks in their explanatory power for clinical reasoning. The prototype framework, or prototype theory, assumes that multiple encounters with related diseases lead physicians to remember the common denominators, resulting in single prototypes in long-term memory. The instances framework assumes that physicians actually remember the individual instances of patient encounters without abstraction, and context-specific (situation-specific) information may be part of these instances. The semantic network theory posits the existence of nodes of information units, connected with other nodes in the network. The strength of the network and its nodes depends on the intensity of its use. Schemas and illness scripts are medically meaningful sets of interconnected nodes that can be strengthened and adapted based on clinical experience.
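
The semantic network idea, in particular, lends itself to a small sketch: concepts as nodes, links whose strength grows with use. The nodes, weights, and strengthening rule below are illustrative assumptions, not an implementation of the theory:

```python
# Minimal sketch of a semantic network: concepts as nodes, weighted links between them.
# Nodes, link weights, and the strengthening rule are illustrative assumptions.

from collections import defaultdict

class SemanticNetwork:
    def __init__(self):
        self.links = defaultdict(float)              # {node_a, node_b} -> strength

    def connect(self, a, b, strength=1.0):
        self.links[frozenset((a, b))] += strength

    def use(self, a, b, increment=0.5):
        """Each retrieval of a link strengthens it, mimicking the effect of experience."""
        self.links[frozenset((a, b))] += increment

    def strength(self, a, b):
        return self.links[frozenset((a, b))]

net = SemanticNetwork()
net.connect("polyuria", "diabetes mellitus")
net.connect("polydipsia", "diabetes mellitus")
net.use("polyuria", "diabetes mellitus")             # the link is used again in a new patient
print(net.strength("polyuria", "diabetes mellitus")) # 1.5: strengthened by use
```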

Prototyping and Semantic Qualifiers

Georges Bordage introduced the term semantic qualifiers, referring to the use of abstract, often binary, terms to help sort through and organize (e.g., chunk) patient information. They are “useful adjectives” that represent an abstraction of the situational clinical findings (Chang et al. 1998). A commonly cited example of the use of semantic qualifiers is translating the presentation of a patient with knee swelling and pain into one of acute monoarticular arthritis. Note the three semantic qualifiers: “acute,” “monoarticular,” and “arthritis.” These qualifiers are important because, as Bordage claims, the structure of clinical knowledge in the clinician’s mind is organized around such qualifiers. To enable recognition and linkage, the clinician must first translate what she hears and sees into such terminology (Bordage 1994). An assumption is that the clinician’s memory contains prototypes of diseases (Bordage and Zacks 1984), generalizable representations that enable recognition. Bordage stresses that semantically rich discourse about patients is associated with greater diagnostic accuracy (Bordage 2007).
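
The translation step itself can be pictured as a simple mapping from situational findings to abstract qualifiers; the pairings below are hypothetical and much cruder than the semantic axes Bordage describes:

```python
# Illustrative mapping of situational findings to abstract semantic qualifiers.
# The pairings are simplified assumptions for demonstration only.

QUALIFIER_MAP = {
    "started yesterday": "acute",
    "for several months": "chronic",
    "one joint involved": "monoarticular",
    "several joints involved": "polyarticular",
    "swollen and painful joint": "arthritis",
}

def to_semantic_qualifiers(findings):
    """Translate raw patient findings into the abstract terms used to index knowledge."""
    return [QUALIFIER_MAP[f] for f in findings if f in QUALIFIER_MAP]

findings = ["started yesterday", "one joint involved", "swollen and painful joint"]
print(to_semantic_qualifiers(findings))   # ['acute', 'monoarticular', 'arthritis']
```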

Illness Script Theory

Custers recently summarized scripts as high-level conceptual knowledge structures in long-term memory, representing general event sequences, in which the individual events are interconnected by temporal and often causal or hierarchical relationships (“usually type II diabetes occurs at an older age and is associated with overweight; late symptoms might include vascular problems in the retina, in the lower limbs, and in other places”). Scripts are activated as integral wholes in appropriate contexts that should contain relevant variables, including clinical findings in the patient. “Slots” in the reasoning process can be filled with information present in the actual situation, retrieved from memory, or inferred from the context (Custers 2015). Illness scripts, first introduced by Barrows and Feltovich, are believed to be chunks in long-term memory that contain three components, enabling conditions (past history and causes), fault (pathophysiology), and consequences (signs and symptoms) (Feltovich and Barrows 1984), and were elaborated further by Schmidt and Boshuizen (1993). Illness scripts are stored in long-term memory as units with temporal (i.e., sequential) components, like a film script of unfolding events, and patients are remembered as instances of a script. With experience, physicians build a larger repertoire of illness scripts and more elaborated scripts.

Illness scripts are shaped by experience and continually refined throughout one’s clinical practice. When an experienced physician initially sees a patient, the patient’s verbal and nonverbal information is thought to immediately activate relevant illness scripts. This effortless, fast thinking, or nonanalytic process, is referred to as script activation. In some cases, only one script is activated, and one may arrive at the correct diagnosis directly (e.g., “type II diabetes mellitus”). In other cases, multiple scripts are activated, and the theory holds that we then choose the most likely diagnosis by comparing and contrasting the alternative illness scripts that were activated (through analytic or slow thinking). Early learners may not activate any scripts when they initially see a patient, whereas experts may activate one or several illness scripts.
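
One way to picture the three script components and the activation step is the sketch below. The scripts, findings, and overlap score are hypothetical; actual script activation is, of course, not a literal lookup:

```python
# Sketch of illness scripts (enabling conditions, fault, consequences) and their
# activation by a patient's findings. Scripts and the matching rule are illustrative.

from dataclasses import dataclass, field

@dataclass
class IllnessScript:
    name: str
    enabling_conditions: set = field(default_factory=set)   # past history, causes
    fault: str = ""                                          # pathophysiology
    consequences: set = field(default_factory=set)           # signs and symptoms

SCRIPTS = [
    IllnessScript("type II diabetes mellitus",
                  {"older age", "overweight"},
                  "insulin resistance",
                  {"polyuria", "polydipsia", "fatigue"}),
    IllnessScript("hyperthyroidism",
                  {"female sex", "family history of thyroid disease"},
                  "excess thyroid hormone",
                  {"weight loss", "palpitations", "fatigue"}),
]

def activate_scripts(patient_findings, scripts, threshold=2):
    """Return scripts whose slots overlap sufficiently with the patient's findings."""
    activated = []
    for s in scripts:
        overlap = len(patient_findings & (s.enabling_conditions | s.consequences))
        if overlap >= threshold:
            activated.append((s.name, overlap))
    return sorted(activated, key=lambda pair: pair[1], reverse=True)

patient = {"older age", "overweight", "polyuria", "fatigue"}
print(activate_scripts(patient, SCRIPTS))
# A single strongly activated script suggests nonanalytic recognition; several
# competing scripts would call for analytic comparison (slow thinking).
```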

Encapsulation of Knowledge and the Intermediate Effect

With increasing clinical information stored as illness scripts in the long-term memory of the physician, diagnostic reasoning should steadily become more accurate. However, studies have shown that less experienced clinicians (e.g., those just out of training, such as recent graduates from residency education) sometimes outperform physicians who have been in practice for some time (e.g., “experts”) in the recall of details from clinical cases seen. This finding was termed the intermediate effect by Schmidt and Boshuizen (Schmidt and Boshuizen 1993). While inexperienced clinicians may consciously use pathophysiological thinking when solving clinical problems, the frequent use of similar thinking pathways leads to efficient shortcuts, and after a while it may no longer be possible to unfold these pathways. The pathophysiological knowledge about the disease becomes encapsulated into diagnostic labels or high-level simplified causal models that explain signs and symptoms (Schmidt and Mamede 2015).

System 1 and 2 Thinking as Dual Processes

Dual process theory refers to two processes that are thought to operate during reasoning (Croskerry et al. 2014). Briefly, dual process theory argues that we have two general thought processes. Fast thinking (sometimes called System 1 thinking or “nonanalytic” reasoning) is believed to be quick, subconscious, and typically effortless. An example of a fast thinking strategy is pattern recognition (Eva 2005). An example of pattern recognition in medicine is a physician who examines a patient with palpitations and, also observing exophthalmia, a fine resting tremor, and thyromegaly, immediately recognizes the cardinal features or “pattern” of Graves’ disease. Slow or analytic thinking (System 2 thinking), on the other hand, is effortful and conscious. An example of System 2 thinking would be working through a patient’s acid-base status (e.g., calculating an anion gap, using Winter’s formula, and calculating a delta-delta gap). Dual process theory has recently been popularized in the book Thinking, Fast and Slow by Daniel Kahneman (2011). More recent work with dual process theory argues that both processes are used simultaneously, i.e., it is not one or the other, but rather a combination of both fast and slow thinking in practice. In other words, fast and slow thinking can be viewed as a continuum (Custers 2013). Efficient clinical work requires fast thinking. The capacity of working memory would be overloaded if analytic reasoning were required for all decisions in patient care (Young et al. 2014).
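
The acid-base example shows why System 2 work is effortful: it requires explicit arithmetic. The sketch below walks through the standard bedside formulas (anion gap, Winter’s formula for expected respiratory compensation, and the delta-delta ratio) using hypothetical laboratory values:

```python
# Worked example of effortful System 2 reasoning: acid-base arithmetic.
# Laboratory values are hypothetical; formulas are the standard bedside ones.

def anion_gap(na, cl, hco3):
    """Anion gap = Na - (Cl + HCO3), in mEq/L."""
    return na - (cl + hco3)

def winters_expected_pco2(hco3):
    """Winter's formula: expected PaCO2 = 1.5 * HCO3 + 8 (+/- 2) in metabolic acidosis."""
    return 1.5 * hco3 + 8

def delta_ratio(ag, hco3, normal_ag=12, normal_hco3=24):
    """Delta-delta: rise in anion gap divided by fall in bicarbonate."""
    return (ag - normal_ag) / (normal_hco3 - hco3)

na, cl, hco3, measured_pco2 = 140, 100, 14, 30   # hypothetical values
ag = anion_gap(na, cl, hco3)                     # 26 -> elevated anion gap
expected = winters_expected_pco2(hco3)           # 29 (+/- 2), close to the measured 30,
                                                 # so respiratory compensation is adequate
ratio = delta_ratio(ag, hco3)                    # 1.4 -> consistent with a pure
print(ag, expected, ratio)                       #        high-anion-gap metabolic acidosis
```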

Case Specificity and Context Specificity

In Elstein and colleagues’ seminal work on medical problem-solving (Elstein et al. 1978), researchers noted that physician performance on one patient or case did not predict performance on a subsequent content area or case, giving rise to the phenomenon of case specificity. These findings would be quite surprising if medical problem-solving were a general skill.

A second vexing problem in practice is the more recently highlighted phenomenon of context specificity. Context specificity refers to the finding that a physician can see two patients with the same chief complaint, the same (or nearly identical) symptoms and physical findings, and thus the same underlying diagnosis, yet, in different contexts, arrive at different diagnoses (Durning et al. 2011). The context can be helpful in arriving at the correct diagnosis (Hobus et al. 1987) or harmful and lead to error (Eva 2005). In other words, something other than the “essential content” is driving the physician’s clinical reasoning. Durning and Artino hold that the outcome of clinical reasoning is driven by the context, which includes the physician, the patient, the system, and their interactions (Durning and Artino 2011). The notion of system includes appointment length, appointment location, support systems, and clinic staffing (Durning and Artino 2011) and stresses the importance of the situation. One example of “situativity” is situated cognition, which breaks down an activity like clinical reasoning into physician, patient, and environment as well as the interactions between these components. Clinical reasoning is believed to emerge from these factors and their interactions. Another example of situativity, situated learning, stresses participation in an activity and identity formation as learning, as opposed to the acquisition of generalized facts.

Clinical Reasoning and the Development of Expert Performance

Despite the finding that clinical reasoning is content-dependent and context-dependent, expertise in diagnostic and therapeutic reasoning in general varies among physicians, even among those with similar experience. Some internists are considered better diagnosticians, and some surgeons better operators, than others. It remains useful to think about what leads to superb performance, as education can be a part of it (Asch et al. 2014). Indeed, many scholars prefer the term expert performance over expertise when referring to clinical reasoning, as the former acknowledges the many nuances of this ability that we have outlined in this chapter.

For procedural performance, repetitive practice is key. Competence in colonoscopy requires experience with 150–200 colonoscopies under supervision (Ekkelenkamp et al. 2016). That competence improves with practice is not surprising and is known from, for instance, chess (De Groot 1978). Anecdotally, in the 1960s the Hungarian educational psychologist László Polgár was determined to raise his yet unborn children to become highly skilled in a specific domain, and he chose chess. All three daughters received careful, highly intensive training from a very young age on and have become world-class chess players, two of whom are currently considered the world’s best female chess players. The psychologist Ericsson has generalized the idea that deliberate practice, rather than innate talent, is key to expert performance (Ericsson et al. 1993). He distinguishes three successive mental representations: a planning phase with clear performance goals, a translation to execution, and a representation for monitoring how well one does. Applications in medical training have been described (Ericsson 2015) but have mainly focused on procedures. The work of Mamede et al., using deliberate practice, shows that clinical reasoning can benefit as well (Mamede et al. 2014).

Reflection During Diagnostic Thinking

Donald Schön coined the terms reflection-in-action and reflection-on-action to describe the thinking of high-level professionals (Schön 1983). Knowing what to do while you do it may not require much effort if actions are routine, but professionals with nonroutine tasks often face small problems or questions that require instant adaptive action. Schön maintains that reflection-in-action must be practiced by learners becoming professionals. Mamede and colleagues developed the method of “structured reflection” to improve students’ diagnostic reasoning (Mamede et al. 2010, 2014a, b). Structured reflection in the context of clinical reasoning means that problem-solvers explicitly match a patient’s presentation (case) against every diagnosis they consider for that case. Mamede et al. demonstrated a beneficial effect of this approach. Detailed comparison of a patient’s signs and symptoms with the already available and activated illness scripts, and noticing similarities and discrepancies, appears to be the mechanism behind this restructuring of knowledge as a consequence of structured reflection. The authors recommend deliberate reflection as a tool for learning clinical reasoning (Schmidt and Mamede 2015).
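
The structured reflection procedure can be made concrete with a small sketch: for each diagnosis under consideration, the case findings are sorted into those that support it, those that would be expected but are absent, and those left unexplained. The diagnoses, expected findings, and case below are hypothetical:

```python
# Sketch of structured reflection: match the case against every diagnosis considered.
# Diagnoses, expected findings, and the example case are hypothetical.

EXPECTED = {
    "community-acquired pneumonia": {"fever", "cough", "crackles on auscultation"},
    "pulmonary embolism": {"pleuritic chest pain", "tachycardia", "hypoxia"},
}

def structured_reflection(case_findings, considered_diagnoses):
    """For each diagnosis, list supporting findings, expected findings that are absent,
    and case findings the diagnosis does not explain."""
    table = {}
    for dx in considered_diagnoses:
        expected = EXPECTED[dx]
        table[dx] = {
            "supports": sorted(case_findings & expected),
            "expected_but_absent": sorted(expected - case_findings),
            "not_explained": sorted(case_findings - expected),
        }
    return table

case = {"fever", "cough", "tachycardia"}
for dx, rows in structured_reflection(case, list(EXPECTED)).items():
    print(dx, rows)
```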

Bias and Error in Clinical Reasoning

The quality of clinical reasoning is often expressed in terms of how few errors a physician makes. Some errors are typical enough to receive a label and stem from various sources of bias. In 2003, Kempainen et al. published a helpful overview of typical biases that occur in clinical reasoning and that should be attended to in education, including the following (Kempainen et al. 2003):

  • Availability bias. A differential diagnosis is influenced by what is easily recalled, creating a false sense of prevalence.

  • Representative bias (or judging by similarity). Clinical suspicion is influenced solely by signs and symptoms and neglects the prevalence of competing diagnoses (see the sketch after this list).

  • Confirmation bias (or pseudodiagnosticity). Additional testing confirms the suspected diagnosis but fails to test competing hypotheses.

  • Anchoring bias. Inadequate adjustment of a differential diagnosis in light of new data, resulting in a final diagnosis unduly influenced by the starting point.

  • Bounded rationality bias (or search satisficing). Clinicians stop searching for additional diagnoses after the anticipated diagnosis is made, leading to premature closure of the reasoning process.

  • Outcome bias. A clinical decision is judged on the outcome rather than on the logic and evidence supporting the decision.
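
To see why neglecting prevalence misleads, as in the representative bias above, consider a short Bayesian sketch with invented numbers: even a presentation that “looks like” a rare disease can still be far more likely to stem from a common competing diagnosis.

```python
# Hypothetical illustration of why neglecting prevalence misleads (representative bias).
# All numbers are invented for the example; they are not epidemiological data.

def posterior(prior_rare, p_pattern_given_rare, p_pattern_given_common):
    """Bayes' theorem for two competing diagnoses (rare vs. common)."""
    prior_common = 1 - prior_rare
    numerator = p_pattern_given_rare * prior_rare
    denominator = numerator + p_pattern_given_common * prior_common
    return numerator / denominator

# The presentation 'looks like' the rare disease (80% of rare cases show it), but only
# 1 in 100 patients with this complaint has the rare disease, and 10% of patients with
# the common disease show the same pattern.
print(round(posterior(prior_rare=0.01,
                      p_pattern_given_rare=0.80,
                      p_pattern_given_common=0.10), 2))   # ~0.07: the common disease
                                                          # remains far more likely
```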

A limitation of this approach is that when reasoning is believed to be successful, biases are typically not recognized, and when looking at a case in hindsight, many mistakes can easily be labeled as caused by “bias.” Indeed, so-called biases may actually serve as heuristics that guide successful behavior (Gigerenzer and Gaissmaier 2011; Gigerenzer 2007). In a recent overview, Norman and colleagues conclude that interventions directed at error reduction through the identification of heuristics and biases have no effect on diagnostic errors. Instead, most errors seem to originate from a limited knowledge base of the clinician (Norman et al. 2017).

Neuroscience and Visual Expertise in Clinical Reasoning

While neuroscience is quickly uncovering many cognitive processes, clinical reasoning has hardly been the subject of such studies. More recently, however, a new line of research has evolved that seeks to explore the biological underpinnings of clinical reasoning. Indeed, an Achilles heel of clinical reasoning research is that the reasoning process is not readily open to introspection or visualization; new methods such as functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) are therefore emerging and show particular promise for enhancing our understanding of System 1 thinking. One of the first publications in this domain is from Durning et al., who studied brain processes with fMRI in novices and experts solving clinical problems presented as vignette-based multiple-choice questions. Many parts of the brain were activated, and the researchers observed activity in various regions of the prefrontal cortex (Durning et al. 2015). While preliminary, fMRI may be a promising route for future investigation.

A new and related avenue of investigation is that of visual expertise (Bezemer 2017; van der Gijp et al. 2016). Medicine is a highly visual profession, not only in specific disciplines such as radiology, pathology, dermatology, surgery, and cardiology but also in primary care (Kok and Jarodzka 2017). Visually observing a patient, human tissue, or a representation of either, and recognizing an abnormality, may not be easily expressed in words but can instantly lead to System 1 recognition.

In Sum

The intention of this chapter was to provide an overview of theoretical concepts, frequently used terms, and a number of significant thinkers and authors in this domain, all of which underlie our current understanding of clinical reasoning, in order to support the teaching of clinical reasoning to students in the preclinical period and beyond.

While much of the cited literature appeared after the model of case-based clinical reasoning (CBCR) was first created in 1992 (ten Cate 1994), and some aspects apply to clinical rather than preclinical education, none of the recommendations that could be drawn from this chapter would conflict with the CBCR approach.

Although it is apparent that there are still numerous gaps in our collective understanding of clinical reasoning, it is also clear that we continue to progress toward a more thorough understanding.