
What can be done to increase the use of diagnostic decision support systems?

  • Eta S. Berner
From the journal Diagnosis

Abstract

This essay explores the reasons why diagnostic decision support systems are underutilized despite growing concern about diagnostic errors. Factors related to the motivation to use the systems, clinician cognition, system design and implementation, and the absence of feedback in routine clinical care are discussed. Design and implementation strategies that can increase appropriate utilization of diagnostic decision support systems are then proposed.

Introduction

With the increased interest in preventing diagnostic errors, the use of diagnostic decision support systems (DDSS) seems an obvious solution. The use of clinical decision support systems is part of the criteria for receiving incentives under the “meaningful use” regulation, which provides an additional reason for health care systems to implement them [1]. Yet despite the potential value of DDSS in assisting with diagnoses, over the decades that DDSS have been available they have not been widely used [2–5]. This essay will examine the factors that could lead to lack of use and will propose some potential solutions. Some of the ideas have been previously discussed by the author in other publications and presentations [6–10], but they are integrated in this essay and supplemented with support from the recent research literature.

DDSS have traditionally been used when a clinician experiences a puzzling diagnostic problem and seeks a consultation from the system. Usually the physician enters relevant patient data into the system, and the DDSS compares those data to its knowledge base and presents the physician with a list of potential diagnoses. The physician then reviews the list for relevance to the case at hand.
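
To make this consultation pattern concrete, the following minimal sketch matches a set of entered findings against a toy knowledge base and returns a ranked list of candidate diagnoses. The diseases, findings, and weights are hypothetical and are not drawn from any actual DDSS; real systems use far richer knowledge bases and inference methods.

```python
# Minimal sketch of the traditional DDSS consultation pattern: the physician
# enters findings, the system matches them against its knowledge base, and a
# ranked list of candidate diagnoses is returned for the physician to review.
# All diseases, findings, and weights below are hypothetical.

KNOWLEDGE_BASE = {
    "community-acquired pneumonia": {"fever": 2, "cough": 2, "dyspnea": 1},
    "pulmonary embolism": {"dyspnea": 3, "pleuritic chest pain": 2, "tachycardia": 2},
    "acute bronchitis": {"cough": 3, "fever": 1},
}

def rank_diagnoses(entered_findings):
    """Score each disease by the summed weight of its findings that were entered."""
    scores = {}
    for disease, findings in KNOWLEDGE_BASE.items():
        score = sum(weight for finding, weight in findings.items()
                    if finding in entered_findings)
        if score > 0:
            scores[disease] = score
    # Highest-scoring suggestions are presented first for the physician to review.
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    case = {"fever", "cough", "dyspnea"}
    for disease, score in rank_diagnoses(case):
        print(f"{disease}: {score}")
```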

Although different DDSS use different knowledge sources, algorithms, and user interfaces, the following features are common to all DDSS: (1) the physician experiences a diagnostic puzzle that is sufficient to motivate consultation with the DDSS, (2) the system provides suggestions of possible diagnoses, and (3) the physician evaluates the suggestions. It is the author’s contention that each of these features common to DDSS (and certainly the combination) can lead to underuse of DDSS, but that they also provide a framework for better design of systems to promote optimal use.

Physician perception of diagnostic puzzles

For physicians to be motivated to seek out the assistance of DDSS, they need to feel that they are faced with a case for which they either do not have, or are not sure of, the answer. The assumption that such uncertainty occurs with every patient is clearly not an accurate picture of medical practice. Although it is difficult to find good empirical support, what is known as the 80–20 rule has been said to apply to the cases an experienced physician sees in his or her practice; that is, about 80% of patients have conditions that are fairly easily diagnosed. For this large majority of cases, not only are DDSS not needed, but using them would be an inefficient use of the physician’s time and might even lead to expensive and unnecessary additional diagnostic testing. However, it is a challenge to determine which cases are truly easy to diagnose and which only seem that way.

Of the remaining patients, those whom the physician recognizes as having puzzling presentations would be appropriate candidates for a DDSS consultation, but often the physician uses other means to deal with them, such as ordering additional tests, consulting experts, or seeking out more information. The other patient cases, which look routine but are not, are probably the ones most likely to be misdiagnosed [11, 12]; because they appear to the physician to be like the easy-to-diagnose cases, the clinician may not recognize their difficulty initially, leading to inappropriate treatment and delay. In fact, the physician will often work by trial and error, and if the patient gets better after the second or third trial of a diagnosis/treatment, the previous trials may not even be perceived as errors. Thus, it is understandable that many physicians do not perceive a frequent need for DDSS based on their own experience. Even though there are cases that would be appropriate for using a DDSS (either because they are perceived as challenging or because the physician erroneously thinks they are easy), the treating physician may not perceive the need to use the system, and it is this perception, rather than the actual need, that will determine use of the DDSS.

One might expect that the publicity around the extent of diagnostic errors would lead physicians to use DDSS, even if their own experience did not prompt such use. However, the data on the extent of diagnostic errors [9], while greatly concerning to those who have experienced the errors and alarming to the public, have not led physicians to embrace DDSS to any great degree. In part this may be because the data are discrepant from physicians’ experiences: studies of errors calculate the error rate as errors per number of patients seen over a particular time interval, whereas the physician judges his or her own sense of the extent of errors against the total number of encounters over a lifetime. By that calculation, the physician’s own error rate can seem minuscule and may not seem worth the effort of learning to use DDSS for what appears to be such a small problem.
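
The arithmetic behind this discrepancy can be seen in the toy calculation below; all of the figures are hypothetical and serve only to contrast the two denominators, a study’s fixed audit interval versus a physician’s career-long count of encounters (with only the errors the physician actually noticed in the numerator).

```python
# Hypothetical numbers illustrating why the same phenomenon can look alarming
# in a study yet negligible from an individual physician's point of view.

# Study-style rate: errors detected among patients seen during a one-year audit.
errors_in_audit_year = 12
patients_in_audit_year = 4_000
study_rate = errors_in_audit_year / patients_in_audit_year

# Physician's intuitive rate: the few errors they actually noticed, spread over
# every encounter of a 30-year career at the same volume of patients.
errors_noticed_over_career = 20
encounters_over_career = 30 * patients_in_audit_year
perceived_rate = errors_noticed_over_career / encounters_over_career

print(f"Audit-year error rate:      {study_rate:.2%}")      # 0.30%
print(f"Perceived career-long rate: {perceived_rate:.4%}")  # 0.0167%
```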

For the above reasons, even physicians who are very good at recognizing what they know and do not know could tend to see little need for DDSS. However, we also know that many physicians are not well calibrated with regard to the confidence and accuracy of their diagnoses, meaning that they often think they are correct when they are not. Podbregar et al. found that physicians’ confidence in their clinical diagnoses was unrelated to their accuracy as shown at autopsy [13]. Friedman et al. examined physician confidence when using diagnostic decision support systems and also found frequent overconfidence [14]. In a recent study, Meyer et al. found that physicians were almost as confident about their diagnoses on difficult cases as they were on easy ones, but were much less accurate; on these difficult cases the physicians were also not interested in seeking additional information to assist them [11]. If one does not perceive that a case is a diagnostic challenge and if one is confident in one’s diagnosis, there is little motivation to seek out the assistance of a DDSS.

The cognitive biases that influence diagnoses – in particular, premature closure, availability bias and confirmation bias – have been well documented [15–17]. These biases, as well as other heuristics, are difficult to eradicate because much of the time they are effective [18]. That is, the typical approach of generating and confirming diagnoses fairly rapidly will, most of the time, as described above, yield a correct diagnosis. Wears and Nemeth have commented that errors, diagnostic and otherwise, are often obvious only in hindsight [19]. In fact, if a physician arrives at a diagnosis quickly with only a few pieces of the potentially available patient data, and the diagnosis is correct, the physician is hailed as a great diagnostician; if it is wrong, it is considered premature closure. The problem is that at the time of the initial diagnostic formulation it is difficult to tell which is the case, and unfortunately, feedback on patient outcomes, which could prompt re-examination of the problem, is rare [20, 21].

If there were no time pressures, physicians might perhaps use diagnostic decision support systems to suggest “what else it could be,” even if they do not think there is a problem. But given the time pressures common in clinical care and the lack of a perceived need, exhortations to reconsider one’s initial thinking or to use DDSS often fail to elicit changes in behavior.

Design of DDSS and physician evaluation of suggestions

One might argue that rather than requiring physicians to seek out the advice of a DDSS, one could design DDSS to push the diagnostic advice to the physician. Currently most DDSS require, at least to some extent, that physicians seek out the advice. Research on other types of clinical decision support where advice is pushed, such as drug interaction programs, has shown that a phenomenon known as alert fatigue often occurs [22]. Alert fatigue occurs when physicians receive so many alerts that are not valid that they begin to ignore all of the alerts, even the important ones. There is often information that a physician knows about the patient that could potentially rule out some diagnoses, but this information may not be accommodated by the DDSS. For this reason DDSS suggestions usually aim to be sensitive and comprehensive, rather than highly specific, relying on the physician to ignore the obviously wrong suggestions.

The problem of lack of specificity has been shown in many DDSS where the system provides a large number of suggestions, many of which are either blatantly incorrect or so obvious that the physician has already thought of them (and has either dismissed them or is already working them up). In fact, it is because “common diseases are common” that students are advised, “when you hear hoofbeats, think horses not zebras.” Like a good medical student, a good DDSS will also list the common diseases first, but the consequence is that the more obscure diseases end up very far down the list of potential diagnoses. This position makes it likely that they will be dismissed, or that the busy physician will not take the time to review many obvious or extremely unlikely diseases to find the buried valuable suggestion.
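
The sketch below illustrates the ranking problem with hypothetical diseases, prevalences, and finding-match scores: weighting by how common a condition is pushes a rare diagnosis to the bottom of the displayed list even when it fits the findings best. None of the numbers reflect real epidemiology.

```python
# Hypothetical illustration of how "common things first" ranking buries a rare
# but clinically important suggestion beneath many obvious ones.

candidates = [
    # (diagnosis, assumed annual cases per 100,000, finding-match score 0-1)
    ("viral upper respiratory infection", 30_000, 0.6),
    ("acute bronchitis", 5_000, 0.6),
    ("gastroesophageal reflux", 6_000, 0.4),
    ("asthma exacerbation", 4_000, 0.5),
    ("pertussis", 10, 0.9),  # rare disease that actually fits the findings best
]

# Rank by match score weighted by prevalence, as a prevalence-aware DDSS might.
ranked = sorted(candidates, key=lambda c: c[1] * c[2], reverse=True)

for position, (dx, prevalence, match) in enumerate(ranked, start=1):
    print(f"{position}. {dx} (weighted score {prevalence * match:,.0f})")
# Pertussis, despite the best finding match, lands at the bottom of the list.
```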

In addition, if DDSS are stand-alone systems, the patient data must be entered into the DDSS and separately into the electronic health record (EHR). Fortunately, DDSS are becoming more integrated into EHR systems, but because the DDSS vocabulary might not match the EHR vocabulary, at least some additional data entry is likely even if some duplication can be eliminated. Thus, the extra time for data entry, coupled with the time needed to review a lengthy list of suggestions, many of which are perceived as not relevant, leads to a lack of motivation to use the DDSS even if a diagnostic puzzle is perceived.

To summarize the discussion thus far, for a variety of reasons: (1) the average physician does not feel the need to seek out the advice of a DDSS on a routine basis; (2) time and cost pressures, as well as the perception that the DDSS has a low likelihood of suggesting something that would make a major difference in treatment, lead to low use of DDSS; and (3) in ambulatory care there are rarely feedback systems in place that can compensate for an initially incorrect diagnosis and prompt DDSS consultation. Yet the problem of diagnostic error still needs to be addressed. In the next section the author provides some suggestions on what is needed.

Suggestions

The first suggestion is to develop systems that are better integrated into the EHR rather than relying on duplicate entry of diagnostically relevant data. As we move toward standards for representing clinical data and toward more structured clinical notes and reports, it may become easier to have DDSS operate in the background and use the data from the EHR to formulate initial diagnostic hypotheses. The diagnoses can be updated as more data accumulate, and the DDSS should be accessible to the user at any time, should the user desire a consultation.
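
As a rough sketch of what “operating in the background” could look like, the class below re-ranks hypotheses each time a structured observation arrives from the EHR, with the ranked list available whenever the clinician chooses to consult it. The findings, diagnoses, and weights are invented for illustration, and a real system would first have to map EHR codes into the DDSS vocabulary.

```python
# Sketch of a background DDSS that incrementally re-ranks diagnostic hypotheses
# as structured observations are filed to the EHR. Diseases, findings, and
# weights are hypothetical.

from collections import defaultdict

FINDING_WEIGHTS = {
    "fever": {"pneumonia": 2, "urinary tract infection": 1},
    "productive cough": {"pneumonia": 3},
    "dysuria": {"urinary tract infection": 3},
}

class BackgroundDDSS:
    def __init__(self):
        self.scores = defaultdict(int)

    def on_new_observation(self, finding):
        """Called whenever a new structured finding is documented in the chart."""
        for diagnosis, weight in FINDING_WEIGHTS.get(finding, {}).items():
            self.scores[diagnosis] += weight

    def current_hypotheses(self):
        """Ranked list, available to the clinician on demand at any time."""
        return sorted(self.scores.items(), key=lambda kv: kv[1], reverse=True)

ddss = BackgroundDDSS()
for finding in ["fever", "productive cough"]:  # data accumulating during the visit
    ddss.on_new_observation(finding)
print(ddss.current_hypotheses())  # [('pneumonia', 5), ('urinary tract infection', 1)]
```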

Second, rather than waiting for the physician to perceive uncertainty and ask for information, the DDSS should also provide unsolicited suggestions, but only at certain times. Those suggestions should only be provided if it is clear from the diagnostic work-up and orders that the physician is going in a direction where a potentially likely diagnosis will be missed. As an example, if the same laboratory test or procedure that the physician is ordering for one diagnosis would also detect another diagnosis that the DDSS found but the physician did not recognize, it may not be necessary to remind the physician of the other diagnosis, since it will be obvious in the test results. However, if a test were needed to confirm or rule out a key diagnosis that the DDSS had as a high priority and the physician did not order that test, only then should the DDSS alert the physician to the other possible diagnosis. This means that DDSS need to include both diagnoses and work-up strategies in their knowledge bases, that DDSS need to interact with the computerized provider order entry (CPOE) system, and that the alerts should be provided just before work-up orders are finalized. This very targeted advice at the time of ordering would be a departure from the way we typically think of the use of DDSS.
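
One way to express that order-signing check is sketched below: the alert fires only for a high-priority DDSS diagnosis that none of the queued orders would confirm or rule out. The diagnoses, tests, and coverage map are hypothetical placeholders for the work-up knowledge and CPOE integration described above.

```python
# Sketch of the targeted-alert rule: stay silent when an already ordered test
# would reveal the missed diagnosis anyway, and alert just before the orders
# are signed only when no queued test covers a high-priority DDSS diagnosis.
# All diagnoses, tests, and coverage relationships are hypothetical.

TEST_COVERS = {
    # which pending orders would confirm or rule out which diagnoses
    "chest x-ray": {"pneumonia", "lung mass"},
    "d-dimer": {"pulmonary embolism"},
    "urinalysis": {"urinary tract infection"},
}

def alerts_before_signing(high_priority_dx, queued_orders):
    """Return only the high-priority diagnoses that no queued order would detect."""
    covered = set()
    for order in queued_orders:
        covered |= TEST_COVERS.get(order, set())
    return [dx for dx in high_priority_dx if dx not in covered]

# The DDSS ranks pulmonary embolism highly, but the work-up targets only pneumonia.
print(alerts_before_signing(
    high_priority_dx=["pneumonia", "pulmonary embolism"],
    queued_orders=["chest x-ray"],
))  # ['pulmonary embolism'] -> alert; pneumonia is covered, so no reminder is needed.
```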

In addition, the DDSS should be refined to be more specific than most DDSS in the past have been, or at least display their results in ways that make it easier for physicians to review and assess the diagnostic suggestions. Coupled with the above suggestion for limited and targeted diagnostic advice, this may be easier in the future as there is more knowledge of, and interest in, usability of health information technology [23]. Such an approach would provide, as Osheroff and his colleagues have advocated, the right information in the right format at the right time [24] and would most likely avoid the problem of alerting unnecessarily.

Finally, as has been advocated by several sources, there needs to be more automated follow-up and feedback to physicians [25, 26]. The right type of system would have user-friendly ways of documenting a follow-up interval by which the patient should have improved if the diagnosis and treatment were appropriate. The designated follow-up time would trigger automated ways of contacting the patient, checking on outcomes, and alerting the physician if the patient were not improving as expected. Not only should the physician be alerted if there is no improvement, but there should also be an automated link to the DDSS or other resources to provide alternative diagnostic hypotheses.
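
A minimal sketch of that follow-up loop appears below. The expected-improvement interval, the outcome check, and the alerting hook are all hypothetical stand-ins for whatever contact channel (patient portal, interactive voice response, etc.) an implementation would actually use.

```python
# Sketch of automated follow-up: at documentation time the physician records an
# interval by which the patient should have improved; once it elapses the
# patient is contacted, and if not improving the physician is alerted with a
# pointer back to the DDSS for alternative hypotheses.

from datetime import date, timedelta

def schedule_follow_up(diagnosis_date, expected_improvement_days):
    """Record the date by which improvement is expected."""
    return diagnosis_date + timedelta(days=expected_improvement_days)

def check_follow_up(today, due_date, patient_reports_improved, alert_physician):
    """Run by a scheduler; alerts the physician only if improvement has not occurred."""
    if today < due_date:
        return "not yet due"
    if patient_reports_improved:
        return "closed: improving as expected"
    alert_physician("Patient not improving as expected; consider revisiting the "
                    "diagnosis (link to DDSS with alternative hypotheses).")
    return "alert sent"

due = schedule_follow_up(date(2014, 1, 8), expected_improvement_days=14)
status = check_follow_up(date(2014, 1, 25), due,
                         patient_reports_improved=False,
                         alert_physician=print)
print(status)  # prints the alert text, then "alert sent"
```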

If such a system could be implemented, it would provide a mechanism for modifying diagnoses and therapy in a timely manner, it could mitigate the harm to patients from diagnostic errors, and it might even help physicians become better calibrated, increase their knowledge, and lead to more reflective practice. Some problems would still slip through such a system, but hopefully they would be few, and with the feedback system in place they should be caught early.

Lobach, in a recent editorial, has said that we still do not really know how to effectively deploy clinical decision support systems [27]. Diagnostic decision support systems have had even less research done on them than therapeutic systems and are less frequently used in practice. Given the greater recognition of the problem of diagnostic errors, it is time to re-examine the role diagnostic decision support systems can play in reducing diagnostic errors, how the systems should be designed, and what type of research is needed to determine if we have succeeded.


Corresponding author: Eta S. Berner, Ed.D, Professor, Health Informatics, Director, Center for Health Informatics for Patient Safety/Quality, Department of Health Services Administration, School of Health Professions, Professor, Department of Medical Education, School of Medicine, University of Alabama at Birmingham, 1705 University Blvd. #590J, Birmingham, AL 35294-1212, USA, Phone: +(205)975-8219, Fax: +(205)975-6608, E-mail:

The author appreciates the comments of Lazar Stankov, PhD and the anonymous reviewer who reviewed an earlier draft of this article.

Conflict of interest statement: The author declares no conflict of interest.

References

1. Blumenthal D, Tavenner M. The “meaningful use” regulation for electronic health records. N Engl J Med 2010;363:501–4. doi:10.1056/NEJMp1006114.

2. Yu F, Houston TK, Ray MN, Garner DQ, Berner ES. Patterns of use of handheld clinical decision support tools in the clinical setting. Med Decis Making 2007;27:744–53. doi:10.1177/0272989X07305321.

3. Rosenbloom ST, Geissbuhler AJ, Dupont WD, Giuse DA, Talbert DA, Tierney WM, et al. Effect of CPOE user interface design on user-initiated access to educational and patient information during clinical care. J Am Med Inform Assn 2005;12:458–73. doi:10.1197/jamia.M1627.

4. Grant RW, Campbell EG, Gruen RL, Ferris TG, Blumenthal D. Prevalence of basic information technology use by U.S. physicians. J Gen Intern Med 2006;21:1150–5. doi:10.1111/j.1525-1497.2006.00571.x.

5. Garg AX, Adhikari NK, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. J Am Med Assoc 2005;293:1223–38. doi:10.1001/jama.293.10.1223.

6. Berner ES. Mind wandering and medical errors. Med Educ 2011;45:1068–9. doi:10.1111/j.1365-2923.2011.04072.x.

7. Berner ES. Diagnostic decision support systems: why aren’t they used more and what can we do about it? AMIA Annu Symp Proc 2006:1167–8.

8. Berner ES. Diagnostic decision support systems: how to determine the gold standard? J Am Med Inform Assn 2003;10:608–10. doi:10.1197/jamia.M1416.

9. Berner ES, Graber ML. Overconfidence as a cause of diagnostic error in medicine. Am J Med 2008;121(5 Suppl):S2–23. doi:10.1016/j.amjmed.2008.01.001.

10. Berner ES, Moss J. Informatics challenges for the impending patient information explosion. J Am Med Inform Assn 2005;12:614–7. doi:10.1197/jamia.M1873.

11. Meyer AN, Payne VL, Meeks DW, Rao R, Singh H. Physicians’ diagnostic accuracy, confidence, and resource requests: a vignette study. JAMA Intern Med 2013. PMID: 23979070. Epub 2013/08/28.

12. Ely JW, Kaldjian LC, D’Alessandro DM. Diagnostic errors in primary care: lessons learned. J Am Board Fam Med 2012;25:87–97. doi:10.3122/jabfm.2012.01.110174.

13. Podbregar M, Voga G, Krivec B, Skale R, Pareznik R, Gabrscek L. Should we confirm our clinical diagnostic certainty by autopsies? Intens Care Med 2001;27:1750–5. doi:10.1007/s00134-001-1129-x.

14. Friedman CP, Gatti GG, Franz TM, Murphy GC, Wolf FM, Heckerling PS, et al. Do physicians know when their diagnoses are correct? Implications for decision support and error reduction. J Gen Intern Med 2005;20:334–9. doi:10.1111/j.1525-1497.2005.30145.x.

15. Kahneman D, Slovic P, Tversky A. Judgment under uncertainty: heuristics and biases. New York: Cambridge University Press, 1982. doi:10.1017/CBO9780511809477.

16. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med 2003;78:775–80. doi:10.1097/00001888-200308000-00003.

17. Voytovich AE, Rippey RM, Suffredini A. Premature conclusions in diagnostic reasoning. J Med Educ 1985;60:302–7. doi:10.1097/00001888-198504000-00004.

18. Eva KW, Norman GR. Heuristics and biases – a biased perspective on clinical reasoning. Med Educ 2005;39:870–2. doi:10.1111/j.1365-2929.2005.02258.x.

19. Wears RL, Nemeth CP. Replacing hindsight with insight: toward better understanding of diagnostic failures. Ann Emerg Med 2007;49:206–9. doi:10.1016/j.annemergmed.2006.08.027.

20. Committee on Identifying and Preventing Medication Errors. Preventing medication errors: quality chasm series. Aspden P, Wolcott J, Bootman JL, Cronenwett LR, editors. Washington, DC: The National Academies Press, 2007.

21. Schiff GD. Minimizing diagnostic error: the importance of follow-up and feedback. Am J Med 2008;121(5 Suppl):S38–42. doi:10.1016/j.amjmed.2008.02.004.

22. van der Sijs H, Baboe I, Phansalkar S. Human factors considerations for contraindication alerts. Stud Health Technol Inform 2013;192:132–6.

23. Tang P. Summary of April 21, 2011 HIT Policy Committee Hearing on Electronic Health Record Usability. 2011. Available from: http://www.healthit.gov/sites/default/files/pdf/hitpc-ac-wg-usability-letter-06-08-2011.pdf. Accessed October 24, 2013.

24. Sirajuddin AM, Osheroff JA, Sittig DF, Chuo J, Velasco F, Collins DA. Implementation pearls from a new guidebook on improving medication use and outcomes with clinical decision support. Effective CDS is essential for addressing healthcare performance improvement imperatives. J Healthc Inform Manag 2009;23:38–45.

25. Schiff GD, Bates DW. Can electronic clinical documentation help prevent diagnostic errors? N Engl J Med 2010;362:1066–9. doi:10.1056/NEJMp0911734.

26. Willig JH, Krawitz M, Panjamapirom A, Ray MN, Nevin CR, English TM, et al. Closing the feedback loop: an interactive voice response system to provide follow-up and feedback in primary care settings. J Med Syst 2013;37:9905. doi:10.1007/s10916-012-9905-4.

27. Lobach DF. The road to effective clinical decision support: are we there yet? BMJ 2013;346:f1616. doi:10.1136/bmj.f1616.

Received: 2013-09-10
Accepted: 2013-10-15
Published Online: 2014-01-08
Published in Print: 2014-01-01

©2014 by Walter de Gruyter Berlin/Boston

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.
