Using value of information to guide evaluation of decision supports for differential diagnosis: is it time for a new look?
© Braithwaite and Scotch; licensee BioMed Central Ltd. 2013
Received: 13 May 2013
Accepted: 6 September 2013
Published: 11 September 2013
Decision support systems for differential diagnosis have traditionally been evaluated on the basis of how sensitively and specifically they identify the correct diagnosis, as established by expert clinicians.
This article questions whether evaluation criteria centered on identifying the correct diagnosis are the most appropriate or useful. Instead, it advocates evaluating decision support systems for differential diagnosis on the criterion of maximizing the value of information.
This approach quantitatively and systematically integrates several important clinical management priorities, including avoiding serious diagnostic errors of omission and avoiding harmful or expensive tests.
In clinical care, much effort has gone into decreasing medical errors, including diagnostic errors of omission (DEOs). DEOs account for a large proportion of medical adverse events and are the second-leading cause of malpractice suits against hospitals. Improving differential diagnostic (DDX) decision support tools has great potential to reduce DEOs [2–4]. However, DDX decision support tools are often constructed with the goal of identifying a single correct diagnosis, which does not necessarily diminish DEOs unless the tool has optimal performance characteristics. This article discusses whether an alternative criterion for evaluating DDX tools, in particular maximization of the value of information, might yield DDX tools that are more effective at reducing DEOs [1, 6] than ones designed to detect the best diagnosis [4, 7–11].
The value of DDX tools
Decision support tools exist to facilitate better decisions, and better health decisions are those that minimize morbidity and mortality, in concordance with patient preferences and the principles of shared decision making. Accordingly, value of information (VOI) may be a particularly suitable framework for evaluating DDX decision support tools, because it considers the monetarized value of the morbidity and mortality that can be prevented through improvements in decisions made possible by new information, after accounting for the costs and harms of obtaining that information. In the context of DDX decision support tools, applying VOI can be viewed as a quantitative means of integrating several desirable goals: maximizing the morbidity and mortality from DEOs that may be averted by DDX tools, and minimizing the incremental costs and harms from the diagnostic tests that these tools may induce.
Brief summary of VOI
VOI is a framework developed by Claxton [5, 13] that has its conceptual roots in decision analysis and economics. A detailed description of VOI is beyond the scope of this paper, but VOI assessment of an informatics intervention can be viewed as a three-step mathematical calculation: (1) “How would health outcomes change because of the different decision making that would result from using the intervention?”, (2) “What is the monetarized value of that change in health outcomes?” and (3) “How does #2 change after considering the costs of the intervention and the downstream consequences of its use?” Accordingly, if health outcomes would be improved by post-intervention decision making compared to pre-intervention decision making (question 1), the expected value of information (EVI) would be numerically higher, or less negative (question 2). However, if the incremental costs of using the intervention are greater than those of not using it, including differences in diagnostic tests ordered and their downstream consequences (such as false positives and complications), then the EVI would be numerically lower, or more negative (question 3).
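The three steps above can be sketched numerically. The following is a minimal, hypothetical example in the spirit of an expected-value-of-perfect-information calculation; every probability, utility, and cost below is invented for illustration and does not come from the article.

```python
# Toy three-step EVI calculation for a single actionable diagnosis.
# All numbers (prior probability, QALYs, costs) are illustrative assumptions.

p = 0.10        # assumed prior probability that the disease is present
WTP = 50_000    # assumed willingness-to-pay per QALY ($)

# Utility (QALYs) of each decision in each true state: made-up values in which
# treating the disease helps, but missing it (a DEO) is very costly.
U = {
    ("treat", "disease"): 9.0, ("treat", "healthy"): 9.5,
    ("wait",  "disease"): 6.0, ("wait",  "healthy"): 10.0,
}

def eu(decision):
    """Expected utility of a decision under the prior probability."""
    return p * U[(decision, "disease")] + (1 - p) * U[(decision, "healthy")]

# Step 1: outcomes of the best decision without further information...
best_without_info = max(eu(d) for d in ("treat", "wait"))
# ...versus deciding after the diagnostic workup reveals the true state.
with_info = p * max(U[(d, "disease")] for d in ("treat", "wait")) \
          + (1 - p) * max(U[(d, "healthy")] for d in ("treat", "wait"))

# Step 2: monetarize the gain in expected health outcomes.
gross_value = (with_info - best_without_info) * WTP

# Step 3: net out the costs and harms of obtaining the information.
test_cost = 800  # assumed monetarized cost, discomfort, and complication risk
evi = gross_value - test_cost
print(round(evi, 2))
```

A larger `test_cost` (a harmful or expensive workup) can push the EVI negative even when the information itself is diagnostically useful, which is exactly the trade-off the third step is meant to capture.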
Consider a 64-year-old man who presents to an emergency department with severe chest pain but without dyspnea or other pain. While there are literally hundreds of potential diagnoses that a DDX tool could consider, the tool is most clinically useful if it initially restricts its attention to the subgroup of those diagnoses that are “actionable”. Here, “actionable” refers to a situation in which rapidly identifying a particular diagnosis could lead to decisions that would improve morbidity and mortality; conversely, delaying the identification of that diagnosis would lead to decisions that worsened morbidity and mortality. This definition is similar to Ramnarayan’s definition of “clinically relevant diagnoses” but can be specified in terms of morbidity and mortality, and therefore is more closely linked to VOI. (“Actionability” can be defined as the expected value (EV) of rapid treatment minus the EV of delayed or no treatment. It is distinct from “import”, which approximates the true positivity of a test with respect to a disease in the differential diagnosis and does not necessarily convey anything about the incremental morbidity and mortality that a timely diagnosis may prevent.) For example, diagnoses of myocardial infarction, pulmonary embolism, dissecting aortic aneurysm, and pericarditis would be highly actionable because of the morbidity and mortality burden that could be mitigated through prompt action. On the other hand, a diagnosis of non-dyspneic pleuritis would be less actionable because the potential mechanisms of pleuritis (e.g. malignancies), while capable of causing serious morbidity and mortality, would not necessarily lead to great reductions in morbidity and mortality through quick identification and response.
Similarly, other possible diagnoses such as panic attacks, neuralgic pain from zoster, or pain from costochondritis would be even less actionable, unless establishing one of them dramatically reduced the probability of an actionable diagnosis through considerations of physiological incompatibility.
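The actionability definition above (EV of rapid treatment minus EV of delayed or no treatment) induces a ranking over the chest-pain differential. The sketch below illustrates that ranking; the expected values are invented placeholders, not clinical estimates.

```python
# Toy ranking of chest-pain diagnoses by "actionability":
# EV(rapid treatment) minus EV(delayed or no treatment), in QALYs.
# All expected values below are illustrative assumptions only.
diagnoses = {
    # name: (EV if treated rapidly, EV if treatment is delayed or absent)
    "myocardial infarction":      (8.5, 5.0),
    "dissecting aortic aneurysm": (7.0, 2.0),
    "pulmonary embolism":         (8.0, 5.5),
    "non-dyspneic pleuritis":     (6.0, 5.8),
    "costochondritis":            (9.8, 9.8),  # benign: speed changes nothing
}

def actionability(ev_rapid, ev_delayed):
    return ev_rapid - ev_delayed

# Most actionable first: these are the diagnoses a VOI-oriented DDX tool
# would prioritize ruling in or out.
ranked = sorted(diagnoses, key=lambda d: actionability(*diagnoses[d]),
                reverse=True)
for d in ranked:
    print(f"{d}: {actionability(*diagnoses[d]):.1f}")
```

Note that a diagnosis can be common and benign (costochondritis) yet have zero actionability, while a rare catastrophic one (dissecting aneurysm) tops the list.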
Relationship between VOI criteria and DDX decision support tools
Limitations of prior efforts to use VOI to evaluate DDX
Downs et al. (1997) first proposed applying a VOI framework to evaluate DDX processes. However, their approach had multiple limitations that our current approach improves upon, and therefore our approach may be more feasible. First, they calculated VOI using utilities that were not elicited with standard decision analytic approaches for health states (e.g., time tradeoff), and consequently their VOI calculations did not necessarily reflect potential morbidity and mortality improvement. For example, correctly diagnosed viral pneumonia was given a higher utility than correctly diagnosed bacterial pneumonia, even though correct diagnosis of bacterial pneumonia would be expected to improve quality and quantity of life, and therefore health-state based utility, much more than correct diagnosis of viral pneumonia. In contrast, we suggest using health state-based utilities, which comport more readily with applications of VOI to medical decision making.
Second, their approach yielded the unrealistic result that the VOI of most information-seeking was zero, because they did not quantify the downsides of additional information gathering: time, patient discomfort, and complications. All of these would be expected to produce a negative VOI even when no informative diagnostic information is obtained. Although they subsequently attempted to address this limitation by modifying their VOI algorithm to “calculate the average of the expected utility across all the diagnoses, then subtract the expected utility of the diagnosis with the highest utility”, this modification presumes that VOI reflects changes in the EV of particular diseases more directly than changes in the EV of the most favored decision, unlike subsequent uses of VOI. In contrast, we recommend valuating the negative as well as the positive consequences of additional information gathering, and we advocate basing VOI on the EV of the most favored decision.
Third, their approach does not identify a point at which the diagnostic process should naturally stop, which is a concept with great clinical value. If most of the information comprising the diagnostic process has an EVI of zero, the diagnostic process could go on indefinitely. In contrast, because we consider negative EVIs, a decline in the EVI curve would be expected to occur at some point (for example, when clinicians are adopting “shotgun” diagnostic strategies that may have side effects), signaling that the diagnostic phase should conclude.
Possible VOI metrics for assessing performance of DDX decision support tools
An ideal diagnostic process would minimize preventable morbidity and mortality to the patient, and would also minimize cost, time, and discomfort. The value of the EVI at the end of the diagnostic process is one candidate metric for distinguishing a more favorable diagnostic trajectory from a less favorable diagnostic strategy and clinical management. For example, compare the EVI endpoint of an optimal diagnostic pathway (Curves C in Figure 1) with the EVI endpoint of a typical, unaided diagnostic pathway (Curves A in Figure 1). A DDX decision support tool that raises the EVI endpoint of Curve A closer to that of Curve C would be preferred to a DDX tool that leaves the EVI endpoint of Curve A more distant from Curve C. Alternatively, the area under the EVI curve might also be a suitable metric for assessing the performance of DDX tools. It is important to note that simply ordering all tests simultaneously to obtain rapid results would not optimize an EVI endpoint, because the monetary cost, side effects, and time of those tests would lower the EVI trajectory.
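The two candidate metrics, the EVI endpoint and the area under the EVI curve, can be sketched on synthetic trajectories. The curves below are invented for illustration (EVI sampled at successive steps of the workup); they are not data from the article's Figure 1.

```python
# Comparing two hypothetical diagnostic trajectories by the two candidate
# metrics: EVI endpoint and area under the EVI curve. Values are synthetic.
curve_a = [0, 150, 220, 180, 120]  # typical unaided pathway: EVI declines as
                                   # low-yield "shotgun" tests add cost/harm
curve_c = [0, 200, 350, 420, 430]  # optimal pathway: EVI keeps rising

def endpoint(curve):
    """EVI at the end of the diagnostic process."""
    return curve[-1]

def auc(curve):
    """Trapezoidal area under the EVI curve, with unit spacing per step."""
    return sum((a + b) / 2 for a, b in zip(curve, curve[1:]))

print(endpoint(curve_a), endpoint(curve_c))
print(auc(curve_a), auc(curve_c))
```

The decline at the tail of `curve_a` is also what signals a natural stopping point for the diagnostic phase, as discussed in the previous section: once marginal EVI turns negative, further testing subtracts value.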
Limitations of VOI-optimized DDX tools
A DDX decision support tool optimized using VOI criteria would need to consider whether it is feasible to address more than one actionable diagnosis simultaneously, and this answer might vary for logistical reasons (e.g. the number of operational CT scanners) between settings, institutions, and even between different shifts at the same institution. For example, a DDX tool would need to factor in the possibility that it may be impossible to rule out a myocardial infarction and a dissecting aortic aneurysm at a particular facility at the same time, and therefore the tool would ideally be able to guide prioritization of diagnostic decisions according to their relative increase in EVI. Further limitations of VOI-optimized DDX tools include the absence of a clear path toward enhancing shared decision making or considering individual patient preferences, and a potential lack of engagement with the types of cognitive errors and biases that lead to suboptimal decision making in the first place.
Whether a DDX decision support tool gets the diagnosis right may not be as important as whether the tool is able to reduce preventable morbidity and mortality by reducing DEOs. VOI analyses in general, and EVI trajectories in particular, may be useful metrics for evaluating whether a DDX tool can pursue the important objective of reducing DEOs as well as other important objectives pertaining to minimizing harm and unnecessary expenditures.
- Ramnarayan P, Roberts GC, Coren M, Nanduri V, Tomlinson A, Taylor PM, Wyatt JC, Britto JF: Assessment of the potential impact of a reminder system on the reduction of diagnostic errors: a quasi-experimental study. BMC Med Inform Decis Mak. 2006, 6: 22. doi:10.1186/1472-6947-6-22.
- Eccles M, McColl E, Steen N, Rousseau N, Grimshaw J, Parkin D, Purves I: Effect of computerised evidence based guidelines on management of asthma and angina in adults in primary care: cluster randomised controlled trial. BMJ. 2002, 325 (7370): 941. doi:10.1136/bmj.325.7370.941.
- Smith WR: Evidence for the effectiveness of techniques to change physician behavior. Chest. 2000, 118 (2 Suppl): 8S-17S.
- Berner ES, Maisiak RS, Heuderbert GR, Young KR: Clinician performance and prominence of diagnoses displayed by a clinical diagnostic decision support system. AMIA Annu Symp Proc. 2003, 76-80.
- Claxton K, Posnett J: An economic approach to clinical trial design and research priority-setting. Health Econ. 1996, 5 (6): 513-524. doi:10.1002/(SICI)1099-1050(199611)5:6<513::AID-HEC237>3.0.CO;2-9.
- Ramnarayan P, Kapoor RR, Coren M, Nanduri V, Tomlinson AL, Taylor PM, Wyatt JC, Britto JF: Measuring the impact of diagnostic decision support on the quality of clinical decision making: development of a reliable and valid composite score. J Am Med Inform Assoc. 2003, 10 (6): 563-572. doi:10.1197/jamia.M1338.
- Berner ES, Webster GD, Shugerman AA, Jackson JR, Algina J, Baker AL, Ball EV, Cobbs CG, Dennis VW, Frenkel EP: Performance of four computer-based diagnostic systems. N Engl J Med. 1994, 330 (25): 1792-1796. doi:10.1056/NEJM199406233302506.
- Friedman CP, Elstein AS, Wolf FM, Murphy GC, Franz TM, Heckerling PS, Fine PL, Miller TM, Abraham V: Enhancement of clinicians’ diagnostic reasoning by computer-based consultation: a multisite study of 2 systems. JAMA. 1999, 282 (19): 1851-1856. doi:10.1001/jama.282.19.1851.
- Miller RA: Evaluating evaluations of medical diagnostic systems. J Am Med Inform Assoc. 1996, 3 (6): 429-431. doi:10.1136/jamia.1996.97084516.
- Turner CW, Lincoln MJ, Haug P, Williamson JW, Jessen S, Cundick K, Warner H: Iliad training effects: a cognitive model and empirical findings. Proc Annu Symp Comput Appl Med Care. 1991, 68-72.
- Lincoln MJ, Turner CW, Haug PJ, Warner HR, Williamson JW, Bouhaddou O, Jessen SG, Sorenson D, Cundick RC, Grant M: Iliad training enhances medical students’ diagnostic skills. J Med Syst. 1991, 15 (1): 93-110. doi:10.1007/BF00993883.
- PCORI: Patient-Centered Outcomes Research Institute. National priorities for research and research agenda. 2012.
- Griffin S, Welton NJ, Claxton K: Exploring the research decision space: the expected value of information for sequential research designs. Med Decis Making. 2010, 30 (2): 155-162. doi:10.1177/0272989X09344746.
- Miller RA, Pople HE, Myers JD: Internist-1, an experimental computer-based diagnostic consultant for general internal medicine. N Engl J Med. 1982, 307 (8): 468-476. doi:10.1056/NEJM198208193070803.
- Downs SM, Friedman CP, Marasigan F, Gartner G: A decision analytic method for scoring performance on computer-based patient simulations. Proc AMIA Annu Fall Symp. 1997, 667-671.
- Gold MR, Siegel JE, Russell LB, Weinstein MC: Cost-Effectiveness in Health and Medicine. 1996, New York: Oxford University Press.
- Downs SM, Marasigan F, Abraham V, Wildemuth B, Friedman CP: Scoring performance on computer-based patient simulations: beyond value of information. Proc AMIA Symp. 1999, 520-524.
- Kassirer JP, Kopelman RI: Cognitive errors in diagnosis: instantiation, classification, and consequences. Am J Med. 1989, 86 (4): 433-441. doi:10.1016/0002-9343(89)90342-2.
- The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6947/13/105/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.