Dual processing model of medical decision-making
© Djulbegovic et al.; licensee BioMed Central Ltd. 2012
Received: 18 June 2012
Accepted: 21 August 2012
Published: 3 September 2012
Dual processing theory of human cognition postulates that reasoning and decision-making can be described as a function of both an intuitive, experiential, affective system (system I) and an analytical, deliberative processing system (system II). To date no formal descriptive model of medical decision-making based on dual processing theory has been developed. Here we postulate such a model and apply it to a common clinical situation: whether treatment should be administered to a patient who may or may not have a disease.
We developed a mathematical model in which we linked a recently proposed descriptive psychological model of cognition with the threshold model of medical decision-making and show how this approach can be used to better understand decision-making at the bedside and explain the widespread variation in treatments observed in clinical practice.
We show that a physician’s beliefs about whether to treat at higher (or lower) probability levels than the prescriptive therapeutic threshold obtained via system II processing are moderated by system I and by the ratio of benefits and harms as evaluated by both systems. Under some conditions, the system I decision-maker’s threshold may drop dramatically below the expected utility threshold derived by system II. This can explain the overtreatment often seen in contemporary practice. The opposite can also occur, as in situations where empirical evidence is considered unreliable, or when the cognitive processes of decision-makers are biased by recent experience: the threshold will then increase relative to the normative value derived via the system II expected utility threshold. This inclination toward higher diagnostic certainty may, in turn, explain the undertreatment that is also documented in current medical practice.
We have developed the first dual processing model of medical decision-making, which has the potential to enrich the current medical decision-making field, still to a large extent dominated by expected utility theory. The model also provides a platform for reconciling two groups of competing dual processing theories (parallel-competitive and default-interventionalist theories).
Dual processing theory is currently widely accepted as the dominant explanation of the cognitive processes that characterize human decision-making [1–9]. It assumes that cognitive processes are governed by so-called system I (which is intuitive, automatic, fast, narrative, experiential and affect-based) and system II (which is analytical, slow, verbal, deliberative and logical) [1–10]. The vast majority of existing models of decision-making, including expected utility theory, prospect theory, and their variants, assume a single system of human thought. Recently, formal models for integrating system I with system II have been developed [3, 11]. One such attractive model, the Dual System Model (DSM), has been developed by Mukherjee. Here, we extend Mukherjee’s DSM to the medical field (DSM-M) by linking it to the threshold concept of decision-making [12–15]. We also take into account decision regret, as an exemplar of the affect or emotion involved in system I decision-making, which is of particular relevance to medical decision-making [16–19]. Regret was also selected for use in our model because any “theory of choice that completely ignores feeling such as the pain of losses and the regret of mistakes is not only descriptively unrealistic but also might lead to prescriptions that do not maximize the utility of outcomes as they are actually experienced” [1, 20].
As more than 30% of medical interventions are currently not appropriately applied, mostly as over- or undertreatment [21–23], we illustrate how the DSM-M model may be used to explain the patterns seen in current medical practice. Our DSM-M model is primarily an attempt to describe how medical decisions are made. As a descriptive model, its validation will require comparing its outputs to actual choices made by patients and clinicians and their verbalized reactions to our model. We conclude the paper by providing some testable empirical predictions.
A dual system model
$$V(C) = \gamma k \left[ \frac{1}{n} \sum_{i=1}^{n} V_I(x_i) \right] + (1-\gamma) \sum_{i=1}^{n} p_i V_{II}(x_i) \qquad (1)$$

where C represents a decision-making situation (“choice”), n the number of outcomes, and p_i the probability of the i-th outcome, x_i, of the selected choice. V_I represents valuation of the decision under the autonomous, intuitive, system I-based mode of decision-making and V_II, which can be a utility function, represents valuation under the deliberative, rule-based, system II mode of decision-making. k is a scaling constant, and γ ∈ [0, 1] is the weight given to system I; it can be interpreted as the relative extent of involvement of system I in the decision-making process. System II is not split into the two subsystems advocated by some, but is assumed to adhere to the rationality criteria of expected utility theory (EUT), as also advocated by modern decision science [11, 28]. γ is assumed to be influenced by a number of processes that determine system I functioning. Mukherjee emphasized the following factors as important determinants of system I functioning: individual decision-making and thinking predispositions [ranging from EUT “maximizers” to system I-driven “satisficing” with no regard to probabilities but with editing or selection of outcomes of interest], the affective nature of outcomes (the more affective the outcomes, the higher γ), and the framing and construal of the decision-making task (decisions for the self will likely have higher γ, as will decision problems that are contextualized and those requiring immediate resolution or made under time pressure; the last four describe circumstances characteristic of medical decision-making). Easily available information, our previous experience, the way in which information is processed (verbatim vs. getting the “gist” of it), as well as memory limitations, are also expected to affect γ.
γ is therefore expected to be higher when information about probabilities and outcomes is ambiguous or not readily available, or when a very severe negative prior outcome is recalled [2, 32, 33]. On the other hand, when such data are available, their joint evaluation by system II will reduce γ. In general, the factors that define the process of system I can be classified under four major categories: a) affect; b) evolutionarily hard-wired processes responsible for automatic responses to potential danger, in such a way that system I typically gives higher weight to potential false positives than to false negatives (i.e., humans are cognitively more ready to wrongly accept a signal of potential harm than one that carries the potential of benefit); c) over-learned processes from system II that have been relegated to system I (such as the effect of intensive training resulting in the use of heuristics, or “rules of thumb”, or practice guidelines as effort-saving cognitive strategies; NB: although guidelines may be the products of analytic system II processes, their application tends to be a system I process); and d) the effects of tacit learning.
Mukherjee’s DSM draws upon empirical evidence demonstrating that decision-makers in an affect-rich context are generally sensitive only to the presence or absence of stimuli, while in affect-poor contexts they rely on system II to assess the magnitude of stimuli (and probabilities). Hence, the salient feature of the model is that system I recognizes outcomes only as being possible or not; every outcome that remains under consideration gets equal weight in system I. On the other hand, system II registers probabilities linearly, without distortions, according to the expected utility paradigm.
As a result, dual valuation processing often generates instances where subjective valuations are greater at lower stimulus magnitudes (i.e., when decision-making relies on feeling, or on evolutionary hard-wired processes such as when a signal may represent danger), while rational calculation produces greater value at high magnitudes. The DSM is capable of explaining a number of the phenomena that characterize human decision-making, such as a) violation of nontransparent stochastic dominance, b) the fourfold pattern of risk attitude, c) ambiguity aversion, d) the common consequences effect, e) the common ratio effect, f) the isolation effect, and g) coalescing and event-splitting effects.
$$V(C) = \gamma k \left[ \frac{1}{n} \sum_{i=1}^{n} \operatorname{sgn}(x_i)\, \lvert x_i \rvert^{m_I} \right] + (1-\gamma) \sum_{i=1}^{n} p_i x_i \qquad (2)$$

where 0 < m_I ≤ 1. Note that V_I(x) = sgn(x)·|x|^{m_I} satisfies risk aversion for gains and risk seeking for losses, and that the system II term, Σ p_i x_i, is linear, without risk distortions.
As noted by Mukherjee, the estimation of the parameters in Equation 2) is a measurement exercise, which needs to be evaluated in future empirical research. Consequently, the functions V_II(x) and V_I(x) could be changed depending on the decision-making setting and the decision-maker’s goals. Similarly, the parameter m_I may not be the same for all outcomes.
Modification of DSM for medical decision-making
We posit that among the emotions that can influence the valuation of outcomes in system I processing, regret plays an important role [1, 2], while system II processes are dominated by rational, analytical deliberations according to EUT. We can define regret (Rg) as the difference (loss) between the utility of the outcome of the action taken and that of the action we should have taken, in retrospect [16–19, 35], operating at the system I level only (see Figure 2).
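With system I weighting the two possible outcomes of treatment equally (benefit B_I if disease is present, regret/harm H_I if it is not), and system II valuing treatment as p·B_II − (1 − p)·H_II, the threshold follows from equating the dual valuations of “Treat” and “No Treatment” (the latter taken as the reference, V = 0) and solving for p; the resulting expression, reconstructed here from the definitions above, is Equation 3:

```latex
\gamma k\,\frac{B_I - H_I}{2} + (1-\gamma)\bigl[\,p\,B_{II} - (1-p)\,H_{II}\,\bigr] = 0
\quad\Longrightarrow\quad
p_t = \frac{H_{II}}{B_{II}+H_{II}} \;-\; \frac{\gamma}{1-\gamma}\cdot\frac{k\,(B_I - H_I)}{2\,(B_{II}+H_{II})} \qquad (3)
```

At γ = 0 the classic EUT threshold H_II/(B_II + H_II) is recovered; the second term shifts the threshold downward when B_I > H_I and upward when H_I > B_I, with γ controlling the size of the shift.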
This means that if the probability of disease is above p_t, the decision-maker favors treatment; otherwise, a competing management alternative (such as “No Treatment”) represents the optimal strategy. Note that k can typically be set to 1, as we do here. Also note that the first part of the equation is equivalent to the threshold expression described in the EUT framework [13, 14, 36]; the second expression modifies system II’s EUT-based decision-making process in such a way that if benefits are experienced as greater than harms, the threshold probability is always lower than the EUT threshold. However, if a decision-maker experiences H_I > B_I, the threshold probability is always higher than the EUT threshold (see below for discussion in the context of the medical examples). Note that γ and the B_I/H_I ratio determine only the magnitude of the dual threshold’s departure above or below the classic EUT threshold; the direction of that departure depends only on whether B_I/H_I is greater or smaller than 1.
It should be noted that identical derivations can be obtained by applying the concept of expected regret (instead of EUT) [16–19, 35]. Although it can be argued that regret is a powerful emotion influencing all cognitive processes (a so-called “cognitive emotion”) [37, 38], and so may function at the level of both system I and system II, most authors recognize the affective value of regret [2, 10]. Hence, in our model we assume that regret functions at the system I level and restrict its influence to system I. Incidentally, our Equation 3) can also be derived from Mukherjee’s general DSM model even if regret is not specifically invoked.
Although Equation 3) implies exact calculations, it should not be understood as providing a precise mathematical account of human decision-making. Rather, it should be considered a semi-quantitative or qualitative description of the way physicians may make their decisions. First, this is because system I does not perform exact calculations, but rather relies on “gist” [30, 31] to assess benefits and harms in a more qualitative manner. The mechanism depends on associations, emotions (so-called “risk as feelings” estimates), as well as memory and experience [2, 5, 8, 31]. In this sense, the second part of Equation 3), which relies on system I, can be understood as a qualitative modifier (“weight”) that, depending on system I’s estimates of benefits and harms, increases or decreases the first part of the equation (which depends on system II’s precise use of evidence on benefits and harms). Second, the threshold probability itself should be considered an “action threshold”: at some point, a physician decides whether or not to administer treatment. Typically, she contrasts the estimated probability of disease against the threshold and acts: if the probability of disease is above the “action threshold”, the physician administers the treatment; if it is below, she decides not to give treatment. So, one way to interpret Equation 3) is to consider the physician’s “gist” estimate of the action threshold: if, in her estimation, the overall benefits of treatment outweigh the harms, and she considers it “likely” that the probability of disease is above the threshold probability, then she will act and administer treatment. If the physician assesses that it is “unlikely” that the probability of disease is above the “action threshold”, then she will not prescribe the treatment.
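This semi-quantitative reading can be sketched numerically. The function below is our illustrative sketch (not the authors’ code), assuming the additive threshold form p_t = H_II/(B_II+H_II) − [γ/(1−γ)]·k(B_I−H_I)/(2(B_II+H_II)) with k = 1 by default; outputs below 0 or above 1 are read as “always treat” and “never treat”, respectively.

```python
def dual_threshold(b2, h2, b1, h1, gamma, k=1.0):
    """Action threshold p_t under the dual-processing (DSM-M) model.

    b2, h2: system II (EUT) net benefit and net harm of treatment
    b1, h1: system I valuations (regret of omission / commission)
    gamma:  weight of system I involvement, 0 <= gamma < 1
    """
    if not 0.0 <= gamma < 1.0:
        raise ValueError("gamma must lie in [0, 1); gamma = 1 is the pure system I case")
    eut = h2 / (b2 + h2)  # classic EUT threshold (recovered when gamma = 0)
    shift = (gamma / (1.0 - gamma)) * k * (b1 - h1) / (2.0 * (b2 + h2))
    return eut - shift    # < 0 => always treat; > 1 => never treat
```

With γ = 0 the classic EUT threshold H_II/(B_II + H_II) is recovered; raising γ amplifies the system I shift, lowering the threshold when B_I > H_I and raising it when H_I > B_I.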
The behavior of DSM-M model
Model #1: Mukherjee’s additive model, as described above. It can be categorized as a variant of parallel-competitive theory, as it assumes that system I and system II processes proceed in parallel, but it does include the parameter γ, which can trigger greater or smaller activation of system I. Mukherjee’s model, however, does not explicitly model choices in terms of categorical decisions (i.e., accept vs. do not accept a given hypothesis), which is a fundamental feature of dual-processing models.
Model #2: System I and system II operate on a continuum, but in such a way that system I never sleeps. A final decision depends on the activation of both systems I and II. It has been estimated that about 40-50% of decisions are determined by habits (i.e., by system I). This is also a variation of parallel-competitive theory; it should be noted that the latest literature is moving away from this model [5, 27].
Model #3: The final decision appears to depend on both system I and system II in such a way that system I is the first to suggest an answer and system II endorses it. In doing so, system II can exert full control over system I (such as when it relies on EUT modeling) or completely fail to oversee the functioning of system I (e.g., because of its ignorance or laziness). Therefore, according to this model, decisions are made either by system I (default) or by system II (which may or may not intervene). This is a default-interventionalist model.
Model #4: A variation of model #3 is the so-called “toggle model”, which proposes that the decision-maker constantly uses cognitive processes that oscillate between the two systems (toggle) [6, 7, 9]. This is a variant of the default-interventionalist model.
Note that γ is continuous in our model, but it can be made categorical {0, 1} if the “toggle” theory is considered correct. In this case, a logical switch can be introduced in the decision tree to allow toggling between the two systems. Most importantly, by linking Mukherjee’s additive model with the threshold model, we provide an architecture for reconciling parallel-competitive theories with default-interventionalist theories. We do so by making explicit that decisions are categorical (via the threshold) at a given degree of cognitive effort (modeled via the γ parameter). That is, the key question is what processes determine acceptance or rejection of a particular (diagnostic) hypothesis. Our model shows that this can occur if we maintain the parallel-competing architecture of Mukherjee’s additive model but assume a switch, a yes-or-no answer, on whether to accept or reject a given hypothesis (at the threshold). It is the evaluation of the (diagnostic) event with respect to the threshold that serves as the final output of our decision-making and reasoning processes. As our model shows, this depends on the assumption of parallel operation of both system I and system II, and on the switch in control of one system over the other according to the default-interventionalist hypothesis. Note that, depending on the activation of the γ parameter and the assessment of benefits (gains) and harms (losses), control can be exerted by either system: sometimes the intuitive system will exert control and our action will take the form of a “feeling of rightness”; sometimes system II will prevail and drive our decisions. Thus, we unite parallel-competitive with default-interventionalist models by linking Mukherjee’s additive model with the threshold model for decision-making.
As discussed above, many factors can activate the switch, such as the presence or absence of empirical, quantitative data, the context of decision-making (e.g., affect-poor or affect-rich), the decision-maker’s expertise and experience, etc. In addition, extensive psychological research has demonstrated that people often use a simple heuristic based on prominent numbers, i.e., 1, 2, or 5 times a power of 10 (e.g., 1, 2, 5, 10, 20, 50, 100, 200, etc.). That is, although system I does not perform exact calculations, it still assesses the “gist” of relative benefits and harms, and likely does so according to a “1/10 aspiration level” (rounding to the closest prominent number), in such a way that the estimates of the benefit/harm ratio change in steps of 1, 2, 5, 10, etc. Therefore, in this section we consider several prototypical situations: 1) when γ = 0, 0.5, or 1; 2) when B_II >> H_II, B_II = H_II, and B_II << H_II; and 3) when regret of omission (B_I) << regret of commission (H_I), B_I = H_I, or B_I >> H_I.
First, note that when γ = 0, setting the numerator of the left fraction in Equation 6 (Additional file 1: Appendix) to zero and solving for p, we obtain p_t = H_II/(B_II + H_II), which is exactly the value of the EUT threshold: the probability at which the expected utilities of the two options are the same. This corresponds to model #3 above, in which system II exerts full control over decision-making. Therefore, when γ = 0, we have the classic EUT therapeutic threshold model. In this case, regret does not affect the EUT benefits and harms, and p_t = H_II/(B_II + H_II). If B_II >> H_II, p_t approaches zero and a decision-maker will recommend treatment to virtually everyone. On the other hand, if B_II = H_II, p_t equals 0.5 and she might recommend treatment if the disease is as likely as not. Finally, if B_II << H_II, p_t approaches 1.0, and the decision-maker is expected to recommend treatment only if she is absolutely certain of the diagnosis.
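In symbols, with γ = 0 the threshold reduces to the EUT value and the three cases follow directly:

```latex
p_t\big|_{\gamma=0} \;=\; \frac{H_{II}}{B_{II}+H_{II}} \;\longrightarrow\;
\begin{cases}
\;0,   & B_{II} \gg H_{II} \quad (\text{treat virtually everyone})\\
\;0.5, & B_{II} = H_{II}\\
\;1,   & B_{II} \ll H_{II} \quad (\text{treat only when certain})
\end{cases}
```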
At the other extreme, if γ = 1, we have the pure system I model (corresponding to model #3 above when system II fails to intervene and decision-making relies solely on system I processes). Note also that the second fraction in Equation 6 (Additional file 1: Appendix) equals one when B_I/H_I = 1, i.e., when B_I = H_I; under these conditions, it is fairly obvious that the system I assessments become irrelevant, as the perceived net benefit of the treatment equals the perceived net harm. When γ = 1, regret avoidance becomes the key motivator, not EUT’s benefits and harms. Note that in system I the probability p plays no role in the valuation (Equation 1). Under these circumstances only system I operates and the analytical processes of system II are suppressed (Equation 1), as seen in decision-makers who tend to follow intuition only, or who are extremely affected by their past experiences without considering new facts on the ground. That is, differences in probability do not play any role in such decisions, because a person who uses only system I does not consider probability as a factor.
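Under the reconstructed dual valuation, the pure system I case reduces the value of treatment to its system I component, so the decision turns only on the sign of B_I − H_I, independently of the probability of disease:

```latex
V(\mathrm{Rx})\big|_{\gamma=1} \;=\; k\,\frac{B_I - H_I}{2}
\quad\Longrightarrow\quad
\text{treat if } B_I > H_I;\;\; \text{do not treat if } B_I < H_I
```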
Finally, if γ = 0.5, the decision-maker is motivated both by EUT and by regret avoidance (model #2 listed above). In this case, the benefits (B_II), harms (H_II), and regrets of omission (B_I) and commission (H_I) are all active players. These three cases are presented in Table 1 (see Additional file 2), which shows threshold probabilities for γ = 0.5 and objective data indicating a high benefit/harm ratio. It also shows how the threshold probability depends on individual risk perception. If H_I >> H_II, the system I term magnifies the effect of the B_I/H_I ratio (see Equation 3), which results in extreme behavior in the sense of increasing the likelihood that such a person will either always accept (as p_t < 0) or always reject treatment (as p_t > 1). For H_I << H_II, the impact of the way system I processes benefits and harms is less pronounced and influences the EUT threshold to a much smaller extent.
Illustrative medical examples
Clinical examples abound to illustrate the applicability of our model. To illustrate its salient points, we chose two prototypical examples in which there are close trade-offs between treatments’ benefits and harms.
Example #1: treatment of pulmonary embolism
Pulmonary embolism (PE) (a blood clot in the lungs) is an important clinical problem that can lead to significant morbidity and death. Even though many diagnostic imaging tests exist to aid in the accurate diagnosis of PE, the tests are often inconclusive, and physicians are left to face the decision whether to treat the patient for presumptive PE, or to attribute the patient’s clinical presentation (such as shortness of breath and/or chest pain) to other possible etiologies. There exists an effective treatment for PE, which consists of the administration of two anticoagulants (blood thinners): heparin followed by an oral anticoagulant such as warfarin [46, 47]. Heparins (unfractionated or low-molecular-weight) are highly effective treatments associated with a relative risk reduction of death from PE of 70-90% in comparison with no treatment [46, 47]. This converts into an absolute reduction in death of B_II = 17.5% to 22.5% (calculated as 25% mortality without heparin minus 7.5% to 2.5% with heparin) [17, 18, 46, 47]. However, these drugs are also associated with a significant risk of life-threatening bleeding; net harms range from H_II = 0.37% (a typical scenario) to 5% (a worst-case scenario) depending on the patient’s other comorbid conditions [17, 18, 47, 48]. Thus, the net benefit/net harm ratio ranges from 60.8 (22.5/0.37) (best case) to 3.5 (17.5/5) (worst case). If we apply the classic EUT threshold [13, 14, 36], which relies solely on system II processes, the probability of pulmonary embolism above which the physician should administer anticoagulants ranges from 1.6% (best case) to 22.2% (worst case). However, ample clinical experience has demonstrated that few clinicians would consider prescribing anticoagulants at such a low probability of PE. In fact, most experts in the field recommend giving anticoagulants when the probability of PE exceeds 95% [49–51].
We have previously suggested that this is because regret associated with administering unnecessary and potentially harmful treatment under these circumstances likely outweighs regret associated with failing to administer potentially beneficial anticoagulants [17–19]. We now show how this argument can be made in the context of dual processing theory. Indeed, some physicians may feel that the risk of bleeding is much higher, particularly in the case of a patient who recently experienced major hemorrhage. The physician may not have data readily available to adjust her EUT, system II-based calculations. Rather, she employs system I-based reasoning, globally assessing the benefits and harms of the treatments at her disposal. Importantly, these are personal, intuitive, affect-based, subjective judgments of the values of outcomes that are influenced by memory limitations and recent experiences and that may not be objectively based on the external evidence [2, 30–33]. In addition, it is well documented that the physician’s recent experience leads to a type of bias, known as the recency effect, that is governed by system I [2, 33]. If the last patient with PE whom the physician took care of had severe bleeding, system I may be primed in such a way that it will likely conclude that harms outweigh benefits. In our case of PE, if her reasoning is dominated by system I (operating, say, at a γ level of 0.77 according to model #2 listed above; see Section “The behavior of DSM-M model”) in such a way that the physician concludes that harms are larger than benefits by 10%, then the threshold probability above which she will treat her patient suspected of PE exceeds 95% [as is easily demonstrated by plugging the benefit/harm values into Equation 3)]. Note that this calculation describes circumstances under which the physician would adhere to the contemporary practice guidelines, i.e., to prescribe anticoagulants when the probability of PE exceeds 95% [49–51].
It should be further noted that if the γ value is only slightly higher (≥0.78), the physician will require absolute certainty to act (i.e., the threshold ≥ 1).
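These PE numbers can be checked directly. The sketch below assumes the additive threshold form p_t = H_II/(B_II+H_II) − [γ/(1−γ)]·k(B_I−H_I)/(2(B_II+H_II)) with k = 1, and illustrative inputs of our choosing: B_II = 0.175, H_II = 0.0037 (the typical-scenario harm), and a system I judgment that harms exceed benefits by 10 percentage points (B_I = 0.10, H_I = 0.20); these particular values are assumptions for illustration, not quantities fixed by the text.

```python
def dual_threshold(b2, h2, b1, h1, gamma, k=1.0):
    """Illustrative dual-processing threshold: EUT term plus a system I shift."""
    eut = h2 / (b2 + h2)
    shift = (gamma / (1.0 - gamma)) * k * (b1 - h1) / (2.0 * (b2 + h2))
    return eut - shift

# Classic EUT thresholds for PE (gamma = 0, system II only)
best = dual_threshold(0.225, 0.0037, 0.0, 0.0, 0.0)   # ~0.016 (1.6%, best case)
worst = dual_threshold(0.175, 0.05, 0.0, 0.0, 0.0)    # ~0.222 (22.2%, worst case)

# System I primed against treatment: H_I - B_I = 0.10
pt_77 = dual_threshold(0.175, 0.0037, 0.10, 0.20, 0.77)  # exceeds 0.95
pt_78 = dual_threshold(0.175, 0.0037, 0.10, 0.20, 0.78)  # exceeds 1: never treat
```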
Example #2: treatment of acute leukemia
Acute myeloid leukemia (AML) is a life-threatening disease which, depending on its aggressiveness, can be cured in a substantial minority of patients. To achieve a cure, patients are typically given induction chemotherapy to bring the disease into remission, after which another form of intensive therapy, so-called consolidation treatment, is given. To achieve a cure in patients with a more aggressive course of disease, such as those classified as intermediate- or poor-risk AML based on cytogenetic features, allogeneic stem cell transplantation (alloSCT) is recommended. However, the cure is not without a price: many patients given alloSCT as consolidation therapy die due to the treatment. The decision dilemma faced by the physician is whether to recommend alloSCT or an alternative treatment, such as chemotherapy or autologous SCT (autoSCT), which has a lower cure rate but less treatment-related mortality. In intermediate-risk AML, for example, credible evidence shows that, compared with chemotherapy, alloSCT results in better leukemia-free survival (LFS) by at least 12% at 4 years (LFS 53% with alloSCT vs. 41% with chemotherapy/autoSCT). Treatment-related mortality is much higher with alloSCT, by 16% on average (19% with alloSCT vs. 3% with chemotherapy/autoSCT). This means that, based on objective data and using the rational EUT model, we should recommend alloSCT for any probability of AML relapse ≥57.1%. Therefore, treatment benefits and harms are, on average, very close. Because of this, the driving force behind a recommendation for alloSCT is the physician’s estimate of the patient’s ability to tolerate it: if she assesses that the patient will not be able to tolerate alloSCT, the physician will not recommend transplant. Conversely, if she thinks that the patient will be able to tolerate alloSCT, the physician will recommend it.
Although there are objective criteria to evaluate a patient’s eligibility for transplant, the assessment to a large extent depends on the physician’s judgment and experience. That is, the assessment of a patient’s eligibility for transplant depends both on the objective data on benefits and harms (system II ingredients) and on an intuitive, gist-type judgment (characteristic of system I). As discussed above, system I does not conduct precise calculations. Rather, it relies on “gist” or on simple heuristics such as those based on prominent numbers (e.g., 1, 2, 5, 10, 20, etc.). The physician, therefore, adjusts the threshold up or down based on her intuitive calculations. For instance, it is often the case that a physician whose patient recently died during transplant is more reluctant to recommend the procedure, even to patients who otherwise seem fit for it. In doing so, the physician in fact shifts her dual system threshold upwards. In our example, let us assume that the physician judges that the harms of alloSCT for a given patient are twice as large as reported in the studies in which patients were carefully selected for transplant. That, in our case, would mean that mortality due to alloSCT is 32% (instead of 16%). We can now plug these numbers into Equation 3) (B_II = 0.12, H_II = 0.16, B_I = 0.12, H_I = 0.32).
Note that the physician can make this judgment at various levels of activation of system I. If the decision is predominantly driven by system I, then our physician’s threshold according to Equation 3) is greater than 100% in all circumstances in which the γ value exceeds 55%. That means that, under these circumstances of system I activation, the physician will never recommend transplant. The opposite can occur for physicians whose experience is not affected by poor patient outcomes. Under such circumstances, the physician may judge the patient to be in such good condition that she re-adjusts the reported treatment-related transplant risk to half of the risk observed in the published clinical studies (i.e., 8%). The new numbers required to determine the threshold according to Equation 3 are: B_II = 0.12, H_II = 0.16, B_I = 0.12, H_I = 0.08. If the physician relies excessively on system I, as often seen in busy clinics where decisions are routinely made on “automatic pilot”, the dual threshold drops to zero (for all γ > 89%). That means that the physician will recommend alloSCT to all of her patients under these circumstances.
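The leukemia example can be reproduced the same way, again assuming the additive threshold form p_t = H_II/(B_II+H_II) − [γ/(1−γ)]·k(B_I−H_I)/(2(B_II+H_II)) with k = 1 and the values given in the text (B_II = 0.12, H_II = 0.16, B_I = 0.12, and H_I doubled to 0.32 or halved to 0.08):

```python
def dual_threshold(b2, h2, b1, h1, gamma, k=1.0):
    """Illustrative dual-processing threshold: EUT term plus a system I shift."""
    eut = h2 / (b2 + h2)
    shift = (gamma / (1.0 - gamma)) * k * (b1 - h1) / (2.0 * (b2 + h2))
    return eut - shift

eut = dual_threshold(0.12, 0.16, 0.12, 0.16, 0.0)         # 0.16/0.28 ~ 57.1%
pessimist = dual_threshold(0.12, 0.16, 0.12, 0.32, 0.56)  # > 1: never transplant
optimist = dual_threshold(0.12, 0.16, 0.12, 0.08, 0.90)   # < 0: always transplant
```

The crossover points fall where the text places them: the pessimistic threshold passes 1 just above γ = 0.55, and the optimistic threshold passes 0 just below γ = 0.89.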
As discussed above, we provide precise calculations only to illustrate the logic of decision-making. The process should be understood more as a semi-quantitative or qualitative description of clinical decision-making. Although Equation 3) currently allows entry of almost any value for benefits and harms, it is probably the case that benefits and harms as perceived by system I are based on the “1/10 aspiration level”, so that only values of 1, 2, 5, 10, 20, etc. should be allowed. This is, however, an empirical question that should be answered in further experimental testing; therefore, at this time, we decided not to impose exact boundaries on the values for benefits and harms that can be entered into Equation 3 (see Discussion). Note also that these calculations are decision-maker specific; although we illustrate them from the perspective of the physician, the same approach applies to the patient, who ultimately has to agree, based on her own dual cognitive processing, on the suggested course of treatment.
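If one wished to restrict system I inputs to the “1/10 aspiration level”, a rounding rule to prominent numbers (1, 2, or 5 times a power of 10) could be used; the helper below is purely a hypothetical sketch of such a rule, not part of the model itself:

```python
import math

def prominent_round(x):
    """Round a positive value to the nearest prominent number
    (1, 2, or 5 times a power of 10), e.g., 60.8 -> 50."""
    if x <= 0:
        raise ValueError("x must be positive")
    e = math.floor(math.log10(x))
    # Candidate prominent numbers spanning the neighboring decades
    candidates = [m * 10.0 ** k for k in (e - 1, e, e + 1) for m in (1, 2, 5)]
    return min(candidates, key=lambda c: abs(c - x))
```

For example, the best-case benefit/harm ratio of 60.8 in the pulmonary embolism example would be perceived by system I as roughly 50.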
Models of medical decision-making belong to two general classes: descriptive and prescriptive. The former, which the DSM-M exemplifies, attempt to explain why decision-makers take or might take certain actions when presented with the challenging decision problems abundant in contemporary medicine. The latter, exemplified by the normative therapeutic threshold models [13, 14], prescribe the choice options that a rational decision-maker should take. We have defined the first formal dual-processing theory of medical decision-making by taking into consideration both the deliberative and the experiential aspects that characterize many of the critical decisions physicians face in practice. Mathematically, our model represents an extension of Mukherjee’s additive Dual System Model to the clinical situations where a physician faces frequent dilemmas: whether to treat a patient who may or may not have the disease, or whether to choose one treatment over another for prevention of a disease that is yet to occur. Our model is unique in that it incorporates an exemplar of strong emotion, decision regret, as one of the important components of system I functioning. We focused on regret because previous research has shown that people often violate EUT-prescribed choice options in an effort to minimize anticipated regret [1, 2, 20]. Although we use the more common psychological term “regret,” the concept is analogous to Feinstein’s term “chagrin”. In fact, explicit consideration of post-choice regret in decision-making has been considered an essential element of any serious theory of choice and certainly dominates many clinical decisions [1, 2, 20]. We also reformulated the original model using the threshold concept, a fundamental approach in medical decision-making [13, 14, 36]. The threshold concept represents a linchpin between evidence (which presents on a continuum of credibility) and decision-making, which is a categorical exercise (as choice options are either selected or not) [13, 14, 36].
Using an example such as pulmonary embolism, we have shown how the extended model can explain deviations from outcomes predicted by EUT and account for the variation in the management of pulmonary embolism. In general, it is possible that the huge practice variation well documented in contemporary medicine [56–61] is due, in part, to individual differences in subjective judgments of disease prevalence and in the “thresholds” at which physicians act [17, 18, 62]. This may be because quantitative interpretations of qualitative descriptors such as rarely, unlikely, possible, or likely differ markedly among individuals, and hence “gist” representations of a given clinical situation can vary widely among different physicians. We are, of course, aware that many other factors contribute to variation in patient care, including the structure of local care organizations, the availability of medical technologies, financial incentives, etc. Our intent in this article is to highlight yet another important factor: individual differences in risk assessment as shaped by the different mechanisms operating within a dual process model of human cognitive functioning.
It is interesting to examine the circumstances under which one always treats (pt ≤ 0) or never treats (pt ≥ 1). Equation 1 (Additional file 2: Table S1) shows that when objective evidence indicates that benefits outweigh harms, and when this is further augmented by the decision-maker's risk attitude in such a way that it magnifies system I's valuation of benefits and harms, then we can expect to continue to witness further overtreatment in clinical practice (as pt drops to zero). However, when the decision-maker perceives the benefits as smaller than the harms, the threshold increases; consequently, the decision-maker will require higher diagnostic certainty before acting (Figure 3 & Figure 4). This may occur during extrapolation of research results from group averages to individual patients, when empirical evidence about BII and HII is considered unreliable, when the decision-maker is risk averse, or when his or her cognitive processes are biased through the distorting effects of recent experience, memory limitations, or other forms of bias well described in the literature [2, 31, 33]. This discussion illustrates how the “rationality of action” may require a re-definition, one encompassing both the formal principles of probability theory and human intuitions about good decisions [5, 68]. Our goal here is not to demonstrate that one approach is conclusively superior to the other; we are merely outlining the differences in current physicians' behavior from the perspective of dual processing theory.
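The qualitative behavior described above can be illustrated with a minimal sketch. The `affect` scaling factor below is a hypothetical stand-in for the model's system I terms, not the paper's Equation 1; the sketch shows only the direction of the effect: amplifying the perceived benefit-to-harm ratio pushes the threshold toward 0 (overtreatment), while attenuating it pushes the threshold toward 1 (undertreatment).

```python
# Hedged illustration, not the paper's Equation 1: system I is modeled
# as a single multiplicative distortion (`affect`) of the perceived
# benefit before the classical threshold p_t = 1 / (1 + B/H) is applied.
def threshold(benefit, harm, affect=1.0):
    """Treatment threshold with a system I scaling of perceived benefit."""
    perceived_benefit = affect * benefit  # affect > 1: benefits loom larger
    return 1.0 / (1.0 + perceived_benefit / harm)

base = threshold(10, 1)             # system II alone: ~0.091
amplified = threshold(10, 1, 5.0)   # benefits loom large: ~0.020
attenuated = threshold(10, 1, 0.2)  # harms loom large: ~0.333
print(base, amplified, attenuated)
```

The ordering amplified < base < attenuated mirrors the overtreatment and undertreatment regimes discussed above: the same objective evidence yields very different action thresholds depending on how system I distorts the perceived trade-off.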
Despite the growing recognition of the importance of dual processing for decision-making [2, 5], few formal models have been developed to capture the essence of the way we make decisions. Because different authors focus on different aspects of a multitude of decision-making processes, Evans has recently pointed out that the many dual processing theories fall into two main groups [27, 40]: parallel competitive theories and default-interventionist theories. While the exact accounts of cognitive processes differ between these two groups of theories, as discussed above (Section The behavior of DSM-M Model), we provide, for the first time, a platform, albeit a theoretical one, for reconciling parallel competitive theories with default-interventionist theories.
Nevertheless, our main goal is to define a theoretical model for medical decision-making; such a model may enable the creation of new theoretical frameworks for future empirical research. Future research obviously involves extending the model described herein to more complex clinical situations beyond the relatively simple two-alternative situation, even if the latter is frequently encountered in practice. Particularly interesting will be the extension of our dual processing model to include the use of diagnostic tests, as the number of new diagnostic technologies continues to explode. Finally, and most importantly, the model presented here needs empirical verification. This limitation is not unique to our model, however, and the same criticism can be leveled against most current medical decision-making models, which are rarely, if ever, subjected to empirical verification.
Our model relies heavily on Mukherjee's model and is accurate to the extent that his additive dual processing model is correct (Figure 1, Equations 1 & 2). Note also that we have extended Mukherjee's DSM model by omitting his scaling constant k and using general utility expressions rather than a single-parameter monotonic power function. As discussed above, many factors can activate the switch to system II. In fact, Kahneman warns that “because you have little direct knowledge of what goes on in your mind, you will never know that you might have made a different judgment or reached a different decision under very slightly different circumstances”. This implies that the multiple factors affecting the gamma parameter cannot be directly modeled. A possible solution (and an area for future research building on the psychological “fuzzy-trace theory”) would be to employ a fuzzy logic model to assess the values of γ (and the threshold) as a function of multiple fuzzy inputs.
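The additive structure the paper builds on (after Mukherjee 2010) can be sketched as a γ-weighted blend of the two systems' valuations; the valuation inputs below are placeholders, not the paper's exact utility expressions:

```python
# Sketch of the additive dual-system structure (after Mukherjee 2010):
# overall valuation is a gamma-weighted blend of a system I (affective)
# valuation and a system II (deliberative) valuation. The valuation
# values themselves are placeholder inputs, not the paper's exact forms.
def blended_value(v_system_i, v_system_ii, gamma):
    """gamma = 1: pure system I; gamma = 0: pure system II."""
    if not 0.0 <= gamma <= 1.0:
        raise ValueError("gamma must lie in [0, 1]")
    return gamma * v_system_i + (1.0 - gamma) * v_system_ii

# With gamma = 0 the blend reduces to the system II (EUT) valuation:
print(blended_value(v_system_i=-50.0, v_system_ii=8.0, gamma=0.0))  # 8.0
```

Because γ cannot be directly modeled, as noted above, any empirical use of such a blend would have to treat γ itself as an output of further (possibly fuzzy) estimation rather than a known constant.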
The complexity described here notwithstanding, we believe that empirical verification of our current dual processing model is feasible. Even without direct modeling of all factors affecting the γ parameter, our model generates empirically falsifiable qualitative predictions, as it clearly identifies the circumstances under which the decision threshold increases or decreases as a function of activation of system I (the γ parameter). Using simulation to imitate various real-life decision-making scenarios offers the most logical avenue toward the first empirical tests of our model.
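A minimal Monte Carlo sketch of the kind of simulation proposed above: a population of simulated physicians shares the same system II (EUT) threshold but varies in γ (system I activation) and in an idiosyncratic system I pull, producing a spread of action thresholds from identical evidence. The distributions and the blending rule are illustrative assumptions only:

```python
# Illustrative simulation sketch: identical evidence, heterogeneous
# gamma and system I bias -> a spread of treatment thresholds around
# the shared EUT threshold. All distributions here are assumptions.
import random

def simulate_thresholds(n=1000, benefit=10.0, harm=1.0, seed=42):
    rng = random.Random(seed)
    eut = 1.0 / (1.0 + benefit / harm)   # shared system II threshold
    thresholds = []
    for _ in range(n):
        gamma = rng.random()             # degree of system I activation
        bias = rng.uniform(-0.08, 0.08)  # idiosyncratic system I pull
        system_i = max(0.0, min(1.0, eut + bias))
        thresholds.append((1.0 - gamma) * eut + gamma * system_i)
    return eut, thresholds

eut, ts = simulate_thresholds()
print(f"EUT threshold: {eut:.3f}, simulated range: {min(ts):.3f}-{max(ts):.3f}")
```

Comparing such simulated threshold distributions against observed practice variation would be one concrete way to generate the falsifiable predictions discussed above.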
Our model also holds promise in medical education. As highlighted in the Introduction, modern knowledge of cognition has taught us that most people, including physicians, process information using both system I (fast, intuitive) and system II (slow, deliberative) reasoning at different times, but few investigators have examined how to teach physicians to integrate both modes of reasoning in arriving at therapeutic strategies. On the diagnostic side, many investigators [6, 71] have examined clinical reasoning and proposed how experienced physicians move between system I and system II, although most early papers used different terminology. The integration of system I and system II in therapeutic decision making in medicine has been less well examined. A number of investigators have proposed approaches to using and teaching system II reasoning, including the use of decision models. Although this is taught in some schools, it has not yet taken medical education by storm. In the field of economic analysis, Mukherjee has proposed a theoretical means of combining system I and system II reasoning. In this paper, we build on Mukherjee's work and show how the integration of system I and system II therapeutic reasoning can form a basis for teaching students and experienced physicians to recognize and integrate both modes of reasoning. Our model uniquely captures the most salient features of (medical) decision-making, which can be effectively employed for didactic purposes. It is believed that by recognizing the separate role of system II and the influence of system I mechanisms on the way we make decisions, we will be in a better position to harness both types of processes toward a better practice of making clinical decisions [2, 9].
We hope that our model will stimulate new lines of empirical and theoretical work in medical decision-making. In summary, we have described the first dual processing model of medical decision-making, which has the potential to enrich a field currently dominated by expected utility theory.
We thank Dr Shira Elqayam of De Montfort University, Leicester, UK for her most helpful comments, and in particular for introducing us to the notion of parallel competitive vs. default-interventionist dual processing theories and for pointing out how our model can help reconcile these two competing theoretical frameworks.
Presented as a poster at: 14th Biennial European Conference of the Society for Medical Decision Making (SMDM Europe 2012) Oslo, Norway, June 10–12, 2012.
Supported by the US DoA grant #W81 XWH 09-2-0175 (PI Djulbegovic).
- Kahneman D: Maps of bounded rationality: psychology for behavioral economics. Am Econ Rev. 2003, 93: 1449-1475. 10.1257/000282803322655392.
- Kahneman D: Thinking, fast and slow. 2011, Farrar, Straus and Giroux, New York.
- Evans JSTBT: Hypothetical thinking. Dual processes in reasoning and judgement. 2007, Psychology Press: Taylor and Francis Group, New York.
- Stanovich KE, West RF: Individual differences in reasoning: implications for the rationality debate?. Behav Brain Sci. 2000, 23: 645-726. 10.1017/S0140525X00003435.
- Stanovich KE: Rationality and the reflective mind. 2011, Oxford University Press, Oxford.
- Croskerry P: Clinical cognition and diagnostic error: applications of a dual process model of reasoning. Adv Health Sci Educ Theory Pract. 2009, 14 (Suppl 1): 27-35.
- Croskerry P: A universal model of diagnostic reasoning. Acad Med. 2009, 84 (8): 1022-1028. 10.1097/ACM.0b013e3181ace703.
- Croskerry P, Abbass A, Wu AW: Emotional influences in patient safety. J Patient Saf. 2010, 6 (4): 199-205. 10.1097/PTS.0b013e3181f6c01a.
- Croskerry P, Nimmo GR: Better clinical decision making and reducing diagnostic error. J R Coll Physicians Edinb. 2011, 41 (2): 155-162. 10.4997/JRCPE.2011.208.
- Slovic P, Finucane ML, Peters E, MacGregor DG: Risk as analysis and risk as feelings: some thoughts about affect, reason, risk, and rationality. Risk Anal. 2004, 24 (2): 311-322. 10.1111/j.0272-4332.2004.00433.x.
- Mukherjee K: A dual system model of preferences under risk. Psychol Rev. 2010, 117 (1): 243-255.
- Djulbegovic B, Hozo I, Lyman GH: Linking evidence-based medicine therapeutic summary measures to clinical decision analysis. MedGenMed. 2000, 2 (1): E6.
- Pauker S, Kassirer J: Therapeutic decision making: a cost benefit analysis. N Engl J Med. 1975, 293: 229-234. 10.1056/NEJM197507312930505.
- Pauker SG, Kassirer J: The threshold approach to clinical decision making. N Engl J Med. 1980, 302: 1109-1117. 10.1056/NEJM198005153022003.
- Djulbegovic B, Desoky AH: Equation and nomogram for calculation of testing and treatment thresholds. Med Decis Making. 1996, 16 (2): 198-199. 10.1177/0272989X9601600215.
- Djulbegovic B, Hozo I, Schwartz A, McMasters K: Acceptable regret in medical decision making. Med Hypotheses. 1999, 53: 253-259. 10.1054/mehy.1998.0020.
- Hozo I, Djulbegovic B: When is diagnostic testing inappropriate or irrational? Acceptable regret approach. Med Decis Making. 2008, 28 (4): 540-553. 10.1177/0272989X08315249.
- Hozo I, Djulbegovic B: Will insistence on practicing medicine according to expected utility theory lead to an increase in diagnostic testing?. Med Decis Making. 2009, 29: 320-322. 10.1177/0272989X09334370.
- Hozo I, Djulbegovic B: Clarification and corrections of acceptable regret model. Med Decis Making. 2009, 29: 323-324.
- Kahneman D, Wakker PP, Sarin RK: Back to Bentham? Explorations of experienced utility. Q J Econ. 1997, 112: 375-405. 10.1162/003355397555235.
- Berwick DM, Hackbarth AD: Eliminating waste in US health care. JAMA. 2012, 307 (14): 1513-1516. 10.1001/jama.2012.362.
- Manchikanti L, Falco FJ, Boswell MV, Hirsch JA: Facts, fallacies, and politics of comparative effectiveness research: part 2 - implications for interventional pain management. Pain Physician. 2010, 13 (1): E55-E79.
- Manchikanti L, Falco FJ, Boswell MV, Hirsch JA: Facts, fallacies, and politics of comparative effectiveness research: part I. Basic considerations. Pain Physician. 2010, 13 (1): E23-E54.
- Hsee CK, Rottenstreich Y: Music, pandas and muggers: on the affective psychology of value. J Exp Psychol. 2004, 133: 23-30.
- Rottenstreich Y, Hsee CK: Money, kisses, and electric shock: on the affective psychology of risk. Psychol Sci. 2001, 12: 185-190. 10.1111/1467-9280.00334.
- Evans JSTBT: Thinking twice. Two minds in one brain. 2010, Oxford University Press, Oxford.
- Evans JSTBT: Dual-process theories of reasoning: contemporary issues and developmental applications. Dev Rev. 2011, 31: 86-102. 10.1016/j.dr.2011.07.007.
- Edwards W, Miles R, von Winterfeldt D: Advances in decision analysis. From foundations to applications. 2007, Cambridge University Press, New York.
- Simon HA: Information processing models of cognition. Annu Rev Psychol. 1979, 30: 263-296.
- Reyna VF, Brainerd CJ: Dual processes in decision making and developmental neuroscience: a fuzzy-trace model. Dev Rev. 2011, 31 (2–3): 180-206.
- Reyna VF, Hamilton AJ: The importance of memory in informed consent for surgical risk. Med Decis Making. 2001, 21 (2): 152-155. 10.1177/0272989X0102100209.
- Kahneman D, Tversky A: The psychology of preferences. Sci Am. 1982, 246: 160-173. 10.1038/scientificamerican0182-160.
- Tversky A, Kahneman D: Judgment under uncertainty: heuristics and biases. Science. 1974, 185: 1124-1131. 10.1126/science.185.4157.1124.
- Djulbegovic B, Hozo II, Fields KK, Sullivan D: High-dose chemotherapy in the adjuvant treatment of breast cancer: benefit/risk analysis. Cancer Control. 1998, 5 (5): 394-405.
- Djulbegovic B, Hozo I: When should potentially false research findings be considered acceptable?. PLoS Med. 2007, 4 (2): e26. 10.1371/journal.pmed.0040026.
- Djulbegovic B, Hozo I: Linking evidence-based medicine to clinical decision analysis. Med Decis Making. 1998, 18: 464 (abstract).
- Zeelenberg M, Pieters R: A theory of regret regulation 1.0. J Consumer Psychol. 2007, 17: 3-18. 10.1207/s15327663jcp1701_3.
- Zeelenberg M, Pieters R: A theory of regret regulation 1.1. J Consumer Psychol. 2007, 17: 29-35. 10.1207/s15327663jcp1701_6.
- Tsalatsanis A, Hozo I, Vickers A, Djulbegovic B: A regret theory approach to decision curve analysis: a novel method for eliciting decision makers' preferences and decision-making. BMC Med Inform Decis Mak. 2010, 10 (1): 51. 10.1186/1472-6947-10-51.
- Evans JSTBT: On the resolution of conflict in dual process theories of reasoning. Think Reasoning. 2007, 13 (4): 321-339. 10.1080/13546780601008825.
- Hammond KR: Human judgment and social policy. Irreducible uncertainty, inevitable error, unavoidable injustice. 1996, Oxford University Press, Oxford.
- Duhigg C: The power of habit: why we do what we do in life and business. 2012, Random House, New York.
- Thompson VA, Prowse Turner JA, Pennycook G: Intuition, reason, and metacognition. Cogn Psychol. 2011, 63: 107-140. 10.1016/j.cogpsych.2011.06.001.
- Brandstatter E, Gigerenzer G: The priority heuristic: making choices without trade-offs. Psychol Rev. 2006, 113: 409-432.
- Sox HC: Better care for patients with suspected pulmonary embolism. Ann Intern Med. 2006, 144 (3): 210-212.
- Barritt DW, Jordan SC: Anticoagulant drugs in the treatment of pulmonary embolism. A controlled trial. Lancet. 1960, 1: 1309-1312.
- Segal JB, Eng J, Jenckes MW, Tamariz LJ, Bolger DT, Krishnan JA, Streiff MB, Harris KA, Feuerstein CJ, Bass EB: Diagnosis and treatment of deep venous thrombosis and pulmonary embolism. AHRQ Publication No 03-E016. 2003, Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services, Washington, DC.
- Linkins L-A, Choi PT, Douketis JD: Clinical impact of bleeding in patients taking oral anticoagulant therapy for venous thromboembolism: a meta-analysis. Ann Intern Med. 2003, 139 (11): 893-900.
- Roy PM, Durieux P, Gillaizeau F, Legall C, Armand-Perroux A, Martino L, Hachelaf M, Dubart AE, Schmidt J, Cristiano M: A computerized handheld decision-support system to improve pulmonary embolism diagnosis: a randomized trial. Ann Intern Med. 2009, 151 (10): 677-686.
- Roy P-M, Colombet I, Durieux P, Chatellier G, Sors H, Meyer G: Systematic review and meta-analysis of strategies for the diagnosis of suspected pulmonary embolism. BMJ. 2005, 331 (7511): 259. 10.1136/bmj.331.7511.259.
- Hull RD: Diagnosing pulmonary embolism with improved certainty and simplicity. JAMA. 2006, 295 (2): 213-215. 10.1001/jama.295.2.213.
- Koreth J, Schlenk R, Kopecky KJ, Honda S, Sierra J, Djulbegovic BJ, Wadleigh M, DeAngelo DJ, Stone RM, Sakamaki H: Allogeneic stem cell transplantation for acute myeloid leukemia in first complete remission: systematic review and meta-analysis of prospective clinical trials. JAMA. 2009, 301 (22): 2349-2361. 10.1001/jama.2009.813.
- Cornelissen JJ, Van Putten WL, Verdonck LF, Theobald M, Jacky E, Daenen SM, van Marwijk Kooy M, Wijermans P, Schouten H, Huijgens PC: Results of a HOVON/SAKK donor versus no-donor analysis of myeloablative HLA-identical sibling stem cell transplantation in first remission acute myeloid leukemia in young and middle-aged adults: benefits for whom?. Blood. 2007, 109 (9): 3658-3666. 10.1182/blood-2006-06-025627.
- Djulbegovic B: Principles of reasoning and decision-making. Decision Making in Oncology: Evidence-based Management. Edited by: Djulbegovic B, Sullivan DS. 1997, Churchill Livingstone, Inc, New York, 1-14.
- Feinstein AR: The 'chagrin factor' and qualitative decision analysis. Arch Intern Med. 1985, 145: 1257-1259. 10.1001/archinte.1985.00360070137023.
- Detsky AS: Regional variation in medical care. N Engl J Med. 1995, 333: 589-590.
- Dilts DM: Practice variation: the Achilles' heel in quality cancer care. J Clin Oncol. 2005, 23 (25): 5881-5882. 10.1200/JCO.2005.05.034.
- Eddy DM: Variations in physician practice: the role of uncertainty. Health Aff. 1984, 3 (2): 74-89. 10.1377/hlthaff.3.2.74.
- Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL: The implications of regional variations in Medicare spending. Part 2: health outcomes and satisfaction with care. Ann Intern Med. 2003, 138 (4): 288-298.
- Sirovich BE, Gottlieb DJ, Welch HG, Fisher ES: Regional variations in health care intensity and physician perceptions of quality of care. Ann Intern Med. 2006, 144 (9): 641-649.
- Zhang Y, Baicker K, Newhouse JP: Geographic variation in Medicare drug spending. N Engl J Med. 2010, 363 (5): 405-409. 10.1056/NEJMp1004872.
- Hozo I, Djulbegovic B: Explaining variation in practice: acceptable regret approach. 2006, 28th Annual Meeting of the Society for Medical Decision Making, Boston.
- Shaw NJ, Dear PR: How do parents of babies interpret qualitative expressions of probability?. Arch Dis Child. 1990, 65 (5): 520-523. 10.1136/adc.65.5.520.
- Reyna VF: Theories of medical decision making and health: an evidence-based approach. Med Decis Making. 2008, 28 (6): 829-833. 10.1177/0272989X08327069.
- Djulbegovic B, Paul A: From efficacy to effectiveness in the face of uncertainty: indication creep and prevention creep. JAMA. 2011, 305 (19): 2005-2006. 10.1001/jama.2011.650.
- Ofri D: The emotional epidemiology of H1N1 influenza vaccination. N Engl J Med. 2009, 361 (27): 2594-2595. 10.1056/NEJMp0911047.
- Poland GA: The 2009–2010 influenza pandemic: effects on pandemic and seasonal vaccine uptake and lessons learned for seasonal vaccination campaigns. Vaccine. 2010, 28 (Suppl 4): D3-D13.
- Krantz DH, Kunreuther HC: Goals and plans in decision making. Judgment and Decision Making. 2007, 2 (3): 137-168.
- Zimmermann HJ: Fuzzy set theory and its applications. 3rd edition. 1996, Kluwer, Boston.
- Society for Simulation in Healthcare: http://ssih.org/about-simulation (last accessed: August 27, 2012).
- Kassirer JP, Kopelman RI: Learning clinical reasoning. 1991, Williams & Wilkins, Baltimore.
- The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6947/12/94/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.