A regret theory approach to decision curve analysis: A novel method for eliciting decision makers' preferences and decision-making
© Tsalatsanis et al; licensee BioMed Central Ltd. 2010
Received: 23 July 2010
Accepted: 16 September 2010
Published: 16 September 2010
Decision curve analysis (DCA) has been proposed as an alternative method for the evaluation of diagnostic tests, prediction models, and molecular markers. However, DCA is based on expected utility theory, which decision makers routinely violate. Decision-making is governed by intuition (system 1) and by an analytical, deliberative process (system 2); rational decision-making should therefore reflect both formal principles of rationality and intuition about good decisions. We use the cognitive emotion of regret to serve as a link between systems 1 and 2 and to reformulate DCA.
First, we analysed a classic decision tree describing three decision alternatives: treat, do not treat, and treat according to a prediction model. We then computed the expected regret for each of these alternatives as the difference between the utility of the action taken and the utility of the action that, in retrospect, should have been taken. For each pair of strategies, we measured the difference in net expected regret. Finally, we employed the concept of acceptable regret to identify the circumstances under which a potentially wrong strategy is tolerable to a decision maker.
We developed a novel dual visual analog scale to describe the relationship between the regret associated with "omissions" (e.g. failure to treat) vs. "commissions" (e.g. administering unnecessary treatment) and the decision maker's preferences as expressed in terms of threshold probability. We then proved that the Net Expected Regret Difference (NERD), first presented in this paper, is equivalent to the net benefit described in the original DCA. Based on the concept of acceptable regret, we identified the circumstances under which a decision maker tolerates a potentially wrong decision and expressed them in terms of the probability of disease.
We present a novel method for eliciting decision makers' preferences and an alternative derivation of DCA based on regret theory. Our approach may be intuitively more appealing to a decision maker, particularly in clinical situations where the best management option is the one associated with the least amount of regret (e.g. diagnosis and treatment of advanced cancer).
Decision making is often governed by uncertainty that inevitably affects the overall decision process. In their efforts to model uncertainty, decision theorists have proposed many methodologies, most of them based on statistics and probability [1–4], information theory and entropy, or possibilistic approaches such as fuzzy logic [6, 7].
In clinical medical research, much effort has been invested in developing decision support systems for the diagnosis and treatment of various clinical conditions, such as management of infectious diseases in an intensive care unit, chronic prostatitis, or liver surgery [8–12], to name a few examples. Most of these systems are based on probabilistic prediction models. Even though prediction models have been shown to be generally superior and potentially complementary to physicians' prognostications [13–15], historically they have not fulfilled decision makers' expectations of helping improve decision-making. One reason for this is that most probabilistic medical decision support systems are based on expected utility theory, which humans often violate [14, 16, 17]. In addition, most models in medicine do not incorporate decision makers' preferences, which, along with reliable evidence, are key to rational decision-making [18–20].
The goal of this paper is to develop a novel decision-making approach that incorporates the decision maker's attitudes towards multiple treatment strategies. This goal is addressed through three specific aims. First, we deviate from traditional expected utility theory in an attempt to satisfy both formal criteria of rationality and human intuition about good decisions [18–22]: because regret is a cognitive emotion that combines rationality and intuition, both key elements of decision-making [22, 23], we employ regret theory to develop a novel methodology for eliciting decision makers' personal preferences. Second, we reformulate decision curve analysis (DCA) [24, 25] from the regret theory point of view, to evaluate alternative treatment strategies and to integrate evidence on prognosis and treatment with the decision maker's attitudes and preferences [26–28]. Finally, we identify circumstances under which a decision maker tolerates a wrong decision.
To implement our approach, we first compute the threshold probability at which the decision maker is indifferent between alternative actions, based on the level of regret he/she might feel after making a wrong decision. We then employ the regret-based DCA to identify the optimal strategy for a particular decision maker; the optimal strategy is the one that brings the least regret if it proves, in retrospect, to be wrong. We also show how to employ a prediction model to estimate the probability of disease for a patient and contrast it with the decision maker's threshold probability. Finally, we incorporate the concept of acceptable regret in the decision process to identify the conditions under which the decision maker tolerates a potentially wrong decision.
Decision analysis based on regret theory
In Figure 1, p = P(D+) is the probability associated with the presence of the disease as estimated by a prediction model; 1 - p = P(D-) is the probability associated with the absence of the disease; and U i , i ∈ {1, 2, 3, 4}, are the utilities corresponding to each outcome. For example, U 1 is the utility of administering treatment to a patient who has the disease (e.g. treating when necessary), and U 2 is the utility of administering treatment to a patient who does not have the disease (e.g. administering unnecessary treatment). Note that we use the term "treatment" in the generic sense of a health care intervention, which may indicate a therapy, procedure, or diagnostic test.
The probabilistic nature of prognostication models significantly complicates the decision process. For example, if a prediction model estimates the probability of a patient having a disease to be 40%, it is unclear whether this patient should receive treatment or not. A solution from the point of view of classical decision theory is to employ the concept of the threshold probability P t , which is defined as the probability at which the decision maker is indifferent between two strategies (e.g. administer treatment or not) [27, 29, 30]. Based on the threshold concept, the patient should be treated if p ≥ P t and should not be treated otherwise.
However, in most cases decisions are made under uncertainty and can never be 100% accurate [23, 26, 28, 31–34]. Thus, after a decision has been made, one may discover that another alternative would have been preferable. This knowledge may bring a sense of loss or regret to the decision maker [23, 26, 28, 31–34]. Regret can be particularly strong when the consequences of wrong decisions are life threatening or seriously influence the quality of the patient's life.
Formally, regret can be expressed as the difference between the utility of the outcome of the action taken and the utility of the outcome of the action that, in retrospect, should have been taken [23, 26, 28, 31–34]. Regret can be felt by any party involved in the decision-making process (e.g. patients receiving treatment, patient's proxies or physicians administering treatment). For the rest of this paper we assume that the decision maker is the treating physician.
We first employ regret theory to estimate the threshold probability, P t , at which the physician is indifferent between alternative management strategies (e.g. administer treatment or not). In order to accomplish this, we describe regret in terms of the errors of (1) not treating the patient who has the disease, and (2) treating the patient who does not have the disease.
Equation 1, P t = 1/(1 + (U 1 - U 3)/(U 4 - U 2)), effectively captures the preferences of the decision maker towards administering or not administering treatment. At the individual level, equation 1 shows how the threshold probability relates to the way the decision maker weighs false negative (i.e. failing to provide necessary treatment) vs. false positive (i.e. administering unnecessary treatment) results [24, 25].
Note that the fraction (U 1 - U 3)/(U 4 - U 2) is undefined for U 4 - U 2 = 0, which means that in this situation there is no regret associated with administering unnecessary treatment. Under these circumstances, P t = 0%, indicating that treatment is justified at any probability of disease, since a treatment carrying no regret of commission can always be given.
Elicitation of threshold probability
There are numerous techniques for eliciting the decision maker's preferences regarding treatment administration. None of them has been proven better than the others. We argue that any attempt to measure people's preferences and risk attitudes should be derived from an underlying theory of decision-making that can be applied to the problem, or class of problems, at hand. We approach elicitation of preferences by capturing people's (e.g. physicians') attitudes through threshold probabilities. Normatively, a threshold probability reflects indifference between two alternative management strategies.
There are a few commonly used methods to assess the value of this indifference for a decision maker, such as the standard gamble and the time trade-off [35–37]. The problem is that both the standard gamble and the time trade-off are time-consuming, cognitively complex, and have been shown to lead to biased estimates of people's preferences [36, 37]. An alternative is to use rating scales, such as visual analog scales (VAS), which are considerably easier to administer and better understood by participants. The problem with analog scales, however, is that they cannot capture health state trade-offs [36, 37].
The proposed method retains the simplicity of VAS but it takes into account the consequences of possible mistakes in decision-making by utilizing two visual analog scales. The first scale aims to assess the regret associated with potential error of failing to administer beneficial treatment ("regret of omission"). The second scale measures the regret of administration of unnecessary treatment ("regret of commission"). Using these two scales we can capture trade-offs and compute the threshold probability at which a decision maker is indifferent between two alternative management strategies.
We employed two visual analog scales with the typical 100-point range [35–37], anchored by no regret and maximal regret. This is modeled after pain assessment scales, which are bounded by the maximum possible pain a person can experience. Accordingly, we can elicit threshold probabilities by asking the physician to weigh the regret associated with wrong decisions (e.g. giving unnecessary treatment vs. failing to administer necessary treatment) using a numerical (0 to 100) scale. The questions may be narrowly defined and related to specific outcomes (e.g. survival/mortality, heart attack, etc.). We should, however, note that most treatments are associated with multiple dimensions, some good and some bad. This is a fundamental reason why no universally accepted method for assessing decision makers' preferences has been developed so far: it is very difficult, if not impossible, to accurately determine the trade-offs across multiple outcomes that can be permuted in a number of ways. A solution to this problem is to capture the decision maker's global or "holistic" perception of treatment. By asking questions about trade-offs in this way, we directly address both cognitive mechanisms of the decision process, intuitive and deliberative. This, in turn, can lead to more accurate assessment of the decision maker's preferences.
For example, to elicit the physician's threshold probability, we may ask the following questions:
1. On a scale of 0 to 100, where 0 indicates no regret and 100 indicates the maximum regret you could feel, how would you rate the level of your regret if you failed to provide necessary treatment to your patient (i.e. did not give treatment that, in retrospect, you should have given)? [Note that the answer to this question corresponds to the (U 1 - U 3) expression in equation 1.]
2. On a scale of 0 to 100, where 0 indicates no regret and 100 indicates the maximum regret you could feel, how would you rate the level of your regret if you had administered unnecessary treatment to your patient (i.e. administered treatment that, in retrospect, should not have been given)? [Note that the answer to this question corresponds to the (U 4 - U 2) expression in equation 1.]
For example, if the physician rated the regret of omission at 100 and the regret of commission at 50, equation 1 would yield P t = 1/(1 + 100/50) ≈ 33%. The physician would thus be unsure as to whether or not to treat the patient if the patient's probability of disease as computed by the prediction model was 33%. The recommended action, which is based on elicitation of the decision maker's preferences, is therefore directly derived from the underlying theoretical model.
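The threshold computation can be sketched in code. This is a minimal illustration of equation 1; the function and argument names are ours, and the inputs are assumed to be ratings from the 0-100 dual visual analog scale described above.

```python
def threshold_probability(regret_omission, regret_commission):
    """Threshold probability per equation 1.

    regret_omission   ~ (U1 - U3): regret of failing to give needed treatment.
    regret_commission ~ (U4 - U2): regret of giving unnecessary treatment.
    Both are ratings on the 0-100 dual visual analog scale.
    """
    if regret_commission == 0:
        # No regret of commission: treatment is justified at any probability.
        return 0.0
    return 1.0 / (1.0 + regret_omission / regret_commission)

# A physician rating regret of omission at 100 and regret of commission
# at 50 is indifferent at a disease probability of about 33%.
p_t = threshold_probability(100, 50)
```

A rating pair of 50 vs. 10 likewise yields a threshold of about 17%, illustrating that only the ratio of the two regrets matters.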
Regret based decision curve analysis (DCA)
Decision makers may be presented with many alternative strategies that can be difficult to model. A simple, yet powerful, approach based on the experience of a typical practicing physician is to compare the strategy based on modeling with the scenarios in which all patients, or none, are treated. That is, the clinical alternatives to the prediction model strategy are to assume that all patients have the disease and thus treat them all, or to assume that no patient has the disease and thus treat none. In this case, the clinical dilemma a physician faces when considering treatment is threefold: (1) treat all patients ("treat all"), (2) treat no patients ("treat none"), or (3) use a prediction model and treat a patient if p ≥ P t ("model").
The optimal decision depends on the preferences of the decision maker as captured by the threshold probability. We use decision curve analysis (DCA) [24, 25] to identify the range of threshold probabilities over which each strategy ("treat all", "treat none", and "model") is of value. Traditional DCA uses the (net expected) benefit associated with each strategy to recommend the best strategy [24, 25]. In this work, we consider the optimal strategy to be the one that brings the least regret should it prove, in retrospect, to be wrong.
Here, FN (probability of false negatives) represents the conditional probability P(p < P t | D+) of not treating the patient who has the disease.
FP (probability of false positives) is the conditional probability P(p ≥ P t | D-) of treating the patient who does not have the disease.
TP = 1 - FN = P(p ≥ P t | D+) (probability of true positives) is the probability of treating the patient who has the disease.
TN = 1 - FP = P(p < P t | D-) (probability of true negatives) is the probability of not treating the patient who does not have the disease.
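Given a sample of model probabilities and observed disease status, these conditional probabilities can be estimated empirically under the treat-if-p ≥ P t rule. A minimal sketch (the function and argument names are ours):

```python
def error_rates(probs, disease, p_t):
    """Estimate FN = P(p < p_t | D+) and FP = P(p >= p_t | D-)
    from model probabilities and observed disease status, together
    with their complements TP = 1 - FN and TN = 1 - FP."""
    diseased = [p for p, d in zip(probs, disease) if d]
    healthy = [p for p, d in zip(probs, disease) if not d]
    fn = sum(p < p_t for p in diseased) / len(diseased)
    fp = sum(p >= p_t for p in healthy) / len(healthy)
    return {"FN": fn, "FP": fp, "TP": 1 - fn, "TN": 1 - fp}
```

For instance, with three diseased patients at probabilities 0.9, 0.8, 0.2 and three healthy patients at 0.6, 0.1, 0.3, a threshold of 0.5 gives FN = FP = 1/3.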
Note that these are exactly the same formulas as those derived by Vickers and Elkin [24, 25], who employed the expected-utility model in "decision curve analysis" (DCA). The regret-based derivation, however, is mathematically more parsimonious: the original DCA formulation required several mathematical manipulations, making the simplicity of the regret approach more attractive. In addition, as argued throughout the manuscript, the regret formulation may have additional decision-theoretical advantages, as it enables experiencing the consequences of decisions both at the emotional (system 1) and cognitive (system 2) level [23, 40].
Equations 10 and 11 above are useful when calculating NERD as a function of P t . The probabilities P(p ≥ P t ∩ D+), P(p ≥ P t ∩ D-), P(p < P t ∩ D+), and P(p < P t ∩ D-) are estimated as follows:
P(p ≥ P t ∩ D+) ≈ #TP/n, the proportion of patients who have the disease and for whom the prognostic probability is greater than or equal to P t (where #TP is the number of patients with true positive results and n is the total number of patients in the study).
P(p ≥ P t ∩ D-) ≈ #FP/n, the proportion of patients who do not have the disease and for whom the prognostic probability of disease is greater than or equal to P t (where #FP is the number of patients with false positive results).
P(p < P t ∩ D+) ≈ #FN/n, the proportion of patients who have the disease and for whom the prognostic probability of disease is less than P t (where #FN is the number of patients with false negative results).
P(p < P t ∩ D-) ≈ #TN/n, the proportion of patients who do not have the disease and for whom the prognostic probability of disease is less than P t (where #TN is the number of patients with true negative results).
When computing NERD[Treat none, Treat all], the "treat all" strategy treats every patient as if diseased; thus #TP is the number of patients who actually have the disease and #FP is the number of patients who do not have the disease but are given treatment. On the other hand, when computing NERD[Treat none, Model] from equation 10 and NERD[Treat all, Model] from equation 11, #TP, #FP, #TN, and #FN are computed for each threshold probability, assuming that a patient has the disease if the prognostic probability is greater than or equal to the threshold probability, and does not have the disease otherwise.
1. Select a value for the threshold probability.
2. Assuming that patients should be treated if p ≥ P t and should not be treated otherwise, compute #TP and #FP for the prediction model.
3. Calculate NERD(Treat none, Model) using equation 10.
4. Calculate NERD(Treat all, Model) using equation 11.
5. Compute NERD(Treat none, Treat all) using equation 10, where #TP is the number of patients having the disease and #FP is the number of patients without the disease who received treatment.
6. Repeat steps 1-5 for a range of threshold probabilities.
7. Graph each NERD calculated in steps 3-5 against each threshold probability.
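The steps above can be sketched as follows. Because the paper proves NERD equivalent, up to a positive scaling factor, to the net benefit difference of the original DCA, this sketch computes each NERD as a difference of net benefits (NB = TP/n - FP/n x P t /(1 - P t )); the helper names are ours, and the sketch illustrates the structure of the algorithm rather than reproducing equations 10 and 11 exactly.

```python
def net_benefit(n_tp, n_fp, n, p_t):
    # Net benefit as in the original DCA: TP/n - FP/n * Pt/(1 - Pt).
    return n_tp / n - n_fp / n * p_t / (1.0 - p_t)

def regret_dca(probs, disease, thresholds):
    """Steps 1-7: for each threshold, classify patients by p >= p_t,
    count #TP and #FP, and compute the three pairwise NERDs, where
    NERD(A, B) > 0 means strategy B is preferred to strategy A."""
    n = len(probs)
    n_diseased = sum(disease)
    curves = []
    for p_t in thresholds:                                            # steps 1, 6
        n_tp = sum(d and p >= p_t for p, d in zip(probs, disease))    # step 2
        n_fp = sum((not d) and p >= p_t for p, d in zip(probs, disease))
        nb_model = net_benefit(n_tp, n_fp, n, p_t)
        nb_all = net_benefit(n_diseased, n - n_diseased, n, p_t)
        nb_none = 0.0  # treating no one gains and loses nothing
        curves.append({
            "p_t": p_t,
            "NERD(none, model)": nb_model - nb_none,                  # step 3
            "NERD(all, model)": nb_model - nb_all,                    # step 4
            "NERD(none, all)": nb_all - nb_none,                      # step 5
        })
    return curves  # step 7: plot each NERD against p_t
```

Each returned row corresponds to one threshold; plotting the three NERD columns against p_t reproduces the regret decision curves.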
Based on the regret DCA methodology, the optimal decision at each threshold probability is derived by comparing each pair of strategies through their corresponding NERDs, using the transitivity principle (i.e., if A > B and B > C, then A > C). Thus, if NERD(strategy 1, strategy 2) > 0 and NERD(strategy 2, strategy 3) > 0, then strategy 2 is better than strategy 1 and strategy 3 is better than strategy 2; by transitivity, strategy 3 is the optimal strategy.
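At a fixed threshold, this pairwise comparison amounts to selecting the strategy that no rival strategy beats. A small helper of our own (not from the paper) makes the selection explicit:

```python
def optimal_strategy(pairwise_nerd):
    """pairwise_nerd maps (A, B) -> NERD(A, B), where a positive value
    means B is preferred to A. Returns the single strategy no rival
    beats, or None in the case of a tie or cycle."""
    strategies = {s for pair in pairwise_nerd for s in pair}
    beaten = set()
    for (a, b), value in pairwise_nerd.items():
        if value > 0:
            beaten.add(a)   # B preferred: A is dominated
        elif value < 0:
            beaten.add(b)   # A preferred: B is dominated
    winners = strategies - beaten
    return winners.pop() if len(winners) == 1 else None
```

For example, if the model beats both "treat none" and "treat all", and "treat all" beats "treat none", the function returns the model strategy.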
No decision model can guarantee that the recommended strategy will be the correct one. Therefore, we can always make a mistake: recommend treatment we should not have, or fail to recommend treatment we should have administered. However, there are situations where the regret resulting from a wrong decision will be tolerable. These situations are best described by the notion of acceptable regret [26, 28, 31]. Formally, acceptable regret, Rg 0 , is defined as the portion of utility a decision maker is willing to lose or sacrifice when he/she adheres to a decision that may prove wrong [26, 28, 31, 32]. For example, a physician may regret administering unnecessary treatment to a patient, but he/she can "still live with" the consequences of this decision if he/she judges them to be trivial or inconsequential.
The quantity P treat none = r b (equation 21) represents the prognostic probability below which the physician would comfortably withhold treatment that may, in retrospect, prove beneficial.
Note that equations 20 and 21 express acceptable regret in terms of probabilities while equations 17-19 define it in terms of NERD. Hence, the outputs of these equations are not the same; rather, they complement each other.
Elicitation of acceptable regret
In most cases the decision maker does not have a complete understanding of benefits lost or harms inflicted and cannot assign a precise number to them. For this reason, we do not suggest inquiring directly about the value of r. Instead, we propose eliciting r through the decision-maker's responses to specific clinical scenarios. For example, we propose the following approach:
Assume that you have 100 patients with the same probability of disease as the patient you are currently treating. You need to decide whether each of these patients should receive treatment or not. Since no prediction model is 100% accurate, it is expected that you will make some mistakes in your treatment recommendations (e.g. you may recommend treatment to a patient who does not need it, or fail to recommend treatment to a patient who needs it).
1. We are now interested in knowing your tolerance of administering unnecessary treatment, i.e. we want to learn the magnitude of the unavoidable error you can live with when inflicting potentially harmful treatment on a patient. Note that if you say that your acceptable regret is zero, this means that you can only make a decision if you are absolutely certain that your recommendation is correct.
Out of the (100 - y) patients who should not have received treatment, how many patients would you tolerate treating? (The answer is used to compute r h .)
2. We are also interested in knowing your tolerance of failing to provide necessary treatment, i.e. we want to learn the magnitude of the unavoidable error you can live with when forgoing potentially beneficial treatment. Note that if you say that your acceptable regret is zero, this means that you can only make a decision if you are absolutely certain that your recommendation is correct.
Out of the (100 - x) patients who should have been treated, how many patients would you tolerate not treating? (The answer is used to compute r b .)
It is unnecessary to ask the decision maker to answer both questions. We suggest asking only the question related to the recommendation the physician is about to make: if the recommendation is about administering treatment, the decision maker should be asked the second question, while if it is about not giving treatment, he/she should be asked the first question.
The value of acceptable regret is plotted in the regret DCA graph to visually facilitate the decision making process. At a specific threshold probability all strategies for which |NERD| ≤ Rg 0 are considered equivalent in regret, according to the definition in the previous section.
We will employ a prostate cancer biopsy example to demonstrate the applicability of our approach. Prostate cancer biopsy is an invasive and uncomfortable procedure, which can be painful and is associated with a risk of infection. However, it is often necessary for diagnosis of prostate cancer, one of the leading causes of cancer death in men.
Men are typically biopsied for prostate cancer if they have an elevated level of prostate-specific antigen (PSA). However, most men with a high PSA do not have prostate cancer. This has led to the idea that statistical models based on multiple predictors (PSA, age, family history, other markers) might be used to predict biopsy outcomes and hence aid biopsy decisions for individual patients. A physician seeing a patient with an elevated PSA has three possible options: biopsy the patient, forgo biopsy, or look up the patient's probability in a statistical model and then make a decision.
NERD(biopsy none, model) > 0; therefore, the model is preferred to the strategy biopsy none.
NERD(biopsy none, biopsy all) > 0; therefore, the strategy biopsy all is preferred to the strategy biopsy none.
NERD(biopsy all, model) > 0; therefore, the model is preferred to the strategy biopsy all.
Repeating the same procedure for all threshold probabilities, we can see that deciding based on the statistical model is the optimal strategy (i.e. results in the minimum expected regret) for threshold probabilities between 8% and 43%. For threshold probabilities between 43% and 95%, the optimal strategy is to biopsy no patients, while for 0% to 8% both the model and biopsy all strategies are optimal.
To interpret these results, we have to consider how a typical physician values the harms of a false negative (missing a cancer) and a false positive (an unnecessary biopsy) result. If the regret associated with an unnecessary biopsy is felt to be worse than that of missing a cancer, then according to equation 1 the threshold probability is greater than 50%. However, it is unlikely that a physician would consider an unnecessary biopsy to be worse than missing a cancer, so the threshold probability for biopsy must be less than 50%. Thus, a reasonable range of threshold probabilities might indeed be 8% - 43%, as suggested by our model. As the model is superior across this entire range, we can conclude that, irrespective of the physician's exact preferences, making a biopsy decision based on the statistical model will lead to lower expected regret than an alternative such as biopsying all or no men. Based on discussions with clinicians, we believe that a reasonable range of threshold probabilities is 10% - 40%. As the regret associated with the model strategy is lowest across this entire range, we can recommend use of the model. Nonetheless, we do not have a complete sample of all physician preferences, and it is possible that a physician may have a threshold probability outside this range.
When |NERD(biopsy none, model)| ≤ Rg 0 , the strategies "biopsy none" (biopsy no patients) and "model" are equivalent in regret. Therefore, the prediction model does not offer any better information and can be disregarded.
This section describes the overall decision process regarding prostate cancer biopsy. The process begins with elicitation of the threshold probability from the treating physician and continues with evaluation of the available strategies based on regret DCA (Figure 4). Then, if necessary, the probability of cancer based on the available prognostic model is computed and contrasted with the threshold probability. Finally, the concept of acceptable regret is employed to arrive at the strategy that is most tolerable to the decision maker, who always faces the possibility of making wrong decisions. For the remainder of this section, text in normal font corresponds to the authors' comments, text in bold and underlined font corresponds to questions to and answers from the physician, respectively, and text in italics contains notes to the reader. We demonstrate the applicability of our approach using hypothetical answers from two physicians.
Interview with the physician to elicit his/her threshold probability.
On the scale 0 to 100, where 0 indicates no regret and 100 indicates the maximum regret you could feel, how would you rate your level of regret if you failed to provide necessary treatment?
Physician #1: 50, Physician #2: 70. This value corresponds to U 1 - U 3 from equation 1.
On the scale 0 to 100, where 0 indicates no regret and 100 indicates the maximum regret you could feel, how would you rate your level of regret if you administered unnecessary treatment?
Physician #1: 10, Physician #2: 60. This value corresponds to U 4 - U 2 from equation 1. Combined with the regret-of-omission ratings, equation 1 yields P t ≈ 16% for Physician #1 and P t ≈ 46% for Physician #2.
Using the graph in Figure 4, identify the optimal strategy for the computed threshold probability.
NERD(biopsy all, model) > 0: the strategy "model" is better than the strategy "biopsy all".
NERD(biopsy none, model) > 0: the strategy "model" is better than the strategy "biopsy none".
NERD(biopsy none, biopsy all) > 0: the strategy "biopsy all" is better than "biopsy none".
Therefore, the optimal strategy is the "model" which corresponds to biopsy based on the probability of cancer predicted by the statistical model. The next step is to compute the patient's probability of cancer and contrast it with the threshold probability.
Compute the cancer probability for the specific patient based on the statistical model.
If the cancer probability is greater than or equal to the threshold probability, then the physician should biopsy the patient.
If the cancer probability is less than the threshold probability, then the physician should not biopsy the patient.
Elicitation of the level of acceptable regret.
Assume that you have 100 patients, all with probability of cancer equal to 20% (the same as your patient). This means that out of 100 patients, 20 patients will have cancer while 80 will not have cancer. You need to decide whether each of these patients should undergo biopsy or not. Since no prediction model is 100% accurate, it is expected that you will make some mistakes in your recommendations (e.g. you may recommend biopsy to a patient who does not need it, or fail to recommend biopsy to a patient who may need it).
a. The physician considers biopsy (Physician #1):
Out of the 20 patients who should be biopsied, for how many patients would you tolerate not recommending a necessary biopsy? 1.
This answer corresponds to r b = 1/20 = 0.05 and an acceptable regret Rg b = r b (U 1 - U 3) = 0.05 × 0.5 = 0.025. The optimal strategy at P t = 16% is to use the statistical model (Figure 4). For P t = 16% and Rg b = 0.025, all NERDs are greater than the acceptable regret; thus the optimal strategy remains the statistical model.
b. The physician does not consider biopsy (Physician #2):
Out of the 80 patients who should not undergo biopsy, for how many patients would you tolerate recommending an unnecessary biopsy? 40.
The answer provided by Physician #2 corresponds to r h = 40/80 = 0.5 and an acceptable regret Rg h = r h (U 4 - U 2) = 0.5 × 0.6 = 0.3.
The optimal strategy for P t = 46% is to biopsy no patients (Figure 4). Also, for P t = 46% and Rg h = 0.3, we have: |NERD(biopsy none, biopsy all)| = |-0.639| > Rg h , |NERD(biopsy none, model)| = |-0.003| < Rg h , and |NERD(biopsy all, model)| = 0.6364 > Rg h . This means that the strategies "biopsy none" and "model" are equivalent in regret. In practical terms, no additional effort is justified in using the statistical model.
Physician #1 considers recommending biopsy to his/her patient. Based on equation 21, the physician would tolerate not recommending a biopsy for any prognostic probability below P treat none = r b = 5%.
Physician #2 considers not recommending biopsy to his/her patient. Based on equation 20, the decision maker would tolerate recommending an unnecessary biopsy for any prognostic probability above P treat all = 1 - r h = 50%.
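The two acceptable-regret bounds quoted above, P treat none = r b (equation 21) and P treat all = 1 - r h (equation 20), can be computed directly from the physicians' answers; the function and argument names below are ours.

```python
def tolerable_probability_bounds(r_b=None, r_h=None):
    """P_treat_none = r_b is the prognostic probability below which
    withholding treatment is tolerable; P_treat_all = 1 - r_h is the
    probability above which unnecessary treatment is tolerable."""
    bounds = {}
    if r_b is not None:
        bounds["P_treat_none"] = r_b
    if r_h is not None:
        bounds["P_treat_all"] = 1.0 - r_h
    return bounds

# Physician #1 tolerates missing 1 of the 20 patients who need biopsy:
bounds_1 = tolerable_probability_bounds(r_b=1 / 20)   # P_treat_none = 0.05
# Physician #2 tolerates 40 unnecessary biopsies out of 80:
bounds_2 = tolerable_probability_bounds(r_h=40 / 80)  # P_treat_all = 0.5
```

These reproduce the 5% and 50% bounds quoted for the two physicians.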
Currently, there is no agreed-upon method for eliciting preferences regarding multiple objectives that typically go in opposite directions (i.e. most medical interventions are associated with both benefits and harms). We have presented and demonstrated an approach to decision making based on regret theory and decision curve analysis. The approach presented in this paper relies on the concept of the threshold probability, at which a decision maker is indifferent between strategies, to suggest the optimal decision [27, 29, 30]. Unlike the approaches described in the classic threshold papers [27, 29, 30], our approach is based on the notion that the value of the threshold probability is clearly subjective and depends on the personal preferences of the decision maker. We elicit threshold probabilities based on the regret one may feel if the chosen strategy is proven, in retrospect, to be wrong. Although the approach can be narrowed down to specific medical outcomes, we believe that eliciting preferences in a global, holistic way is more useful if our approach is to be used in actual practice.
We believe that the model described here has a direct practical application in overcoming many difficulties related to linking evidence with patients' preferences to arrive at the optimal decision, issues that have plagued the field of decision-making. The problem of eliciting preferences and integrating them into a coherent decision is not a simple one. We argue that the approach we are advocating here represents a contribution to the field of decision making, but it should not be seen as a panacea for medical decision making. We anticipate that our methodology will be suitable primarily for medical decisions associated with trade-offs between quality and quantity of life.
Over the last couple of decades, many attempts have been made to develop the best method for taking these considerations into account in real-life settings. Unfortunately, as explained, no approach has succeeded. We believe that the reason for this is that most approaches to eliciting decision makers' preferences, and to helping improve decision-making, have relied on a rational framework based on expected utility theory. However, modern cognitive theories (within the so-called dual-processing theory) have convincingly demonstrated that human decisions rely both on intuition (system 1) and on an analytical, deliberative process (system 2) in balancing risks and benefits in the decision-making process [22, 40, 45]. We believe that rational decision-making should take into account both formal principles of rationality and human intuition about good decisions [46, 47]. The key is to preserve a rational framework while allowing anticipation of the effect of decisions on emotions (and avoiding biases associated with intuitive thinking). One way to accomplish this is to use the cognitive emotion of regret to serve as a link between system 1 (i.e. the intuitive system) and system 2 (i.e. the deliberative, analytical cognitive system). By anticipating the consequences of our actions and the circumstances under which we can live with our mistakes, we bring together both aspects of cognition, which may lead to better and more satisfactory decision-making.
Specifically, we argue that eliciting people's preferences using regret theory may be superior to using traditional utility theory because regret forces decision-makers to explicitly consider the consequences of their decisions. We have previously shown that we can always make errors in decision-making: recommend treatment that does not work, or fail to recommend treatment that does. Therefore, we reformulated DCA from the point of view of regret theory. Furthermore, it has been shown that expected utility theory is often violated in order to minimize anticipated regret [33, 34]. In addition, there is substantial evidence that medical decision making aims to minimize the regret associated with wrong decisions [48–50].
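The equivalence between regret minimization and the original DCA can be illustrated numerically. The sketch below (illustrative values and a hypothetical `net_benefit` helper, not code from this paper) uses the standard net-benefit formula of the original DCA, NB = TP/n − (FP/n) × p_t/(1 − p_t); because NERD is equivalent to the difference in net benefits, the strategy with the higher net benefit is also the one with the lower expected regret for a given threshold probability.

```python
def net_benefit(tp, fp, n, p_t):
    """Net benefit of a treatment strategy at threshold probability p_t
    (the original DCA formula; equivalent to NERD up to a scaling factor)."""
    return tp / n - (fp / n) * p_t / (1 - p_t)

# Illustrative cohort: n = 1000 patients, 100 with disease, threshold p_t = 0.2.
# A prediction model flags 80 diseased patients (TP) and 180 healthy ones (FP).
nb_model = net_benefit(tp=80, fp=180, n=1000, p_t=0.2)       # 0.08 - 0.18*0.25 = 0.035
nb_treat_all = net_benefit(tp=100, fp=900, n=1000, p_t=0.2)  # 0.10 - 0.90*0.25 = -0.125
nb_treat_none = 0.0  # no one treated: no true or false positives

# The strategy with the highest net benefit carries the least expected regret
# for a decision maker with this threshold probability.
best = max(("model", nb_model), ("treat all", nb_treat_all),
           ("treat none", nb_treat_none), key=lambda s: s[1])
```

With these numbers the model-based strategy dominates both default strategies, which is exactly the comparison a regret-based DCA plot displays across the range of threshold probabilities.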
Moreover, while descriptive, normative, and prescriptive theories tend to evaluate individual outcomes, the approach presented here evaluates all of the outcomes in a holistic manner. Our approach is consistent with Reyna's "gist" or "fuzzy-trace" theory, in which the decision-maker characterizes the gist of each outcome to arrive at a given decision. For example, consider a decision maker who is provided with a list of harms and benefits associated with each decision, as currently recommended by practice guidelines panels. In traditional theories, the decision maker evaluates a treatment strategy by reasoning about each of the harms and benefits associated with that strategy. This, as discussed above, would require integrating multiple outcomes that often pull in different directions, typically within a limited time frame. Given the complexity of these decisions, this approach overwhelms the decision maker, as our cognitive capacity is limited. The regret DCA methodology instead quantifies the decision maker's global attitude towards a specific strategy without requiring separate reasoning about each harm and benefit. This holistic assessment occurs within the dual-processing cognitive system, which collectively evaluates the harms and benefits associated with each treatment alternative. By assessing trade-offs through both cognitive mechanisms, intuitive and deliberative, we believe that we can assess decision makers' preferences more accurately.
In general, since our method relies on the elicitation of threshold probability, we recommend using it for every patient. Because every patient's values differ, the threshold probability should indeed be patient-specific. For example, a physician may act "aggressively" for a young patient who is the father of two underage children and less aggressively for an older patient. In the cancer biopsy example, however, most patients are expected to present with similar characteristics, so most physicians would converge on a narrow range of threshold probabilities; in such a case, repeating the elicitation process for every patient would be impractical. Nevertheless, this is an empirical question worthy of further investigation, as alluded to above.
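To make the elicitation concrete, the following sketch (our illustration, not code from this paper) converts two marks on a dual visual analog scale, one for regret of omission (failure to treat) and one for regret of commission (unnecessary treatment), into a patient-specific threshold probability, assuming the classic threshold relation P_t/(1 − P_t) = harms/benefits of Pauker and Kassirer restated in regret terms.

```python
def threshold_probability(regret_omission, regret_commission):
    """Map dual visual analog scale ratings (e.g. on a 0-100 scale) to a
    threshold probability, assuming the classic relation
    P_t = Rg(commission) / (Rg(commission) + Rg(omission))."""
    return regret_commission / (regret_commission + regret_omission)

# A decision maker who rates failure to treat (omission) at 80/100 and
# unnecessary treatment (commission) at 20/100 acts "aggressively":
p_t_aggressive = threshold_probability(80, 20)    # 0.2 -> treat once p > 0.2
# Reversed ratings yield a conservative threshold:
p_t_conservative = threshold_probability(20, 80)  # 0.8
```

The more strongly a decision maker dreads failing to treat relative to treating unnecessarily, the lower the resulting threshold probability, which matches the "aggressive" physician behavior described above.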
Our approach may help reconcile formal principles of rationality and human intuitions about good decisions that may better reflect "rationality" in medical decision-making [21, 32, 46, 47]. We hope that our theoretical work will stimulate empirical testing of the concepts outlined in this paper. Toward this end, we are currently working on developing a prescriptive computerized decision-support system to facilitate the application of the model described herein. Such a system is expected to be user friendly with built-in automatic manipulation of the complex calculations that may be off-putting to many users. We hope to report on testing of our system in the near future.
We have presented a decision-making methodology that relies on regret theory and decision curve analysis to assist physicians in choosing between appropriate health care interventions. Our methodology uses the cognitive emotion of regret to determine the decision maker's preferences towards the available strategies, and DCA to suggest the optimal decision for that decision maker. We believe that our approach is suitable for clinical situations in which the best management option is the one associated with the least amount of regret (e.g. diagnosis and treatment of advanced cancer).
As with any other novel theoretical work, our approach has its limitations. First, it has not been empirically tested in a clinical setting; however, we are in the process of developing the appropriate decision support tools to bring our model into clinical practice and to evaluate its usefulness with actual physicians and patients. Second, the methodology presented is appropriate for single-point decision making; further investigation is required to determine how regret theory applies to decisions that recur over time. Finally, we assume that only one decision maker is involved in the decision process. Our plan for future work includes extending the methodology to shared decision-making, involving both physician and patient, and investigating whether in practice there is a difference between the preferences and choices made by physicians and those made by their patients.
We propose a novel method for eliciting decision makers' preferences towards treatment administration. In contrast to traditional methodologies for eliciting preferences, our method considers the consequences of potential mistakes in decisions. We propose a dual visual analog scale to capture errors of omission and errors of commission and, therefore, to evaluate the trade-offs associated with each of the available strategies.
We have reformulated DCA from the point of view of regret theory. Our approach is intuitively more appealing to a decision maker and should facilitate decision making, particularly in those clinical situations when the best management option is the one associated with the least amount of regret.
Finally, we utilize the concept of acceptable regret to identify the circumstances under which a decision maker tolerates a wrong decision.
We envision facilitation of the decision process in clinical settings through a computerized decision support system available at the point of care. In fact, we are in the process of developing such a system and hope to report about it soon.
- DCA: Decision Curve Analysis
- NERD: Net Expected Regret Difference
- VAS: Visual Analog Scale
- p: Prognostic probability
- Pt: Threshold probability
- D+/D-: The patient has/does not have the disease
- Ui: Utility corresponding to outcome i
- Rgx: Regret associated with the action x
- Rx+/Rx-: Treatment is/is not administered
- U1 - U3: Consequences of not administering treatment where indicated
- ERg: Expected regret associated with an action
- #TP, #TN, #FP, #FN: Number of TP, TN, FP, FN patients
- n: Number of patients
- NERD(action 1, action 2): Net expected regret difference between actions 1 and 2
- Rg0: Acceptable regret
- Rgb: Acceptable regret as defined in terms of losses in benefits due to forgoing treatment
- Rgh: Acceptable regret as defined in terms of harms due to undergoing unnecessary treatment
- rb/rh: Percentages of the benefits/harms a decision maker is willing to lose/incur in case of a wrong decision
- Ptreat all: The prognostic probability above which the decision maker would tolerate recommending unnecessary treatment
- Ptreat none: The prognostic probability below which the decision maker would tolerate not recommending treatment
This work is supported by the Department of Army grant #W81 XWH 09-2-0175.
- Edwards W, Miles RFJ, von Winterfeldt D: Advances in decision analysis: From foundations to applications. 2007, New York: Cambridge University Press
- Lindley D: Making decisions. 1985, New York: Wiley, 2
- Greenland S: Probability logic and probabilistic induction. Epidemiology. 1998, 9: 322-332.
- Greenland S: Bayesian interpretation and analysis of research results. Seminars in Hematology. 2008, 45 (3): 141-149.
- Shannon C, Weaver W: The mathematical theory of communication. 1962, Urbana: The University of Illinois Press
- Zimmermann H: Fuzzy set theory and its applications. 1996, Boston: Kluwer Academic Press, 3
- Zimmermann H: An application-oriented view of modelling uncertainty. European Journal of Operational Research. 2000, 122: 190-198.
- Schurink CAM, Lucas PJF, Hoepelman IM, Bonten MJM: Computer-assisted decision support for the diagnosis and treatment of infectious diseases in intensive care units. The Lancet Infectious Diseases. 2005, 5 (5): 305-312.
- Hansen C, Zidowitz S, Hindennach M, Schenk A, Hahn H, Peitgen HO: Interactive determination of robust safety margins for oncologic liver surgery. International Journal of Computer Assisted Radiology and Surgery. 2009, 4 (5): 469-474.
- Bratchikov OP, Korenevsii NA, Seregin SP, Dolzhenkov SD, Shumakova EA, Kotsar AG, Kriukov AA, Krivovtsev SI, Popov AV: Automatic decision support system in prognostication, diagnosis, treatment and prophylaxis of chronic prostatitis. Urologiia. 2009, (4): 44-48.
- Bertsche T, Askoxylakis V, Habl G, Laidig F, Kaltschmidt J, Schmitt SP, Ghaderi H, Bois AZ, Milker-Zabel S, Debus J: Multidisciplinary pain management based on a computerized clinical decision support system in cancer pain patients. Pain. 2009, 147 (1-3): 20-28.
- Rahilly-Tierney CR, Nash IS: Decision-making in percutaneous coronary intervention: a survey. BMC Med Inform Decis Mak. 2008, 8: 28.
- Dawes RM, Faust D, Meehl PE: Clinical versus actuarial judgment. Science. 1989, 243 (4899): 1668-1674.
- Hastie R, Dawes RM: Rational choice in an uncertain world. 2001, London: Sage Publications
- The SUPPORT Investigators: A controlled trial to improve care for seriously ill hospitalized patients: The Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments (SUPPORT). JAMA. 1995, 274 (20): 1591-1598.
- Baron J: Thinking and deciding. 2000, Cambridge: Cambridge University Press, 3
- Bell DE, Raiffa H, Tversky A: Decision making: Descriptive, normative, and prescriptive interactions. 1988, Cambridge: Cambridge University Press
- Djulbegovic B: Lifting the fog of uncertainty from the practice of medicine. BMJ. 2004, 329 (7480): 1419-1420.
- Guyatt GH, Oxman AD, Kunz R, Falck-Ytter Y, Vist GE, Liberati A, Schunemann HJ: Going from evidence to recommendations. BMJ. 2008, 336 (7652): 1049-1051.
- O'Connor AM, Legare F, Stacey D: Risk communication in practice: the contribution of decision aids. BMJ. 2003, 327 (7417): 736-740.
- Djulbegovic B, Hozo I: Health care reform & criteria for rational decision making. 2010. [http://www.smdm.org/newsletter/spring_2010/#a22]
- Slovic P, Finucane ML, Peters E, MacGregor DG: Risk as analysis and risk as feelings: Some thoughts about affect, reason, risk, and rationality. Risk Analysis. 2004, 24 (2): 311-321.
- Zeelenberg M, Pieters R: A theory of regret regulation 1.1. J Consumer Psychol. 2007, 17: 29-35.
- Vickers A, Cronin A, Elkin E, Gonen M: Extensions to decision curve analysis, a novel method for evaluating diagnostic tests, prediction models and molecular markers. BMC Medical Informatics and Decision Making. 2008, 8 (1): 53.
- Vickers A, Elkin E: Decision curve analysis: a novel method for evaluating prediction models. Med Decis Making. 2006, 26 (6): 565-574.
- Djulbegovic B, Hozo I: When should potentially false research findings be considered acceptable? PLoS Med. 2007, 4 (2): e26.
- Djulbegovic B, Hozo I, Lyman GH: Linking evidence-based medicine therapeutic summary measures to clinical decision analysis. MedGenMed. 2000, 2 (1): E6.
- Djulbegovic B, Hozo I, Schwartz A, McMasters KM: Acceptable regret in medical decision making. Med Hypotheses. 1999, 53 (3): 253-259.
- Pauker SG, Kassirer JP: Therapeutic decision making: a cost-benefit analysis. N Engl J Med. 1975, 293 (5): 229-234.
- Pauker SG, Kassirer JP: The threshold approach to clinical decision making. N Engl J Med. 1980, 302 (20): 1109-1117.
- Hozo I, Djulbegovic B: When is diagnostic testing inappropriate or irrational? Acceptable regret approach. Med Decis Making. 2008, 28 (4): 540-553.
- Hozo I, Djulbegovic B: Will insistence on practicing medicine according to expected utility theory lead to an increase in diagnostic testing? Med Decis Making. 2009, 29: 320-322.
- Bell DE: Regret in decision making under uncertainty. Operations Research. 1982, 30: 961-981.
- Loomes G, Sugden R: Regret theory: an alternative theory of rational choice. Economic Journal. 1982, 92: 805-824.
- Lichtenstein S, Slovic P: The construction of preference. 2006, New York: Cambridge University Press
- Stiggelbout AM, de Haes JC: Patient preference for cancer therapy: an overview of measurement approaches. J Clin Oncol. 2001, 19 (1): 220-230.
- Hunink M, Glasziou P: Decision making in health and medicine: Integrating evidence and values. 2001, Cambridge: Cambridge University Press
- McCaffery M, Beebe A: Pain: Clinical manual for nursing practice. 1993, Baltimore: C.V. Mosby Company
- Steyerberg EW, Vickers AJ: Decision curve analysis: a discussion. Med Decis Making. 2008, 28 (1): 146-149.
- Evans JSBT: Hypothetical thinking: Dual processes in reasoning and judgement (Essays in Cognitive Psychology). 2007, New York: Psychology Press, Taylor and Francis Group
- Peirce CS: The numerical measure of the success of predictions. Science. 1884, 4: 453-454.
- Djulbegovic B, Frohlich A, Bennett CL: Acting on imperfect evidence: How much regret are we ready to accept? J Clin Oncol. 2005, 23 (28): 6822-6825.
- Hozo I, Schell MJ, Djulbegovic B: Decision-making when data and inferences are not conclusive: Risk-benefit and acceptable regret approach. Seminars in Hematology. 2008, 45 (3): 150-159.
- Decision curve analysis. [http://www.decisioncurveanalysis.org]
- Kahneman D: Maps of bounded rationality: psychology for behavioral economics. American Economic Review. 2003, 93: 1449-1475.
- Krantz DH, Kunreuther HC: Goals and plans in decision making. Judgment and Decision Making. 2007, 2 (3): 137-168.
- Rawls J: A theory of justice. Revised edition. 1999, Cambridge: Harvard University Press
- Feinstein AR: The 'chagrin factor' and qualitative decision analysis. Archives of Internal Medicine. 1985, 145 (7): 1257-1259.
- Le Minor M, Alperovitch A, Knill-Jones RP: Applying decision theory to medical decision-making: concept of regret and error of diagnosis. Methods of Information in Medicine. 1982, 21 (1): 3-8.
- Hilden J, Glasziou P: Regret graphs, diagnostic uncertainty and Youden's index. Statistics in Medicine. 1996, 15 (10): 969-986.
- Reyna V: How people make decisions that involve risk: a dual-processes approach. Current Directions in Psychological Science. 2004, 13: 60-66.
- GRADE Working Group: Grading quality of evidence and strength of recommendations. BMJ. 2004, 328: 1490-1498.
- The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6947/10/51/prepub