A regret theory approach to decision curve analysis: A novel method for eliciting decision makers' preferences and decision making
 Athanasios Tsalatsanis^{1},
 Iztok Hozo^{2},
 Andrew Vickers^{3} and
 Benjamin Djulbegovic^{1, 4}
DOI: 10.1186/1472-6947-10-51
© Tsalatsanis et al; licensee BioMed Central Ltd. 2010
Received: 23 July 2010
Accepted: 16 September 2010
Published: 16 September 2010
Abstract
Background
Decision curve analysis (DCA) has been proposed as an alternative method for evaluation of diagnostic tests, prediction models, and molecular markers. However, DCA is based on expected utility theory, which has been routinely violated by decision makers. Decision making is governed by intuition (system 1) and an analytical, deliberative process (system 2); thus, rational decision making should reflect both formal principles of rationality and intuition about good decisions. We use the cognitive emotion of regret to serve as a link between systems 1 and 2 and to reformulate DCA.
Methods
First, we analysed a classic decision tree describing three decision alternatives: treat, do not treat, and treat according to a predictive model. We then computed the expected regret for each of these alternatives as the difference between the utility of the action taken and the utility of the action that, in retrospect, should have been taken. For any pair of strategies, we measure the difference in net expected regret. Finally, we employ the concept of acceptable regret to identify the circumstances under which a potentially wrong strategy is tolerable to a decision maker.
Results
We developed a novel dual visual analog scale to describe the relationship between the regret associated with "omissions" (e.g. failure to treat) vs. "commissions" (e.g. treating unnecessarily) and the decision maker's preferences as expressed in terms of threshold probability. We then proved that the Net Expected Regret Difference, first presented in this paper, is equivalent to the net benefit described in the original DCA. Based on the concept of acceptable regret, we identified the circumstances under which a decision maker tolerates a potentially wrong decision and expressed them in terms of the probability of disease.
Conclusions
We present a novel method for eliciting decision makers' preferences and an alternative derivation of DCA based on regret theory. Our approach may be intuitively more appealing to a decision maker, particularly in those clinical situations when the best management option is the one associated with the least amount of regret (e.g. diagnosis and treatment of advanced cancer).
Background
Decision making is often governed by uncertainty that inevitably affects the overall decision process. In their efforts to model uncertainty, decision theorists have proposed many methodologies, with the majority based on statistics and probability [1–4], information theory and entropy [5], or possibilistic approaches such as fuzzy logic [6, 7].
In clinical medical research, much effort has been invested in developing decision support systems for diagnosis and treatment of various clinical conditions, such as management of infectious diseases in an intensive care unit, chronic prostatitis, or liver surgery [8–12], to name a few examples. Most of these systems are based on probabilistic prediction models. Even though prediction models have been shown to be generally superior and potentially complementary to physicians' prognostications [13–15], historically they have not fulfilled decision makers' expectations of helping to improve decision making. One reason for this is that most probabilistic medical decision support systems are based on expected utility theory, which humans often violate [14, 16, 17]. In addition, most models in medicine do not incorporate decision makers' preferences, which, along with reliable evidence, are key to rational decision making [18–20].
The goal of this paper is to develop a novel decision-making approach that incorporates the decision maker's attitudes towards multiple treatment strategies. This goal is addressed through three specific aims. First, we deviate from traditional expected utility theory in an attempt to satisfy both formal criteria of rationality and human intuition about good decisions [18–22]: we employ regret theory, since regret is a cognitive emotion that combines both rationality and intuition, the key elements of decision making [22, 23], to develop a novel methodology for eliciting decision makers' personal preferences. Second, we reformulate decision curve analysis (DCA) [24, 25] from the regret theory point of view to evaluate alternative treatment strategies and to integrate evidence on prognosis and treatment with the decision maker's attitudes and preferences [26–28]. Finally, we identify circumstances under which a decision maker tolerates a wrong decision.
To implement our approach, we first compute the threshold probability at which the decision maker is indifferent between alternative actions, based on the level of regret one might feel about a wrong decision. We then employ regret-based DCA to identify the optimal strategy for a particular decision maker: the strategy that brings the least regret if it proves, in retrospect, to be wrong. We also show how to employ a prediction model to estimate the probability of disease for a patient and contrast it with the decision maker's threshold probability. Finally, we incorporate the concept of acceptable regret into the decision process to identify the conditions under which the decision maker tolerates a potentially wrong decision.
Methods
Decision analysis based on regret theory
In Figure 1, p = P(D+) is the probability associated with the presence of the disease as estimated by a prediction model; 1 − p = P(D−) is the probability associated with the absence of the disease; and U_i, i ∈ [1,4], are the utilities corresponding to each outcome. For example, U_1 is the utility of administering treatment to a patient who has the disease (i.e. necessary treatment), and U_2 is the utility of administering treatment to a patient who does not have the disease (i.e. unnecessary treatment). Note that we use the term "treatment" in the generic sense of a health care intervention, which may be a therapy, procedure, or diagnostic test.
The probabilistic nature of prognostication models complicates the decision process significantly. For example, if a prediction model estimates the probability of a patient having a disease at 40%, it is unclear whether this patient should receive treatment or not. The solution offered by classical decision theory is to employ the concept of a threshold probability P_t, defined as the probability at which the decision maker is indifferent between two strategies (e.g. administer treatment or not) [27, 29, 30]. Based on the threshold concept, the patient should be treated if p ≥ P_t and should not be treated otherwise.
However, in most cases decisions are made under uncertainty and can never be 100% accurate [23, 26, 28, 31–34]. Thus, after a decision has been made, one may discover that another alternative would have been preferable. This knowledge may bring a sense of loss, or regret, to the decision maker [23, 26, 28, 31–34]. Regret can be particularly strong when the consequences of a wrong decision are life threatening or seriously affect the quality of the patient's life.
Formally, regret can be expressed as the difference between the utility of the outcome of the action taken and the utility of the outcome of the action that, in retrospect, should have been taken [23, 26, 28, 31–34]. Regret can be felt by any party involved in the decision-making process (e.g. patients receiving treatment, patients' proxies, or physicians administering treatment). For the rest of this paper we assume that the decision maker is the treating physician.
We first employ regret theory to estimate the threshold probability P_t at which the physician is indifferent between alternative management strategies (e.g. administer treatment or not). To accomplish this, we describe regret in terms of the errors of (1) not treating a patient who has the disease, and (2) treating a patient who does not have the disease.
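The threshold equation referenced below as equation 1 did not survive extraction; it can be reconstructed from the definitions above by equating, at p = P_t, the expected regret of withholding treatment, p(U_1 − U_3), with the expected regret of treating, (1 − p)(U_4 − U_2). This reconstruction is consistent with the worked values in the Case Study (ratings of 50 and 10 yield P_t ≈ 16%):

```latex
% At the threshold, the expected regrets of the two actions are equal:
%   P_t (U_1 - U_3) = (1 - P_t)(U_4 - U_2)
P_t \;=\; \frac{U_4 - U_2}{(U_1 - U_3) + (U_4 - U_2)}
     \;=\; \frac{1}{1 + \dfrac{U_1 - U_3}{U_4 - U_2}} \tag{1}
```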
Equation 1 effectively captures the preferences of the decision maker towards administering or not administering treatment. At the individual level, equation 1 shows how the threshold probability relates to the way the decision maker weighs false negative results (i.e. failing to provide necessary treatment) vs. false positive results (i.e. administering unnecessary treatment) [24, 25].
Note that the fraction $\frac{U_1 - U_3}{U_4 - U_2}$ is undefined for U_4 − U_2 = 0, i.e. when there is no regret associated with administering unnecessary treatment; in the limit, P_t = 0% and treatment is always justified. Conversely, when U_1 − U_3 = 0 there is no regret associated with withholding treatment, and P_t = 100%, indicating that treatment is justified only in case of absolute certainty of disease (p = 100%), a realistically unachievable goal [26].
Elicitation of threshold probability
There are numerous techniques for eliciting a decision maker's preferences regarding treatment administration [35]; none has been proven better than the others. We argue that any attempt to measure people's preferences and risk attitudes should be derived from an underlying theory of decision making that can be applied to the problem, or class of problems, at hand. We approach elicitation of preferences by capturing people's (e.g. physicians') attitudes through threshold probabilities. Normatively, a threshold probability reflects indifference between two alternative management strategies.
There are a few commonly used methods to assess the value of this indifference for a decision maker, such as the standard gamble and the time trade-off [35–37]. The problem is that both the standard gamble and the time trade-off are time-consuming, cognitively complex, and have been shown to lead to biased estimates of people's preferences [36, 37]. An alternative is to use rating scales, such as visual analog scales (VAS), which are considerably easier to administer and better understood by participants. The problem with analog scales, however, is that they cannot capture health state trade-offs [36, 37].
The proposed method retains the simplicity of VAS but takes into account the consequences of possible mistakes in decision making by utilizing two visual analog scales. The first scale assesses the regret associated with the potential error of failing to administer beneficial treatment ("regret of omission"). The second scale measures the regret of administering unnecessary treatment ("regret of commission"). Using these two scales we can capture trade-offs and compute the threshold probability at which a decision maker is indifferent between two alternative management strategies.
We employed two visual analog scales with the typical 100 points [35–37], anchored by "no regret" and "maximal regret". This is modeled after pain assessment scales, which bound the maximum possible pain a person can experience [38]. Accordingly, we can elicit threshold probabilities by asking the physician to weigh the regret associated with wrong decisions (e.g. giving unnecessary treatment vs. failing to administer necessary treatment) on a numerical (0 to 100) scale. The questions may be narrowly defined and related to specific outcomes (e.g. survival/mortality, heart attack, etc.). We should, however, note that most treatments are associated with multiple outcome dimensions, some good and some bad. This is a fundamental reason why no universally accepted method for assessing decision makers' preferences has been developed so far: it is very difficult, if not impossible, to accurately determine the trade-offs across multiple outcomes that can be permuted in numerous ways. A solution to this problem is to capture the decision maker's global or "holistic" perception of treatment. By asking questions about trade-offs in this way, we directly address both cognitive mechanisms, intuitive and deliberative, of the decision process. This, in turn, can lead to a more accurate assessment of the decision maker's preferences.
For example, to elicit the physician's threshold probability, we may ask the following questions:
1. On a scale 0 to 100, where 0 indicates no regret and 100 indicates the maximum regret you could feel, how would you rate the level of your regret if you failed to provide necessary treatment to your patient (i.e. did not give treatment that, in retrospect, you should have given)? [Note that the answer to this question corresponds to the (U_1 − U_3) term in equation 1.]
2. On a scale 0 to 100, where 0 indicates no regret and 100 indicates the maximum regret you could feel, how would you rate the level of your regret if you had administered unnecessary treatment to your patient (i.e. administered treatment that, in retrospect, should not have been given)? [Note that the answer to this question corresponds to the (U_4 − U_2) term in equation 1.]
Thus, if the elicited answers yield a threshold probability of 33% via equation 1, the physician would be unsure whether or not to treat the patient if the patient's probability of disease, as computed by the prediction model, was exactly 33%. The recommended action, which is based on elicitation of the decision maker's preferences, is thus directly derived from the underlying theoretical model.
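The two VAS answers map directly to a threshold via equation 1. A minimal sketch (the function name is ours; the 0–100 ratings stand in for U_1 − U_3 and U_4 − U_2):

```python
def threshold_probability(regret_omission: float, regret_commission: float) -> float:
    """Threshold probability P_t from two 0-100 visual analog scale ratings.

    regret_omission   plays the role of U1 - U3 (failing to give needed treatment);
    regret_commission plays the role of U4 - U2 (giving unnecessary treatment).
    """
    if regret_omission + regret_commission == 0:
        raise ValueError("at least one regret rating must be positive")
    # Equation 1: P_t = (U4 - U2) / ((U1 - U3) + (U4 - U2))
    return regret_commission / (regret_omission + regret_commission)

# A physician who rates omission twice as regrettable as commission
# (e.g. 100 vs. 50) is indifferent at p = 1/3, the 33% of the example above.
print(round(threshold_probability(100, 50), 3))
```

Note that only the ratio of the two ratings matters, which is why a simple 0–100 scale suffices.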
Regret based decision curve analysis (DCA)
Decision makers may be presented with many alternative strategies that can be difficult to model. A simple yet powerful approach, based on the experience of a typical practicing physician, is to compare the model-based strategy with the scenarios in which all patients or no patients are treated. That is, the clinical alternatives to the prediction model strategy are to assume that all patients have the disease and thus treat them all, or to assume that no patient has the disease and thus treat none [25]. In this case the clinical dilemma a physician faces when considering treatment is threefold: (1) treat all patients ("treat all"), (2) treat no patients ("treat none"), or (3) use a prediction model and treat a patient if p ≥ P_t ("model").
The optimal decision depends on the preferences of the decision maker as captured by the threshold probability. We use Decision Curve Analysis (DCA) [24, 25] to identify the range of threshold probabilities at which each strategy ("treat all", "treat none", and "model") is of value. Traditional DCA uses the (net expected) benefit associated with each strategy to recommend the best strategy [24, 25]. In this work, we consider the optimal strategy to be the one that brings the least regret in case it is, retrospectively, proven wrong.
Here, FN (probability of false negatives) represents the conditional probability P(p < P_t | D+) of not treating a patient who has the disease, and FP (probability of false positives) is the conditional probability P(p ≥ P_t | D−) of treating a patient who does not have the disease. Similarly:

TP = 1 − FN = P(p ≥ P_t | D+) (probability of true positives): probability of treating a patient who has the disease.

TN = 1 − FP = P(p < P_t | D−) (probability of true negatives): probability of not treating a patient who does not have the disease.
Note that these are exactly the same formulas as those derived by Vickers and Elkin [25], who employ the expected-utility model in "decision curve analysis" (DCA). The regret-based derivation, however, is mathematically more parsimonious: the original DCA formulation required several mathematical manipulations, making the simplicity of the regret approach attractive. In addition, as argued throughout the manuscript, the regret formulation may have further decision-theoretical advantages, as it enables experiencing the consequences of decisions at both the emotional (system 1) and cognitive (system 2) levels [23, 40].
Equations 10 and 11 above are useful when calculating NERD as a function of P_t. The probabilities P(p ≥ P_t ∩ D+), P(p ≥ P_t ∩ D−), P(p < P_t ∩ D+), and P(p < P_t ∩ D−) are estimated as follows:

P(p ≥ P_t ∩ D+) ≈ the proportion of patients who have the disease and for whom the prognostic probability is greater than or equal to P_t: with #TP = number of patients with true positive results, $P(p \ge P_t \cap D+) \approx \frac{\#TP}{n}$, where n is the total number of patients in the study.

P(p ≥ P_t ∩ D−) ≈ the proportion of patients who do not have the disease and for whom the prognostic probability of disease is greater than or equal to P_t: with #FP = number of patients with false positive results, $P(p \ge P_t \cap D-) \approx \frac{\#FP}{n}$.

P(p < P_t ∩ D+) ≈ the proportion of patients who have the disease and for whom the prognostic probability of disease is less than P_t: with #FN = number of patients with false negative results, $P(p < P_t \cap D+) \approx \frac{\#FN}{n}$.

P(p < P_t ∩ D−) ≈ the proportion of patients who do not have the disease and for whom the prognostic probability of disease is less than P_t: with #TN = number of patients with true negative results, $P(p < P_t \cap D-) \approx \frac{\#TN}{n}$.
When computing NERD[Treat none, Treat all] we assume that all patients are treated; thus #TP is the number of people who actually have the disease and #FP is the number of people who do not have the disease but are given treatment. On the other hand, when computing NERD[Treat none, Model] from equation 10 and NERD[Treat all, Model] from equation 11, #TP, #FP, #TN, and #FN are computed for each threshold probability, assuming that a patient is classified as having the disease if the prognostic probability is greater than or equal to the threshold probability, and as not having the disease otherwise.
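The four estimates above can be computed in a single pass over a validation sample. A sketch under the paper's classification rule (treat when the model's probability is at or above P_t; names ours):

```python
def classification_counts(disease, prob, pt):
    """Return (#TP, #FP, #TN, #FN) for the rule 'treat if prob >= pt'.

    disease : iterable of true statuses (True = D+, False = D-)
    prob    : iterable of model-predicted probabilities of disease
    pt      : threshold probability P_t
    """
    tp = fp = tn = fn = 0
    for has_disease, p in zip(disease, prob):
        if p >= pt:                  # model says: treat
            if has_disease:
                tp += 1              # P(p >= Pt ∩ D+) ≈ #TP/n
            else:
                fp += 1              # P(p >= Pt ∩ D-) ≈ #FP/n
        else:                        # model says: do not treat
            if has_disease:
                fn += 1              # P(p <  Pt ∩ D+) ≈ #FN/n
            else:
                tn += 1              # P(p <  Pt ∩ D-) ≈ #TN/n
    return tp, fp, tn, fn

# Four patients, threshold 50%: one of each outcome.
print(classification_counts([True, True, False, False],
                            [0.9, 0.2, 0.8, 0.1], 0.5))  # -> (1, 1, 1, 1)
```

Dividing each count by n then yields the joint-probability estimates used in equations 10 and 11.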
 1.
Select a value for threshold probability.
 2.
Assuming that patients should be treated if p ≥ P _{ t }and should not be treated otherwise, compute #TP and #FP for the prediction model.
 3.
Calculate the NERD(Treat none, Model)using equation 10.
 4.
Calculate NERD(Treat all, Model)using equation 11.
 5.
Compute the NERD(Treat none, Treat all)using equation 10 where #TP is the number of patients having the disease and #FP is the number of patients without disease who got treatment.
 6.
Repeat steps 1–5 for a range of threshold probabilities.
 7.
Graph each NERD calculated in steps 3–5 against the threshold probability.
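Equations 10 and 11 are not reproduced in this excerpt, but the Results section proves that NERD is equivalent to the net benefit of the original DCA, so the steps above can be sketched via the standard net-benefit form of Vickers and Elkin; strategies are then ranked exactly as NERD would rank them. Function names and the synthetic data are ours:

```python
def net_benefit(disease, prob, pt):
    """Net benefit of 'treat if model probability >= pt' at threshold pt
    (Vickers & Elkin): TP/n - (FP/n) * pt / (1 - pt)."""
    n = len(disease)
    tp = sum(1 for d, p in zip(disease, prob) if p >= pt and d)
    fp = sum(1 for d, p in zip(disease, prob) if p >= pt and not d)
    return tp / n - (fp / n) * pt / (1 - pt)

def regret_dca(disease, prob, thresholds):
    """Steps 1-7 above: at each threshold, rank 'treat none' (benefit 0),
    'treat all', and 'model'; the top strategy minimizes expected regret."""
    prevalence = sum(disease) / len(disease)
    curve = []
    for pt in thresholds:
        nb_all = prevalence - (1 - prevalence) * pt / (1 - pt)
        nb_model = net_benefit(disease, prob, pt)
        # max keeps the first entry on ties, so 'treat none' wins exact ties.
        best = max([("treat none", 0.0), ("treat all", nb_all), ("model", nb_model)],
                   key=lambda s: s[1])[0]
        curve.append((pt, best))
    return curve

# A well-discriminating model on 100 synthetic patients (20% prevalence):
disease = [True] * 20 + [False] * 80
prob = [0.8] * 20 + [0.1] * 80
print(regret_dca(disease, prob, [0.3, 0.5, 0.9]))
```

Plotting the per-threshold values instead of only the best strategy reproduces the decision curves of Figure 4.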
Based on the regret DCA methodology, the optimal decision at each threshold probability is derived by comparing pairs of strategies through their corresponding NERDs, using the transitivity principle (i.e., if A > B and B > C, then A > C). Thus, if NERD(strategy 1, strategy 2) > 0 and NERD(strategy 2, strategy 3) > 0, then strategy 2 is better than strategy 1 and strategy 3 is better than strategy 2; therefore, strategy 3 is the optimal strategy.
Acceptable Regret
No decision model can guarantee that the recommended strategy will be the correct one. Therefore, we can always make a mistake: recommend treatment we should not have, or fail to recommend treatment we should have administered [42]. However, there are situations where the regret resulting from a wrong decision is tolerable. These situations are best described under the notion of acceptable regret [26, 28, 31]. Formally, acceptable regret, Rg_0, is defined as the portion of utility a decision maker is willing to lose when he/she adheres to a decision that may prove wrong [26, 28, 31, 32]. For example, a physician may regret administering unnecessary treatment to a patient but can "still live with" the consequences of this decision if he/she judges them to be trivial or inconsequential.
The acceptable regret, Rg_0, can be computed using either of the two definitions described in equations 15 and 16.
The quantity defined in equation 21 represents the prognostic probability below which the physician would comfortably withhold treatment that may, in retrospect, prove beneficial.
Note that equations 20 and 21 express acceptable regret in terms of probabilities, while equations 17–19 define it in terms of NERD. Hence the outputs of these equations are not the same; rather, they complement each other.
Elicitation of acceptable regret
In most cases the decision maker does not have a complete understanding of benefits lost or harms inflicted and cannot assign precise numbers to them. For this reason, we do not suggest inquiring directly about the value of r. Instead, we propose eliciting r through the decision maker's responses to specific clinical scenarios. For example, we propose the following approach:
Assume that you have 100 patients with the same probability of disease as the patient you are currently treating. You need to decide whether each of these patients should receive treatment or not. Since no prediction model is 100% accurate, it is expected that you will make some mistakes in your treatment recommendations (e.g. you may recommend treatment to a patient who does not need it, or fail to recommend treatment to a patient who needs it).
1. We are now interested in knowing your tolerance toward administering unnecessary treatment, i.e. we want to learn the magnitude of the unavoidable error you can live with when inflicting potentially harmful treatment on a patient. Note that if you say that your acceptable regret is zero, this means that you can only make a decision if you are absolutely certain that your recommendation is correct.
Out of the (100 − y) patients who should not have received treatment, how many would you tolerate treating? (The answer is used to compute r_h.)
2. We are interested in knowing your tolerance toward failing to provide necessary treatment, i.e. we want to learn the magnitude of the unavoidable error you can live with when forgoing potentially beneficial treatment. Note that if you say that your acceptable regret is zero, this means that you can only make a decision if you are absolutely certain that your recommendation is correct.
Out of the (100 − x) patients who should have been treated, how many would you tolerate not treating? (The answer is used to compute r_b.)
It is unnecessary to ask the decision maker both questions. We suggest asking only the question related to the recommendation the physician is about to make: if the recommendation is to administer treatment, the decision maker should be asked the second question; if it is to withhold treatment, the first question.
The value of acceptable regret is plotted on the regret DCA graph to visually facilitate the decision-making process. At a specific threshold probability, all strategies for which |NERD| ≤ Rg_0 are considered equivalent in regret, according to the definition in the previous section.
Example
We will employ a prostate cancer biopsy example to demonstrate the applicability of our approach. Prostate cancer biopsy is an invasive and uncomfortable procedure, which can be painful and is associated with a risk of infection. However, it is often necessary for diagnosis of prostate cancer, one of the leading causes of cancer death in men.
Men are typically biopsied for prostate cancer if they have an elevated level of prostate-specific antigen (PSA). However, most men with a high PSA do not have prostate cancer. This has led to the idea that statistical models based on multiple predictors (PSA, age, family history, other markers) might be used to predict biopsy outcomes and hence aid biopsy decisions for individual patients. A physician seeing a patient with an elevated PSA has three possible options: biopsy, do not biopsy, or look up the patient's probability in a statistical model and then make a decision.
 1.
NERD(biopsy none,model) > 0 therefore, the model is preferred to the strategy biopsy none.
 2.
NERD(biopsy none, biopsy all) > 0 therefore, the strategy biopsy all is preferred to the strategy biopsy none.
 3.
NERD(biopsy all, model) > 0; therefore, the model is preferred to the strategy biopsy all.
Repeating the same procedure for all threshold probabilities, we see that deciding based on the statistical model is the optimal strategy (i.e. it results in the minimum expected regret) for threshold probabilities between 8% and 43%. For threshold probabilities between 43% and 95%, the optimal strategy is to biopsy no patients, while for 0% to 8% both the model and biopsy all strategies are optimal.
To interpret these results, consider how a typical physician values the harms of a false negative (missing a cancer) versus a false positive (an unnecessary biopsy) result. If the regret associated with unnecessary biopsy were felt to be worse than that of missing a cancer, then according to equation 1 the threshold probability would be greater than 50%. However, it is unlikely that a physician would consider an unnecessary biopsy worse than a missed cancer, so the threshold probability for biopsy must be less than 50%. Thus, a reasonable range of threshold probabilities might indeed be 8% to 43%, as suggested by our model. As the model is superior across this entire range, we can conclude that, irrespective of the physician's exact preferences, making a biopsy decision based on the statistical model will lead to lower expected regret than an alternative such as biopsying all or no men. Based on discussions with clinicians, we believe that a reasonable range of threshold probabilities is 10% to 40%. As the regret associated with the model strategy is lowest across this entire range, we can recommend use of the model. Nonetheless, we do not have a complete sample of all physician preferences, and it is possible that a physician may have a threshold probability outside this range.
In this case the strategies "biopsy none" (biopsy no patients) and "model" are equivalent in regret. Therefore, the prediction model does not offer any better information and can be disregarded.
Case Study
This section describes the overall decision process regarding prostate cancer biopsy. The process begins with elicitation of the threshold probability from the treating physician and continues with evaluation of the available strategies based on regret DCA (Figure 4). Then, if necessary, the probability of cancer based on the available prognostic model is computed and contrasted with the threshold probability. Finally, the concept of acceptable regret is employed to arrive at the strategy that is most tolerable to the decision maker, who always faces the possibility of making wrong decisions. For the remainder of this section, normal text corresponds to the authors' comments, bold and underlined text corresponds to questions to and answers from the physician, respectively, and italic text contains notes to the reader. We demonstrate the applicability of our approach using hypothetical answers from two physicians.
 1.
Interview with the physician to elicit his/her threshold probability.
 a.
On the scale 0 to 100, where 0 indicates no regret and 100 indicates the maximum regret you could feel, how would you rate your level of regret if you failed to provide necessary treatment?
 b.
On the scale 0 to 100, where 0 indicates no regret and 100 indicates the maximum regret you could feel, how would you rate your level of regret if you administered unnecessary treatment?
Physician #1: 10, Physician #2: 60. This value corresponds to U_4 − U_2 in equation 1. Together with the answers to question (a), which correspond to U_1 − U_3, equation 1 yields the threshold probabilities used below: P_t = 16% for Physician #1 and P_t = 46% for Physician #2.
 2.
Using the graph in Figure 4, identify the optimal strategy for the computed threshold probability.
 1.
NERD(biopsy all, model) > 0: the strategy "model" is better than the strategy "biopsy all".
 2.
NERD(biopsy none, model) > 0, the strategy "model" is better than the strategy "biopsy none"
 3.
NERD(biopsy none, biopsy all) > 0, the strategy "biopsy all" is better than "biopsy none".
Therefore, the optimal strategy is the "model" which corresponds to biopsy based on the probability of cancer predicted by the statistical model. The next step is to compute the patient's probability of cancer and contrast it with the threshold probability.
 3.
Compute the cancer probability for the specific patient based on the statistical model.
 a.
If the cancer probability is greater than or equal to the threshold probability, then the surgeon should biopsy the patient.
 b.
If the cancer probability is less than the threshold probability, then the surgeon should not biopsy the patient.
 4.
Elicitation of the level of acceptable regret.
Assume that you have 100 patients, all with a probability of cancer equal to 20% (the same as your patient). This means that of the 100 patients, 20 will have cancer and 80 will not. You need to decide whether each of these patients should undergo biopsy or not. Since no prediction model is 100% accurate, it is expected that you will make some mistakes in your recommendations (e.g. you may recommend biopsy to a patient who does not need it, or fail to recommend biopsy to a patient who needs it).
 a.
The physician considers biopsy (Physician #1):
Out of the 20 patients who should be biopsied, for how many patients would you tolerate not recommending a necessary biopsy? 1.
This answer corresponds to $r_b = \frac{1}{20} = 0.05$ and acceptable regret Rg_b = r_b (U_1 − U_3) = 0.05 × 0.5 = 0.025. The optimal strategy at P_t = 16% is to use the statistical model (Figure 4). For P_t = 16% and Rg_b = 0.025, all NERDs are greater than the acceptable regret; thus the optimal strategy remains the statistical model.
 b.
The physician does not consider biopsy (Physician #2).
Out of the 80 patients who should not undergo biopsy, for how many patients would you tolerate recommending an unnecessary biopsy? 40.
The answer provided by Physician #2 corresponds to $r_h = \frac{40}{80} = 0.50$ and acceptable regret Rg_h = r_h (U_4 − U_2) = 0.5 × 0.6 = 0.3.
The optimal strategy for P_t = 46% is to biopsy no patients (Figure 4). Also, for P_t = 46% and Rg_h = 0.3, we have: NERD(biopsy none, biopsy all) = −0.639, with |−0.639| > Rg_h; NERD(biopsy none, model) = −0.003, with |−0.003| < Rg_h; and NERD(biopsy all, model) = 0.6364, with 0.6364 > Rg_h. This means that the strategies "biopsy none" and "model" are equivalent in regret. In practical terms, the additional effort of using the statistical model is not justified.
 a.
Physician #1 considers recommending biopsy to his/her patient. Based on equation 21, the physician would tolerate not recommending a biopsy for any prognostic probability below P_treat none = r_b = 5%.
 b.
Physician #2 considers not recommending biopsy to his/her patient. Based on equation 20, the decision maker would tolerate recommending an unnecessary biopsy for any prognostic probability above P_treat all = 1 − r_h = 50%.
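The acceptable-regret arithmetic for the two physicians above can be sketched as follows (function names are ours; equation numbers follow the text's references):

```python
def acceptable_regret_omission(tolerated, should_treat, u1_minus_u3):
    """Return r_b, Rg_b = r_b * (U1 - U3), and P_treat_none = r_b (equation 21)."""
    r_b = tolerated / should_treat
    return r_b, r_b * u1_minus_u3, r_b

def acceptable_regret_commission(tolerated, should_not_treat, u4_minus_u2):
    """Return r_h, Rg_h = r_h * (U4 - U2), and P_treat_all = 1 - r_h (equation 20)."""
    r_h = tolerated / should_not_treat
    return r_h, r_h * u4_minus_u2, 1 - r_h

# Physician #1: tolerates missing 1 of the 20 patients needing biopsy,
# with U1 - U3 = 0.5  ->  r_b = 0.05, Rg_b = 0.025, P_treat_none = 5%.
print(acceptable_regret_omission(1, 20, 0.5))

# Physician #2: tolerates 40 unnecessary biopsies among the 80 patients
# not needing one, with U4 - U2 = 0.6  ->  r_h = 0.5, Rg_h = 0.3, P_treat_all = 50%.
print(acceptable_regret_commission(40, 80, 0.6))
```

Each Rg value is then compared against the NERDs at the physician's threshold, as in the two cases above.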
Discussion
Currently, there is no agreed-upon method for eliciting preferences over multiple objectives that typically pull in opposite directions (most medical interventions are associated with both benefits and harms). We have presented and demonstrated an approach to decision making based on regret theory and decision curve analysis. The approach relies on the concept of the threshold probability at which a decision maker is indifferent between strategies to suggest the optimal decision [27, 29, 30]. Unlike the approaches described in the classic threshold papers [27, 29, 30], ours is based on the notion that the value of the threshold probability is inherently subjective and depends on the personal preferences of the decision maker. We elicit threshold probabilities based on the regret one may feel if the chosen strategy proves, in retrospect, to be wrong. Although the approach can be narrowed to specific medical outcomes, we believe that eliciting preferences in a global, holistic way is more useful if it is to be used in actual practice.
We believe that the model described here has a direct practical application in overcoming many of the difficulties related to linking evidence with patients' preferences to arrive at the optimal decision, issues that have long plagued the field of decision making. The problem of eliciting preferences and integrating them into a coherent decision is not a simple one. We argue that the approach advocated here represents a contribution to the field of decision making, but it should not be seen as a panacea for medical decision making. We anticipate that our methodology will be most suitable for medical decisions primarily associated with trade-offs between quality and quantity of life.
Over the last couple of decades, many attempts have been made to develop the best method to take these considerations into account in real-life settings. Unfortunately, as explained, no approach has succeeded [35]. We believe that the reason for this is that most approaches to eliciting decision makers' preferences, as well as to improving decision making, have relied on a rational framework based on expected utility theory [21]. However, modern cognitive theories (within the so-called dual-processing theory) have convincingly demonstrated that human decisions rely on both intuition (system 1) and an analytical, deliberative process (system 2) in balancing risks and benefits [22, 40, 45]. We believe that rational decision making should take into account both formal principles of rationality and human intuition about good decisions [46, 47]. The key is to preserve the rational framework while allowing anticipation of the effects of decisions on emotions (and avoiding the biases associated with intuitive thinking) [40]. One way to accomplish this is to use the cognitive emotion of regret to serve as a link between system 1 (the intuitive system) and system 2 (the deliberative, analytical cognitive system). By anticipating the consequences of our actions and the circumstances under which we can live with our mistakes, we bring together both aspects of cognition, which may lead to better and more satisfactory decision making.
Specifically, we argue that eliciting people's preferences using regret theory may be superior to using traditional utility theory because regret forces decision makers to explicitly consider the consequences of their decisions. We have previously shown that errors in decision making are unavoidable: we may recommend treatment that does not work, or fail to recommend treatment that does [26]. Therefore, we reformulated DCA from the point of view of regret theory. Furthermore, it has been shown that expected utility theory is often violated in order to minimize anticipated regret [33, 34]. In addition, there is substantial evidence that medical decision making aims to minimize the regret associated with wrong decisions [48–50].
Moreover, while descriptive, normative, and prescriptive theories [17] tend to evaluate individual outcomes, the approach presented here evaluates all of the outcomes in a holistic manner. Our approach is consistent with Reyna's "gist" or "fuzzy-trace theory", in which the decision maker extracts the gist of each outcome to arrive at a given decision [51]. For example, consider a decision maker who is provided with a list of harms and benefits associated with each decision, as is currently recommended by practice guidelines panels [52]. In traditional theories, the decision maker evaluates a treatment strategy by reasoning about each of the harms and benefits associated with that strategy. This, as discussed above, would mean integrating multiple outcomes that often go in different directions, typically within a limited timeframe. Due to the complexity of these decisions, this approach overwhelms the decision maker, as our cognitive capacity is limited. The regret DCA methodology quantifies the global attitude of the decision maker towards a specific strategy without requiring separate reasoning about each of the harms and benefits. This holistic assessment occurs within the dual-processing cognitive system, which evaluates collectively the harms and the benefits associated with each treatment alternative. By assessing trade-offs through both cognitive mechanisms, intuitive and deliberative, we believe that we can assess decision makers' preferences more accurately.
In general, since our method relies on the elicitation of threshold probability, we recommend using our methodology for every patient. As every patient's values differ, the threshold probability should indeed be patient-specific. For example, a physician may act "aggressively" for a young patient who is the father of two underage children and less aggressively for an older patient. However, in the cancer biopsy example, most patients are expected to present with similar characteristics, and therefore most physicians would settle within a narrow range of threshold probabilities. In this case, repeating the elicitation process for every patient would be impractical. Nevertheless, this is an empirical question worthy of further investigation, as alluded to above.
Our approach may help reconcile formal principles of rationality with human intuitions about good decisions, and may thereby better reflect "rationality" in medical decision making [21, 32, 46, 47]. We hope that our theoretical work will stimulate empirical testing of the concepts outlined in this paper. Toward this end, we are currently developing a prescriptive computerized decision-support system to facilitate the application of the model described herein. Such a system is expected to be user-friendly, with built-in automatic handling of the complex calculations that might otherwise be off-putting to many users. We hope to report on testing of our system in the near future.
Conclusions
We have presented a decision-making methodology that relies on regret theory and decision curve analysis to assist physicians in choosing between appropriate health care interventions. Our methodology uses the cognitive emotion of regret to determine the decision maker's preferences towards the available strategies, and DCA to suggest the optimal decision for that specific decision maker. We believe that our approach is suitable for those clinical situations in which the best management option is the one associated with the least amount of regret (e.g. diagnosis and treatment of advanced cancer).
As with any other novel theoretical work, our approach has its limitations. First, it has not been empirically tested in a clinical setting. However, we are in the process of developing the appropriate decision support tools to bring our model into clinical practice and to evaluate its usefulness with actual physicians and patients. Second, the methodology presented is appropriate for single-point decision making; further investigation is required to determine the application of regret theory to decisions that recur over time. Finally, we assume that there is only one decision maker involved in the decision process. Our plan for future work includes extending the methodology to shared decision making, involving both physician and patient, and investigating whether in practice there is a difference between the preferences and choices made by physicians and those made by their patients.
 1.
We propose a novel method for eliciting decision makers' preferences towards treatment administration. Contrary to traditional methodologies for eliciting preferences, our method considers the consequences of potential mistakes in decisions. We propose a dual visual analog scale to capture errors of omission and errors of commission and, therefore, to evaluate the trade-offs associated with each of the available strategies.
 2.
We have reformulated DCA from the point of view of regret theory. Our approach is intuitively more appealing to decision makers and should facilitate decision making, particularly in those clinical situations in which the best management option is the one associated with the least amount of regret.
 3.
Finally, we utilize the concept of acceptable regret to identify the circumstances under which a decision maker tolerates a wrong decision.
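The dual visual analog scale in point 1 elicits two ratings: the regret associated with an omission (e.g. failing to treat disease) and the regret associated with a commission (e.g. treating unnecessarily). Under the classic threshold relationship p_t/(1 − p_t) = harms/benefits, with harms proportional to the regret of commission and benefits to the regret of omission, these ratings map to a threshold probability. The sketch below rests on that assumption, and the scale values used are hypothetical:

```python
def threshold_probability(rg_omission, rg_commission):
    """Map dual-VAS regret ratings (each on a 0-100 scale) to a
    threshold probability via the classic relationship
    p_t / (1 - p_t) = Rg_commission / Rg_omission."""
    return rg_commission / (rg_commission + rg_omission)

# A decision maker who dreads a missed cancer far more than an
# unnecessary biopsy ends up with a low threshold (biopsy readily).
print(threshold_probability(rg_omission=90, rg_commission=10))  # 0.1
# Ratings of 54 vs 46 would yield the p_t = 46% used in the example.
print(threshold_probability(rg_omission=54, rg_commission=46))  # 0.46
```

The formula makes the subjectivity of the threshold explicit: two decision makers facing identical evidence but with different anticipated regrets will act on different thresholds.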
We envision facilitation of the decision process in clinical settings through a computerized decision support system available at the point of care. In fact, we are in the process of developing such a system and hope to report about it soon.
Abbreviations
 DCA:

Decision Curve Analysis
 NERD:

Net Expected Regret Difference
 VAS:

Visual Analog Scale
 p :

Prognostic probability
 P _{ t } :

Threshold probability
 D+/D− :

The patient has/does not have the disease
 U _{ i } :

Utility corresponding to outcome i
 Rg(x):

Regret associated with the action x
 Rx+/Rx− :

Treatment/No treatment
 U _{1} − U _{3} :

Consequences of not administering treatment where indicated
 ERg(action):

Expected regret associated with an action
 #TP, #TN, #FP, #FN :

Numbers of TP, TN, FP, and FN patients
 n :

Number of patients
 NERD(action 1, action 2) :

Net expected regret difference between actions 1 and 2
 Rg _{0} :

Acceptable regret
 Rg _{ b } :

Acceptable regret as defined in terms of losses in benefits due to forgoing treatment
 Rg _{ h } :

Acceptable regret as defined in terms of harms due to undergoing unnecessary treatment
 r _{ b }/r _{ h } :

Percentages of the benefits/harms a decision maker is willing to lose/incur in case of a wrong decision
 P _{ treat all } :

The prognostic probability above which the decision maker would tolerate recommending unnecessary treatment
 P _{ treat none } :

The prognostic probability below which the decision maker would tolerate not recommending treatment.
Declarations
Acknowledgements
This work is supported by the Department of the Army grant #W81XWH-09-2-0175.
Authors’ Affiliations
References
 1. Edwards W, Miles RFJ, von Winterfeldt D: Advances in decision analysis: From foundations to applications. 2007, New York: Cambridge University Press
 2. Lindley D: Making decisions. 1985, New York: Wiley, 2nd edition
 3. Greenland S: Probability logic and probabilistic induction. Epidemiology. 1998, 9: 322-332. 10.1097/00001648-199805000-00018.
 4. Greenland S: Bayesian Interpretation and Analysis of Research Results. Seminars in Hematology. 2008, 45 (3): 141-149. 10.1053/j.seminhematol.2008.04.004.
 5. Shannon C, Weaver W: The mathematical theory of communication. 1962, Urbana: The University of Illinois Press
 6. Zimmermann H: Fuzzy set theory and its applications. 1996, Boston: Kluwer Academic Press, 3rd edition
 7. Zimmermann H: An application-oriented view of modelling uncertainty. European Journal of Operational Research. 2000, 122: 190-198. 10.1016/S0377-2217(99)00228-3.
 8. Schurink CAM, Lucas PJF, Hoepelman IM, Bonten MJM: Computer-assisted decision support for the diagnosis and treatment of infectious diseases in intensive care units. The Lancet Infectious Diseases. 2005, 5 (5): 305-312. 10.1016/S1473-3099(05)70115-8.
 9. Hansen C, Zidowitz S, Hindennach M, Schenk A, Hahn H, Peitgen HO: Interactive determination of robust safety margins for oncologic liver surgery. International Journal of Computer Assisted Radiology and Surgery. 2009, 4 (5): 469-474. 10.1007/s11548-009-0359-1.
 10. Bratchikov OP, Korenevsii NA, Seregin SP, Dolzhenkov SD, Shumakova EA, Kotsar AG, Kriukov AA, Krivovtsev SI, Popov AV: Automatic decision support system in prognostication, diagnosis, treatment and prophylaxis of chronic prostatitis. Urologiia. 2009, (4): 44-48.
 11. Bertsche T, Askoxylakis V, Habl G, Laidig F, Kaltschmidt J, Schmitt SP, Ghaderi H, Bois AZ, Milker-Zabel S, Debus J: Multidisciplinary pain management based on a computerized clinical decision support system in cancer pain patients. Pain. 2009, 147 (1-3): 20-28. 10.1016/j.pain.2009.07.009.
 12. Rahilly-Tierney CR, Nash IS: Decision-making in percutaneous coronary intervention: a survey. BMC Med Inform Decis Mak. 2008, 8: 28. 10.1186/1472-6947-8-28.
 13. Dawes RM, Faust D, Meehl PE: Clinical versus actuarial judgment. Science. 1989, 243 (4899): 1668-1674. 10.1126/science.2648573.
 14. Hastie R, Dawes RM: Rational choice in an uncertain world. 2001, London: Sage Publications, Inc
 15. The SUPPORT Investigators: A Controlled Trial to Improve Care for Seriously Ill Hospitalized Patients: The Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments (SUPPORT). JAMA. 1995, 274 (20): 1591-1598.
 16. Baron J: Thinking and deciding. 2000, Cambridge: Cambridge University Press, 3rd edition
 17. Bell DE, Raiffa H, Tversky A: Decision making: Descriptive, normative, and prescriptive interactions. 1988, Cambridge: Cambridge University Press
 18. Djulbegovic B: Lifting the fog of uncertainty from the practice of medicine. BMJ. 2004, 329 (7480): 1419-1420. 10.1136/bmj.329.7480.1419.
 19. Guyatt GH, Oxman AD, Kunz R, Falck-Ytter Y, Vist GE, Liberati A, Schunemann HJ: Going from evidence to recommendations. BMJ. 2008, 336 (7652): 1049-1051. 10.1136/bmj.39493.646875.AE.
 20. O'Connor AM, Legare F, Stacey D: Risk communication in practice: the contribution of decision aids. BMJ. 2003, 327 (7417): 736-740. 10.1136/bmj.327.7417.736.
 21. Djulbegovic B, Hozo I: Health care reform & criteria for rational decision-making. 2010. [http://www.smdm.org/newsletter/spring_2010/#a22]
 22. Slovic P, Finucane ML, Peters E, MacGregor DG: Risk as analysis and risk as feelings: Some thoughts about affect, reason, risk, and rationality. Risk Analysis. 2004, 24 (2): 311-321. 10.1111/j.0272-4332.2004.00433.x.
 23. Zeelenberg M, Pieters R: A theory of regret regulation 1.1. J Consumer Psychol. 2007, 17: 29-35. 10.1207/s15327663jcp1701_6.
 24. Vickers A, Cronin A, Elkin E, Gonen M: Extensions to decision curve analysis, a novel method for evaluating diagnostic tests, prediction models and molecular markers. BMC Medical Informatics and Decision Making. 2008, 8 (1): 53. 10.1186/1472-6947-8-53.
 25. Vickers A, Elkin E: Decision curve analysis: a novel method for evaluating prediction models. Med Dec Making. 2006, 26 (6): 565-574. 10.1177/0272989X06295361.
 26. Djulbegovic B, Hozo I: When Should Potentially False Research Findings Be Considered Acceptable?. PLoS Med. 2007, 4 (2): e26. 10.1371/journal.pmed.0040026.
 27. Djulbegovic B, Hozo I, Lyman GH: Linking evidence-based medicine therapeutic summary measures to clinical decision analysis. MedGenMed. 2000, 2 (1): E6.
 28. Djulbegovic B, Hozo I, Schwartz A, McMasters KM: Acceptable regret in medical decision making. Med Hypotheses. 1999, 53 (3): 253-259. 10.1054/mehy.1998.0020.
 29. Pauker SG, Kassirer JP: Therapeutic decision making: a cost-benefit analysis. N Engl J Med. 1975, 293 (5): 229-234. 10.1056/NEJM197507312930505.
 30. Pauker SG, Kassirer JP: The threshold approach to clinical decision making. N Engl J Med. 1980, 302 (20): 1109-1117. 10.1056/NEJM198005153022003.
 31. Hozo I, Djulbegovic B: When is diagnostic testing inappropriate or irrational? Acceptable regret approach. Med Dec Making. 2008, 28 (4): 540-553. 10.1177/0272989X08315249.
 32. Hozo I, Djulbegovic B: Will insistence on practicing medicine according to expected utility theory lead to an increase in diagnostic testing?. Med Dec Making. 2009, 29: 320-322. 10.1177/0272989X09334370.
 33. Bell DE: Regret in Decision Making under Uncertainty. Operations Research. 1982, 30: 961-981. 10.1287/opre.30.5.961.
 34. Loomes G, Sugden R: Regret theory: an alternative theory of rational choice. Economic J. 1982, 92: 805-824. 10.2307/2232669.
 35. Lichtenstein S, Slovic P: The construction of preference. 2006, New York: Cambridge University Press
 36. Stiggelbout AM, de Haes JC: Patient preference for cancer therapy: an overview of measurement approaches. J Clin Oncol. 2001, 19 (1): 220-230.
 37. Hunink M, Glasziou P: Decision-making in health and medicine: Integrating evidence and values. 2001, Cambridge: Cambridge University Press
 38. McCaffery M, Beebe A: Pain: Clinical manual for nursing practice. 1993, Baltimore: C.V. Mosby Company
 39. Steyerberg EW, Vickers AJ: Decision curve analysis: a discussion. Med Decis Making. 2008, 28 (1): 146-149. 10.1177/0272989X07312725.
 40. Evans JSBT: Hypothetical Thinking: Dual Processes in Reasoning and Judgement (Essays in Cognitive Psychology). 2007, New York: Psychology Press: Taylor and Francis Group
 41. Peirce CS: The numerical measure of the success of predictions. Science. 1884, 4: 453-454. 10.1126/science.ns-4.93.453-a.
 42. Djulbegovic B, Frohlich A, Bennett CL: Acting on imperfect evidence: How much regret are we ready to accept?. J Clin Oncol. 2005, 23 (28): 6822-6825. 10.1200/JCO.2005.06.007.
 43. Hozo I, Schell MJ, Djulbegovic B: Decision-Making When Data and Inferences Are Not Conclusive: Risk-Benefit and Acceptable Regret Approach. Seminars in Hematology. 2008, 45 (3): 150-159. 10.1053/j.seminhematol.2008.04.006.
 44. Decision curve analysis. [http://www.decisioncurveanalysis.org]
 45. Kahneman D: Maps of bounded rationality: psychology for behavioral economics. American Economic Review. 2003, 93: 1449-1475. 10.1257/000282803322655392.
 46. Krantz DH, Kunreuther HC: Goals and plans in decision making. Judgement and Decision Making. 2007, 2 (3): 137-168.
 47. Rawls J: A theory of justice. Revised edition. 1999, Cambridge: Harvard University Press
 48. Feinstein AR: The 'chagrin factor' and qualitative decision analysis. Archives of Internal Medicine. 1985, 145 (7): 1257-1259. 10.1001/archinte.145.7.1257.
 49. Le Minor M, Alperovitch A, Knill-Jones RP: Applying decision theory to medical decision-making: concept of regret and error of diagnosis. Methods of Information in Medicine. 1982, 21 (1): 3-8.
 50. Hilden J, Glasziou P: Regret graphs, diagnostic uncertainty and Youden's Index. Statistics in Medicine. 1996, 15 (10): 969-986. 10.1002/(SICI)1097-0258(19960530)15:10<969::AID-SIM211>3.0.CO;2-9.
 51. Reyna V: How people make decisions that involve risk: a dual-processes approach. Current Directions in Psychological Science. 2004, 13: 60-66. 10.1111/j.0963-7214.2004.00275.x.
 52. GRADE Working Group: Grading quality of evidence and strength of recommendations. BMJ. 2004, 328: 1490-1498. 10.1136/bmj.328.7454.1490.
Prepublication history
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/14726947/10/51/prepub
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.