- Research article
- Open Access
Cognitive biases associated with medical decisions: a systematic review
BMC Medical Informatics and Decision Making volume 16, Article number: 138 (2016)
Cognitive biases and personality traits (e.g. aversion to risk or ambiguity) may lead to diagnostic inaccuracies and medical errors resulting in mismanagement or inadequate utilization of resources. We conducted a systematic review with four objectives: 1) to identify the most common cognitive biases, 2) to evaluate the influence of cognitive biases on diagnostic accuracy or management errors, 3) to determine their impact on patient outcomes, and 4) to identify literature gaps.
We searched MEDLINE and the Cochrane Library databases for relevant articles on cognitive biases from 1980 to May 2015. We included English-language studies conducted in physicians that evaluated at least one cognitive factor using case-vignettes or real scenarios and reported an associated outcome. Data quality was assessed with the Newcastle-Ottawa Scale. Among 114 publications, 20 studies comprising 6810 physicians met the inclusion criteria. Nineteen cognitive biases were identified.
All studies found at least one cognitive bias or personality trait to affect physicians. Overconfidence, lower tolerance to risk, the anchoring effect, and information and availability biases were associated with diagnostic inaccuracies in 36.5 to 77 % of case-scenarios. Five out of seven (71.4 %) studies showed an association between cognitive biases and therapeutic or management errors. Of two (10 %) studies evaluating the impact of cognitive biases or personality traits on patient outcomes, only one showed that higher tolerance to ambiguity was associated with increased medical complications (9.7 % vs 6.5 %; p = .004). Most studies (60 %) targeted cognitive biases in diagnostic tasks, fewer focused on treatment or management (35 %) and on prognosis (10 %). Literature gaps include potentially relevant biases (e.g. aggregate bias, feedback sanction, hindsight bias) not investigated in the included studies. Moreover, only five (25 %) studies used clinical guidelines as the framework to determine diagnostic or treatment errors. Most studies (n = 12, 60 %) were classified as low quality.
Overconfidence, the anchoring effect, information and availability bias, and tolerance to risk may be associated with diagnostic inaccuracies or suboptimal management. More comprehensive studies are needed to determine the prevalence of cognitive biases and personality traits and their potential impact on physicians’ decisions, medical errors, and patient outcomes.
Medical errors occur in 1.7-6.5 % of all hospital admissions, causing up to 100,000 unnecessary deaths and perhaps one million excess injuries each year in the USA [1, 2]. In 2008, medical errors cost the USA $19.5 billion. The incremental cost associated with the average event was about US$4685, with an increased length of stay of about 4.6 days. The ultimate consequences of medical errors include avoidable hospitalizations, medication underuse and overuse, and wasted resources that may lead to patient harm [4, 5].
Kahneman and Tversky introduced a dual-system theoretical framework to explain judgments, decisions under uncertainty, and cognitive biases. System 1 refers to an automatic, intuitive, unconscious, fast, and effortless or routine mechanism for making the most common decisions (Fig. 1). Conversely, system 2 makes deliberate decisions, which are non-programmed, conscious, and usually slow and effortful. It has been suggested that most cognitive biases are likely due to the overuse of system 1, or arise when system 1 overrides system 2 [7–9]. In this framework, techniques that enhance system 2 could counteract these biases and thereby improve diagnostic accuracy and decrease management errors.
Concerns about cognitive biases are not unique to medicine. Previous studies showed the influence of cognitive biases on decisions inducing errors in other fields (e.g., the aeronautic industry, factory production) [10, 11]. For example, a study investigating failures and accidents identified that over 90 % of air traffic control system errors, 82 % of production errors in an unnamed company, and 50–70 % of all electronic equipment failures were partly or wholly due to human cognitive factors. Psychological assessments and quality assessment tools (e.g. Six Sigma) have been applied in many sectors to reduce errors and improve quality [12–15].
The health sector shares commonalities with industrial sectors, including vulnerability to human errors [11, 14]. A better understanding of the available evidence on cognitive biases influencing medical decisions is therefore crucial, particularly for physicians, whose errors can be fatal and very costly. Such an understanding could also inform learning strategies to improve clinical performance and patient outcomes, whereas identified literature gaps could inform future research.
In the last three decades, we learned about the importance of patient- and hospital-level factors associated with medical errors. For example, standardized approaches (e.g. Advanced Trauma Life Support, ABCs for cardiopulmonary resuscitation) at the health system level lead to better outcomes by decreasing medical errors [16, 17]. However, physician-level factors were largely ignored, as reflected by reports from scientific organizations [18–20]. It was not until the 1970s that cognitive biases were first recognized to affect individual physicians’ performance in daily medical decisions [6, 21–24]. Despite these efforts, little is known about the influence of cognitive biases and personality traits on physicians’ decisions that lead to diagnostic inaccuracies, medical errors, or adverse patient outcomes. While a recent review on cognitive biases and heuristics suggested that general medical personnel are prone to cognitive biases, it did not address whether these biases actually relate to the number of medical errors made by physicians.
In the present (primarily narrative) systematic review, we therefore reviewed the literature on the relation between cognitive biases affecting physicians and medical decisions. Under the concept of cognitive biases, we also included personality traits (e.g. aversion to risk or ambiguity) that may systematically affect physicians’ judgments or decisions, independent of whether or not they result in immediate medical errors. Over 32 types of cognitive biases have been described. Importantly, some of these may reflect personality traits that could result in choice tendencies that are factually wrong, whereas others reflect decisions that are potentially suboptimal, although there is no objectively “correct” decision (e.g. risk aversion, tolerance to ambiguity). Both types of factors were included here.
Our review has four objectives: 1) to identify the most common cognitive biases by subjecting physicians to real world situations or case-vignettes, 2) to evaluate the influence of cognitive biases on diagnostic accuracy and medical errors in management or treatment, 3) to determine which cognitive biases have the greatest impact on patient outcomes, and 4) to identify literature gaps in this specific area to guide future research. After addressing these objectives, we conclude by highlighting the practical implications of our findings and by outlining an action plan to advance the field.
We conducted a literature search of MEDLINE and the Cochrane Library databases from 1980 to May 2015 using a pre-specified search protocol (Additional file 1). We used a permuted combination of MeSH terms as major subjects, including: “medical errors”, “bias”, “cognition”, “decision making”, “physicians”, and “case-vignettes” or “case-scenarios”. In line with the learning and education literature, case-vignettes, clinical scenarios, or ‘real world’ encounters are regarded as the best simple strategy to evaluate cognitive biases among physicians. In addition, this approach also has the advantage of facilitating the assessment of training strategies to ameliorate the influence of cognitive biases on medical errors. We therefore restricted our sample to studies that used case-vignettes or real-world encounters.
Results of the combination of search terms are listed in the Additional file 1. We also completed further searches based on key words, and reviewed references from previously retrieved articles. All articles were then combined into a single list, and duplicates (n = 106) were excluded (Fig. 2).
Candidate articles examining cognitive biases influencing medical decisions were included for review if they met the following five inclusion criteria: First, the study was conducted on physicians. Second, at least one outcome measure was reported. Third, at least one cognitive factor or bias was investigated and defined a priori. Fourth, case-vignettes or real clinical encounters were used. Fifth, the study was written in English. We analyzed the number of articles that fulfilled our inclusion criteria for each cognitive factor or bias, methodological aspects, and the magnitude of effect (as prevalence or odds ratios) on diagnostic or therapeutic decisions. We excluded studies that were not primary sources. We analyzed the original data as reported by the authors. Studies not providing raw data were also excluded (e.g. review articles, letters to the editor).
A recent systematic review focused on medical personnel in general rather than physicians specifically. It therefore analyzed a different set of studies than those of interest here, where we consider the impact of cognitive biases on physicians’ medical decision-making and medical errors.
We extracted data according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (Fig. 2). Two reviewers (GS, librarian) assessed titles and abstracts to determine eligibility. Data were extracted using standardized collection forms. Information was collected on country of origin, study design, year of publication, number of studied cognitive biases, target population (general practitioners, specialists, residents), decision type (e.g. diagnosis, treatment, management), unadjusted vs. adjusted analysis (for measured confounders, such as age, years of training, expertise), type of outcome (see below), data quality, and main summary findings. We also included descriptive elements (attributes) of the medical information provided for each case-scenario. The main outcomes were any form of medical error [26, 30], including: underuse or overuse of medical tests, diagnostic accuracy, lack of prescription or prescription of unnecessary medications, outcomes of surgical procedures, and avoidable hospitalizations.
We used the Newcastle-Ottawa Scale (NOS) to assess the quality of studies (see Additional file 2). The NOS is a quality assessment tool for observational studies recommended by the Cochrane Collaboration. It assigns one or two points for each of eight items, categorized into three groups: the selection of the study groups; the comparability of the groups; and the ascertainment of the outcome of interest. Previous studies defined NOS scores of 7–9 points as high quality, 5–6 as moderate quality, and 0–4 as low quality. For example, studies that did not provide a description of the cohort, ascertainment of the exposure, adjustment for major confounders, or a demonstration that the outcome of interest was not present at the beginning of the study were ranked as low quality.
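The score-to-category mapping described above can be written as a small helper. This is an illustrative sketch of the thresholds reported in this review, not code used by the study:

```python
def nos_quality(score: int) -> str:
    """Map a Newcastle-Ottawa Scale total (0-9 points) to the
    quality categories used here: 7-9 high, 5-6 moderate, 0-4 low."""
    if not 0 <= score <= 9:
        raise ValueError("NOS totals range from 0 to 9")
    if score >= 7:
        return "high"
    if score >= 5:
        return "moderate"
    return "low"
```

Applied to the included studies, totals of 0–4 would account for the 12 studies ranked as low quality below.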
We identified 5963 studies for the combination of MeSH terms “decision making” and “physicians”. Of these, 114 fulfilled the selection criteria and were retrieved for detailed assessment. Among them, 38 articles used case-vignettes or real case scenarios in physicians (Fig. 2). Combinations of other search terms are shown in the Additional file 1: Table S1. Twenty studies comprising 6810 physicians (median 180 per study; range: 36–2206) met the inclusion criteria (Fig. 2) [30, 34–52].
In 55 % (n = 11) of the retained studies, results were adjusted for confounders, such as age, gender, and level of training (see Additional file 1 for further details). Importantly, only five (25 %) studies used clinical guidelines as the framework to determine diagnostic or treatment errors, illustrating the scarcity of research on evidence-based decision making (e.g. GRADE: decisions based on levels of evidence provided by randomized trials, meta-analyses, etc.).
Eight (40 %) studies included residents, six (30 %) studies included general practitioners, six (30 %) studies included internists, three (15 %) studies included emergency physicians and seven (35 %) studies included other specialists (Table 2). Ten (50 %) studies were conducted in the USA. Only six (30 %) studies classified errors based on real life measures, such as patient encounters, pathological images or endoscopic procedures, whereas the remaining 14 used narrative case-vignettes. Studies included a wide variety of medical situations, most commonly infections (upper respiratory tract, urinary tract) and cardiovascular disease (coronary disease, cerebrovascular disease) (Table 1). In summary, the included studies covered a wide range of medical conditions and participants.
All studies were designed as cohort studies evaluating cognitive biases. According to the NOS, the majority of studies (n = 12, 60 %) were low quality, seven (35 %) studies ranked moderate and only one ranked as high quality (see Additional file 2: Table S2 for details). All studies were classified as representative of the entire population (defined as how likely the exposed cohort was included in the population of physicians).
Presence of most common cognitive biases (Objective 1)
It is important to bear in mind that these studies did not systematically assess each cognitive bias or personality trait. As a result, it is not possible to provide a true estimate of the prevalence of all cognitive biases among physicians. Overall, at least one cognitive factor or bias was present in all studies. Studies evaluating more than two cognitive biases found that 50 to 100 % of physicians were affected by at least one [39, 50, 52]. Only three manuscripts evaluated more than five cognitive biases in the same study, in line with the narrow scope of most studies [39, 50, 52]. One third of studies (n = 6) were descriptive, i.e., they provided the frequency of the cognitive bias without outcome data [36, 37, 39, 44, 48, 51].
The most commonly studied personality trait was tolerance to risk or ambiguity (n = 5), whereas the framing effect (n = 5) and overconfidence (n = 5) were the most commonly studied cognitive biases. There was wide variability in the reported prevalence of cognitive biases (Fig. 3). For example, across the three most comprehensive studies, which accounted for several cognitive biases (Fig. 4), the prevalence of the availability bias ranged from 7.8 to 75.6 % and that of anchoring from 5.9 to 87.8 %, suggestive of substantial heterogeneity among studies. In summary, cognitive biases may be common and were present in all included studies. The framing effect, overconfidence, and tolerance to risk/ambiguity were the most commonly studied cognitive biases. However, methodological limitations make it difficult to provide an accurate estimate of the true prevalence.
Effect of cognitive biases on medical tasks (Objective 2)
Our second objective concerned the assessment of the influence of cognitive biases on diagnostic, medical management, or therapeutic tasks. Most studies (12/20; 60 %) targeted cognitive biases in diagnostic tasks, 7 (35 %) studies targeted treatment or management tasks, and 2 studies (10 %) focused on errors in prognosis. The main measure was diagnostic accuracy in 35 % (7/20) of studies (Fig. 5). Overall, the presence of cognitive biases was associated with diagnostic inaccuracies in 36.5 to 77 % of case-scenarios [30, 35, 40, 42, 45, 52, 53]. A study including 71 residents, fellows, and attending pathologists evaluated 2230 skin biopsies with a diagnosis confirmed by a panel of expert pathologists. Information biases, anchoring effects, and the representativeness bias were associated with diagnostic errors in 51 % of 40 case-scenarios (compared to 16.4 % of case-scenarios leading to incorrect diagnoses not related to cognitive biases; p = 0.029).
Only seven (35 %) studies provided information to evaluate the association between physicians’ cognitive biases and therapeutic or management errors [38, 41–43, 46, 47, 50]. Five out of the seven (71.4 %) studies showed an association between cognitive biases and these errors [38, 43, 46, 47, 50]. One study showed that overutilization of screening for prostate cancer among healthy individuals was associated with lower aversion to uncertainty (p < 0.01). In another study including 94 obstetricians who cared for 3488 deliveries, better coping strategies (p < .015) and tolerance to ambiguity (p < .006) were associated with optimal management (reflected by fewer instrumental vaginal deliveries) and fewer errors. In a study including 32 anesthesiology residents, several cognitive biases (anchoring, overconfidence, premature closure, confirmation bias, etc.) were associated with errors in half of the 38 simulated encounters. Two studies evaluating triage strategies for patients with bronchiolitis and coronary artery disease showed no association between personality traits (e.g. risk aversion or tolerance to uncertainty) and hospital admissions [41, 42].
In summary, our findings suggest that cognitive biases may be associated with diagnostic inaccuracies in one to two thirds of case-scenarios. Evidence from five out of seven studies suggests a potential influence of cognitive biases on management or therapeutic errors [38, 43, 46, 47, 50]. Physicians who exhibited information bias, anchoring effects, and the representativeness bias were more likely to make diagnostic errors [38, 43, 46, 50].
Further studies are needed to identify the most common cognitive biases and the most effective strategies to overcome their potential influence on medical tasks and errors.
Effect of physician’s cognitive biases on patient outcomes (Objective 3)
The third objective of the present study was to determine the impact of cognitive biases on patient outcomes (e.g. avoidable hospitalizations, complications related to a procedure or medication, exposure to unnecessary invasive tests, etc.). Only two (10 %) studies provided information to answer this question, both evaluating physicians’ tolerance to uncertainty [41, 43]. In a study evaluating obstetrical practices, higher tolerance to ambiguity was associated with an increased risk of postpartum hemorrhage (9.7 % vs 6.5 %; p = .004). The negative effects persisted in the multivariable analysis (for postpartum hemorrhage: OR 1.51, 95 % CI 1.10–2.20; for chorioamnionitis: OR 1.37, 95 % CI 1.10–1.70). This phenomenon could be explained by overconfidence and underestimation of risk factors associated with maternal infections or puerperal bleeding. On the other hand, a study including 560 infants with bronchiolitis presenting to the emergency department, cared for by 46 pediatricians, showed similar admission rates among physicians with low and high risk aversion or discomfort with diagnostic uncertainty (measured using a standardized tool).
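For readers less familiar with how such results are reported, an odds ratio and its Wald 95 % confidence interval can be computed from a 2 × 2 table as sketched below. The counts in the usage example are hypothetical, chosen for illustration only, and are not data from the cited studies:

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio with a Wald 95 % confidence interval from a 2x2 table:
    a/b = events/non-events in the exposed group,
    c/d = events/non-events in the unexposed group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: complications among patients of high- vs
# low-tolerance physicians (for illustration only)
or_, lo, hi = odds_ratio_ci(a=34, b=316, c=23, d=332)
```

An interval that excludes 1.0, as in the obstetrical study above, indicates a statistically significant association at the 5 % level.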
In summary, there is too little evidence to draw definitive conclusions on the influence of physicians’ personality traits or cognitive biases on patient outcomes.
Literature gaps and recommendations (Objective 4)
We systematically reviewed gaps in the literature. First, most of the studies (60 %) provided a qualitative definition of cognitive biases based on the interpretation of comments made by participants (e.g. illustrative quotes), lacking a unified and objective assessment tool [39, 50]. Second, the unit of analysis varied from study to study. For example, some authors report results based on the number of physicians involved in the study, whereas others report results based on the number of case-scenarios. Third, limited information is currently available on the impact of cognitive biases on evidence-based care, as only 25 % of the studies were based on or supported by clinical guidelines (Table 2). Fourth, only one study evaluated the effect of an intervention (e.g. reflective reasoning) to ameliorate cognitive biases in physicians. Fifth, most studies were classified as low quality according to NOS criteria; however, this scale is regarded as having only modest inter-rater reliability, and consensus among researchers is needed on the best tools to assess the quality of manuscripts. Sixth, only two studies evaluated the influence of physicians’ biases on patient outcomes. Finally, considering that the great majority of studies (85 %) targeted only one or two biases (Table 1), the true prevalence of cognitive biases influencing medical decisions remains unknown.
As mentioned, medical errors are common in medical practice. Physicians’ biases and personality traits may explain, at least in part, some medical errors. Given the wide practice variability across medical disciplines, decisions on screening tests, surgical procedures, preventative medications, or other interventions (e.g. thrombolysis for acute stroke, antibiotics for an underlying infection, etc.) may not require the same cognitive abilities; it is therefore likely that findings from one discipline cannot be transferred automatically to a different discipline. By extension, physicians’ personality traits (e.g. aversion to ambiguity, tolerance to uncertainty) or cognitive biases (e.g. overconfidence) may not equally influence patient outcomes or medical errors in all disciplines. The time-urgency of the medical decision may be a relevant characteristic. Thus, a discipline-based research approach may be needed. Information is scarce in several disciplines and areas, including anesthesiology (decisions on procedures and anesthetic agents), emergency care, obstetrics and gynecology (e.g. decisions on procedures and primary care in women’s health), endoscopic procedures (e.g. gastrointestinal, uropelvic), and neurology (e.g. decisions in multiple sclerosis and stroke care).
Early recognition of physicians’ cognitive biases is crucial to optimize medical decisions, prevent medical errors, provide more realistic patient expectations, and help contain rising health care costs [3, 8, 54]. In the present systematic review, we had four objectives. First, we identified the most commonly reported cognitive biases (i.e., anchoring and framing effects, information biases) and personality traits (e.g. tolerance to uncertainty, aversion to ambiguity) that may potentially affect physicians’ decisions. All included studies found at least one cognitive factor/bias, indicating that a large number of physicians may be affected [39, 50, 52]. Second, we identified the effect of physicians’ cognitive biases or personality traits on medical tasks and on medical errors. Studies evaluating physicians’ overconfidence, the anchoring effect, and information or availability bias suggest an association with diagnostic inaccuracies [30, 35, 40, 42, 45, 52, 53]. Moreover, anchoring, information bias, overconfidence, premature closure, representativeness, and confirmation bias may be associated with therapeutic or management errors [38, 43, 46, 47, 50]. Misinterpretation of recommendations and lower comfort with uncertainty were associated with overutilization of diagnostic tests. Better coping strategies and tolerance to ambiguity were related to optimal management.
For our third objective (identifying the relation between physicians’ cognitive biases and patient outcomes) we had only very sparse data: only 10 % of studies provided data in this area [41, 43]. Only one study showed higher complications (OR 1.51, 95 % CI 1.10–2.20) among patients cared for by physicians with higher tolerance to ambiguity. The fourth and final objective was to identify gaps in the literature. We found that only a few (<50 %) of an established set of cognitive biases were assessed, including overconfidence and framing effects. Other listed and relevant biases were not studied (e.g. aggregation bias, feedback sanction, hindsight bias). For example, aggregation bias (the assumption that aggregated data from clinical guidelines do not apply to individual patients) and hindsight bias (the tendency to view events as more predictable than they really are) both compromise a realistic clinical appraisal, which may also lead to medical errors [18, 26]. More importantly, only 35 % of studies provided information on the association between cognitive biases or personality traits and medical errors [38, 41–43, 46, 47, 50], with scarce information on their impact on patient outcomes, preventing us from drawing definite conclusions [41, 43]. Furthermore, the quality of the included studies was classified as low to moderate according to NOS criteria, as most studies provided limited descriptions of the exposure and research cohort, and none contributed follow-up data (e.g. sustainability and reliability of the effects or long-term outcomes) (Additional file 2).
When comparing the previous systematic review on patients and medical personnel with ours, some commonalities are apparent. Both reviews agree on the relevance of the topic and identify that a systematic analysis of the impact of cognitive biases on medical decisions is lacking despite substantial work completed in the last two decades. Pursuing a different objective, the authors summarized the number of studies that investigated each cognitive bias in either patients or medical personnel. Similarly, cognitive biases seem to be common among physicians, as identified in 80 % (n = 51) of studies included in Blumenthal-Barby and Krieger’s review and in all selected studies (n = 20) evaluating at least one outcome in the present review.
However, neither review was able to provide an accurate estimate of the true prevalence of cognitive biases or personality traits affecting medical decisions in physicians.
On the other hand, our study adds relevant information regarding the influence of cognitive biases, specifically in physicians, on diagnostic inaccuracies, suboptimal management and therapeutic errors, and patient outcomes. Our first objective allowed the identification of additional biases (e.g. framing effect, decoy effect, default bias) and physician personality traits (e.g. low tolerance to uncertainty, aversion to ambiguity) by including 14 further studies. We also completed a systematic quality assessment of each study using a standardized tool and identified gaps related to the influence of cognitive biases on medical errors.
What can be done?
The identification and recognition of literature gaps constitute the first step toward finding potential solutions. Increasing awareness among physicians and medical students is an important milestone. A comprehensive narrative review comprising 41 studies on cognitive interventions to reduce misdiagnosis found three main effective strategies: increasing knowledge and expertise, improving clinical reasoning, and getting help from colleagues, experts, and tools. First, reflective reasoning counteracts the impact of cognitive biases by improving diagnostic accuracy in second-year (odds ratio [OR] 2.03; 95 % CI 1.49–2.57) and first-year residents (OR 2.31; 95 % CI 1.89–2.73). Second, the implementation of tools (e.g. cognitive checklists, calibration) may overcome overconfidence and the anchoring and framing effects (Fig. 5) [8, 9, 56]. Third, heuristic approaches (shortcuts that ignore less relevant information to overcome the complexity of some clinical situations) can improve decision making. As shown by Marewski and Gigerenzer, applying three rules (search predictors to determine their individual importance, stop searching once the relevant information has been obtained, and use a criterion that specifies how the decision is made) may facilitate prompt decisions and may help physicians avoid errors in some clinical situations [21, 57, 58].
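The three rules above (ordered cue search, early stopping, and an explicit decision criterion) describe what the heuristics literature calls a fast-and-frugal tree. The sketch below illustrates the idea with hypothetical triage cues; the cue names and decisions are invented for illustration, not taken from the reviewed studies:

```python
def fast_and_frugal(cues, default, case):
    """Evaluate ordered (predicate, decision) cues against a case.
    Search cues in order of importance, stop at the first cue that
    fires, and return its decision; otherwise fall back to the default."""
    for predicate, decision in cues:
        if predicate(case):
            return decision
    return default

# Hypothetical chest-pain triage cues, most important first
cues = [
    (lambda c: c["st_segment_change"], "admit to coronary care"),
    (lambda c: not c["chest_pain_is_chief_complaint"], "regular ward"),
    (lambda c: c["any_other_risk_factor"], "admit to coronary care"),
]

decision = fast_and_frugal(
    cues,
    "regular ward",
    {"st_segment_change": False,
     "chest_pain_is_chief_complaint": True,
     "any_other_risk_factor": True},
)
```

Because each cue can terminate the search, such trees deliberately ignore most of the available information, which is what makes them fast under time pressure while remaining transparent to audit.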
The inclusion of training in cognitive biases in graduate and postgraduate programs might foster medical education and thereby improve health care delivery. A commitment from academic institutions, scientific organizations, universities, the public, and policy-makers would be needed to reduce defensive medical practice [60, 61]. An initial step towards this goal may be the ‘Choosing Wisely’ strategy [62, 63].
What are the practical implications of our findings?
As shown, cognitive biases and personality traits may affect our clinical reasoning processes, which may lead to errors in the diagnosis, management, or treatment of medical conditions [6, 26]. Errors perceived by faculty to be relevant were indeed observed in 50–80 % of trainees in real practice. Misdiagnosis, mismanagement, and mistreatment are frequently associated with poorer outcomes and are among the most common reasons for patient dissatisfaction and medical complaints [54, 64, 65].
Our study has several limitations that deserve comment. First, although we aimed to be as systematic as possible in reviewing the literature, we cannot rule out involuntary omissions. It is also possible that our results may be somewhat limited by the strictness of our inclusion criteria. Second, we were not able to complete a formal meta-analysis due to the diversity of definitions and data reported, and the small number of studies evaluating specific cognitive biases. In particular, a limited number of studies evaluated the same constructs. Moreover, across studies we often found missing (in 30 % of studies) or heterogeneous outcome measures, mixed denominators (some studies report their findings based on the number of participants, while others based on the number of case-scenarios) [41, 43, 52], and differing scopes (e.g. some studies are descriptive [36, 37, 39, 44, 48, 51], whereas others [7, 30, 35, 42, 43, 47, 50, 52] target diagnostic or therapeutic errors). Third, most studies used hypothetical case-vignettes, which may not truly reflect medical decisions in real life. Fourth, the assessment of the number of medical elements included in each case-scenario may not be consistent (some were reported by authors and others estimated based on the description of case-scenarios) [35, 40, 51]. Fifth, the use of the NOS to assess the quality of studies has been criticized for having modest inter-rater reliability [66, 67].
Despite the aforementioned limitations, our study reflects the relevance and potential burden of the problem, and how little we know about the implications of cognitive biases and personality traits for physicians’ decisions and their impact on patient-oriented outcomes. Our findings may also increase physicians’ awareness of their own personality traits or cognitive biases that may lead to medical errors when counseling or advising patients and their family members. From a health policy perspective, this information provides additional insights on medically relevant cognitive biases and personality traits that contribute to rising health care costs [3, 68].
In the present systematic review, we highlighted the relevance of recognizing physicians’ personality traits and cognitive biases. Although cognitive biases may affect a wide range of physicians (and influence diagnostic accuracy, management, and therapeutic decisions), their true prevalence remains unknown.
Thus, substantial gaps limit our understanding of the impact of cognitive biases on medical decisions. As a result, new research approaches are needed. We propose the design of more comprehensive studies to evaluate the effect of physicians’ personality traits and biases on medical errors and patient outcomes in real medical encounters and interventions, or using guideline-based case-vignettes. This can be accomplished by identifying physician characteristics, combining validated surveys and experiments commonly used in behavioral economics to elicit several critical personality traits (e.g. tolerance to uncertainty, aversion to risk and ambiguity) and cognitive biases (e.g. overconfidence, illusion of control). Prospective studies evaluating and comparing different training strategies for physicians are needed to better understand and ameliorate the potential impact of cognitive biases on medical decisions or errors. In addition, effective educational strategies are also needed to overcome the effect of cognitive biases on medical decisions and interventions. Together, this information would provide new insights that may affect patient outcomes (e.g. avoidable hospitalizations, complications related to a procedure or medication, requests for unnecessary tests, etc.) and help attenuate medical errors [3, 68, 69].
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
Classen DC, Pestotnik SL, Evans RS, Burke JP. Computerized surveillance of adverse drug events in hospital patients. JAMA. 1991;266(20):2847–51.
Bates DW, Cullen DJ, Laird N, Petersen LA, Small SD, Servi D, Laffel G, Sweitzer BJ, Shea BF, Hallisey R, et al. Incidence of adverse drug events and potential adverse drug events. Implications for prevention. ADE Prevention Study Group. JAMA. 1995;274(1):29–34.
Andel C, Davidow SL, Hollander M, Moreno DA. The economics of health care quality and medical errors. J Health Care Finance. 2012;39(1):39–50.
OECD. Health at a Glance 2013: OECD Indicators. OECD Publishing; 2013. http://dx.doi.org/10.1787/health_glance-2013-en.
Ioannidis JP, Lau J. Evidence on interventions to reduce medical errors: an overview and recommendations for future research. J Gen Intern Med. 2001;16(5):325–34.
Tversky A, Kahneman D. Judgment under Uncertainty: Heuristics and Biases. Science. 1974;185(4157):1124–31.
Mamede S, van Gog T, van den Berge K, van Saase JL, Schmidt HG. Why do doctors make mistakes? A study of the role of salient distracting clinical features. Acad Med. 2014;89(1):114–20.
van den Berge K, Mamede S. Cognitive diagnostic error in internal medicine. Eur J Intern Med. 2013;24(6):525–9.
Ely JW, Graber ML, Croskerry P. Checklists to reduce diagnostic errors. Acad Med. 2011;86(3):307–13.
Dhillon BS. Human errors: a review. Microelectron Reliab. 1989;29(3):299–304.
Stripe SC, Best LG, Cole-Harding S, Fifield B, Talebdoost F. Aviation model cognitive risk factors applied to medical malpractice cases. J Am Board Fam Med. 2006;19(6):627–32.
Chassin MR. Is health care ready for Six Sigma quality? Milbank Q. 1998;76(4):565–91, 510.
Corn JB. Six sigma in health care. Radiol Technol. 2009;81(1):92–5.
Zeltser MV, Nash DB. Approaching the evidence basis for aviation-derived teamwork training in medicine. Am J Med Qual. 2010;25(1):13–23.
Ballard S-B. The U.S. commercial air tour industry: a review of aviation safety concerns. Aviat Space Environ Med. 2014;85(2):160–6.
Kern KB, Hilwig RW, Berg RA, Sanders AB, Ewy GA. Importance of continuous chest compressions during cardiopulmonary resuscitation: improved outcome during a simulated single lay-rescuer scenario. Circulation. 2002;105(5):645–9.
Collicott PE, Hughes I. Training in advanced trauma life support. JAMA. 1980;243(11):1156–9.
Michaels AD, Spinler SA, Leeper B, Ohman EM, Alexander KP, Newby LK, Ay H, Gibler WB, American Heart Association Acute Cardiac Care Committee of the Council on Clinical Cardiology, Outcomes Research, et al. Medication errors in acute cardiovascular and stroke patients: a scientific statement from the American Heart Association. Circulation. 2010;121(14):1664–82.
Khoo EM, Lee WK, Sararaks S, Abdul Samad A, Liew SM, Cheong AT, Ibrahim MY, Su SH, Mohd Hanafiah AN, Maskon K, et al. Medical errors in primary care clinics--a cross sectional study. BMC Fam Pract. 2012;13:127.
Jenkins RH, Vaida AJ. Simple strategies to avoid medication errors. Fam Pract Manag. 2007;14(2):41–7.
Marewski JN, Gigerenzer G. Heuristic decision making in medicine. Dialogues Clin Neurosci. 2012;14(1):77–89.
Wegwarth O, Schwartz LM, Woloshin S, Gaissmaier W, Gigerenzer G. Do physicians understand cancer screening statistics? A national survey of primary care physicians in the United States. Ann Intern Med. 2012;156(5):340–9.
Elstein AS. Analytic methods and medical education. Problems and prospects. Med Decis Making. 1983;3(3):279–84.
Elstein AS. Clinical judgment: psychological research and medical practice. Science. 1976;194(4266):696–700.
Blumenthal-Barby J, Krieger H. Cognitive biases and heuristics in medical decision making: a critical review using a systematic search strategy. Med Decis Making. 2015;35(4):539–57.
Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78(8):775–80.
Peabody JW, Luck J, Glassman P, Jain S, Hansen J, Spell M, Lee M. Measuring the quality of physician practice by using clinical vignettes: a prospective validation study. Ann Intern Med. 2004;141(10):771–80.
Eva KW. What every teacher needs to know about clinical reasoning. Med Educ. 2005;39(1):98–106.
Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. Ann Intern Med. 2009;151(4):W65–94.
Meyer AN, Payne VL, Meeks DW, Rao R, Singh H. Physicians’ diagnostic accuracy, confidence, and resource requests: a vignette study. JAMA Intern Med. 2013;173(21):1952–8.
The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses. Available at: http://www.ohri.ca/programs/clinical_epidemiology/oxford.asp.
Cochrane handbook for systematic reviews of interventions, version 5.1.0 [updated March 2011]. Available at: http://www.cochrane-handbook.org.
Shi Q, MacDermid J, Santaguida L, Kyu HH. Predictors of surgical outcomes following anterior transposition of ulnar nerve for cubital tunnel syndrome: A systematic review. J Hand Surg Am. 2011;36(12):1996–2001.e1–6. doi:10.1016/j.jhsa.2011.09.024.
Mamede S, Splinter TA, van Gog T, Rikers RM, Schmidt HG. Exploring the role of salient distracting clinical features in the emergence of diagnostic errors and the mechanisms through which reflection counteracts mistakes. BMJ Qual Saf. 2012;21(4):295–300.
Mamede S, van Gog T, van den Berge K, Rikers RM, van Saase JL, van Guldener C, Schmidt HG. Effect of availability bias and reflective reasoning on diagnostic accuracy among internal medicine residents. JAMA. 2010;304(11):1198–203.
Msaouel P, Kappos T, Tasoulis A, Apostolopoulos AP, Lekkas I, Tripodaki ES, Keramaris NC. Assessment of cognitive biases and biostatistics knowledge of medical residents: a multicenter, cross-sectional questionnaire study. Med Educ Online. 2014;19:23646.
Ross S, Moffat K, McConnachie A, Gordon J, Wilson P. Sex and attitude: a randomized vignette study of the management of depression by general practitioners. Br J Gen Pract. 1999;49(438):17–21.
Perneger TV, Agoritsas T. Doctors and patients’ susceptibility to framing bias: a randomized trial. J Gen Intern Med. 2011;26(12):1411–7.
Ogdie AR, Reilly JB, Pang WG, Keddem S, Barg FK, Von Feldt JM, Myers JS. Seen through their eyes: residents’ reflections on the cognitive and contextual components of diagnostic errors in medicine. Acad Med. 2012;87(10):1361–7.
Friedman C, Gatti G, Elstein A, Franz T, Murphy G, Wolf F. Are clinicians correct when they believe they are correct? Implications for medical decision support. Stud Health Technol Inform. 2001;84(Pt 1):454–8.
Baldwin RL, Green JW, Shaw JL, Simpson DD, Bird TM, Cleves MA, Robbins JM. Physician risk attitudes and hospitalization of infants with bronchiolitis. Acad Emerg Med. 2005;12(2):142–6.
Reyna VF, Lloyd FJ. Physician decision making and cardiac risk: effects of knowledge, risk perception, risk tolerance, and fuzzy processing. J Exp Psychol Appl. 2006;12(3):179–95.
Yee LM, Liu LY, Grobman WA. The relationship between obstetricians’ cognitive and affective traits and their patients’ delivery outcomes. Am J Obstet Gynecol. 2014;211(6):692 e691–696.
Graber MA, Bergus G, Dawson JD, Wood GB, Levy BT, Levin I. Effect of a patient’s psychiatric history on physicians’ estimation of probability of disease. J Gen Intern Med. 2000;15(3):204–6.
Bytzer P. Information bias in endoscopic assessment. Am J Gastroenterol. 2007;102(8):1585–7.
Sorum PC, Shim J, Chasseigne G, Bonnin-Scaon S, Cogneau J, Mullet E. Why do primary care physicians in the United States and France order prostate-specific antigen tests for asymptomatic patients? Med Decis Making. 2003;23(4):301–13.
Redelmeier DA, Shafir E. Medical decision making in situations that offer multiple alternatives. JAMA. 1995;273(4):302–5.
Dibonaventura M, Chapman GB. Do decision biases predict bad decisions? Omission bias, naturalness bias, and influenza vaccination. Med Decis Making. 2008;28(4):532–9.
Saposnik G, Cote R, Mamdani M, Raptis S, Thorpe KE, Fang J, Redelmeier DA, Goldstein LB. JURaSSiC: Accuracy of clinician vs risk score prediction of ischemic stroke outcomes. Neurology. 2013;81(5):448–55.
Stiegler MP, Ruskin KJ. Decision-making and safety in anesthesiology. Curr Opin Anaesthesiol. 2012;25(6):724–9.
Gupta M, Schriger DL, Tabas JA. The presence of outcome bias in emergency physician retrospective judgments of the quality of care. Ann Emerg Med. 2011;57(4):323–8. e329.
Crowley RS, Legowski E, Medvedeva O, Reitmeyer K, Tseytlin E, Castine M, Jukic D, Mello-Thoms C. Automated detection of heuristics and biases among pathologists in a computer-based system. Adv Health Sci Educ Theory Pract. 2013;18(3):343–63.
Mamede S, Schmidt HG, Rikers RM, Custers EJ, Splinter TA, van Saase JL. Conscious thought beats deliberation without attention in diagnostic decision-making: at least when you are an expert. Psychol Res. 2010;74(6):586–92.
Zwaan L, Thijs A, Wagner C, van der Wal G, Timmermans DR. Relating faults in diagnostic reasoning with diagnostic errors and patient harm. Acad Med. 2012;87(2):149–56.
Graber ML, Kissam S, Payne VL, Meyer AN, Sorensen A, Lenfestey N, Tant E, Henriksen K, Labresh K, Singh H. Cognitive interventions to reduce diagnostic error: a narrative review. BMJ Qual Saf. 2012;21(7):535–57.
Balla JI, Heneghan C, Glasziou P, Thompson M, Balla ME. A model for reflection for good clinical practice. J Eval Clin Pract. 2009;15(6):964–9.
Raab M, Gigerenzer G. The power of simplicity: a fast-and-frugal heuristics approach to performance science. Front Psychol. 2015;6:1672.
Elwyn G, Thompson R, John R, Grande SW. Developing IntegRATE: a fast and frugal patient-reported measure of integration in health care delivery. Int J Integr Care. 2015;15:e008.
Hyman DJ, Pavlik VN, Greisinger AJ, Chan W, Bayona J, Mansyur C, Simms V, Pool J. Effect of a physician uncertainty reduction intervention on blood pressure in uncontrolled hypertensives--a cluster randomized trial. J Gen Intern Med. 2012;27(4):413–9.
Kachalia A, Mello MM. Defensive medicine—legally necessary but ethically wrong?: Inpatient stress testing for chest pain in low-risk patients. JAMA Intern Med. 2013;173(12):1056–7.
Smith TR, Habib A, Rosenow JM, Nahed BV, Babu MA, Cybulski G, Fessler R, Batjer HH, Heary RF. Defensive medicine in neurosurgery: does state-level liability risk matter? Neurosurgery. 2015;76(2):105–13. discussion 113–4.
Bhatia RS, Levinson W, Shortt S, Pendrith C, Fric-Shamji E, Kallewaard M, Peul W, Veillard J, Elshaug A, Forde I, et al. Measuring the effect of Choosing Wisely: an integrated framework to assess campaign impact on low-value care. BMJ Qual Saf. 2015;24(8):523–31. doi:10.1136/bmjqs-2015-004070.
Levinson W, Huynh T. Engaging physicians and patients in conversations about unnecessary tests and procedures: Choosing Wisely Canada. CMAJ. 2014;186(5):325–6.
Kallberg AS, Goransson KE, Ostergren J, Florin J, Ehrenberg A. Medical errors and complaints in emergency department care in Sweden as reported by care providers, healthcare staff, and patients - a national review. Eur J Emerg Med. 2013;20(1):33–8.
Studdert DM, Mello MM, Gawande AA, Gandhi TK, Kachalia A, Yoon C, Puopolo AL, Brennan TA. Claims, errors, and compensation payments in medical malpractice litigation. N Engl J Med. 2006;354(19):2024–33.
Hartling L, Milne A, Hamm MP, Vandermeer B, Ansari M, Tsertsvadze A, Dryden DM. Testing the Newcastle Ottawa Scale showed low reliability between individual reviewers. J Clin Epidemiol. 2013;66(9):982–93.
Lo CK, Mertz D, Loeb M. Newcastle-Ottawa Scale: comparing reviewers’ to authors’ assessments. BMC Med Res Methodol. 2014;14:45.
Stangierski A, Warmuz-Stangierska I, Ruchala M, Zdanowska J, Glowacka MD, Sowinski J, Ruchala P. Medical errors - not only patients’ problem. Arch Med Sci. 2012;8(3):569–74.
Graber ML. The incidence of diagnostic error in medicine. BMJ Qual Saf. 2013;22 Suppl 2:ii21–7.
The authors thank Maria Terzaghi and Lauren Cipriano for their input and comments regarding the manuscript and methods of this systematic review. We are also grateful to David Lighfoot (Librarian at the Li Ka Shing Institute, University of Toronto) for his help with the literature search and data quality assessment.
GS participated in the conception, design, literature search, analysis, interpretation of the results, drafting the manuscript and made critical revisions of the manuscript. DR participated in the conception, design, interpretation of the results, drafting the manuscript and made critical revisions of the manuscript. CCR participated in the conception, design, interpretation of the results, drafting the manuscript and made critical revisions of the manuscript. PNT participated in the conception, design, analysis, interpretation of the results, drafting the manuscript and made critical revisions of the manuscript. All authors read and approved the final manuscript.
Dr. Saposnik is supported by the Distinguished Clinician-Scientist Award from the Heart and Stroke Foundation of Canada (HSFC) following a peer-review and open competition. Dr. Redelmeier holds a Canada Research Chair in Decision making. Drs. Ruff and Tobler are supported by the Swiss National Science Foundation (CRSII3_141965).
The authors declare that they have no competing interests.
Saposnik, G., Redelmeier, D., Ruff, C.C. et al. Cognitive biases associated with medical decisions: a systematic review. BMC Med Inform Decis Mak 16, 138 (2016) doi:10.1186/s12911-016-0377-1
- Decision making
- Cognitive bias
- Personality traits
- Systematic review