
Measuring agreement between decision support reminders: the cloud vs. the local expert

Abstract

Background

A cloud-based clinical decision support system (CDSS) was implemented to remotely provide evidence-based guideline reminders in support of preventive health care. Following implementation, we measured the agreement between preventive care reminders generated by an existing, local CDSS and the new, cloud-based CDSS operating on the same patient visit data.

Methods

Electronic health record data for the same set of patients seen in primary care were sent to both the cloud-based web service and the local CDSS. The clinical reminders returned by both services were captured for analysis. Cohen's Kappa coefficient was calculated to compare the two sets of reminders. Kappa statistics were further adjusted for prevalence and bias to account for the potential effects of bias in the CDS logic and of prevalence in the relatively small sample of patients.

Results

The cloud-based CDSS generated 965 clinical reminders for 405 patient visits over 3 months. The local CDSS returned 889 reminders for the same patient visit data. When adjusted for prevalence and bias, Kappa (PABAK) varied by reminder from 0.33 (95% CI 0.24 – 0.42) to 0.99 (95% CI 0.97 – 1.00) and demonstrated almost perfect agreement for 7 of the 11 reminders.

Conclusions

Preventive care reminders delivered by two disparate CDS systems show substantial agreement. Subtle differences in rule logic and terminology mapping appear to account for much of the discordance. Cloud-based CDSS therefore show promise, opening the door for future development and implementation in support of health care providers with limited resources for knowledge management of complex logic and rules.


Background

Attention to preventive care can protect patients from developing serious health conditions and supports the triple aim of reducing health care costs while improving the quality and efficiency of care delivery [1]. Numerous public and private organizations, including most professional medical societies, publish guidelines that describe recommendations for proper preventive care. Unfortunately, patients receive recommended preventive care just 54.9% of the time [2]. Too often, busy clinicians must focus on acute medical problems and lack the time required to address a patient's preventive care.

Evidence demonstrates that computerized provider order entry (CPOE) with clinical decision support (CDS) can improve the delivery of preventive care [3–8]. Given evidence of the potential value CDS holds for achievement of the triple aim, U.S. health care policymakers advocate wider adoption and use of CPOE with CDS [9–11]. Stage 2 Meaningful Use criteria from the Centers for Medicare and Medicaid Services, the federal agency tasked with incentivizing the adoption of electronic health record (EHR) systems, place greater emphasis on CDS, escalating the number of required decision support rules linked to specific quality indicators [12].

Policies like Meaningful Use are likely necessary as many hospitals and clinics failed to adopt CDS prior to their passage. Currently just 15% of the 5795 U.S. hospitals have a “basic” electronic health record system, and only 4.4% of hospitals report implementing “core” functionalities of the meaningful use criteria which include CDS [13]. Furthermore, adoption of CDS is typically found in larger, urban academic medical centers which can mandate use by providers [14]. Although 86% of all U.S. hospitals are community hospitals, just 6.9% of community hospitals have reported having a basic clinical information system [15]. Rates are equally poor for other types of hospitals, with just 6% of long-term acute care hospitals, 4% of rehabilitation hospitals, and 2% of psychiatric hospitals reporting the use of a basic electronic health record system [16].

Implementation of CDS to comply with federal regulations, however, is not sufficient to ensure its use. Several studies highlight that certain forms of CDS are turned off or ignored following implementation [17–19]. A fundamental barrier for many providers is the creation and curation of preventive care rules, alerts, and reminders; a process referred to as knowledge management (KM) [20–22]. KM is challenging as it requires significant investment in human and infrastructure resources to ensure that the knowledge base supporting CDS is accurate and up-to-date [23–25].

Local experts within an institution are often charged with KM tasks such as designing CDS-based preventive service reminders. Often these experts are asked to translate preventive service guidelines from national information sources to the local CDS system. While these local experts are familiar with the terminologies and policies at their institution and are therefore often successful, their efforts are laborious and require continuous review, updates, and management. A recent survey found that, while KM tasks necessary to "customize" CDS are routinely performed in both large and small-to-medium-sized community hospitals, the level of effort required to customize CDS prior to implementation was greater than expected [26]. The task of KM is therefore daunting, and it remains unclear how to scale the financial, technical, and human capital necessary to support CDS across all U.S. hospitals. Therefore, new methods and models for KM and dissemination of knowledge for CDS are needed to support national efforts towards achieving meaningful use and the triple aim.

Given the need for scalable KM across a growing number of hospitals implementing CDS, we sought to compare preventive reminders created using traditional, local expert KM processes with reminders developed collaboratively for a cloud-based CDS system operating across a consortium of independently managed hospitals. In 2008, the Regenstrief Institute joined the Clinical Decision Support Consortium (CDSC) [27], which seeks "to assess, define, demonstrate, and evaluate best practices for knowledge management and clinical decision support in healthcare information technology at scale – across multiple ambulatory care settings and EHR technology platforms" [28]. The CDSC, funded by the U.S. Agency for Healthcare Research and Quality (AHRQ), is based at Partners Healthcare, but involves a growing array of CDS stakeholders.

To compare locally developed, expert-driven CDS methods with those of the CDSC, we executed parallel sets of preventive service guidelines: one set implemented locally by Regenstrief experts and, independently, another set implemented in the cloud-based CDSC web service by knowledge engineers at another institution. Although the two implementations differed, they covered the same preventive guidelines. The study is unique because it directly compares the output of preventive service guidelines enacted at separate institutions for the same set of patient data. It is further unique in that it examines a novel modality of CDS in which KM and execution of rules are performed "in the cloud" to reduce the burden on hospitals in their efforts to implement and adopt CDS.

Methods

This research was conducted principally at Eskenazi Health (formerly Wishard Health Services), a large, urban safety net provider in Marion County, Indiana. Eskenazi Health includes a 315-bed hospital and 11 community health centers. Almost 1.4 million outpatient visits annually take place at these facilities. Eskenazi Health is closely integrated with the Indiana University School of Medicine and includes a large presence from medical students, resident physicians, and other health professionals in training.

Regenstrief Institute, Inc. is a research institution closely affiliated with Eskenazi Health, and provides Eskenazi clinicians with order entry and decision support services. Since the 1970s, Regenstrief has provided KM for the various alerts, reminders, and displays that support patient care at Eskenazi Health. Non-urgent preventive care reminders (e.g., recommendations for mammograms or cholesterol testing) are written in the CARE language and delivered to the physician at the beginning of each patient visit [29].

In July 2011, we began a 6-month feasibility study to incorporate CDSC preventive care reminders into the CareWeb information system used in Eskenazi Health community health centers. Patient enrollment was limited to those patients who arrived for a scheduled outpatient visit with one of three part-time physicians practicing at two health centers. We limited the current investigation to the final three months (October 1 to December 31, 2011) of this feasibility study, during which the receipt, integration, and logging of preventive care reminders from the CDSC Enterprise Clinical Rules Service (ECRS) were fully operational. The study obtained ethics approval and a waiver of written informed consent from the Indiana University Institutional Review Board (Study No. 1111007478).

Every time a patient arrived at a clinic for a visit with one of the study physicians, an electronic arrival message was generated by the front desk registration system. This arrival message triggered the automated assembly of a standards-based continuity of care document (CCD) through a query of the patient's electronic health records. A limited data set was encoded into the CCD, as dates of service were required for successful execution of the CDS logic; direct patient identifiers, including name, medical record number, and date of birth, were de-identified. The CCD was sent to the CDSC cloud-based service at Partners [30, 31]. The term 'cloud-based' refers to a specific set of characteristics and services available on demand across a network from a pool of computing resources [32]. Prior articles from the CDSC describe its cloud-based architecture and implementation [33, 34].
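
A minimal sketch of the limited-data-set step described above, assuming hypothetical field names and a simplified record structure (the actual payload was an HL7 CCD XML document assembled by the EHR), illustrates the principle of retaining the dates of service needed by the CDS logic while removing direct identifiers:

```python
# Illustrative sketch only: keep dates of service required by the CDS logic,
# remove direct identifiers (name, medical record number, date of birth).
# Field names are hypothetical; the real payload was an HL7 CCD document.

DIRECT_IDENTIFIERS = {"name", "medical_record_number", "date_of_birth"}

def build_limited_data_set(patient_record: dict) -> dict:
    """Return a copy of the visit record with direct identifiers removed."""
    return {k: v for k, v in patient_record.items() if k not in DIRECT_IDENTIFIERS}

visit_record = {
    "name": "EXAMPLE PATIENT",
    "medical_record_number": "0000000",
    "date_of_birth": "1950-01-01",
    "problems": [{"code": "44054006", "system": "SNOMED CT", "onset": "2009-03-02"}],
    "results": [{"code": "4548-4", "system": "LOINC", "value": 7.1, "date": "2011-09-15"}],
}

print(build_limited_data_set(visit_record))  # identifiers stripped, dated clinical data retained
```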

After processing by the CDSC service, preventive care reminders (if applicable) were included in the response message returned from Partners to Regenstrief where they were written to a table in Regenstrief’s enterprise CDS infrastructure. When a physician viewed the patient’s record in the CareWeb information system, these preventive care reminders were displayed. As previously mentioned, this feasibility study was limited to the eleven preventive care reminders used in the pilot project shown in Table 1.

Table 1 List of the 11 preventive care reminders provided by CDSC web service

At the conclusion of the study period, we tabulated which preventive care reminders were delivered by the CDSC web service for each patient visit (defined as the combination of the patient’s medical record number and the visit date).

Then we gathered eleven corresponding CARE rules developed at Regenstrief. The eleven rules in each set (the CDSC set and the Regenstrief CARE set) attempt to achieve the same result: encode the logic for the preventive care reminders in Table 1. However, the underlying details differ greatly. CDSC rules are written in the language specified for the IBM/ILOG rules engine; Regenstrief CARE rules are written for a custom-built rules engine based on the VMS operating system. Furthermore, CDSC rules rely on concepts coded in standard vocabularies (SNOMED CT, RxNORM, NDFRT, and LOINC) whereas CARE rules expect all concepts to be coded using Regenstrief’s local term dictionary.

The corresponding CARE rules were executed retrospectively for each of the patient visits in this study, relying on the data available for that patient on the date of that visit. We tabulated which preventive care reminders were generated by the CARE rule engine for each patient visit.

For each of the eleven reminders, we created a 2 × 2 frequency table and compared the cloud-based CDSC rules with the locally-crafted CARE rules for agreement with respect to the delivery (‘Yes’) or absence (‘No’) of a preventive care reminder. Four outcomes were possible: both rules delivered a reminder; only the CDSC rule delivered a reminder; only the CARE rule delivered a reminder; or neither rule delivered a reminder. Observed agreement (P0) is the proportion of times both the CDSC rule and the CARE rule agreed on ‘Yes’ or ‘No’.

The standard measure of agreement in a 2 × 2 frequency table is Cohen’s Kappa coefficient (κ). Kappa adjusts the observed agreement by the agreement expected by chance. However, if no further adjustments are made, Kappa can be deceptive, because it is sensitive to both the bias in reporting ‘Yes’ between the two rules, if any exists, and the prevalence of ‘Yes’ relative to ‘No’ in the sample.
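
For reference, if the cells of the 2 × 2 table are a (both rules 'Yes'), b (CDSC only), c (CARE only), and d (both 'No'), with n = a + b + c + d patient visits, then the observed agreement, chance agreement, and Kappa are:

$$
P_0 = \frac{a + d}{n}, \qquad
P_e = \frac{(a+b)(a+c) + (c+d)(b+d)}{n^{2}}, \qquad
\kappa = \frac{P_0 - P_e}{1 - P_e}
$$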

The Bias Index (BI) measures the difference in the proportion of ‘Yes’ between the CDSC rules and the CARE rules. The Prevalence Index (PI) measures the difference in proportions between ‘Yes’ and ‘No’ overall (using only cases where both rules agreed). We adjusted the Kappa both for bias and for prevalence by calculating the Prevalence-Adjusted Bias-Adjusted Kappa (PABAK), in accordance with the methodology described by Byrt, Bishop and Carlin [35]. PABAK values were interpreted according to the guidelines for Kappa provided by Landis and Koch: 0.81 – 1.00: almost perfect agreement; 0.61 – 0.80: substantial agreement; 0.41 – 0.60: moderate agreement; 0.21 – 0.40: fair agreement; and 0.01 – 0.20: slight agreement [36]. In addition, we generated 95% confidence intervals for each value of PABAK using a bootstrap algorithm with 10,000 bootstrap samples [37].
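
The statistical analysis itself was performed in SAS; the Python sketch below, written under the Byrt, Bishop and Carlin definitions cited above and using a percentile bootstrap (one common choice; the exact bootstrap variant used in the study is not restated here), illustrates how the adjusted statistics and a confidence interval can be computed from paired 'Yes'/'No' outcomes:

```python
import random

def agreement_stats(pairs):
    """pairs: list of (cdsc, care) booleans, one pair per patient visit."""
    n = len(pairs)
    a = sum(1 for x, y in pairs if x and y)          # both rules delivered the reminder
    b = sum(1 for x, y in pairs if x and not y)      # CDSC rule only
    c = sum(1 for x, y in pairs if not x and y)      # CARE rule only
    d = sum(1 for x, y in pairs if not x and not y)  # neither rule
    p0 = (a + d) / n
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    kappa = (p0 - pe) / (1 - pe) if pe < 1 else 1.0  # guard against division by zero
    bi = (b - c) / n       # Bias Index: difference in 'Yes' proportions between the rules
    pi = (a - d) / n       # Prevalence Index: 'Yes' vs 'No' among the agreeing cases
    pabak = 2 * p0 - 1     # Prevalence-Adjusted Bias-Adjusted Kappa
    return p0, kappa, bi, pi, pabak

def pabak_ci(pairs, n_boot=10_000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for PABAK."""
    rng = random.Random(seed)
    estimates = sorted(
        agreement_stats(rng.choices(pairs, k=len(pairs)))[4] for _ in range(n_boot)
    )
    return (estimates[int(n_boot * alpha / 2)],
            estimates[int(n_boot * (1 - alpha / 2)) - 1])
```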

We also compared the demographic data for the patients in the study sample to the total year 2011 clinic volume. A two-sample t-test for age, and chi-square tests for ethnicity, gender, and insurance status were performed. A p-value of < 0.05 was considered significant. SAS version 9.3 (Cary, NC) was used for all analyses.
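
As an illustration of these comparisons (the actual analysis used SAS 9.3; the arrays below are hypothetical stand-ins for the study sample and the overall clinic volume), the equivalent tests in Python's scipy are:

```python
from scipy import stats

# Hypothetical example data standing in for the study sample vs. overall clinic volume.
study_ages = [67, 54, 71, 49, 63, 58, 72]
clinic_ages = [45, 52, 38, 60, 41, 57, 49, 36, 55]

# Two-sample t-test for age.
t_stat, p_age = stats.ttest_ind(study_ages, clinic_ages)

# Chi-square test for a categorical variable such as insurance status,
# given a contingency table of counts (rows: study vs. clinic; columns: categories).
insurance_counts = [[120, 60, 40],
                    [300, 900, 200]]
chi2, p_ins, dof, expected = stats.chi2_contingency(insurance_counts)

print(f"age p-value: {p_age:.3f}; insurance p-value: {p_ins:.3g}")
```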

Results

Patient demographics

During the three-month analysis period, 405 patient visits occurred. A total of 372 distinct patients were seen during visits to the three providers. Table 2 illustrates demographic data for the patients in the study sample, as well as the total year 2011 clinic volume. The study sample did not differ from the total clinic volume on ethnicity or gender, but was older on average, more likely to have Medicare insurance, and less likely to have Wishard Advantage insurance (a managed care program providing medical care to residents of Indianapolis with incomes less than 200% of the federal poverty level).

Table 2 Demographics of study patients compared with overall clinic population

Observed agreement

During the three-month period, a total of 965 preventive care reminders were delivered by the cloud-based CDSC rules engine. For those same patient visits, 889 reminders were generated by locally-crafted CARE rules. These raw counts are compared in Table 3. Observed agreement (P0) varies from 0.66 to 0.99.

Table 3 Raw counts of preventive care reminders delivered by the cloud-based CDSC rules and the locally-crafted CARE rules

Prevalence-adjusted bias-adjusted kappa

The Kappa statistic for each preventive care reminder is shown in Table 4, together with the Bias Index (BI), Prevalence Index (PI), and Prevalence-adjusted Bias-adjusted Kappa (PABAK). The unadjusted Kappa statistic varies from 0.10 to 0.90, suggesting little agreement for Rule 11 (K = 0.10), Rule 5 (K = 0.13), and Rule 3 (K = 0.28). When adjusted for prevalence and bias, PABAK varies from 0.33 (95% CI 0.24 – 0.42) to 0.99 (95% CI 0.97 – 1.00).

Table 4 Unadjusted kappa statistic, as well as prevalence-adjusted bias-adjusted kappa

Using the Landis and Koch interpretation, the adjusted Kappa statistic (PABAK) demonstrates almost perfect agreement for 7 of the 11 preventive care reminders. Two more reminders (reminders 4 and 5) demonstrate substantial agreement. The remaining two reminders (reminders 9 and 10) demonstrate fair or moderate agreement.

Discussion

Using a limited set of preventive care reminders, we compared the results of CDS logic execution from a remote CDS web service with the results returned from a locally developed and maintained CDS infrastructure. Using the Kappa statistic, with adjustments for prevalence and for bias, we found a high level of agreement between the two sets of results. Strong agreement is auspicious for future development of cloud-based CDS that can support centralized knowledge management functions associated with operational CDS systems.

Our institution, like many other urban as well as community hospitals, has previously relied on decision support rules implemented and maintained locally. In the case of Eskenazi Health, these were carefully developed and maintained by local clinical informatics experts. Other institutions may purchase such rules directly from a vendor and install them in their local information system [38]. With either approach, institutions are challenged by constrained resources and substantial expenses if they seek to continue maintaining and expanding their own decision support infrastructure [24, 26, 38–40].

Cloud-based CDS represents a completely new model for delivering advice and guidelines to the point of care. In the current study, patient data at Eskenazi Health in Indiana were packaged into a standard envelope (the CCD document) using standard vocabulary identifiers. These data were sent to a distant, cloud-based web service hosted in Massachusetts. The decision support engine in the cloud generated reminders based on local patient data, and delivered the reminders to the local EHR system, where they were integrated for use by local clinicians.

This remote web service was not custom-built just for this transaction. The CDS infrastructure supporting the CDSC extended the CDSS which previously provided similar services to clinicians using the Longitudinal Medical Record at Partners HealthCare System hospitals in the Boston area. The CDSC has demonstrated that a CDS engine can be engineered to receive data from, and send reminders to, multiple and non-affiliated health systems using secure protocols in a community cloud [33, 34, 41–43].

The CDSC's demonstration that a CDS infrastructure in the cloud can be engineered to securely exchange protected health information is a remarkable achievement that has provided many important lessons [31, 33, 34, 41]. For cloud-based CDS to be widely adopted, however, it must be shown to be at least as good as traditional approaches to CDS in place locally. Our current study observed considerable agreement between two independently curated sets of reminders. Such agreement suggests that cloud-based CDS infrastructures that enable remote KM and economies of scale are feasible from both an engineering and a clinical viewpoint.

Adjustment of Cohen's Kappa coefficient was necessary due to the potential effects of bias in the CDS logic and of prevalence in the relatively small sample of patients. Bias can occur when two sets of encoded CDS logic differ in how they assess input data (clinical variables). We hypothesized that independently created and maintained rule logic would assess the patient's EHR data in different ways. We observed that bias had the greatest effect on Reminder 9, "Due for blood pressure". Bias inflates the unadjusted Kappa, suggesting that agreement is better than the raw counts indicate. When we adjust for bias, the Kappa coefficient is lower, providing a more realistic impression of the amount of agreement.

The value of Kappa is also affected by the relative probabilities of “Yes” or “No”. We hypothesized that in our limited sample of patients some reminders would be rarely triggered, affecting the probability of a “Yes” versus a “No”. We observed that prevalence had the greatest effect for Reminder 3, “Recent A1c was over 8”. This reminder was rarely triggered, because it required finding a markedly elevated A1c test value older than 3 months but more recent than 5 months. For such low-prevalence events, although the P0 is reasonable (0.95), the initial calculation of Kappa is low (0.28). Adjusting for the low prevalence produces a higher value (PABAK = 0.91) which conveys a more accurate impression of agreement.
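
To make the prevalence effect concrete, consider hypothetical 2 × 2 counts consistent with the pattern reported for Reminder 3 (illustrative numbers only, not the study's actual cell counts): a = 4 visits where both rules fired, b = 9 where only the CDSC rule fired, c = 10 where only the CARE rule fired, and d = 382 where neither fired, over n = 405 visits:

$$
P_0 = \frac{4 + 382}{405} \approx 0.95, \quad
P_e = \frac{13 \cdot 14 + 392 \cdot 391}{405^{2}} \approx 0.94, \quad
\kappa = \frac{0.953 - 0.936}{1 - 0.936} \approx 0.27, \quad
\mathrm{PABAK} = 2P_0 - 1 \approx 0.91
$$

The dominant 'No/No' cell pushes the chance agreement close to the observed agreement, depressing the unadjusted Kappa even though the rules disagree on only 19 of 405 visits; PABAK removes this prevalence effect.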

Adjusting for prevalence and bias increased the agreement statistic for nearly all of the measures. The adjustment revealed that for 7 of the 11 measures there was near-perfect agreement (0.81 – 1.00), with 2 measures demonstrating substantial agreement (0.61 – 0.80), one measure demonstrating moderate agreement (0.41 – 0.60), and one measure demonstrating fair agreement (0.21 – 0.40). These results are positive, but they also suggest some discordance. Discordance was likely to occur given variation in knowledge engineering techniques as described in prior work [44]. We identified four types of discrepancies between the local and cloud-based services that likely contributed to the discordance: 1) terminology misalignment, 2) local practice variation, 3) temporal windows, and 4) use of exclusions in guidelines implementation. We now examine these discrepancies, which suggest future opportunities for research and development to advance CDS systems.

Terminology misalignment has potential to cause disagreement between two sets of decision support rules, even when operating on the same patient's data. Of the eleven rules in our project, blood pressure reminders generated the least agreement. The logic of the blood pressure reminder seems very simple: a recommendation to check blood pressure for those adults who do not have a blood pressure documented during the past 12 months. Yet it illustrates a key challenge of computerized implementation of a simple CDS rule. In its initial implementation, the CDSC rules engine only recognized the LOINC code for "Systolic Blood Pressure" (8480-6). Eskenazi Health outpatient clinics measure blood pressure, but the local electronic health record stores blood pressure values using a different LOINC code: "Systolic Blood Pressure – Sitting" (8459-0). These outpatient blood pressure measurements were not recognized by the CDSC engine. Subsequently, the CDSC rules engine was reconfigured to recognize a broader set of codes. This example illustrates that subtle terminology differences (two LOINC codes which almost mean the same thing) can determine whether two engines generate the same advice or not.
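
A stylized sketch of the terminology check at issue (the LOINC codes are the ones quoted above; the function and value-set names are illustrative, not the CDSC implementation):

```python
# Value set the rule initially recognized vs. the code actually stored locally.
SYSTOLIC_BP_CODES = {"8480-6"}             # "Systolic Blood Pressure"
local_observation = {"code": "8459-0",     # "Systolic Blood Pressure - Sitting"
                     "value": 128, "date": "2011-11-02"}

def has_recent_bp(observations, value_set):
    """True if any observation carries a code in the rule's value set."""
    return any(obs["code"] in value_set for obs in observations)

print(has_recent_bp([local_observation], SYSTOLIC_BP_CODES))                 # False -> reminder fires
print(has_recent_bp([local_observation], SYSTOLIC_BP_CODES | {"8459-0"}))    # True after broadening the value set
```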

Local practice variations also have potential to introduce discrepancies. We reviewed some of the SNOMED CT codes used to represent diagnoses. For example, a young patient without Coronary Artery Disease (CAD) generated a CDSC recommendation to start anti-platelet therapy with aspirin, as if he needed treatment of CAD. Upon review of the patient’s medical history, we found the patient was treated for chest pain due to a gunshot wound. The CCD sent to the CDSC web service included the SNOMED CT code 194828000 (Angina). The CDSC rules engine recognized this SNOMED CT code as an indicator of CAD, and sent a recommendation for anti-platelet therapy. The local CARE rules service did not consider Angina to be a strong indicator of CAD, and thus did not generate any reminder.

The inclusion of more SNOMED CT codes can also have the opposite effect and make a reminder more specific. For example, CARE rules consider anti-platelet medications contraindicated in the setting of Bleeding Disorder, Thrombocytopenia, and GI Bleed. CDSC rules also look for these contraindications, but include additional ones, such as Esophageal Varices, Coagulation Factor Deficiency Syndrome, and Cerebral Hemorrhage. By searching for these additional SNOMED CT codes, the CDSC rules might uncover additional contraindications, and thus better suppress inappropriate reminders for anti-platelet therapy.

An under-recognized source of discrepancy arises when different rules query data from different time ranges. For example, the CDSC rule queries lab data for evidence of microalbuminuria to justify generating a recommendation to start an ACE Inhibitor medication. This rule only looks at a 12-month time frame when searching for these data. The CARE rule, in contrast, specifies no time limit; older lab data may be included, potentially decreasing the specificity of this reminder.
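
The effect of the differing lookback windows can be pictured with a small, hypothetical filter (dates, field names, and structure invented for illustration):

```python
from datetime import date, timedelta

microalbumin_results = [
    {"date": date(2009, 6, 1), "abnormal": True},    # old evidence of microalbuminuria
    {"date": date(2011, 8, 20), "abnormal": False},  # recent normal result
]

def evidence_of_microalbuminuria(results, as_of, lookback_days=None):
    """CDSC-style rule: lookback_days=365; CARE-style rule: lookback_days=None (no limit)."""
    for r in results:
        if lookback_days is not None and r["date"] < as_of - timedelta(days=lookback_days):
            continue  # outside the rule's time window
        if r["abnormal"]:
            return True
    return False

visit_date = date(2011, 11, 15)
print(evidence_of_microalbuminuria(microalbumin_results, visit_date, lookback_days=365))  # False
print(evidence_of_microalbuminuria(microalbumin_results, visit_date))                     # True (older data counted)
```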

Important issues arise when checking for the existence of Diabetes. The CDSC diabetes classification excludes Gestational Diabetes from the diagnosis of Diabetes, and thus does not send reminders for eye exams or foot exams to women who have only experienced Gestational Diabetes. The CARE rule does not make this exclusion. The CDSC rule asserts Diabetes based only on the patient's problem list, whereas the CARE rule uses additional criteria to define Diabetes: the use of any oral hypoglycemic medications or insulins from a manually assembled list. The CARE rule also queries hospital ICD9 discharge diagnoses for evidence of diabetes; the CDSC rule does not.
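
The contrast in how the two rule sets assert diabetes can be summarized as two predicates drawing on different data sources (the code sets and record structure below are hypothetical simplifications, not the actual rule content):

```python
DIABETES_PROBLEM_CODES = {"44054006"}  # SNOMED CT: Type 2 diabetes mellitus (example code)
DIABETES_MEDS = {"metformin", "glipizide", "insulin glargine"}  # illustrative, manually assembled list
DIABETES_ICD9 = {"250.00", "250.02"}   # illustrative discharge diagnosis codes

def cdsc_has_diabetes(patient):
    # CDSC-style assertion: problem list only (gestational diabetes excluded upstream).
    return bool(DIABETES_PROBLEM_CODES & set(patient["problem_list"]))

def care_has_diabetes(patient):
    # CARE-style assertion: problem list OR hypoglycemic medications OR hospital ICD9 discharge diagnoses.
    return (bool(DIABETES_PROBLEM_CODES & set(patient["problem_list"]))
            or bool(DIABETES_MEDS & set(patient["medications"]))
            or bool(DIABETES_ICD9 & set(patient["discharge_diagnoses"])))

patient = {"problem_list": [], "medications": ["metformin"], "discharge_diagnoses": []}
print(cdsc_has_diabetes(patient), care_has_diabetes(patient))  # False True -> discordant reminders
```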

One of the finer points of decision support is the judicious use of exclusions to prevent over-alerting and alert fatigue. For example, the CDSC rule recommends microalbuminuria screening, but excludes patients who already carry a diagnosis of established renal disease. The CARE rule makes no such exclusion; even if a patient has end-stage renal disease, a screening reminder will be generated if no such test has been performed in the last 12 months. The CARE rule only looks for one contraindication to the use of an ACE Inhibitor: an allergy to this class of drugs. The CDSC rule also excludes patients with pregnancy or hyperkalemia. When recommending annual eye exams, only the CARE rule excludes patients with blindness, or patients who have visited the eye clinic during the year; the CDSC rule does not.

Discordance and the discrepancies likely to have contributed to it illustrate an important dichotomy between universal (or cloud-based) CDS versus local CDS knowledge and maintenance. While cloud-based CDS is likely to produce efficiency and cost benefits for health systems, there will likely be a natural loss of control over the implementation and management of CDS which embodies local knowledge and work practices. This may be anathema to many clinicians who value both the art and science of medicine. However, customization would erode the economies of scale afforded by cloud-based CDS.

Instead of conceptualizing local practice as something that should merely be accommodated, initiatives like the CDSC should see local variation and terminology development as an opportunity to improve the collective, universal CDS. As new members are integrated, positive deviance should be identified and adapted for the use of the whole community. For example, identifying variant LOINC codes for blood pressure and exclusions such as blindness for diabetic annual eye exam reminders should be welcomed to improve the knowledge base and rule logic for all. If this approach is taken, terminologies become aligned and rules are refined over time, and the universal CDS becomes more specific, reducing alert fatigue.

Previous studies have shown that guidelines advanced by national and international professional societies are almost never implemented as intended [45]. Often this is due to poorly designed guidelines with vague definitions of the target population or unclear exclusion criteria. Yet sometimes clinical leaders choose to deviate from guidelines due to local habits. While it does not make sense for a cloud-based CDS to customize its rule sets for individual institutions, it may be appropriate for local institutions to adapt the output of the service to meet local needs. The output of the CDSC is a set of reminders that fired for a given input. Local sites have control over how the information is displayed to clinical users, so output from the CDSC could be presented as a non-interruptive alert instead of an interruptive alert, or ignored altogether, depending on local preferences or practices. While designing such customization for every rule might defeat the purpose of cloud-based CDS, it may be appropriate under certain conditions based on local users’ needs, habits or desires.

Limitations

Our study is chiefly limited by its small size. As the CDSC system was in its initial stage of deployment, just eleven preventive care reminders were implemented. Only the results delivered in the course of 405 patient visits over a 3-month period were analyzed. While we adjusted Kappa to account for prevalence, larger trials comparing local versus cloud-based services would provide greater evidence on the agreement between disparate CDS systems. Further expansion of the CDSC may also uncover other challenges which may lead to more disagreement between the two sets of reminders.

Another limitation is the relative simplicity of the 11 reminders implemented in the study. This set of reminders is not as complex as some rule sets described in the CDS literature. Future plans for the CDSC include implementation of additional preventive rules, including guidelines for immunization schedules and management of chronic illnesses. More complex rule logic, additional exclusion criteria, and rules that rely on social or lifestyle data, which are more challenging to extract from electronic health records, could pose additional challenges for a remote CDS service. We do not anticipate that the KM or rule execution of more complex guidelines would differ greatly from what is presented here, but greater complexity may cause greater discordance with locally developed CDS, as more opportunity for divergence from a common standard exists.

Another limitation is the mix of patients in our study sample. As Table 2 indicates, there were small but statistically significant differences between the study patients and the larger clinic population with respect to age and insurance coverage. This is not surprising, because study patients were associated with a convenience set of three physicians and were not selected at random across multiple sites within the health system. In our judgment, patient demographics are still reasonably characteristic of the larger clinic population. Another, more relevant question is whether our results are generalizable to other outpatient settings in other locations. Our patients are drawn from the urban population of Indianapolis, with a low rate of commercial health insurance. Other institutions elsewhere may serve a very different community. Nevertheless, we believe that our lessons learned about the challenges of data sharing are of great interest regardless of social or economic setting.

Conclusion

The potential of having one CDS engine provide advice through the cloud to multiple institutions running a variety of EHR systems compels us to further develop and evaluate the CDSC. These results should also encourage research and development by others towards more universal approaches to CDS that can provide economies of scale while delivering relevant knowledge to clinicians at the point of care. The development of more integrated web-based services for CDS that build on the international efforts occurring within HL7 would not only strengthen the CDSC but enable other regions and nations to advance CDS knowledge management and services. Efforts to further standardize or align terminologies for common preventive services would support greater harmonization across CDS service efforts nationally and internationally. Finally, improved processes for translating guidelines into executable logic would support cloud-based CDS by enabling better pooling of guideline knowledge and rule sets. These efforts would advance core CDS capabilities as well as cloud-based models that deliver accordant, valuable advice to resource-challenged health care providers across the United States and around the world.

Abbreviations

CAD: Coronary artery disease

CCD: Continuity of care document

CDS: Clinical decision support

CDSC: Clinical Decision Support Consortium

CDSS: Clinical decision support system

CPOE: Computerized provider order entry

EHR: Electronic health record

KM: Knowledge management

PABAK: Prevalence-adjusted bias-adjusted kappa

References

  1. Berwick DM, Nolan TW, Whittington J: The triple aim: care, health, and cost. Health Aff (Millwood). 2008, 27 (3): 759-769. 10.1377/hlthaff.27.3.759.
  2. McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, Kerr EA: The quality of health care delivered to adults in the United States. N Engl J Med. 2003, 348 (26): 2635-2645. 10.1056/NEJMsa022615.
  3. Dexter PR, Wolinsky FD, Gramelspacher GP, Zhou XH, Eckert GJ, Waisburd M, Tierney WM: Effectiveness of computer-generated reminders for increasing discussions about advance directives and completion of advance directive forms. A randomized, controlled trial. Ann Intern Med. 1998, 128 (2): 102-110. 10.7326/0003-4819-128-2-199801150-00005.
  4. Dexter PR, Perkins S, Overhage JM, Maharry K, Kohler RB, McDonald CJ: A computerized reminder system to increase the use of preventive care for hospitalized patients. N Engl J Med. 2001, 345 (13): 965-970. 10.1056/NEJMsa010181.
  5. Rosenman M, Wang J, Dexter P, Overhage JM: Computerized reminders for syphilis screening in an urban emergency department. AMIA Annu Symp Proc. 2003, 2003: 987.
  6. Dexter PR, Perkins SM, Maharry KS, Jones K, McDonald CJ: Inpatient computer-based standing orders vs physician reminders to increase influenza and pneumococcal vaccination rates: a randomized trial. JAMA. 2004, 292 (19): 2366-2371. 10.1001/jama.292.19.2366.
  7. Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, Roth E, Morton SC, Shekelle PG: Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006, 144 (10): 742-752. 10.7326/0003-4819-144-10-200605160-00125.
  8. Bright TJ, Wong A, Dhurjati R, Bristow E, Bastian L, Coeytaux RR, Samsa G, Hasselblad V, Williams JW, Musty MD, Wing L, Kendrick AS, Sanders GD, Lobach D: Effect of clinical decision-support systems: a systematic review. Ann Intern Med. 2012, 157 (1): 29-43. 10.7326/0003-4819-157-1-201207030-00450.
  9. Bates DW, Gawande AA: Improving safety with information technology. N Engl J Med. 2003, 348 (25): 2526-2534. 10.1056/NEJMsa020847.
  10. Blumenthal D, Glaser JP: Information technology comes to medicine. N Engl J Med. 2007, 356 (24): 2527-2534. 10.1056/NEJMhpr066212.
  11. DesRoches CM, Charles D, Furukawa MF, Joshi MS, Kralovec P, Mostashari F, Worzala C, Jha AK: Adoption of electronic health records grows rapidly, but fewer than half of US hospitals had at least a basic system in 2012. Health Aff (Millwood). 2012, 32 (8): 1478-1485.
  12. Centers for Medicare & Medicaid Services: Medicare and Medicaid Programs; Electronic Health Record Incentive Program -- Stage 2. Federal Register. 2012, Washington: Office of the Federal Register, National Archives and Records Administration.
  13. Jha AK, Burke MF, DesRoches C, Joshi MS, Kralovec PD, Campbell EG, Buntin MB: Progress toward meaningful use: hospitals' adoption of electronic health records. Am J Manag Care. 2011, 17 (12 Spec No): SP117-SP124.
  14. Jha AK, DesRoches CM, Kralovec PD, Joshi MS: A progress report on electronic health records in U.S. hospitals. Health Aff (Millwood). 2010, 29 (10): 1951-1957. 10.1377/hlthaff.2010.0502.
  15. Jha AK, DesRoches CM, Campbell EG, Donelan K, Rao SR, Ferris TG, Shields A, Rosenbaum S, Blumenthal D: Use of electronic health records in U.S. hospitals. N Engl J Med. 2009, 360 (16): 1628-1638. 10.1056/NEJMsa0900592.
  16. Wolf L, Harvell J, Jha AK: Hospitals ineligible for federal meaningful-use incentives have dismally low rates of adoption of electronic health records. Health Aff (Millwood). 2012, 31 (3): 505-513. 10.1377/hlthaff.2011.0351.
  17. Weingart SN, Toth M, Sands DZ, Aronson MD, Davis RB, Phillips RS: Physicians' decisions to override computerized drug alerts in primary care. Arch Intern Med. 2003, 163 (21): 2625-2631. 10.1001/archinte.163.21.2625.
  18. Eccles M, McColl E, Steen N, Rousseau N, Grimshaw J, Parkin D, Purves I: Effect of computerised evidence based guidelines on management of asthma and angina in adults in primary care: cluster randomised controlled trial. BMJ. 2002, 325 (7370): 941. 10.1136/bmj.325.7370.941.
  19. Shah NR, Seger AC, Seger DL, Fiskio JM, Kuperman GJ, Blumenfeld B, Recklet EG, Bates DW, Gandhi TK: Improving acceptance of computerized prescribing alerts in ambulatory care. J Am Med Inform Assoc. 2006, 13 (1): 5-11. 10.1197/jamia.M1868.
  20. Earl M: Knowledge management strategies: toward a taxonomy. J Manag Inf Syst. 2001, 18 (1): 215-233.
  21. Kakabadse NK, Kakabadse A, Kouzmin A: Reviewing the knowledge management literature: towards a taxonomy. J Knowl Manag. 2003, 7 (4): 75-91. 10.1108/13673270310492967.
  22. Dixon BE, McGowan JJ, Cravens GD: Knowledge sharing using codification and collaboration technologies to improve health care: lessons from the public sector. Knowl Manage Res Pract. 2009, 7 (3): 249-259. 10.1057/kmrp.2009.15.
  23. Ash JS, Sittig DF, Dykstra R, Wright A, McMullen C, Richardson J, Middleton B: Identifying best practices for clinical decision support and knowledge management in the field. Stud Health Technol Inform. 2010, 160 (Pt 2): 806-810.
  24. Sittig DF, Wright A, Simonaitis L, Carpenter JD, Allen GO, Doebbeling BN, Sirajuddin AM, Ash JS, Middleton B: The state of the art in clinical knowledge management: an inventory of tools and techniques. Int J Med Inform. 2010, 79 (1): 44-57. 10.1016/j.ijmedinf.2009.09.003.
  25. Berner ES: Clinical Decision Support Systems: State of the Art. 2009, Rockville, MD: U.S. Agency for Healthcare Research and Quality.
  26. Ash JS, McCormack JL, Sittig DF, Wright A, McMullen C, Bates DW: Standard practices for computerized clinical decision support in community hospitals: a national survey. J Am Med Inform Assoc. 2012, 19 (6): 980-987. 10.1136/amiajnl-2011-000705.
  27. Middleton B: The clinical decision support consortium. Stud Health Technol Inform. 2009, 150: 26-30.
  28. The Clinical Decision Support Consortium website. http://www.cdsconsortium.org.
  29. Biondich PG, Dixon BE, Duke J, Mamlin B, Grannis S, Takesue BY, Downs SM, Tierney WM: Regenstrief Medical Informatics: Experiences with Clinical Decision Support Systems. Clinical Decision Support: The Road to Broad Adoption. Edited by: Greenes RA. 2014, Burlington, MA: Elsevier, Inc, 165-187, 2nd edition.
  30. Paterno MD, Schaeffer M, Van Putten C, Wright A, Chen ES, Goldberg HS: Challenges in creating an enterprise clinical rules service. AMIA Annu Symp Proc. 2008, 2008: 1086.
  31. Paterno MD, Maviglia SM, Ramelson HZ, Schaeffer M, Rocha BH, Hongsermeier T, Wright A, Middleton B, Goldberg HS: Creating shareable decision support services: an interdisciplinary challenge. AMIA Annu Symp Proc. 2010, 2010: 602-606.
  32. National Institute of Standards and Technology: The NIST Definition of Cloud Computing: Recommendations of the National Institute of Standards and Technology. 2011, Gaithersburg, MD: Computer Security Division, Information Technology Laboratory, National Institute of Standards and Technology.
  33. Paterno MD, Goldberg HS, Simonaitis L, Dixon BE, Wright A, Rocha BH, Ramelson HZ, Middleton B: Using a service oriented architecture approach to clinical decision support: performance results from two CDS Consortium demonstrations. AMIA Annu Symp Proc. 2012, 2012: 690-698.
  34. Dixon BE, Simonaitis L, Goldberg HS, Paterno MD, Schaeffer M, Hongsermeier T, Wright A, Middleton B: A pilot study of distributed knowledge management and clinical decision support in the cloud. Artif Intell Med. 2013, 59 (1): 45-53. 10.1016/j.artmed.2013.03.004.
  35. Byrt T, Bishop J, Carlin JB: Bias, prevalence and kappa. J Clin Epidemiol. 1993, 46 (5): 423-429. 10.1016/0895-4356(93)90018-V.
  36. Landis JR, Koch GG: The measurement of observer agreement for categorical data. Biometrics. 1977, 33 (1): 159-174. 10.2307/2529310.
  37. Efron B, Tibshirani RJ: An Introduction to the Bootstrap. 1993, New York: Chapman & Hall/CRC.
  38. Sittig DF, Wright A, Meltzer S, Simonaitis L, Evans RS, Nichol WP, Ash JS, Middleton B: Comparison of clinical knowledge management capabilities of commercially-available and leading internally-developed electronic health records. BMC Med Inform Decis Mak. 2011, 11: 13. 10.1186/1472-6947-11-13.
  39. Wright A, Phansalkar S, Bloomrosen M, Jenders RA, Bobb AM, Halamka JD, Kuperman G, Payne TH, Teasdale S, Vaida AJ, Bates DW: Best practices in clinical decision support: the case of preventive care reminders. Appl Clin Inform. 2010, 1 (3): 331-345. 10.4338/ACI-2010-05-RA-0031.
  40. Wright A, Sittig DF, Ash JS, Bates DW, Feblowitz J, Fraser G, Maviglia SM, McMullen C, Nichol WP, Pang JE, Starmer J, Middleton B: Governance for clinical decision support: case studies and recommended practices from leading institutions. J Am Med Inform Assoc. 2011, 18 (2): 187-194. 10.1136/jamia.2009.002030.
  41. Hongsermeier T, Maviglia S, Tsurikova L, Bogaty D, Rocha RA, Goldberg H, Meltzer S, Middleton B: A legal framework to enable sharing of Clinical Decision Support knowledge and services across institutional boundaries. AMIA Annu Symp Proc. 2011, 2011: 925-933.
  42. Boxwala AA, Rocha BH, Maviglia S, Kashyap V, Meltzer S, Kim J, Tsurikova R, Wright A, Paterno MD, Fairbanks A, Middleton B: A multi-layered framework for disseminating knowledge for computer-based decision support. J Am Med Inform Assoc. 2011, 18 (Suppl 1): i132-i139. 10.1136/amiajnl-2011-000334.
  43. Dixon BE, Paterno MD, Simonaitis L, Goldberg H, Boxwala A, Hongsermeier T, Tsurikova R, Middleton B: Demonstrating cloud-based clinical decision support at scale: the Clinical Decision Support Consortium. Stud Health Technol Inform. 2013, 192: 1268.
  44. Peleg M, Boxwala AA, Tu S, Zeng Q, Ogunyemi O, Wang D, Patel VL, Greenes RA, Shortliffe EH: The InterMed approach to sharable computer-interpretable guidelines: a review. J Am Med Inform Assoc. 2004, 11 (1): 1-10.
  45. Tierney WM, Overhage JM, Takesue BY, Harris LE, Murray MD, Vargo DL, McDonald CJ: Computerizing guidelines to improve care and patient outcomes: the example of heart failure. J Am Med Inform Assoc. 1995, 2 (5): 316-322. 10.1136/jamia.1995.96073834.


Acknowledgments

We sincerely thank Joe Kesterson, Andrew Martin, Amanda Nyhuis, Dr. Marc Rosenman, and Faye Smith for their hard work and their dedication to the success of this research study. We are especially thankful to Dr. William Tierney for his advice and guidance. Finally, we gratefully acknowledge Dr. Lisa Harris and all of Eskenazi Health for allowing us to conduct this research study at select community health centers in Indianapolis.

This publication is derived from work supported under a contract with the Agency for Healthcare Research and Quality (AHRQ) Contract # HHSA290200810010. This work was further supported, in part, by the Department of Veterans Affairs, Veterans Health Administration, Health Services Research and Development Service CIN 13–416. Dr. Dixon is a Health Research Scientist at the Richard L. Roudebush Veterans Affairs Medical Center in Indianapolis, Indiana.

The findings and conclusions in this document are those of the authors, who are responsible for its content, and do not necessarily represent the views of AHRQ or the Department of Veterans Affairs (VA). No statement in this report should be construed as an official position of AHRQ, VA, or of the U.S. Department of Health and Human Services.

Identifiable information on which this report, presentation, or other form of disclosure is based is protected by federal law, Section 934(c) of the Public Health Service Act, 42 U.S.C. 299c-3(c). No identifiable information about any individuals or entities supplying the information or described in it may be knowingly used except in accordance with their prior consent. Any confidential identifiable information in this report or presentation that is knowingly disclosed is disclosed solely for the purpose for which it was provided.

Author information

Corresponding author

Correspondence to Brian Edward Dixon.

Competing interests

The authors have no competing interests to declare.

Authors’ contributions

BED and LS contributed to the concept of the paper. SMP supported the statistical analysis. All authors (1) drafted the paper or revised it critically for important intellectual content; and (2) have given their final approval of the submitted paper.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Dixon, B.E., Simonaitis, L., Perkins, S.M. et al. Measuring agreement between decision support reminders: the cloud vs. the local expert. BMC Med Inform Decis Mak 14, 31 (2014). https://doi.org/10.1186/1472-6947-14-31
