
Electronic medical record-based multicondition models to predict the risk of 30 day readmission or death among adult medicine patients: validation and comparison to existing models

Abstract

Background

There is increasing interest in using prediction models to identify patients at risk of readmission or death after hospital discharge, but existing models have significant limitations. Electronic medical record (EMR) based models that can predict risk across multiple disease conditions and a wide range of patient demographics early in the hospitalization are needed. The objective of this study was to evaluate the degree to which EMR-based risk models for 30-day readmission or mortality accurately identify high-risk patients and to compare these models with published claims-based models.

Methods

Data were analyzed from all consecutive adult patients admitted to internal medicine services at 7 large hospitals belonging to 3 health systems in Dallas-Fort Worth between November 2009 and October 2010 and split randomly into derivation and validation cohorts. Performance of the models was evaluated against the Canadian LACE mortality or readmission model and the Centers for Medicare and Medicaid Services (CMS) Hospital Wide Readmission model.

Results

Among the 39,604 adults hospitalized for a broad range of medical reasons, 2.8 % of patients died, 12.7 % were readmitted, and 14.7 % were readmitted or died within 30 days after discharge. The electronic multicondition models for the composite outcome of 30-day mortality or readmission had good discrimination using data available within 24 h of admission (C statistic 0.69; 95 % CI, 0.68-0.70), or at discharge (0.71; 95 % CI, 0.70-0.72), and were significantly better than the LACE model (0.65; 95 % CI, 0.64-0.66; P = 0.02) with significant NRI (0.16) and IDI (0.039, 95 % CI, 0.035-0.044). The electronic multicondition model for 30-day readmission alone had good discrimination using data available within 24 h of admission (C statistic 0.66; 95 % CI, 0.65-0.67) or at discharge (0.68; 95 % CI, 0.67-0.69), and performed significantly better than the CMS model (0.61; 95 % CI, 0.59-0.62; P < 0.01) with significant NRI (0.20) and IDI (0.037, 95 % CI, 0.033-0.041).

Conclusions

A new electronic multicondition model based on information derived from the EMR predicted mortality and readmission at 30 days, and was superior to previously published claims-based models.


Background

To encourage hospitals to improve the care provided to inpatients, the Centers for Medicare and Medicaid Services (CMS) has operationalized two key quality of care outcomes. Both 30-day risk-adjusted mortality and 30-day risk-adjusted readmission rates are publicly reported, and hospitals face substantial financial penalties for poor performance as part of the CMS Hospital Readmissions Reduction Program [1, 2]. Despite some evidence that a combination of careful discharge planning, provider coordination, and intensive counseling can prevent re-hospitalization, success has been difficult to achieve and sustain [1–4]. Enrolling all patients into a uniform, high-intensity care transition program requires a depth of case management and outpatient resources out of reach for many health systems. Accordingly, there is increasing interest in predicting patient risk of readmission [5–10], identifying high-risk patients early in the admission [7, 10], and establishing multi-disciplinary programs that target hospital and community resources to reduce readmissions in this high-risk subset [11].

Existing models in the literature are mainly based on administrative data and can only be used after patient discharge. In addition, they are often limited to specific disease conditions (e.g., congestive heart failure) or a subset of patients (e.g., Medicare patients). These characteristics limit the potential use of these models in practice. To be effective, programs require predictive models that adequately discriminate between high- and low-risk patients with a wide range of disease conditions and demographic profiles early in the hospitalization. Predictive models should enable simultaneous comparison across patients in a hospital, reduce time-consuming manual chart review by front-line staff, and work across diverse patient and hospital populations. More recent models, such as the HOSPITAL model [9] and the PREADM model [10], included entire adult populations or used EHR data (the PREADM model) and showed promising discriminatory power. We aim to advance in this direction while focusing on a different outcome: a composite of readmission or death from any cause within 30 days of discharge.

The aim of this study was to use data from 7 diverse hospitals in one large metropolitan area that used a common commercially available electronic medical record (EMR) to: 1) construct and validate an electronic multicondition model (e-model) of all-cause 30-day readmission or mortality risk using data present in the first 24 h of admission (24-h e-model), 2) assess the incremental predictive power of the model after adding information available at hospital discharge (e.g., length of stay, other comorbidities) (discharge e-model), and 3) examine the performance of these e-models compared with two widely cited, administrative claims-based multicondition all-cause readmission models: the LACE model [6] and the CMS Hospital Wide Readmission model [12].

Methods

The study population consisted of all consecutive patients admitted for any medical reason to any of the internal medicine services at 7 hospitals in the Dallas Fort Worth area between November 1, 2009 and October 30, 2010. Patients who left the hospital against medical advice, died during the inpatient stay, or were transferred to another acute care facility were excluded. For patients with multiple index admissions, only the first admission was included (Additional file 1: Figure S1). The seven hospitals are part of three health systems: Parkland Health & Hospital System (PHHS, a public, safety net hospital), University of Texas Southwestern (UTSW, a university teaching hospital), and Texas Health Resources (THR, a faith-based, nonprofit health system).

These health systems were chosen because they use the Epic EMR (EPIC Systems Corporation, Verona, WI), but differ in financial and operating models, teaching status, patient mix, patient volume, size, bed count, and overall mission. These health systems serve a diverse patient population that is more representative of the patients in most US hospitals than those in existing models. Each health system independently extracted data from their EMRs, generated standardized variables, and de-identified data sets prior to analysis. After the datasets were de-identified, they were consolidated and patients were randomly split into derivation and validation cohorts (50-50 split). The study was reviewed and approved by the Institutional Review Boards of UTSW and THR.
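For readers implementing a similar pipeline, the minimal sketch below illustrates one way such a random 50-50 split of de-identified index admissions into derivation and validation cohorts could be performed. The `admissions` DataFrame, the use of pandas, and the fixed seed are illustrative assumptions, not the authors' actual code.

```python
# Sketch only: random 50-50 split of index admissions into derivation and
# validation cohorts, assuming one row per de-identified index admission.
import pandas as pd

def split_cohorts(admissions: pd.DataFrame, seed: int = 42):
    """Randomly assign each admission to a derivation or a validation cohort."""
    derivation = admissions.sample(frac=0.5, random_state=seed)  # 50 % sample
    validation = admissions.drop(derivation.index)               # remaining 50 %
    return derivation, validation
```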

Patient-level outcomes

The primary outcome of interest was a composite of readmission or death from any cause within 30 days of discharge. Readmission was defined as non-elective re-hospitalization for any cause to any of the 75 acute care hospitals in the larger North Texas region (which includes but extends beyond the Dallas-Fort Worth metropolitan area) using a data linkage service available through the Dallas Fort Worth Hospital Council. Each health system provided data on known deaths within 30 days of discharge based on its own EMR and administrative datasets. We used information on documented encounters in each health system more than 30 days after discharge to rule out deaths within 30 days. In the absence of health system evidence of survival beyond 30 days, we queried the Social Security Death Index to identify additional deaths.

Derivation of the models predicting the 30-day readmission or death and 30-day readmission

The 30-day readmission or death model (composite outcome) was constructed using candidate risk factors that met three criteria: (1) available in the EMR at each of the participating hospitals, (2) routinely collected or available within the first 24 h of hospital presentation, and (3) plausible predictors of adverse outcomes based on clinical expertise and the existing literature [6–17]. Candidate variables included clinical data such as vital signs, laboratory orders and results, and comorbidities; demographic variables such as age, gender, and marital status; and prior or current healthcare utilization such as the number of prior emergency room visits to the hospital. We were not able to include many social and environmental factors associated with readmission risk because they were not consistently available in every institution’s EMR [7, 12–20].

Model building occurred in five stages. First, univariate relationships between the composite outcome and each of the 30 candidate variables were assessed in the derivation cohort using a pre-specified significance threshold of P = 0.05. Continuous laboratory and vital sign values were transformed into categorical variables with multiple discrete levels using recursive partitioning [21]. Study team clinicians examined these cut-off values post hoc to ensure consistency with clinical interpretation. Second, to protect against over-fitting (i.e., limiting the number of predictors according to the general guideline of having 10 outcomes for each independent variable used in the multivariate model), the number of predictor variables was restricted to the number estimated through a heuristic shrinkage formula [22]. Third, candidate variables were ranked by P value using bootstrapping with replacement in 1000 multivariate logistic regression iterations [21]. Fourth, again using a pre-specified significance threshold of P = 0.05 as well as post hoc clinical judgment, the final 27 model variables were selected to fit the model. Finally, an additional ‘discharge’ model was derived using 3 updated, supplementary variables available at the time of discharge (i.e., length of stay; additional coded diagnoses, including comorbidities used by CMS readmission models and those used for AHRQ Patient Safety Indicators; and an end-of-stay Charlson comorbidity index). Missing values occurred to varying extents, ranging from less than 2 % for vital signs to around 30 % for more selectively ordered labs (e.g., albumin). No imputation was employed for missing values. Instead, a missing-value category was created for each variable; outcome rates were compared across levels, and the missing categories were pooled into the reference group. The electronic 30-day readmission model used the same variables identified in the fifth stage of the above process and was estimated separately in the derivation cohort.
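As an illustration of how continuous laboratory and vital sign values can be turned into categorical predictors and how missing values can be kept as their own level, the sketch below uses a shallow decision tree as a stand-in for recursive partitioning. The scikit-learn API, the hypothetical continuous column (e.g., an `albumin` value), and the tuning parameters are assumptions for illustration; the study itself used RTREE and STATA.

```python
# Sketch only: derive cut-points for one continuous predictor with a shallow
# decision tree (a stand-in for recursive partitioning) and code missing
# values as an explicit category rather than imputing them.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def categorize_continuous(df: pd.DataFrame, col: str, outcome: str, max_levels: int = 4):
    observed = df[df[col].notna()]
    tree = DecisionTreeClassifier(max_leaf_nodes=max_levels, min_samples_leaf=200)
    tree.fit(observed[[col]], observed[outcome])
    # Thresholds at internal nodes become candidate cut-points (leaves are -2),
    # which clinicians would review post hoc for clinical plausibility.
    cuts = sorted(t for t in tree.tree_.threshold if t != -2)
    bins = [-np.inf] + cuts + [np.inf]
    labels = pd.cut(df[col], bins=bins).astype(str)
    # Missing values get their own level; in the paper, outcome rates across
    # levels were compared and the missing category was pooled into the reference group.
    return labels.where(df[col].notna(), "missing")
```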

Comparison models

For comparison with published prediction models, the Canadian LACE claims-based model and the CMS Hospital Wide Readmission (HWR) claims-based model were used. The LACE model is a multicondition model for adults of all ages designed to predict 30-day mortality or unplanned readmission among patients discharged from 11 hospitals in Ontario, Canada [6]. LACE uses length of stay, acuity of admission, Charlson comorbidity score, and prior ED visits to construct a prediction index. The CMS HWR readmission measure is a multicondition model that is conditional on the primary diagnosis: it uses the same set of comorbidity variables but applies different sets of odds ratios depending on the primary diagnosis. The CMS model is designed to profile hospital performance among Medicare patients hospitalized for multiple disease conditions, using claims-based data for risk adjustment and model categorization [12]. Model comparison cohorts excluded patients with psychiatric conditions and cancer, as these were not included in the CMS HWR model. We note that the CMS model was developed to evaluate hospital performance and does not include a number of variables we used in developing our model, including previous hospitalizations, ER use, and payment source. We decided to use the CMS model as a benchmark despite these differences because it is the most widely recognized multicondition readmission model in the United States.
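To make the structure of the LACE comparator concrete, the sketch below computes a LACE-style score from its four components (Length of stay, Acuity of admission, Charlson comorbidity index, prior ED visits). The point assignments reflect our reading of van Walraven et al. [6] and should be verified against that publication; the function itself is illustrative, not the authors' implementation.

```python
# Illustrative LACE-style scoring; verify point assignments against
# van Walraven et al. [6] before any real use.
def lace_score(los_days: int, acute_admission: bool, charlson: int, ed_visits_6mo: int) -> int:
    if los_days < 1:
        l = 0
    elif los_days <= 3:
        l = los_days            # 1, 2, or 3 points
    elif los_days <= 6:
        l = 4
    elif los_days <= 13:
        l = 5
    else:
        l = 7
    a = 3 if acute_admission else 0          # emergent/acute admission
    c = charlson if charlson <= 3 else 5     # Charlson comorbidity points
    e = min(ed_visits_6mo, 4)                # ED visits in the prior 6 months
    return l + a + c + e                     # maximum possible score is 19
```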

Statistical analyses

Model calibration was evaluated using the Hosmer-Lemeshow χ2 goodness-of-fit test and a calibration plot (Additional file 1: Figure S3) [23]. Using interval end-points determined by the derivation cohort, five risk categories (1 = very low to 5 = very high) were created based on quintiles of predicted 30-day risk and were graphically assessed by comparing derivation and validation cohort results.
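A minimal sketch of the Hosmer-Lemeshow statistic over groups of predicted risk is shown below, assuming NumPy/SciPy and arrays of predicted probabilities and observed 0/1 outcomes. It is an illustration, not the STATA code used in the study.

```python
# Sketch only: Hosmer-Lemeshow goodness-of-fit statistic over groups of
# predicted risk, given predicted probabilities p and observed outcomes y.
import numpy as np
from scipy import stats

def hosmer_lemeshow(p: np.ndarray, y: np.ndarray, groups: int = 10):
    order = np.argsort(p)
    bins = np.array_split(order, groups)      # roughly equal-size risk groups
    chi2 = 0.0
    for idx in bins:
        observed = y[idx].sum()               # observed events in the group
        expected = p[idx].sum()               # expected events in the group
        n = len(idx)
        chi2 += (observed - expected) ** 2 / (expected * (1 - expected / n))
    p_value = stats.chi2.sf(chi2, df=groups - 2)
    return chi2, p_value
```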

Model discrimination was assessed using several complementary methods. The C statistic was calculated for each fitted model, and compared between models. To provide information beyond changes in the C statistic, we calculated the estimated Integrated Discrimination Improvement (IDI) [24].
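For illustration, the sketch below computes the C statistic via the ROC area and the IDI as the change in discrimination slope between two models, following the definition in Pencina et al. [24]. The helper names and the use of scikit-learn are assumptions, not the study code.

```python
# Sketch only: C statistic and Integrated Discrimination Improvement (IDI)
# comparing a new model's predictions (p_new) with a reference model's (p_old).
import numpy as np
from sklearn.metrics import roc_auc_score

def c_statistic(y: np.ndarray, p: np.ndarray) -> float:
    return roc_auc_score(y, p)

def idi(y: np.ndarray, p_new: np.ndarray, p_old: np.ndarray) -> float:
    events, nonevents = (y == 1), (y == 0)
    # Discrimination slope = mean predicted risk in events minus non-events;
    # IDI is the change in slope when moving from the old to the new model.
    slope_new = p_new[events].mean() - p_new[nonevents].mean()
    slope_old = p_old[events].mean() - p_old[nonevents].mean()
    return slope_new - slope_old
```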

For the electronic 30-day readmission risk model, these methods were supplemented with classification and reclassification analyses. Classification analyses compared the electronic model’s patient-level predictions of 30-day readmission with observed readmission events.

In the reclassification analysis [24], for both the electronic model and the CMS HWR model, the patient-level predicted probabilities of the event were ranked from highest to lowest risk and grouped into quintiles. The rankings from each model were compared to determine whether the models placed the same patients in different risk strata. All reported P values were based on two-tailed tests with a significance level of 0.05, and no corrections for multiple comparisons were made, to minimize errors of interpretation [25]. All analyses were conducted using STATA statistical software (version 10.0; STATA Corp, College Station, TX) and RTREE (from https://pypi.python.org/pypi/Rtree/).
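The sketch below illustrates one way to carry out such a quintile-based reclassification and a categorical Net Reclassification Improvement (NRI); the helper names are hypothetical and the exact grouping used in the study may differ.

```python
# Sketch only: quintile ranking of predicted probabilities and a categorical
# NRI comparing a new model (p_new) against a reference model (p_old) [24].
import numpy as np
import pandas as pd

def quintile_rank(p: np.ndarray) -> np.ndarray:
    return pd.qcut(p, 5, labels=False, duplicates="drop")   # 0 = lowest risk quintile

def categorical_nri(y: np.ndarray, p_new: np.ndarray, p_old: np.ndarray) -> float:
    q_new, q_old = quintile_rank(p_new), quintile_rank(p_old)
    up, down = q_new > q_old, q_new < q_old
    events, nonevents = (y == 1), (y == 0)
    nri_events = ((up & events).sum() - (down & events).sum()) / events.sum()
    nri_nonevents = ((down & nonevents).sum() - (up & nonevents).sum()) / nonevents.sum()
    return nri_events + nri_nonevents

# A cross-tabulation of the two quintile assignments (as in Table 5) highlights
# patients ranked differently by the two models:
#   pd.crosstab(quintile_rank(p_emodel), quintile_rank(p_cms))
```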

Results

A total of 39,604 index admissions formed the derivation and validation cohorts (Additional file 1: Figure S1). Table 1 lists key characteristics and outcomes of this adult population. The mean age was 61.3 years, with ages ranging from 18 to 89 (age was censored at 89 for de-identification purposes). Of these patients, 1169 (3 %) died within 30 days, 5142 (13 %) were readmitted within 30 days, and 6022 (15 %) were readmitted or died within 30 days (the categories are not mutually exclusive).

Table 1 Characteristics of patients in the derivation and validation cohorts

Candidate risk predictors for both the 30-day composite outcome and the 30-day readmission electronic model (e-model) are shown in Additional file 1: Table S1, and the risk predictors included in the final e-model are shown in Table 2. Derivation and validation cohorts were highly concordant across the risk spectrum (Additional file 1: Figure S2), and the cohort models were well calibrated (Additional file 1: Figure S3). Using interval end-points determined by the derivation cohort, quintiles of predicted risk were created; predicted risk ranged from 5 % to 30 % and was concordant between the derivation and validation cohorts (Additional file 1: Figure S2).

Table 2 Final electronic multicondition multivariate model of risk of 30 day readmission or deatha

For the composite outcome, the 24-h e-model had a C statistic of 0.69 (95 % CI: 0.68-0.70), which improved only modestly when the Charlson comorbidity index was updated with additional comorbidities and length of stay was added, both of which were available at discharge (0.71; 95 % CI: 0.70-0.72; P = 0.05). Model fit was adequate, with a generalized R2 of 0.06.

Model comparison with the LACE model was performed in a cohort subset (N = 17,233 patients) of the validation cohort (N = 19,773) to exclude patients with cancer or psychiatric disease. For comparison against the CMS model, a cohort subset (N = 16,937 patients) that excluded those who died within 30 days of discharge was used.

In comparison to the LACE model (Table 3), which used data available at discharge, the discharge e-model had significantly better discrimination (C statistic: 0.71; 95 % CI: 0.70-0.72 versus 0.65; 95 % CI: 0.64-0.66; difference 0.06, 95 % CI: 0.05-0.07; P = 0.02). The e-model also produced a broader spread of predicted risk across deciles (4.9 % to 40.2 %) compared with the LACE model (6.1 % to 32.7 %).

Table 3 Comparison of performance of discharge 30-day composite readmission or mortality risk models (N = 17233)a

The e-model for 30-day readmission used the same variables as the composite outcome model, with different estimated odds ratios (Additional file 1: Table S2). Discrimination for the discharge e-model (0.68; 95 % CI: 0.67-0.69) was not significantly better than for the 24-h e-model (0.66; 95 % CI: 0.65-0.67). The discharge e-model had similarly good fit in the validation cohort (Table 4). The model’s classification performance was adequate, with a sensitivity of 49 % and a positive predictive value of 21 % when patients with a predicted probability of readmission above the 70th percentile were classified as likely to be readmitted (Additional file 1: Table S3).
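As a sketch of how such classification metrics can be computed, the function below dichotomizes predicted risk at the 70th percentile and returns sensitivity and positive predictive value; the helper name and the strict-inequality cutoff are illustrative assumptions rather than the study code.

```python
# Sketch only: sensitivity and positive predictive value when patients above
# the 70th percentile of predicted risk are flagged as high risk.
import numpy as np

def classification_at_percentile(y: np.ndarray, p: np.ndarray, pct: float = 70.0):
    cutoff = np.percentile(p, pct)
    flagged = p > cutoff                       # "high risk" = above the percentile cutoff
    tp = np.sum(flagged & (y == 1))            # flagged patients who were readmitted
    sensitivity = tp / np.sum(y == 1)
    ppv = tp / np.sum(flagged)
    return sensitivity, ppv
```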

Table 4 Comparison of performance of discharge 30-day readmission risk modelsa

In comparison with the CMS HWR model (Table 4), the discharge readmission e-model had better discrimination (0.68; 95 % CI: 0.67-0.69 versus 0.61; 95 % CI: 0.59-0.62; difference 0.08, 95 % CI: 0.06-0.09; P < 0.01). The statistical superiority of the e-model was confirmed by other measures of discrimination improvement, including a significant NRI (0.20) and a significant improvement in the IDI (0.037; P < 0.05). The e-model also produced a broader spread of predicted risk across deciles (5.9 % to 30.7 %) compared with the CMS model (9.6 % to 20.3 %; Table 4). Of the 2155 patients with readmissions, the e-model also correctly reclassified a greater number (873, or 40.5 %) into a higher risk quintile than the CMS model.

The reclassification analysis in Table 5 confirmed this. Using the e-model would result in significantly more accurate risk stratification than the CMS model, as the readmission rates for patients in areas of disagreement were more consistent with the rates predicted by the e-model than with those predicted by the CMS model. For example, the readmission rate for the 231 patients placed in the top 20 % by the e-model and the bottom 20 % by the CMS model was high (22.5 %), whereas the readmission rate for the 81 patients placed in the bottom 20 % by the e-model and the top 20 % by the CMS model was very low (3.7 %). The e-model was therefore more accurate and better calibrated than the CMS model, especially where the two models differed substantially.

Table 5 Risk stratification comparison between discharge 30-day readmission risk e-model and CMS-HWR models

Discussion

In a study population comprising 7 diverse hospitals and 39,604 adults of all ages hospitalized for a broad range of medical reasons, an electronic model utilizing EMR data routinely available within 24 h of admission identified patients at high risk of post-discharge death or readmission events early in their hospitalization.

Adding information available on discharge (e.g. length of stay and other comorbidities) to the electronic model had a small incremental benefit in predicting the risk of readmission and death, but no significant impact on predicting the risk of readmission alone. This suggests that meaningful patient-level risk stratification for readmission can occur early in the hospital stay without waiting for additional information at the time of discharge. The electronic model does not require manual computation by staff and was constructed so that it can be calculated directly from the commonly used commercial EMR employed by this diverse group of 7 hospitals. With widespread adoption of EMR systems in US hospitals, accurate, real-time, automated prediction models have the potential to significantly improve patient care during and after hospitalization.

The present study suggests that multicondition electronic models also perform well and may be a more efficient and generalizable approach to predicting readmission risk across a broad range of medical reasons for hospitalization. Much of the work to date has focused on disease-specific models for conditions such as heart failure, which, though the most common reason for admission, still accounts for only a few percent of all hospitalizations. A multicondition model would also be more practically useful than a spectrum of disease-specific models, as many patients have multiple comorbidities.

In contrast to the CMS HWR measure, which is both claims-based and specific to elderly Fee-For-Service Medicare beneficiaries, our new electronic model should be more generalizable because it was derived and validated in a population of all ages (18 to 89) and with a diverse payer mix including patients with commercial, Medicare, and Medicaid insurance, as well as those who are uninsured. Among individuals with Medicare, we included both Medicare Fee-For-Service and Medicare managed care beneficiaries. Similarly, this electronic model was validated in the high readmission rate environment of US healthcare. In contrast, the LACE model was validated in a lower readmission rate setting in Canada and performed poorly when applied to a United Kingdom population that differed from the Canadian population [26].

These findings suggest several implications for care delivery and clinical practice. The most practical and promising advantage of this new, multicondition electronic model is that it is based on data readily available in commercial EMRs in the first day of admission, so there is an opportunity to identify high risk patients in real-time early in the hospitalization. We have previously shown that it is feasible to implement a real-time, electronic heart failure model to identify high risk patients, and then target hospital and outpatient evidence-based interventions using existing hospital resources [11]. In this controlled before-and-after trial, implementation of this e-model resulted in unadjusted readmission rates declining from 26.2 to 21.2 % (P < 0.01) over two years, corresponding to a significant adjusted odds ratio of 0.73 (95 % CI: 0.58-0.93). These significant reductions in the readmission rate for the overall heart failure population were accomplished by intervening in about one-fifth of heart failure cases.

Readmission to a hospital within 30 days can be a marker of poor quality of care, but efforts to reduce such events often involve intensive resource management applied to all patients, or interventions that are timed too late in the admission to support effective multi-disciplinary efforts [27]. Methods that can identify those at the highest risk of adverse events and allow sufficient time to initiate and coordinate the concentration of scarce resources on those most likely to benefit have great potential for accomplishing the ‘triple aim’ of higher quality, more cost-conscious care for patients and populations.

This study has several strengths and limitations worth noting. First, patients in the study population came from three very different health systems in the fourth largest metroplex in the US, which serve large, diverse patient populations. However, social and financial factors across this diverse set of 7 hospitals may not be fully representative of hospitals and patients in other regions. If hospitals have different admission thresholds, unobserved severity may differ systematically across hospitals [28]. Second, all 7 hospitals in this study operated on the same EMR platform; it is not known whether access to early admission data, data availability, and EMR adoption stage would be similar in other hospitals. However, this is a common commercial EMR in large institutions. Third, while we derived and validated the model retrospectively in distinct split-half datasets, future work to prospectively validate the e-model independently and in other health system settings would be ideal. Finally, the CMS model was developed to evaluate hospital performance in the Medicare population using claims data, and was not optimized for individual-level risk stratification. As such, results of the comparison to the CMS model should be interpreted with caution, as the CMS model did not perform well in predicting readmission risk in our population.

Conclusions

High quality, cost-conscious inpatient care requires hospitals to manage and improve their risk-adjusted 30-day mortality and 30-day readmission rates. Much initial attention has focused on developing disease-specific risk models, but quality and efficiency initiatives are needed for all internal medicine conditions. The multicondition electronic model described in this study performed better than previously published comparators and could be implemented in real-time, suggesting that adult internal medicine patients at highest risk of post-discharge events can be identified early in the course of their hospital stay, when this information is most actionable.

References

  1. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the medicare fee-for-service program. New Engl J Med. 2009;360(14):1418–28.

  2. Centers for Medicare and Medicaid Services. Readmissions Reduction Program. Available at: http://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/Readmissions-Reduction-Program.html. Accessed on November 20, 2013.

  3. Seow H, Phillips CO, Rich MW, Spertus JA, Krumholz HM, Lynn J. Isolation of health services research from practice and policy: the example of chronic heart failure management. J Am Geriatr Soc. 2006;54(3):535–40.

  4. Goodman DC, Fisher ES, Chang C-H. After Hospitalization: A Dartmouth Atlas Report on Post-Acute Care for Medicare Beneficiaries. Hanover, NH: The Dartmouth Institute for Health Policy and Clinical Practice; 2011.

  5. Kansagara D, Englander H, Salanitro A, Kagen D, Theobald C, Freeman M, et al. Risk Prediction Models for Hospital Readmission: A Systematic Review. Washington (DC): Department of Veterans Affairs (US); 2011.

  6. van Walraven C, Dhalla IA, Bell C, Etchells E, Stiell IG, Zarnke K, et al. Derivation and validation of an index to predict early death or unplanned readmission after discharge from hospital to the community. CMAJ. 2010;182(6):551–7.

  7. Amarasingham R, Moore B, Tabak Y, Drazner MH, Clark CA, Zhang S, et al. An automated model to identify heart failure patients at risk for 30-day readmission or death using electronic medical record data. Med Care. 2010;48(11):981–8.

  8. Kansagara D, Englander H, Salanitro A, Kagen D, Theobald C, Freeman M, et al. Risk prediction models for hospital readmission. JAMA. 2011;306(15):1688–98.

  9. Donzé J, Aujesky D, Williams D, Schnipper JL. Potentially avoidable 30-day hospital readmissions in medical patients: derivation and validation of a prediction model. JAMA Intern Med. 2013;173(8):632–8.

  10. Shadmi E, Flaks-Manov N, Hoshen M, Goldman O, Bitterman H, Balicer RD. Predicting 30-day readmissions with preadmission electronic health record data. Med Care. 2015;53(3):283–9.

  11. Amarasingham R, Patel P, Toto K, Nelson LL, Swanson TS, Moore BJ, et al. Allocating Scarce Resources in Real-Time to Reduce Heart Failure Readmissions: a Prospective, Controlled Study. BMJ Qual Saf. http://doi.org/10.1136/bmjqs-2013-001901.

  12. Horwitz L, Partovian C, Lin Z, Herrin J, Grady J, Conover M, et al. Hospital-Wide All-Cause Unplanned Readmission – Version 3.0. Available at: http://altarum.org/sites/default/files/uploaded-publication-files/Rdmsn_Msr_Updts_HWR_0714_0.pdf. Accessed on May 25, 2015.

  13. Ni H, Nauman D, Burgess D, Wise K, Crispell K, Hershberger RE. Factors influencing knowledge of and adherence to self-care among patients with heart failure. Arch Intern Med. 1999;159(14):1613–9.

  14. Huynh QL, Saito M, Blizzard CL, Eskandari M, Johnson B, Adabi G, et al. Roles of nonclinical and clinical data in prediction of 30-day rehospitalization or death among heart failure patients. J Card Fail. 2015;21(5):374–81.

  15. Gwadry-Sridhar F, Flintoft V, Lee DS, Lee H, Guyatt GH. A systematic review and meta-analysis of studies comparing readmission rates and mortality rates in patients with heart failure. Arch Intern Med. 2004;164(21):2315–20.

  16. Phillips CO, Wright SM, Kern DE, Singa RM, Shepperd S, Rubin HR. Comprehensive discharge planning with postdischarge support for older patients with congestive heart failure. JAMA. 2004;291(11):1358–67.

  17. Keenan P, Normand S, Lin Z, Drye EE, Bhat KR, Ross JS, et al. An administrative claims measure suitable for profiling hospital performance on the basis of 30-day all-cause readmission rates among patients with heart failure. Circ Cardiovasc Qual Outcomes. 2008;1:29–37.

  18. Hersh AM, Masoudi FA, Allen LA. Post-discharge environment following heart failure hospitalization: expanding the view of hospital readmission. J Am Heart Assoc. 2013;2:e000116.

  19. Arbaje AI, Wolff JL, Yu Q, Powe NR, Anderson GF, Boult C. Post-discharge environmental and socioeconomic factors and the likelihood of early hospital readmission among community-dwelling medicare beneficiaries. Gerontologist. 2008;48:495–504.

  20. Peterson PN, Shetterly SM, Clarke CL, Bekelman DB, Chan PS, Allen LA, et al. Health literacy and outcomes among patients with heart failure. JAMA. 2011;305:1695–701.

  21. Cook EF, Goldman L. Empiric comparison of multivariate analytic techniques: advantages and disadvantages of recursive partitioning analysis. J Chronic Dis. 1984;37(9-10):721–31.

  22. Harrell Jr FE, Lee KL, Mark DB. Multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Stat Med. 1996;15(4):361–87.

  23. Kramer AA, Zimmerman JE. Assessing the calibration of mortality benchmarks in critical care: the Hosmer-Lemeshow test revisited. Crit Care Med. 2007;35(9):2052–6.

  24. Pencina MJ, D’Agostino RB Sr, D’Agostino RB Jr, Vasan RS. Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond. Stat Med. 2008;27:157–72.

  25. Rothman KJ. No adjustments are needed for multiple comparisons. Epidemiology. 1990;1:43–6.

  26. Cotter PE, Bhalla VK, Wallis SJ, Biram RW. Predicting readmissions: poor performance of the LACE index in an older UK population. Age Ageing. 2012;41(6):784–9.

  27. McAlister FA. Decreasing Readmissions: It Can Be Done But One Size Does Not Fit All. BMJ Qual Saf. http://dx.doi.org/10.1136/bmjqs-2013-002407.

  28. Huesch MD. Payment policy by measurement of health care spending and outcomes. JAMA. 2010;303:2405–6.


Acknowledgements

This study was funded by the Commonwealth Fund Grant #20100323, Developing a Clinical Decision Support Tool to Prospectively Identify Patients at High Risk for Hospital Readmission Parkland Health and Hospital. Drs. Halm and Zhang were additionally supported by AHRQ Grant# R24 HHS022418, UT Southwestern Center for Patient-Centered Outcomes Research. We also wish to acknowledge the support of: Parkland Health and Hospital, UT Southwestern Medical Center, and Texas Health Resources Health System.

Author information


Corresponding author

Correspondence to Ruben Amarasingham.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

Drs. Amarasingham, Xie and Ma and Mr Clark had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Author contributions were as follows: 1) conception and design (RA, EAH, FV, DB, CC); 2) acquisition of data (RA, DB, CC, YM); 3) analysis and interpretation of data (RA, DB, CC, BX, YM, SZ, BL, MH); 4) drafting of the manuscript (RA, ED, AJ, MH); 5) critical revision of the manuscript for important intellectual content (RA, CC, SZ, BX, YM, FV, DB, BL, AJ, EAH, MH); 6) statistical analysis (RA, CC, BX, SZ, YM); 7) obtaining funding (RA, FV, EAH); 8) administrative, technical, or material support (RA, AJ); 9) supervision (RA, FV, EAH). All authors read and approved the final manuscript.

Additional file

Additional file 1: Table S1.

Results of univariate analysis of Risk of 30 Day Readmission or Deatha. Table S2. Multivariate Adjusted Logistic Regression Coefficients For Present on Admission 30-Day Readmission Model. Table S3. E-Risk Model Classification Performance for All-Comers 30-Day Readmission Risk Based on Data Available in First 24 Hoursa. Figure S1. Cohort assembly diagram. Figure S2. Calibration of Derivation and Validation Cohorts By Quintiles of the Electronic Multicondition Model Predicting 30-Day Readmission and Death. Figure S3. Calibration Plot for both Derivation and Validation cohorts.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Amarasingham, R., Velasco, F., Xie, B. et al. Electronic medical record-based multicondition models to predict the risk of 30 day readmission or death among adult medicine patients: validation and comparison to existing models. BMC Med Inform Decis Mak 15, 39 (2015). https://doi.org/10.1186/s12911-015-0162-6
