Manual and automated methods for identifying potentially preventable readmissions: a comparison in a large healthcare system

Abstract

Background

Identification of potentially preventable readmissions is typically accomplished through manual review or automated classification. Little is known about the concordance of these methods.

Methods

We manually reviewed 459 30-day, all-cause readmissions at 18 Kaiser Permanente Northern California hospitals, determining potential preventability through a four-step manual review process that included a chart review tool; interviews with patients, their families, and treating providers; and nurse reviewer and physician evaluation of findings and determination of preventability on a five-point scale. We reassessed the same readmissions with 3M's Potentially Preventable Readmission (PPR) software. We examined between-method agreement and the sensitivity and specificity of the PPR software using manual review as the reference.

Results

Automated classification and manual review identified 78% (358) and 49% (227) of readmissions, respectively, as potentially preventable. Overall, the methods agreed about the preventability of 56% (258) of readmissions. Using manual review as the reference, the sensitivity of PPR was 85% and its specificity was 28%.

Conclusions

Concordance between methods was not high enough to replace manual review with automated classification as the primary method of identifying preventable 30-day, all-cause readmission for quality improvement purposes.

Background

Hospital readmissions are expensive and may reflect poor quality care. Under the new Readmissions Reduction Program, the U.S. Centers for Medicare and Medicaid Services reduces payments to hospitals with excess 30-day readmission rates [1]. Many hospitals are therefore interested in identifying preventable readmissions and understanding how they can be prevented.

Classifying readmissions as potentially preventable or not preventable can be used to improve hospital performance. Administrators can sort potentially preventable readmissions into categories that are actionable for improvement and identify trends over time or across reporting units. The same classification can also be used to establish accountability across reporting units and to reward top performers.

In a recent meta-analysis of 16 studies, the median proportion of 30-day readmissions judged to be avoidable was 21.6% [2]; across individual studies, the range was 5% to 59% [2–4]. The methods used to measure potential preventability vary greatly, but most involve manual chart review by at least one reviewer [2, 5]. Manual review is labor-intensive and subjective. To address these shortcomings, automated software classification programs have been developed that rely on administrative data to identify potential preventability [5, 6]. Automated classification offers the prospect of greater efficiency and consistency. However, automated classification has been found to identify more readmissions as potentially preventable than does manual review, so its validity has been questioned [5, 7]. Although studies have compared manual review to automated classification, no published evidence describes the extent of agreement between the methods when applied to the same readmissions. We assessed the concordance between manual review and automated classification on the same set of readmissions to determine whether automated classification could more efficiently identify preventable readmissions for quality improvement purposes.

Methods

Design

We compared a manual review of readmissions to automated classification by the Potentially Preventable Readmission (PPR) software from 3M. Manual review consisted of a multi-step process, described in more detail elsewhere, that was conducted to identify missed opportunities to prevent readmissions [8]. The first step was a detailed chart review conducted by trained nurse reviewers, based loosely on an expanded version of a readmissions diagnostic tool from the Institute for Healthcare Improvement [9]. Chart review data came from KP HealthConnect™, the electronic health record (EHR). Interviews with treating physicians followed; the interview guide included their assessment of the readmission's preventability. We also interviewed patients and family caregivers in 73% of readmissions, again using an interview guide and soliciting an assessment of preventability. The same nurse reviewer conducted the chart review and interviews for each patient. In the final step of manual review, the nurse reviewer partnered with a physician reviewer to review and assess the information and opinions gathered from the chart review and interviews. They identified factors representing missed opportunities to prevent the readmission from a checklist of 35 possibilities prepared by subject matter experts and pilot tested before use. After discussion, the nurse reviewer/physician team used clinical judgment to rate the preventability of the readmission on a five-point scale (not, slightly, moderately, very, or completely likely to be preventable).

Preventability was assessed within six weeks of readmission. Nurse reviewers conducting chart reviews and interviews were trained and, during each case review, received facilitation, guidance, and consultation from a member of the research team, who focused on data validation and consistent ratings during data collection.
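
For readers who wish to reproduce our dichotomization of this scale, the sketch below illustrates one way to map the five-point rating onto the binary potentially preventable/not potentially preventable classification used in the comparison (ratings of slightly through completely preventable counted as potentially preventable, consistent with how the Results are reported). The enumeration and function names are illustrative only and do not reflect the study's actual data structures.

```python
from enum import IntEnum

class Preventability(IntEnum):
    """Five-point preventability scale used by the nurse/physician review team."""
    NOT = 0
    SLIGHTLY = 1
    MODERATELY = 2
    VERY = 3
    COMPLETELY = 4

def is_potentially_preventable(rating: Preventability) -> bool:
    """Dichotomize the scale: ratings of 'slightly' through 'completely'
    count as potentially preventable (an assumption consistent with the
    Results, which report 'slightly to completely' as potentially preventable)."""
    return rating >= Preventability.SLIGHTLY

# A readmission rated 'moderately' preventable is counted as potentially preventable.
assert is_potentially_preventable(Preventability.MODERATELY)
assert not is_potentially_preventable(Preventability.NOT)
```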

The PPR software assesses potential preventability based on All Patient Refined Diagnosis Related Groups (APR DRGs), an index of diagnosis and severity of illness [10]. Cases are assigned an APR DRG code at initial admission and at readmission. A panel of physicians involved in the development of PPR examined all possible combinations of APR DRGs and predetermined whether each combination represents a readmission that is clinically related to the initial admission or the result of a complication of the initial admission [7]. If an APR DRG combination is predetermined to be clinically related or to involve a clinical complication, PPR classifies the readmission as potentially preventable [11].
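
To make this predetermined-combination logic concrete, the following sketch shows a minimal lookup-table classification keyed on (index APR DRG, readmission APR DRG) pairs. The table entries, codes, and function names are invented for illustration; the actual 3M mapping is proprietary, far larger, and the real software applies additional exclusion rules.

```python
# Illustrative sketch of PPR-style lookup logic. The APR DRG codes and table
# entries below are invented examples, not 3M's actual (proprietary) mappings.

# Each key is (index_admission_apr_drg, readmission_apr_drg); the value records
# whether a physician panel predetermined the pair to be clinically related to
# the index stay (or a complication of it).
CLINICALLY_RELATED_PAIRS = {
    ("140", "140"): True,   # hypothetical: COPD followed by another COPD admission
    ("140", "720"): False,  # hypothetical: COPD followed by an unrelated DRG
}

def ppr_flags_as_potentially_preventable(index_apr_drg: str,
                                         readmit_apr_drg: str) -> bool:
    """Return True if the predetermined table marks the pair as clinically
    related or a complication, i.e., the readmission is flagged as potentially
    preventable. Pairs absent from this toy table default to False."""
    return CLINICALLY_RELATED_PAIRS.get((index_apr_drg, readmit_apr_drg), False)

print(ppr_flags_as_potentially_preventable("140", "140"))  # True
print(ppr_flags_as_potentially_preventable("140", "720"))  # False
```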

Setting and population

Our study was conducted in the Northern California region of Kaiser Permanente (KPNC), which currently has 3.25 million members and had 20 hospitals at the time of this study. We piloted and refined our manual review methods at two hospitals. The readmissions we reviewed for this assessment took place between December 2009 and June 2010 in the remaining 18 KPNC hospitals; at the time, the regional all-cause, 30-day readmission rate for Medicare recipients was 12.2%, well below the 19.6% reported across all fee-for-service Medicare beneficiaries in the United States (unadjusted for any case mix differences) [12, 13]. The hospitals were located in both urban and suburban areas, and the daily census ranged from 35 to 250 patients. All hospitals employed salaried hospitalists, and five were teaching hospitals.

Our comparison included 459 cases of patients readmitted for any reason within 30 days of hospital discharge for whom manual reviews had been conducted as part of a previous quality improvement report [8]. The initial population, identified using administrative data from the EHR, consisted of approximately 30 patients at each site who were most recently and consecutively readmitted to the same facility within 30 days of index hospitalization discharge. We chose the most recent readmissions to increase the likelihood of reaching patients, families, and providers for interviews and of their recalling the details of the episode. We excluded patients who were pregnant, childbearing, or under the age of 18 from the initial population. In addition, during the assessment reported here, we excluded 79 readmissions because the manual review was missing data (17) or did not assess readmission preventability (62), and the PPR software excluded 71 readmissions due to diagnoses of human immunodeficiency virus (HIV) or metastatic malignancy (64), patients who left against medical advice (2), transfers to other settings (4), or multiple trauma (1). Nine readmissions were excluded by both manual review and PPR. Each of the remaining 459 readmissions, which had an existing assessment of preventability from manual review, was independently classified as potentially preventable or not preventable by automated classification using PPR.

Analysis

We compared results of manual review and PPR using two-by-two tables to describe patterns of agreement and disagreement. The true proportion of readmissions that are potentially preventable is unknown, but the predominant reported method of identifying potentially preventable readmissions is manual review [5]. We therefore used manual review as a reference point to measure the sensitivity and specificity of PPR. Sensitivity refers to the percentage of potentially preventable readmissions identified by manual review that were also identified as such by PPR. Specificity refers to the percentage of non-potentially preventable readmissions identified by manual review that were also identified as such by the software program.
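
As a concrete companion to these definitions, the sketch below computes sensitivity and specificity from a two-by-two table in which manual review is the reference standard; the cell counts in the usage example are placeholders for illustration, not study data.

```python
def sensitivity_specificity(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """Sensitivity and specificity of automated classification (PPR),
    treating manual review as the reference standard.

    tp: flagged potentially preventable by both PPR and manual review
    fp: flagged by PPR but not potentially preventable per manual review
    fn: potentially preventable per manual review but not flagged by PPR
    tn: not potentially preventable by either method
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Placeholder counts for illustration only (not the study's data):
sens, spec = sensitivity_specificity(tp=80, fp=60, fn=20, tn=40)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # sensitivity=80%, specificity=40%
```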

We conducted two supplementary analyses, examining PPR sensitivity and specificity among medical patients and among readmissions occurring within and after seven days of hospital discharge. The Kaiser Permanente Northern California Institutional Review Board approved this study.

Results

The median age of readmitted patients was 69 years; 54% were female (Table 1). PPR identified 78% (358) of readmissions as potentially preventable, whereas manual review identified 49% (227) of readmissions as potentially (slightly to completely) preventable. Overall, the methods agreed about preventability or non-preventability in 56% (258) of cases (Table 2).

Table 1 Patient characteristics (n = 459)
Table 2 Concordance between methods for identifying potential preventability

Using manual review as the reference, the sensitivity of PPR was 85%: it identified 85% of the potentially preventable readmissions that were identified by manual review. The specificity of PPR was 28%: it correctly classified 28% of the non-potentially preventable readmissions identified by manual review. Of the 232 cases identified as not potentially preventable by manual review, PPR identified 72% as potentially preventable. These results did not vary substantially when we considered only medical patients (N = 312). Among readmissions that occurred within seven days of discharge from the index admission (N = 326), sensitivity was slightly higher at 91%; among readmissions that occurred more than seven days after index discharge (N = 133), sensitivity was 67%. We did not test the statistical significance of this difference. Specificity did not vary substantially from the original analysis.
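
The reported marginals are sufficient to back-calculate the approximate cells of the underlying two-by-two table. The derivation below is our reconstruction from the published totals (459 readmissions, 227 potentially preventable and 232 not per manual review, 358 per PPR, and 258 agreements); the resulting cell counts are implied by those figures rather than reported directly in the text.

```python
# Back-calculation of the implied two-by-two table from the reported marginals.
# The derived cell counts are a reconstruction, not figures reported in the article.
total, manual_prev, ppr_prev, agree = 459, 227, 358, 258

# With a = preventable by both methods, the other cells are ppr_prev - a,
# manual_prev - a, and agree - a; requiring the four cells to sum to `total`
# gives a = (manual_prev + ppr_prev + agree - total) / 2.
a = (manual_prev + ppr_prev + agree - total) // 2   # 192: preventable by both
b = ppr_prev - a                                    # 166: PPR only
c = manual_prev - a                                 #  35: manual review only
d = agree - a                                       #  66: neither

print(a, b, c, d)                                        # 192 166 35 66
print(f"sensitivity = {a / (a + c):.0%}")                # ~85%
print(f"specificity = {d / (b + d):.0%}")                # ~28%
print(f"PPR-flagged among manual 'not preventable' = {b / (b + d):.0%}")  # ~72%
```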

Discussion

Substantial differences existed between manual review and automated classification methods, with PPR identifying many more readmissions as potentially preventable. This may have occurred because PPR uses a sole criterion to identify potential preventability: clinical relatedness to the index admission. In contrast, manual review classified as non-preventable many readmissions that were clinically related to the index stay. For example, a 75-year-old man was admitted twice within 30 days for exacerbation of chronic obstructive pulmonary disease. Reviewers found that his follow-up care and transition care plan were appropriate. The patient and his physician felt that the readmission could not have been prevented by Kaiser Permanente, and the reviewers agreed.

To a lesser extent, manual review also identified potentially preventable readmissions that PPR did not identify. For example, a 54-year-old woman was first admitted for partial-thickness burns and then readmitted with a digestive system diagnosis. Reviewers found that, had she received appropriate referrals and post-discharge follow-up, the readmission might have been prevented. This assessment is consistent with recent research suggesting that, immediately after discharge, patients may be at generalized elevated risk and need additional support to manage ongoing health conditions [14]. PPR did not identify this case as potentially preventable.

Manual review, a subjective process, might have resulted in misclassifications. Manual review processes that include more than one reviewer are associated with a higher proportion of readmissions identified as preventable [2]. Our manual review process used a nurse reviewer/physician team to assess preventability and identified 49% of readmissions as potentially preventable, more than twice the reported median [2]. It is therefore unlikely that the between-methods differences resulted from underestimation of preventability on manual review.

A strength of our report is that we used both methods among the same cases, controlling for variables that have made it difficult to compare methods of measuring preventability in the past, such as patient population and quality of hospital care [15]. Several limitations deserve mention. Reviewers were affiliated with (physicians) or employed by (nurses) Kaiser Permanente, which might have affected their assessment; however, they had not provided care for cases they reviewed. Our assessment took place in an integrated care setting with comprehensive EHR capabilities, and the generalizability of our findings to other settings is unknown. PPR is designed to assess potential preventability over thousands of cases; our analysis may have been too small to assess its accuracy. A different automated classification system may have generated different results, although studies using administrative data alone yield preventability estimates of 55% to 77.1%, much higher than the median for manual review of less than 22% [2].

Few validation reports of PPR exist to which we can compare our results. PPR identified 6.2% of 30-day readmissions among pediatric patients as potentially preventable and excluded some diagnoses amenable to quality improvement or of uncertain preventability; the authors concluded that caution was warranted when applying the tool to pediatric populations [16]. In preliminary findings from an ongoing study at the U.S. Department of Veterans Affairs (VA), PPR identified just over half of researcher-identified pneumonia readmissions [17]. Another VA study found that PPR and the Centers for Medicare & Medicaid Services (CMS) all-cause readmission measure were moderately correlated; when the variable of potential preventability was removed from the analysis, correlation increased [18].

Our objective was to determine whether PPR could replace manual review as a method for identifying preventable readmissions, in support of our ultimate goal of identifying the system gaps that contributed to them. The substantial discrepancy between results precludes that option; PPR classification agreed with manual review only slightly better than half the time. It would overlook 15% of the preventable readmissions identified by manual review and direct much of our organizational attention to readmissions that were not potentially preventable.

The developers of PPR recommend that it be used as a screening tool to identify types of patients and providers with higher than expected readmission rates as a means of focusing subsequent manual review on those patients who have the greatest likelihood of having a preventable readmission. We did not assess the use of PPR across settings and cannot comment on its ability to measure relative performance across facilities. However, in a recent comparison of PPR and the CMS all-cause readmission measure, PPR hospital profiles would have generated different payment penalties for 30% of hospitals [18].

The true number of potentially preventable readmissions remains unknown, and the choice of method greatly influences the proportion identified as potentially preventable. However, meaningful identification of preventability, which pinpoints the missed opportunities leading to avoidable readmissions and forms the basis for quality improvement efforts, depends on the review of primary data [2, 8, 18]. Future research is required to identify and test ways to refine the PPR so as to increase its concordance with manual review. For example, studies with larger samples may identify subsets of readmissions in which sensitivity and specificity are improved; one of our supplementary analyses suggests that PPR sensitivity may vary with the timing of readmission, and further research is required to confirm this finding. Research is also needed to establish the effectiveness of using automated classification and manual review in combination to identify potentially preventable readmissions and the quality improvement opportunities to address them.

Conclusions

Thorough manual review and automated classification methods differed substantially in the proportion of readmissions classified as potentially preventable. PPR identified many more readmissions as potentially preventable. Not enough concordance currently exists between methods to use automated classification to replace manual review for quality improvement initiatives.

References

  1. Readmissions Reduction Program. http://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/Readmissions-Reduction-Program.html

  2. van Walraven C, Jennings A, Forster AJ: A meta-analysis of hospital 30-day avoidable readmission rates. J Eval Clin Pract. 2012, 18: 1211-1218. 10.1111/j.1365-2753.2011.01773.x.

  3. Levy A, Alsop K, Hehir M, Lock V, Greenwood R, Tobin G: Hospital readmissions. We'll meet again. Health Serv J. 2000, 110: 30-31.

  4. Williams EI, Fitton F: Factors affecting early unplanned readmission of elderly patients to hospital. BMJ. 1988, 297: 784-787. 10.1136/bmj.297.6651.784.

  5. van Walraven C, Bennett C, Jennings A, Austin PC, Forster AJ: Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011, 183: E391-402. 10.1503/cmaj.101860.

  6. Goldfield N: How important is it to identify avoidable hospital readmissions with certainty? CMAJ. 2011, 183: E368-369. 10.1503/cmaj.110448.

  7. van Walraven C, Jennings A, Taljaard M, Dhalla I, English S, Mulpuru S, Blecker S, Forster AJ: Incidence of potentially avoidable urgent readmissions and their relation to all-cause urgent readmissions. CMAJ. 2011, 183: E1067-1072. 10.1503/cmaj.110400.

  8. Feigenbaum P, Neuwirth E, Trowbridge L, Teplitsky S, Barnes CA, Fireman E, Dorman J, Bellows J: Factors contributing to all-cause 30-day readmissions: a structured case series across 18 hospitals. Med Care. 2012, 50: 599-605. 10.1097/MLR.0b013e318249ce72.

  9. STAAR: STate Action on Avoidable Rehospitalizations. http://www.ihi.org/knowledge/Pages/Tools/ReadmissionsDiagnosticWorksheet.aspx

  10. Averill RF, Goldfield NI, Muldoon J, Steinbeck BA, Grant TM: A closer look at all-patient refined DRGs. J AHIMA. 2002, 73: 46-50.

  11. Goldfield NI, McCullough EC, Hughes JS, Tang AM, Eastman B, Rawlins LK, Averill RF: Identifying potentially preventable readmissions. Health Care Financ Rev. 2008, 30: 75-91.

  12. Jencks SF, Williams MV, Coleman EA: Rehospitalizations among patients in the Medicare fee-for-service program. N Engl J Med. 2009, 360: 1418-1428. 10.1056/NEJMsa0803563.

  13. America's Health Insurance Plans: Reductions in hospital days, re-admissions, and potentially avoidable admissions among Medicare Advantage enrollees in California and Nevada. 2009, Washington, DC: AHIP Center for Policy and Research.

  14. Krumholz HM: Post-hospital syndrome–an acquired, transient condition of generalized risk. N Engl J Med. 2013, 368: 100-102. 10.1056/NEJMp1212324.

  15. Yam CH, Wong EL, Chan FW, Wong FY, Leung MC, Yeoh EK: Measuring and preventing potentially avoidable hospital readmissions: a review of the literature. Hong Kong Med J. 2010, 16: 383-389.

  16. Gay J, Agrawal R, Del Beccaro M, Eghtesady P, Feildston ES, Golias J, Hain P, McClead RE, Morse RB, Neuman MI, Harris JM, Simon H, Tejedor-Soto J, Teufel R, Shah S: Rates and impact of potentially preventable readmissions at children's hospitals [abstract]. Am Acad Pediatr. 2013. https://aap.confex.com/aap/2013/webprogram/Paper20638.html

  17. Validating and classifying VA readmissions for quality assessment and improvement. http://www.hsrd.research.va.gov/research/abstracts.cfm?Project_ID=2141701056#.UsNsPmRDtX8

  18. Mull HJ, Chen Q, O'Brien WJ, Shwartz M, Borzecki AM, Hanchate A, Rosen AK: Comparing 2 methods of assessing 30-day readmissions: what is the impact on hospital profiling in the Veterans Health Administration? Med Care. 2013, 51: 589-596. 10.1097/MLR.0b013e31829019a4.


Acknowledgements

We thank the following individuals from 3M for their contributions to this work: Norbert Goldfield, Lisa Lyons, Rich Averill, John Hughes, and Matt Bates. The efforts of Carol Ann Barnes, Rebecca Hambright, Erica Rudolph, Elizabeth Suden, and Jay Robles from Kaiser Permanente were essential. Jennifer Green provided editorial assistance.

Author information

Corresponding author

Correspondence to Ana H Jackson.

Additional information

Competing interests

All authors declare that they have no competing interests.

Authors’ contributions

AJ designed the study, collected, analyzed and interpreted the data, and revised the manuscript for important intellectual content. EF collected and analyzed data and drafted the manuscript. PF interpreted the data and revised the manuscript for important intellectual content. EN designed the study, collected data, and revised the manuscript. PK analyzed the data and revised the manuscript. JB conceived of the study, interpreted the data, and revised the manuscript. All authors approved the final version of the manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

About this article

Cite this article

Jackson, A.H., Fireman, E., Feigenbaum, P. et al. Manual and automated methods for identifying potentially preventable readmissions: a comparison in a large healthcare system. BMC Med Inform Decis Mak 14, 28 (2014). https://doi.org/10.1186/1472-6947-14-28
