Table 1 Evaluation of AUROC and AUPRC for all machine learning algorithms

From: Machine learning methods to predict 30-day hospital readmission outcome among US adults with pneumonia: analysis of the national readmission database

| Algorithm | AUROC (95% CI) | P value^a | AUPRC | Net difference^b |
|---|---|---|---|---|
| *Testing set* | | | | |
| Rule-based model | 0.6591 (0.6556–0.6627) | [Reference] | 0.2146 | [Reference] |
| Decision tree | 0.5783 (0.5751–0.5815) | <0.001 | 0.1560 | −0.0586 |
| Random forest | 0.6509 (0.6473–0.6545) | <0.01 | 0.2052 | −0.0094 |
| **XGBoost** | **0.6606 (0.6570–0.6641)** | **0.015** | **0.2147** | **0.0001** |
| LASSO | 0.6087 (0.6053–0.6120) | <0.001 | 0.2042 | −0.0104 |
| *Training set* | | | | |
| Rule-based model | 0.6690 (0.6654–0.6725) | [Reference] | 0.2190 | [Reference] |
| Decision tree | 0.5773 (0.5741–0.5805) | <0.001 | 0.1556 | −0.0634 |
| Random forest | 0.6558 (0.6522–0.6594) | <0.001 | 0.2109 | −0.0081 |
| **XGBoost** | **0.6725 (0.6690–0.6761)** | **<0.001** | **0.2279** | **0.0089** |
| LASSO | 0.6062 (0.6029–0.6095) | <0.001 | 0.2007 | −0.0183 |

  1. The best-performing model is shown in bold
  2. ML, machine learning; XGBoost, Extreme Gradient Boosting; AUROC, area under the receiver operating characteristic curve; AUPRC, area under the precision-recall curve; LASSO, least absolute shrinkage and selection operator

  ^a P value from the DeLong test comparing each model's area under the receiver operating characteristic curve against the rule-based model (reference)
  ^b Calculated as the difference in AUPRC between each model and the rule-based reference model; a sketch of both calculations follows these notes
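The footnotes describe how the comparisons behind this table are typically computed: AUROC differences are tested with the DeLong method, and the net difference column is each model's AUPRC minus the rule-based reference AUPRC. The following is a minimal sketch of those calculations, assuming hypothetical label and score arrays (`y_true`, `scores_ref`, `scores_model`) and a compact version of the fast DeLong procedure; it is not the authors' implementation and will not reproduce the exact values in the table.

```python
# Sketch: AUROC/AUPRC per model, net AUPRC difference vs. the rule-based reference,
# and a DeLong p-value for the AUROC comparison. All data below are simulated.
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score, average_precision_score


def compute_midrank(x):
    """Midranks (1-based, ties averaged) used by the fast DeLong procedure."""
    order = np.argsort(x)
    z = x[order]
    n = len(x)
    ranks = np.zeros(n)
    i = 0
    while i < n:
        j = i
        while j < n and z[j] == z[i]:
            j += 1
        ranks[i:j] = 0.5 * (i + j - 1) + 1  # average rank of the tied block
        i = j
    out = np.empty(n)
    out[order] = ranks
    return out


def delong_test(y_true, scores_a, scores_b):
    """Two-sided DeLong test for the difference between two correlated AUROCs."""
    y_true = np.asarray(y_true)
    pos = np.flatnonzero(y_true == 1)
    neg = np.flatnonzero(y_true == 0)
    m, n = len(pos), len(neg)
    aucs, v_pos, v_neg = [], [], []
    for s in (np.asarray(scores_a, float), np.asarray(scores_b, float)):
        tx = compute_midrank(s[pos])            # midranks among positives only
        ty = compute_midrank(s[neg])            # midranks among negatives only
        tz = compute_midrank(s)                 # midranks in the pooled sample
        aucs.append((tz[pos].sum() - m * (m + 1) / 2) / (m * n))
        v_pos.append((tz[pos] - tx) / n)        # structural components (positives)
        v_neg.append(1.0 - (tz[neg] - ty) / m)  # structural components (negatives)
    cov = np.cov(np.vstack(v_pos)) / m + np.cov(np.vstack(v_neg)) / n
    diff = aucs[0] - aucs[1]
    se = np.sqrt(cov[0, 0] + cov[1, 1] - 2 * cov[0, 1])
    p = 2 * stats.norm.sf(abs(diff) / se)
    return diff, p


# --- hypothetical usage with simulated labels and scores ---
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=5000)          # 0 = no readmission, 1 = 30-day readmission
scores_ref = rng.random(5000)                   # rule-based model scores (reference)
scores_model = np.clip(scores_ref + 0.1 * y_true + 0.05 * rng.standard_normal(5000), 0, 1)

auroc_ref = roc_auc_score(y_true, scores_ref)
auroc_model = roc_auc_score(y_true, scores_model)
auprc_ref = average_precision_score(y_true, scores_ref)
auprc_model = average_precision_score(y_true, scores_model)

net_difference = auprc_model - auprc_ref                     # footnote b
_, p_value = delong_test(y_true, scores_model, scores_ref)   # footnote a

print(f"AUROC {auroc_model:.4f} vs reference {auroc_ref:.4f} (DeLong p = {p_value:.3g})")
print(f"AUPRC {auprc_model:.4f}, net difference {net_difference:+.4f}")
```

scikit-learn's `roc_auc_score` and `average_precision_score` supply only the point estimates; the confidence intervals reported in the table would additionally require a bootstrap or DeLong-based interval, which is omitted here for brevity.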