Figure 1 | BMC Medical Informatics and Decision Making


From: Computerized prediction of intensive care unit discharge after cardiac surgery: development and validation of a Gaussian processes model

Figure 1

ROC curves for the classification task.

Panel A. Validation cohort of 499 patients. ROC curve of the GP model (aROC = 0.758; dark dashed line), with a circle marking the cutoff with the best discrimination and calibration, and ROC curve of the EuroSCORE (aROC = 0.726; grey dotted line), with a star marking its cutoff. The difference between the GP model and the EuroSCORE was not statistically significant (p = 0.286).

Panel B. Validation subcohort of 396 patients. ROC curves of the GP model (aROC = 0.769; dark dashed line, circle at the best cutoff), the EuroSCORE (aROC = 0.726; grey dotted line, star at the best cutoff), and the predictions by nurses (aROC = 0.695; solid black line, triangle at the best cutoff). The aROC of the predictions by nurses was significantly lower than that of the GP model (p = 0.018).

Panel C. Validation subcohort of 159 patients. ROC curves of the GP model (aROC = 0.777; dark dashed line, circle at the best cutoff), the EuroSCORE (aROC = 0.726; grey dotted line, star at the best cutoff), and the predictions by ICU physicians (aROC = 0.758; solid black line, triangle at the best cutoff). The aROC of the predictions by ICU physicians did not differ significantly from that of the GP model (p = 0.719).
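An aROC of the kind reported above can be computed from raw predictions via the Mann-Whitney formulation: the area under the ROC curve equals the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A minimal stdlib-only Python sketch follows; the function names are illustrative, and the Youden-index rule used for the cutoff is a common proxy assumption, not necessarily the paper's exact criterion for "best discrimination and calibration":

```python
def auc_roc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    fraction of (positive, negative) pairs where the positive case
    receives the higher score; ties count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))


def youden_cutoff(labels, scores):
    """Threshold maximizing Youden's J = sensitivity + specificity - 1,
    a common way to pick a single 'best' operating point on the ROC."""
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= t)
        tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < t)
        j = tp / n_pos + tn / n_neg - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j


# Toy usage with made-up labels (1 = prolonged stay) and model scores.
labels = [1, 1, 0, 0]
scores = [0.9, 0.8, 0.3, 0.1]
print(auc_roc(labels, scores))        # perfect separation -> 1.0
print(youden_cutoff(labels, scores))  # -> (0.8, 1.0)
```

Comparing two aROCs on the same patients, as done here between the GP model and the EuroSCORE, additionally requires a paired test for correlated ROC curves (e.g. the DeLong method), which is beyond this sketch.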
