
Table 1 Common metrics in model evaluation [28]

From: An intelligent decision support system for acute postoperative endophthalmitis: design, development and evaluation of a smartphone application

| Evaluation metric | Definition | Formula |
| --- | --- | --- |
| Accuracy | Proportion of correctly identified samples, both positive and negative, out of all samples. The higher the accuracy, the better the classifier | \(\frac{\mathrm{TP}+\mathrm{TN}}{\mathrm{TP}+\mathrm{TN}+\mathrm{FP}+\mathrm{FN}}\) |
| Precision / Positive Predictive Value (PPV) | Proportion of correctly identified positives out of all samples identified as positive | \(\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}}\) |
| Sensitivity / True Positive Rate (TPR) | Proportion of correctly identified positives out of all actual positives | \(\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}\) |
| Specificity / True Negative Rate (TNR) | Proportion of correctly identified negatives out of all actual negatives | \(\frac{\mathrm{TN}}{\mathrm{TN}+\mathrm{FP}}\) |
| Negative Predictive Value (NPV) | Proportion of correctly identified negatives out of all samples identified as negative | \(\frac{\mathrm{TN}}{\mathrm{TN}+\mathrm{FN}}\) |
| F-measure (F1 score) | Harmonic mean of precision and sensitivity. The highest F1 score is 1 and the lowest is 0 | \(\frac{2\,\mathrm{TP}}{2\,\mathrm{TP}+\mathrm{FP}+\mathrm{FN}}\) |
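All of the metrics in Table 1 follow directly from the four confusion-matrix counts (TP, TN, FP, FN). The short Python sketch below (not part of the original article; the function name `evaluation_metrics` and the example counts are illustrative assumptions) implements each formula exactly as defined in the table.

```python
def evaluation_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute the Table 1 metrics from confusion-matrix counts.

    tp: true positives, tn: true negatives,
    fp: false positives, fn: false negatives.
    """
    total = tp + tn + fp + fn
    return {
        # Proportion of correct identifications, both positive and negative
        "accuracy": (tp + tn) / total,
        # Correct positives out of all samples identified as positive
        "precision_ppv": tp / (tp + fp),
        # Correct positives out of all actual positives
        "sensitivity_tpr": tp / (tp + fn),
        # Correct negatives out of all actual negatives
        "specificity_tnr": tn / (tn + fp),
        # Correct negatives out of all samples identified as negative
        "npv": tn / (tn + fn),
        # Harmonic mean of precision and sensitivity (F1 score)
        "f_measure": 2 * tp / (2 * tp + fp + fn),
    }


if __name__ == "__main__":
    # Illustrative counts only, not results from the study
    for name, value in evaluation_metrics(tp=40, tn=45, fp=5, fn=10).items():
        print(f"{name}: {value:.3f}")
```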