Table 3 Performance of transformer- and fusion-based models in terms of class-specific precision, recall, and F1-scores, and overall accuracy

From: Text classification models for the automatic detection of nonmedical prescription medication use from social media

| Classification algorithm | Precision (A) | Precision (C) | Precision (M) | Precision (U) | Recall (A) | Recall (C) | Recall (M) | Recall (U) | F1-score (A) | F1-score (C) | F1-score (M) | F1-score (U) | Accuracy (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BERT-1 | 0.60 | 0.78 | 0.86 | 0.88 | 0.61 | 0.77 | 0.85 | 0.89 | 0.60 | 0.77 | 0.86 | 0.89 | 79.48 |
| BERT-2 | 0.60 | 0.79 | 0.86 | 0.91 | 0.61 | 0.77 | 0.86 | 0.85 | 0.61 | 0.78 | 0.86 | 0.88 | 79.85 |
| RoBERTa | 0.63 | 0.81 | 0.88 | 0.90 | 0.66 | 0.82 | 0.87 | 0.89 | 0.65 | 0.81 | 0.88 | 0.90 | 82.32 |
| AlBERT | 0.66 | 0.81 | 0.88 | 0.86 | 0.63 | 0.83 | 0.88 | 0.88 | 0.65 | 0.82 | 0.88 | 0.87 | 82.78 |
| XLNet | 0.65 | 0.77 | 0.86 | 0.87 | 0.55 | 0.83 | 0.86 | 0.82 | 0.60 | 0.80 | 0.86 | 0.85 | 80.52 |
| DistilBERT | 0.56 | 0.75 | 0.86 | 0.89 | 0.60 | 0.77 | 0.83 | 0.87 | 0.58 | 0.76 | 0.84 | 0.88 | 78.0 |
| Proposed Fusion-1 | 0.60 | 0.84 | 0.91 | 0.78 | 0.76 | 0.81 | 0.84 | 0.93 | 0.67 | 0.82 | 0.87 | 0.85 | 82.22 |
| Proposed Fusion-2 | 0.67 | 0.83 | 0.87 | 0.88 | 0.62 | 0.83 | 0.90 | 0.89 | 0.65 | 0.83 | 0.89 | 0.88 | 83.43 |
| Proposed Fusion-3 | 0.56 | 0.83 | 0.90 | 0.75 | 0.73 | 0.80 | 0.83 | 0.92 | 0.64 | 0.82 | 0.86 | 0.82 | 80.92 |
| Proposed Fusion-4 | 0.68 | 0.84 | 0.87 | 0.89 | 0.62 | 0.82 | 0.90 | 0.87 | 0.64 | 0.83 | 0.89 | 0.88 | 83.49 |
  1. Best scores for each metric over all the classifiers are shown in bold
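
The class-specific precision, recall, and F1-scores and the overall accuracy reported in the table are standard multi-class evaluation metrics. A minimal sketch of how such numbers are typically computed is shown below, using scikit-learn; the class codes follow the table header (A, C, M, U), while the labels and predictions are illustrative placeholders, not the study's actual data or model outputs.

```python
# Sketch: per-class precision/recall/F1 and overall accuracy, as in Table 3.
# The example labels and predictions below are hypothetical placeholders.
from sklearn.metrics import classification_report, accuracy_score

CLASSES = ["A", "C", "M", "U"]  # class codes as used in the table header

# Hypothetical gold labels and model predictions for a handful of posts.
y_true = ["A", "C", "M", "U", "C", "M", "U", "A"]
y_pred = ["A", "C", "M", "U", "M", "M", "U", "C"]

# Per-class precision, recall, and F1-score (one row per class),
# matching the metric columns in the table.
print(classification_report(y_true, y_pred, labels=CLASSES, digits=2))

# Overall accuracy, reported as a percentage in the table.
print(f"Accuracy: {accuracy_score(y_true, y_pred) * 100:.2f}%")
```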