Table 6 Experimental results of transformers-sklearn and transformers on four medical NLP tasks (model_type = "roberta")

From: Transformers-sklearn: a toolkit for medical language understanding with transformer-based models

| Name | Score (Ours) | Score (Transformers) | Time (s) (Ours) | Time (s) (Transformers) | Lines of code (Ours) | Lines of code (Transformers) | Pre-trained model |
|---|---|---|---|---|---|---|---|
| TrialClassification | 0.8148ᵃ | **0.8231**ᵃ | 1206 | 1208 | 38 | 246 | chinese-roberta-wwm-ext |
| BC5CDR | **0.8528**ᵃ | 0.8461ᵃ | 460 | 504 | 41 | 309 | roberta-base |
| DiabetesNER | 0.7068ᵃ | **0.7184**ᵃ | 1445 | 1426 | 63 | 309 | chinese-roberta-wwm-ext |
| BIOSSES | **0.3996**ᵇ | 0.3614ᵇ | 36 | 17 | 41 | 246 | roberta-base |

ᵃ Macro F1 score; the bolded value indicates the best performance
ᵇ Pearson correlation coefficient; the bolded value indicates the best performance
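
For context on the "Lines of code" column: transformers-sklearn wraps transformer fine-tuning in a scikit-learn-style fit/predict interface, which is why the tasks above take roughly 38-63 lines of user code versus 246-309 with raw transformers. The sketch below illustrates that style for the TrialClassification setting; the class name `BERTologyClassifier`, its keyword arguments, and the toy data are assumptions modelled on a generic scikit-learn estimator, not verified signatures from the toolkit.

```python
# Minimal sketch of the sklearn-style workflow whose brevity the
# "Lines of code" column reflects. BERTologyClassifier and its keyword
# arguments are assumptions; check the toolkit's repository for the
# exact class names and parameters.
from transformers_sklearn import BERTologyClassifier  # assumed import path

# Toy stand-in data; the real TrialClassification task uses Chinese
# clinical-trial eligibility sentences with multi-class labels.
X_train = ["患者随机分为治疗组与对照组", "本研究排除妊娠期妇女"]
y_train = [0, 1]

clf = BERTologyClassifier(
    model_type="roberta",                  # as in the table caption
    model_name="chinese-roberta-wwm-ext",  # pre-trained model from the table
)
clf.fit(X_train, y_train)                  # fine-tune in a single call
preds = clf.predict(["患者接受为期12周的随访"])
```

Because the estimator follows scikit-learn conventions, evaluation reduces to a `score`-style call on held-out data rather than a hand-written training and metric loop, which accounts for most of the line-count gap in the table.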