Table 7 Experimental results of transformers-sklearn and transformers on four medical NLP tasks (model_type = "albert")

From: Transformers-sklearn: a toolkit for medical language understanding with transformer-based models

| Name | Score (Ours) | Score (Transformers) | Time, s (Ours) | Time, s (Transformers) | Lines of code (Ours) | Lines of code (Transformers) | Pre-trained model |
|---|---|---|---|---|---|---|---|
| TrialClassification | **0.7142**ᵃ | 0.4504ᵃ | 1062 | 1068 | 38 | 246 | albert_chinese_base |
| BC5CDR | 0.8422ᵃ | **0.8523**ᵃ | 444 | 492 | 41 | 309 | albert-base-v2 |
| DiabetesNER | 0.6196ᵃ | **0.6436**ᵃ | 1122 | 1253 | 63 | 309 | albert_chinese_base |
| BIOSSES | 0.1892ᵇ | **0.4394**ᵇ | 12 | 11 | 41 | 246 | albert-base-v2 |

  1. ᵃ Macro F1 score; bold indicates the best performance
  2. ᵇ Pearson correlation; bold indicates the best performance