
Table 5 Experimental results of transformers-sklearn, Transformers, and UER on four medical NLP tasks (model_type = "bert")

From: Transformers-sklearn: a toolkit for medical language understanding with transformer-based models

| Name | Score (Ours) | Score (Transformers) | Score (UER) | Time (s, Ours) | Time (s, Transformers) | Time (s, UER) | Lines of code (Ours) | Lines of code (Transformers) | Lines of code (UER) | Pre-trained model |
|---|---|---|---|---|---|---|---|---|---|---|
| TrialClassification | 0.8225^a | **0.8312**^a | 0.8213^a | 1198 | 1227 | 764 | 38 | 246 | 412 | bert-base-chinese |
| BC5CDR | **0.8703**^a | 0.8635^a | - | 471 | 499 | - | 41 | 309 | - | bert-base-cased |
| DiabetesNER | 0.6908^a | 0.6962^a | **0.7166**^a | 1254 | 1548 | 2805 | 63 | 309 | 372 | bert-base-chinese |
| BIOSSES | **0.8260**^b | 0.8200^b | - | 19 | 15 | - | 41 | 246 | - | bert-base-cased |

^a The value of Macro F1; the bolded value indicates the best performance.
^b The value of Pearson correlation; the bolded value indicates the best performance.
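
For context on the "Lines of code" column: transformers-sklearn wraps transformer fine-tuning in the scikit-learn fit/predict convention, which is what keeps its task scripts an order of magnitude shorter than the plain Transformers or UER equivalents. Below is a minimal sketch of that usage pattern; the class name `BERTologyClassifier`, its import path, and its keyword arguments are assumptions made for illustration, not verbatim from this table.

```python
# Minimal sketch of a scikit-learn-style classification run with
# transformers-sklearn. The class name `BERTologyClassifier` and its
# keyword arguments are ASSUMED here to illustrate the fit/predict
# pattern behind the small "Lines of code" counts for "Ours".
from sklearn.metrics import f1_score
from transformers_sklearn import BERTologyClassifier  # assumed import path

# Toy data in place of a real task corpus such as TrialClassification:
# X_* are lists of sentences, y_* are integer class labels.
X_train = ["患者男，45岁，诊断为2型糖尿病。", "本试验纳入健康志愿者。"]
y_train = [0, 1]
X_test = ["受试者为糖尿病患者。"]
y_test = [0]

clf = BERTologyClassifier(
    model_type="bert",                      # matches the table caption
    model_name_or_path="bert-base-chinese",  # "Pre-trained model" column
)
clf.fit(X_train, y_train)          # fine-tune the pre-trained model
y_pred = clf.predict(X_test)       # scikit-learn-style inference
print(f1_score(y_test, y_pred, average="macro"))  # Macro F1, as in note a
```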