From: Transformers-sklearn: a toolkit for medical language understanding with transformer-based models
| Name | Score (Ours) | Score (Transformers) | Runtime, s (Ours) | Runtime, s (Transformers) | Lines of code (Ours) | Lines of code (Transformers) | Pre-trained model |
|---|---|---|---|---|---|---|---|
| TrialClassification | 0.8148ᵃ | 0.8231ᵃ | 1206 | 1208 | 38 | 246 | chinese-roberta-wwm-ext |
| BC5CDR | 0.8528ᵃ | 0.8461ᵃ | 460 | 504 | 41 | 309 | roberta-base |
| DiabetesNER | 0.7068ᵃ | 0.7184ᵃ | 1445 | 1426 | 63 | 309 | chinese-roberta-wwm-ext |
| BIOSSES | 0.3996ᵇ | 0.3614ᵇ | 36 | 17 | 41 | 246 | roberta-base |
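The lines-of-code gap in the table comes from wrapping the training loop behind a scikit-learn-style estimator interface. The sketch below is purely illustrative of that protocol (fit/predict/score); the class name and internals are hypothetical stand-ins, not the toolkit's actual API, and the toy "model" just memorizes the majority label where a real wrapper would fine-tune a pre-trained transformer.

```python
from collections import Counter

class MajorityTextClassifier:
    """Hypothetical stand-in following the sklearn fit/predict/score protocol."""

    def fit(self, X, y):
        # A real sklearn-style wrapper would tokenize X and fine-tune a
        # pre-trained transformer here; this toy stores the majority label.
        self.majority_ = Counter(y).most_common(1)[0][0]
        return self

    def predict(self, X):
        # Predict the memorized majority label for every input text.
        return [self.majority_ for _ in X]

    def score(self, X, y):
        # Plain accuracy, mirroring sklearn's default classifier scoring.
        preds = self.predict(X)
        return sum(p == t for p, t in zip(preds, y)) / len(y)

clf = MajorityTextClassifier()
clf.fit(["trial text a", "trial text b", "trial text c"], [1, 1, 0])
print(clf.score(["trial text d", "trial text e"], [1, 0]))  # 0.5
```

With this shape, an entire benchmark run reduces to a few estimator calls, which is consistent with the ~40-60 line "Ours" column versus the 246-309 lines needed when the tokenization, training loop, and evaluation are written against Transformers directly.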