From: Transformers-sklearn: a toolkit for medical language understanding with transformer-based models
Name | Score (Ours) | Score (Transformers) | Score (UER) | Time (s, Ours) | Time (s, Transformers) | Time (s, UER) | Lines of code (Ours) | Lines of code (Transformers) | Lines of code (UER) | Pre-trained model
---|---|---|---|---|---|---|---|---|---|---
TrialClassification | 0.8225a | 0.8312a | 0.8213a | 1198 | 1227 | 764 | 38 | 246 | 412 | bert-base-chinese
BC5CDR | 0.8703a | 0.8635a | - | 471 | 499 | - | 41 | 309 | - | bert-base-cased
DiabetesNER | 0.6908a | 0.6962a | 0.7166a | 1254 | 1548 | 2805 | 63 | 309 | 372 | bert-base-chinese
BIOSSES | 0.8260b | 0.8200b | - | 19 | 15 | - | 41 | 246 | - | bert-base-cased
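The large gap in lines of code between the toolkit and the raw Transformers/UER baselines reflects the scikit-learn `fit`/`predict` idiom that transformers-sklearn follows: training and inference each collapse to a single call. As a minimal sketch of that idiom, the example below uses plain scikit-learn components (TF-IDF plus logistic regression), not the toolkit's actual classes or a BERT backbone; the task texts and labels are invented for illustration.

```python
# Sketch of the sklearn-style fit/predict interface (hypothetical data;
# plain scikit-learn stand-ins, NOT transformers-sklearn's actual API).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["patient shows fever", "no adverse events", "severe headache reported"]
labels = [1, 0, 1]

# One pipeline object bundles preprocessing and the classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)       # one call trains the whole pipeline
preds = clf.predict(texts)   # one call runs inference
print(list(preds))
```

A transformer-based estimator exposing the same two methods keeps user-facing code at the tens-of-lines scale reported in the table, whereas hand-written training loops account for the baselines' hundreds of lines.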