Table 3 Performance of LSTM-CRFs models on UF test set

From: A study of deep learning methods for de-identification of clinical notes in cross-institute settings

| Model | Training data | Fine tuning | Strict Pre | Strict Rec | Strict F1 | Relaxed Pre | Relaxed Rec | Relaxed F1 |
|---|---|---|---|---|---|---|---|---|
| LSTM-CRFs | i2b2 | NA | 0.8883 | 0.8274 | 0.8568 | 0.9288 | 0.8651 | 0.8958 |
| LSTM-CRFs + Lexical | i2b2 | NA | 0.8767 | 0.8509 | 0.8636 | 0.9314 | 0.9041 | 0.9175 |
| LSTM-CRFs + Lexical + Knowledge | i2b2 | NA | 0.8767 | 0.8706 | 0.8736 | 0.9229 | 0.9166 | 0.9197 |
| LSTM-CRFs + Lexical + Knowledge | i2b2 | UF | 0.9474 | 0.9109 | **0.9288** | 0.9776 | 0.9400 | **0.9584** |
| LSTM-CRFs + Lexical + Knowledge | UF | NA | 0.9408 | 0.8992 | 0.9195 | 0.9705 | 0.9277 | 0.9486 |
| LSTM-CRFs + Lexical + Knowledge | i2b2 + UF | NA | 0.9352 | 0.9163 | 0.9257 | 0.9681 | 0.9484 | 0.9582 |

Best F1 scores are highlighted in bold
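
The table reports precision, recall, and F1 under strict and relaxed matching. As a minimal sketch, assuming the common de-identification convention (not stated on this table page) that strict matching requires exact span offsets and entity type while relaxed matching accepts any character overlap of the same type, the metrics could be computed as below. The `Span` class and `prf` helper are hypothetical names introduced for illustration.

```python
# Sketch of strict vs. relaxed precision/recall/F1 for PHI spans.
# Assumptions (not confirmed by this table page): strict = exact offsets and
# matching type; relaxed = any character overlap with matching type.
from dataclasses import dataclass


@dataclass(frozen=True)
class Span:
    """Hypothetical container for a predicted or gold PHI annotation."""
    start: int
    end: int
    type: str


def overlaps(a: Span, b: Span) -> bool:
    # Same entity type and at least one overlapping character.
    return a.type == b.type and a.start < b.end and b.start < a.end


def prf(gold: list[Span], pred: list[Span], strict: bool = True):
    if strict:
        # Exact-boundary, exact-type matches only.
        tp = len(set(gold) & set(pred))
    else:
        # Count predictions that overlap some gold span of the same type.
        tp = sum(any(overlaps(p, g) for g in gold) for p in pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1


# Example: one exact match, one off-by-one boundary error.
gold = [Span(0, 4, "NAME"), Span(10, 20, "DATE")]
pred = [Span(0, 4, "NAME"), Span(11, 20, "DATE")]
print(prf(gold, pred, strict=True))   # (0.5, 0.5, 0.5)
print(prf(gold, pred, strict=False))  # (1.0, 1.0, 1.0)
```

Under these assumptions, boundary errors (as in the example) penalize the strict scores but not the relaxed ones, which is consistent with the relaxed columns being uniformly higher than the strict columns in the table.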