Table 7 Performance comparison of different models on the yidu-s4k dataset

From: An imConvNet-based deep learning model for Chinese medical named entity recognition

All values are percentages. Disease, Position, LabCheck, Check, Drug, and Method are the six entity types; the final column gives the comprehensive (overall) value.

| Model | Evaluation index (%) | Disease | Position | LabCheck | Check | Drug | Method | Comprehensive value |
|---|---|---|---|---|---|---|---|---|
| IDCNN-CRF | P (precision) | 71.94 | 72.07 | 81.35 | 81.56 | 77.51 | 77.55 | 74.09 |
| | R (recall) | 70.88 | 78.83 | 76.67 | 76.25 | 72.38 | 78.35 | 75.77 |
| | F1-score | 71.41 | 75.30 | 78.94 | 78.81 | 74.86 | 77.95 | 74.92 |
| BiLSTM-CRF | P (precision) | 75.13 | 71.14 | 77.78 | 81.42 | 77.88 | 73.63 | 73.98 |
| | R (recall) | 72.16 | 80.06 | 76.36 | 78.93 | 72.93 | 76.29 | 76.75 |
| | F1-score | 73.62 | 75.34 | 77.06 | 80.16 | 75.32 | 74.94 | 75.34 |
| imConvNet-CRF | P (precision) | 72.45 | 74.79 | 77.06 | 72.26 | 81.06 | 76.77 | 74.84 |
| | R (recall) | 75.81 | 78.02 | 79.39 | 80.84 | 80.39 | 78.35 | 77.99 |
| | F1-score | 74.10 | 76.37 | 78.21 | 76.31 | 80.72 | 77.55 | 76.38 |
| imConvNet-BiLSTM-CRF | P (precision) | 74.53 | 76.19 | 77.78 | 81.60 | 78.84 | 81.77 | 76.77 |
| | R (recall) | 70.78 | 78.72 | 82.73 | 78.16 | 75.14 | 76.29 | 76.49 |
| | F1-score | 72.61 | 77.43 | 80.18 | 79.84 | 76.94 | 78.93 | 76.63 |
| BERT-imConvNet-CRF | P (precision) | 89.77 | 90.74 | 95.62 | 91.89 | 96.58 | 91.74 | 89.27 |
| | R (recall) | 89.38 | 86.95 | 94.70 | 97.14 | 97.35 | 90.97 | 87.19 |
| | F1-score | 89.57 | 88.81 | 95.16 | 94.44 | 96.96 | 91.35 | 88.22 |
| BERT-imConvNet-BiLSTM-CRF | P (precision) | 92.36 | 92.44 | 97.32 | 95.45 | 98.42 | 95.48 | 91.30 |
| | R (recall) | 94.73 | 92.48 | 96.39 | 96.00 | 99.20 | 93.28 | 91.45 |
| | F1-score | 93.53 | 92.46 | 96.85 | 95.73 | 98.81 | 94.37 | 91.38 |
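As a consistency check, the F1 values reported in the table follow the standard harmonic-mean relation F1 = 2PR / (P + R). The short Python sketch below (not from the paper; it simply re-derives F1 from two precision/recall pairs taken from the table) reproduces the reported figures to two decimal places.

```python
# Sanity check: F1 is the harmonic mean of precision (P) and recall (R),
# F1 = 2PR / (P + R), computed per entity type and for the comprehensive value.

def f1_score(p: float, r: float) -> float:
    """Harmonic mean of precision and recall, both given in percent."""
    return 2 * p * r / (p + r)

# Two rows taken from the table above (Disease column): (P, R, reported F1).
checks = [
    ("IDCNN-CRF / Disease", 71.94, 70.88, 71.41),
    ("BERT-imConvNet-BiLSTM-CRF / Disease", 92.36, 94.73, 93.53),
]

for name, p, r, reported in checks:
    computed = f1_score(p, r)
    print(f"{name}: computed F1 = {computed:.2f}, reported = {reported:.2f}")
```

Both rows round to the reported values, which suggests the table's F1 column is the per-type harmonic mean of the listed P and R rather than an independently computed score.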