
Table 5 Comparative results of different models for relation extraction

From: Subsequence and distant supervision based active learning for relation extraction of Chinese medical texts

Model                  Precision  Recall  F1
CNN-CNN-LSTM           39.62      26.07   31.25
BiLSTM-LSTM            44.55      34.56   38.65
BERT-LSTM              49.66      53.40   51.28
BERT-CRF               52.58      52.05   52.13
Chinese-RoBERTa-CRF    58.02      53.84   55.45
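For reference, F1 is conventionally defined as the harmonic mean of precision and recall. A minimal sketch of that computation over the table's precision/recall pairs is below; note that the paper's reported F1 values appear to be aggregated differently (e.g. averaged per relation type), so they need not equal this direct harmonic mean. The `results` mapping is an assumed convenience structure, not part of the source.

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (values in percent)."""
    return 2 * precision * recall / (precision + recall)

# Precision/recall pairs taken from Table 5; this dict is an
# illustrative structure, not from the source paper.
results = {
    "CNN-CNN-LSTM": (39.62, 26.07),
    "BiLSTM-LSTM": (44.55, 34.56),
    "BERT-LSTM": (49.66, 53.40),
    "BERT-CRF": (52.58, 52.05),
    "Chinese-RoBERTa-CRF": (58.02, 53.84),
}

for model, (p, r) in results.items():
    print(f"{model}: harmonic-mean F1 = {f1(p, r):.2f}")
```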