Table 7 Performance comparison when adding a decoder layer with random weights under Strategy 1 (importing only the encoder part of the AE), for each of the three AEs — Basic AE, Denoising AE and Sparse AE — for breast cancer detection with RNA-Seq input

From: Using autoencoders as a weight initialization method on deep neural networks for disease detection

 

Approach A

| Top Layers (AEs) | Accuracy (%) | MCC | Precision (%) | Recall (%) | F1 score (%) |
| --- | --- | --- | --- | --- | --- |
| AE: Encoding Layer (n = 2) | 88.40 ± 5.52 | 0.59 ± 0.17 | 68.39 ± 19.13 | 64.80 ± 10.84 | 65.91 ± 13.72 |
| AE: Complete Autoencoder | 91.77 ± 3.13 | 0.69 ± 0.12 | 80.57 ± 11.79 | 67.00 ± 11.24 | 72.91 ± 10.86 |
| AE: Encoding Layer (n = 3) | 92.53 ± 2.25 | 0.72 ± 0.09 | 80.75 ± 7.45 | 72.31 ± 11.29 | 76.50 ± 8.12 |
| DAE: Encoding Layer (n = 2) | 83.53 ± 1.74 | 0.25 ± 0.14 | 51.39 ± 25.04 | 25.60 ± 15.57 | 31.23 ± 17.51 |
| DAE: Complete Autoencoder | 87.30 ± 1.90 | 0.53 ± 0.05 | 63.43 ± 7.13 | 58.60 ± 5.17 | 60.67 ± 4.58 |
| DAE: Encoding Layer (n = 3) | 87.47 ± 2.81 | 0.57 ± 0.08 | 62.88 ± 10.52 | 68.00 ± 8.99 | 64.51 ± 6.24 |
| SAE: Encoding Layer (n = 2) | 79.73 ± 3.86 | 0.02 ± 0.05 | 9.80 ± 12.48 | 3.00 ± 3.16 | 4.11 ± 4.09 |
| SAE: Complete Autoencoder | 84.07 ± 2.40 | 0.41 ± 0.07 | 53.13 ± 8.05 | 47.80 ± 4.85 | 50.13 ± 5.62 |
| SAE: Encoding Layer (n = 3) | 76.33 ± 8.91 | 0.36 ± 0.11 | 41.26 ± 12.14 | 62.20 ± 12.80 | 47.83 ± 8.30 |

Approach B

| Top Layers (AEs) | Accuracy (%) | MCC | Precision (%) | Recall (%) | F1 score (%) |
| --- | --- | --- | --- | --- | --- |
| AE: Encoding Layer (n = 2) | 99.33 ± 0.52 | 0.98 ± 0.02 | 97.85 ± 2.32 | 98.20 ± 1.48 | 98.01 ± 1.55 |
| AE: Complete Autoencoder | 99.30 ± 0.37 | 0.98 ± 0.01 | 99.00 ± 1.06 | 96.80 ± 2.35 | 97.87 ± 1.15 |
| AE: Encoding Layer (n = 3) | 99.17 ± 0.53 | 0.97 ± 0.02 | 98.43 ± 1.98 | 96.60 ± 3.27 | 97.46 ± 1.65 |
| DAE: Encoding Layer (n = 2) | 99.20 ± 0.65 | 0.97 ± 0.02 | 97.83 ± 2.54 | 97.40 ± 1.90 | 97.60 ± 1.95 |
| DAE: Complete Autoencoder | 99.23 ± 0.52 | 0.97 ± 0.02 | 98.60 ± 2.08 | 96.80 ± 1.69 | 97.68 ± 1.57 |
| DAE: Encoding Layer (n = 3) | 99.33 ± 0.38 | 0.98 ± 0.01 | 99.20 ± 1.40 | 96.80 ± 1.69 | 98.02 ± 1.08 |
| SAE: Encoding Layer (n = 2) | 96.70 ± 1.24 | 0.89 ± 0.05 | 95.29 ± 4.78 | 84.60 ± 6.47 | 89.45 ± 4.14 |
| SAE: Complete Autoencoder | 97.40 ± 1.12 | 0.90 ± 0.04 | 95.78 ± 4.02 | 88.40 ± 4.79 | 91.87 ± 3.52 |
| SAE: Encoding Layer (n = 3) | 97.27 ± 0.64 | 0.90 ± 0.02 | 93.58 ± 1.91 | 89.80 ± 3.71 | 91.61 ± 2.09 |
The first block of rows presents the results for Approach A, in which the weights obtained from the AE pre-training are kept fixed; the second block shows the results for Approach B, in which all the weights of the model are subsequently fine-tuned. All reported values are 10-fold cross-validation means on the validation set, selecting the best-performing model according to its F1 score. n denotes the number of layers of the encoder.
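To make the distinction between the two approaches concrete, the sketch below shows one way to transfer a pre-trained encoder into a classifier, stack a randomly initialized extra layer on top (Strategy 1 with an added decoder layer), and toggle between Approach A (frozen encoder weights) and Approach B (end-to-end fine-tuning). This is a minimal illustrative sketch in Keras, not the authors' code: `INPUT_DIM`, the layer sizes, and all function names are assumptions.

```python
# Minimal sketch (assumed Keras API; layer sizes and names are
# placeholders, not the architecture reported in the paper).
from tensorflow import keras
from tensorflow.keras import layers

INPUT_DIM = 20000  # assumption: number of RNA-Seq input features


def build_autoencoder(n_encoder_layers=2):
    """Basic AE; a DAE or SAE variant would add input noise or a sparsity penalty."""
    sizes = [512, 128, 64][:n_encoder_layers]
    inp = keras.Input(shape=(INPUT_DIM,))
    x = inp
    for units in sizes:                      # encoder layers
        x = layers.Dense(units, activation="relu")(x)
    encoded = x
    for units in reversed(sizes[:-1]):       # mirrored decoder layers
        x = layers.Dense(units, activation="relu")(x)
    out = layers.Dense(INPUT_DIM, activation="linear")(x)
    return keras.Model(inp, out), keras.Model(inp, encoded)


def build_classifier(encoder, fine_tune):
    """Strategy 1: import only the pre-trained encoder, then stack a
    randomly initialized extra layer and a sigmoid output head."""
    encoder.trainable = fine_tune  # False -> Approach A, True -> Approach B
    inp = keras.Input(shape=(INPUT_DIM,))
    x = encoder(inp)
    x = layers.Dense(128, activation="relu")(x)  # added layer, random weights
    out = layers.Dense(1, activation="sigmoid")(x)
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model


# Typical usage (data loading omitted):
#   autoencoder, encoder = build_autoencoder(n_encoder_layers=2)
#   autoencoder.compile(optimizer="adam", loss="mse")
#   autoencoder.fit(X_train, X_train, epochs=50, batch_size=32)
#   clf_a = build_classifier(encoder, fine_tune=False)  # Approach A: frozen
#   clf_b = build_classifier(encoder, fine_tune=True)   # Approach B: fine-tuned
```

The single `trainable` flag captures why the two blocks of the table differ so sharply: under Approach A only the added layers can adapt to the classification task, whereas under Approach B the pre-trained weights serve merely as an initialization and are free to move during supervised training.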