
Table 8 Comparative table of the models evaluated. Each model was assessed under four dataset conditions (none ("Any"), scaling, PCA, and scaling + PCA); the tuned hyperparameters, cross-validation scores, and final accuracies are listed

From: A comparative study of CNN-capsule-net, CNN-transformer encoder, and Traditional machine learning algorithms to classify epileptic seizure
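For orientation, the four dataset conditions in the table could be realised with a preprocessing step along the following lines. This is a minimal sketch assuming scikit-learn's StandardScaler and PCA; the number of retained principal components is not reported in this table, so the value below is only a placeholder.

```python
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA


def apply_condition(X, condition, n_components=0.95):
    """Return X transformed under one of the four dataset conditions.

    `n_components` is a placeholder (fraction of variance to keep);
    the exact PCA setting is not reported in this table.
    """
    if condition == "Any":            # no preprocessing
        return X
    if condition == "Scaling":        # zero mean, unit variance per feature
        return StandardScaler().fit_transform(X)
    if condition == "PCA":            # dimensionality reduction only
        return PCA(n_components=n_components).fit_transform(X)
    if condition == "Scaling + PCA":  # standardise, then project
        Xs = StandardScaler().fit_transform(X)
        return PCA(n_components=n_components).fit_transform(Xs)
    raise ValueError(f"unknown condition: {condition}")
```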

| Algorithms | Conditions on the dataset | Tuning Hyperparameters | Cross Validation [%] | Accuracy [%] |
|---|---|---|---|---|
| DTC | Any | criterion = gini, min_samples_leaf = 2, min_samples_split = 10 | 48.00 ± 2.0 | 48.74 |
| | Scaling | | 48.00 ± 2.0 | 49.43 |
| | PCA | | 55.00 ± 2.0 | 53.61 |
| | Scaling + PCA | | 54.00 ± 1.0 | 54.91 |
| MLP | Any | activation = relu, hidden_layer_sizes = (100, 50), learning_rate = constant, solver = Adam | 30.00 ± 1.0 | 52.65 |
| | Scaling | | 69.00 ± 1.0 | 72.39 |
| | PCA | | 37.00 ± 1.0 | 59.09 |
| | Scaling + PCA | | 69.00 ± 1.0 | 71.22 |
| KNN | Any | algorithm = auto, leaf_size = 1, n_neighbors = 1, p = 2, weights = 'uniform' | 56.00 ± 1.0 | 54.26 |
| | Scaling | | 56.00 ± 1.0 | 54.30 |
| | PCA | | 57.00 ± 1.0 | 57.65 |
| | Scaling + PCA | | 57.00 ± 1.0 | 57.61 |
| ETC | Any | n_estimators = 300, random_state = 20, weights = 'uniform' | 72.00 ± 1.0 | 73.48 |
| | Scaling | | 72.00 ± 1.0 | 73.48 |
| | PCA | | 75.00 ± 1.0 | 75.87 |
| | Scaling + PCA | | 76.00 ± 1.0 | 76.39 |
| SVM | Any | C = 10, gamma = 'scale', kernel = 'rbf' | 20.00 ± 1.0 | 19.96 |
| | Scaling | | 64.00 ± 1.0 | 64.30 |
| | PCA | | 19.00 ± 1.0 | 20.70 |
| | Scaling + PCA | | 70.00 ± 1.0 | 71.09 |
| RFC | Any | max_depth = None, min_samples_split = 2, n_estimators = 500, random_state = 40 | 73.00 ± 1.0 | 73.26 |
| | Scaling | | 73.00 ± 1.0 | 73.22 |
| | PCA | | 74.00 ± 1.0 | 73.48 |
| | Scaling + PCA | | 74.00 ± 2.0 | 73.39 |
| GB | Any | learning_rate = 0.1, max_depth = 7, n_estimators = 200, random_state = 10 | 69.00 ± 1.0 | 69.22 |
| | Scaling | | 69.00 ± 1.0 | 69.26 |
| | PCA | | 72.00 ± 1.0 | 72.61 |
| | Scaling + PCA | | 71.00 ± 1.0 | 71.74 |
| CNN+Fully | Any | optimizer = Adam(lr = 0.001), epochs = 500, batch_size = 128 | 79.00 ± 2.0 | 83.83 |
| | Scaling | | 85.00 ± 1.0 | 85.04 |
| | PCA | | 55.00 ± 4.0 | 60.52 |
| | Scaling + PCA | | 72.00 ± 3.0 | 72.04 |
| CNN+Capsule-Net | Any | num_caps = 16, optimizer = Adam(lr = 0.001), epochs = 500, batch_size = 128 | 79.00 ± 2.0 | 73.30 |
| | Scaling | | 86.00 ± 1.0 | 87.13 |
| | PCA | | 56.00 ± 3.0 | 56.65 |
| | Scaling + PCA | | 72.00 ± 3.0 | 74.30 |
| CNN+Tf | Any | NUM_HEADS = 16, NUM_LAYERS = 2, epochs = 500, batch_size = 128 | 79.00 ± 2.0 | 74.48 |
| | Scaling | | 86.00 ± 2.0 | 88.34 |
| | PCA | | 48.00 ± 1.0 | 49.35 |
| | Scaling + PCA | | 72.00 ± 2.0 | 67.39 |
| CNN+Tf+Fully | Any | NUM_HEADS = 16, NUM_LAYERS = 1, epochs = 500, batch_size = 128 | 75.00 ± 1.0 | 76.74 |
| | Scaling | | 85.00 ± 1.0 | 85.91 |
| | PCA | | 57.00 ± 2.0 | 56.22 |
| | Scaling + PCA | | 75.00 ± 1.0 | 75.00 |
| CNN+Tf+Capsule-Net | Any | NUM_HEADS = 8, NUM_LAYERS = 1, num_caps = 16, epochs = 500, batch_size = 128 | 79.00 ± 2.0 | 80.83 |
| | Scaling | | 86.00 ± 1.0 | 85.09 |
| | PCA | | 53.00 ± 2.0 | 57.78 |
| | Scaling + PCA | | 71.00 ± 2.0 | 72.35 |
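For reference, the tuned hyperparameters reported above for the classical models map onto scikit-learn estimators roughly as follows. This is an illustrative sketch, not the authors' code: the solver name is lowercased to scikit-learn's `adam`, the misspelled `gama` is read as `gamma`, and the `weights` entry listed for ETC is omitted because ExtraTreesClassifier has no such parameter.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import (ExtraTreesClassifier, RandomForestClassifier,
                              GradientBoostingClassifier)
from sklearn.svm import SVC

# Classical models with the tuned hyperparameters reported in Table 8.
models = {
    "DTC": DecisionTreeClassifier(criterion="gini", min_samples_leaf=2,
                                  min_samples_split=10),
    "MLP": MLPClassifier(activation="relu", hidden_layer_sizes=(100, 50),
                         learning_rate="constant", solver="adam"),
    "KNN": KNeighborsClassifier(algorithm="auto", leaf_size=1, n_neighbors=1,
                                p=2, weights="uniform"),
    "ETC": ExtraTreesClassifier(n_estimators=300, random_state=20),
    "SVM": SVC(C=10, gamma="scale", kernel="rbf"),
    "RFC": RandomForestClassifier(max_depth=None, min_samples_split=2,
                                  n_estimators=500, random_state=40),
    "GB": GradientBoostingClassifier(learning_rate=0.1, max_depth=7,
                                     n_estimators=200, random_state=10),
}
```

Each estimator would then be fitted on the dataset under the corresponding condition and scored with k-fold cross-validation to reproduce the two score columns.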