Table 3 Hyperparameter optimization using the grid search algorithm

From: Machine learning-based prediction model for late recurrence after surgery in patients with renal cell carcinoma

| Algorithm | Hyperparameter grid |
| --- | --- |
| Kernel SVM | kernel: (linear, rbf*); C: (0.01, 0.1, 1*); gamma: (0.01, 0.05, 0.1, 0.5*, 5, 10) |
| Logistic regression | penalty: (L1, L2*); C: (0.001, 0.01, 0.1, 1, 10*, 100) |
| KNN | n_neighbors: (2, 4*, 6, 8, 10) |
| Naïve Bayes | alpha: (0, 0.1, 1*, 5, 10, 20, 30) |
| Random forest | n_estimators: (10, 50, 100, 150, 200*); max_depth: (4, 8, 12, 16*, 20) |
| Gradient boosting | n_estimators: (10, 100, 200, 500*, 1000); learning_rate: (0.05*, 0.01, 0.005, 0.001); max_depth: (1, 3*, 6, 9, 12) |
| AdaBoost | n_estimators: (10, 100, 200, 500*, 1000); learning_rate: (0.05*, 0.01, 0.005, 0.001) |
| XGBoost | n_estimators: (10, 100, 200, 500, 1000*); learning_rate: (0.05*, 0.01, 0.005, 0.001); max_depth: (1*, 3, 6, 9, 12) |

  1. penalty, norm used in the penalization (L1 = L1 regularization; L2 = L2 regularization); C, inverse of regularization strength; n_neighbors, number of neighbors; alpha, additive smoothing parameter (0 = no smoothing); n_estimators, number of trees; max_depth, maximum depth of the tree
  2. SVM, support vector machine; KNN, k-nearest neighbour; XGBoost, extreme gradient boosting
  3. *Value finally selected through hyperparameter optimization
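
For reference, a search over any row of this grid can be expressed with scikit-learn's GridSearchCV. The sketch below uses the kernel SVM row as an example; the 5-fold cross-validation, the AUC scoring metric, and the synthetic placeholder data are illustrative assumptions and are not specified in this table.

```python
# Minimal sketch of the grid search summarized in Table 3 (kernel SVM row).
# Assumptions not taken from the table: 5-fold CV, ROC-AUC scoring, and
# synthetic data standing in for the study's training split.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Candidate values from the kernel SVM row (asterisked values were selected).
param_grid = {
    "kernel": ["linear", "rbf"],
    "C": [0.01, 0.1, 1],
    "gamma": [0.01, 0.05, 0.1, 0.5, 5, 10],
}

# Placeholder data for illustration only.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

search = GridSearchCV(SVC(), param_grid, cv=5, scoring="roc_auc")
search.fit(X, y)
print(search.best_params_)  # Table 3 reports kernel=rbf, C=1, gamma=0.5 as selected
```

The same pattern applies to the other rows by swapping in the corresponding estimator (e.g., LogisticRegression, KNeighborsClassifier, RandomForestClassifier) and its grid.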