Table 3 Hyperparameter optimization using grid search

From: Machine learning models for predicting the onset of chronic kidney disease after surgery in patients with renal cell carcinoma

Algorithms            Hyperparameter grid
--------------------  --------------------------------------------------------------------------
Kernel SVM            kernel: (linear, rbf*); C: (0.1, 1, 10*); gamma: (0.1*, 0.5, 1)
Logistic regression   C: (0.1, 1, 10, 100*, 1000)
Decision tree         max_depth: (1, 5, 10, 15*, 20); min_samples_split: (1, 5, 10*, 15, 20)
KNN                   n_neighbors: (1*, 2, 3, 4, 5)
Random forest         n_estimators: (10, 100, 1000, 10000*); max_depth: (1, 5, 10, 15, 20*)
Gradient boost        n_estimators: (10, 100, 500, 1000*, 5000); learning_rate: (0.01, 0.05*, 0.1, 0.5)
AdaBoost              n_estimators: (10, 100, 500, 1000*, 5000); learning_rate: (0.01, 0.05, 0.1, 0.5*)
XGBoost               n_estimators: (10, 100*, 500, 1000, 5000); learning_rate: (0.01, 0.05, 0.1, 0.5*)
LightGBM              n_estimators: (10, 100, 500*, 1000, 5000); learning_rate: (0.01, 0.05*, 0.1, 0.5)

* Optimal parameters obtained through grid search.
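As a concrete illustration of how a grid like the ones above is searched, here is a minimal sketch for the kernel SVM row using scikit-learn's GridSearchCV. The 5-fold cross-validation, ROC-AUC scoring, and synthetic data are illustrative assumptions, not settings taken from the paper; the other rows follow the same pattern with their respective estimators (XGBoost and LightGBM via their own scikit-learn-compatible wrappers).

```python
# Sketch of the grid search for the kernel SVM row of Table 3, assuming
# scikit-learn. The cv folds, scoring metric, and synthetic data are
# illustrative assumptions, not settings reported in the paper.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Stand-in data; the study used its own surgical cohort, which is not shown here.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Candidate values copied from the kernel SVM row of the table.
param_grid = {
    "kernel": ["linear", "rbf"],
    "C": [0.1, 1, 10],
    "gamma": [0.1, 0.5, 1],  # ignored by the linear kernel, harmless to include
}

# Exhaustively evaluate every combination and keep the best by mean CV score.
search = GridSearchCV(SVC(), param_grid, cv=5, scoring="roc_auc", n_jobs=-1)
search.fit(X, y)

# Table 3 marks kernel=rbf, C=10, gamma=0.1 as the optimum on the study's
# data; best_params_ on this synthetic data may of course differ.
print(search.best_params_)
```

Swapping SVC() for LogisticRegression, DecisionTreeClassifier, and so on, with the corresponding param_grid from the table, reproduces the search for each remaining row.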