Model | Hyperparameter space | Best combination of hyperparameters | AUC in the training cohort | AUC in the test cohort |
---|---|---|---|---|
XGBoost | {'max_depth': [2, 3, 5, 7, 9, 12, 15, 17, 25], 'min_child_weight': [1, 3, 5, 7], 'gamma': [0, 0.05, 0.1, 0.2, 0.3, 0.5, 0.7, 0.9, 1], 'subsample': [0.6, 0.7, 0.8, 0.9, 1], 'colsample_bytree': [0.6, 0.7, 0.8, 0.9, 1], 'learning_rate': [0.01, 0.015, 0.025, 0.05, 0.1]} | {'max_depth': 2, 'min_child_weight': 3, 'gamma': 0.2, 'subsample': 0.7, 'colsample_bytree': 0.8, 'learning_rate': 0.01} | 0.903 | 0.844 |
GNB | / | / | 0.797 | 0.808 |
NN | {'alpha': [0.1, 0.01, 0.001, 0.0001], 'hidden_layer_sizes': [(50,), (100,)], 'solver': ['sgd', 'adam'], 'activation': ['tanh', 'relu'], 'learning_rate': ['constant', 'adaptive']} | {'activation': 'tanh', 'alpha': 0.1, 'hidden_layer_sizes': (50,), 'learning_rate': 'constant', 'solver': 'adam'} | 0.855 | 0.822 |
Ridge | {'alpha': [0.001, 0.01, 0.1, 1, 10, 100, 1000], 'solver': ['svd', 'cholesky', 'lsqr', 'sparse_cg', 'sag', 'saga']} | {'alpha': 10, 'solver': 'svd'} | 0.829 | 0.836 |
LR | {'C': [0.001, 0.01, 0.1, 1, 10, 100], 'penalty': ['l2'], 'solver': ['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga']} | {'C': 0.1, 'penalty': 'l2', 'solver': 'newton-cg'} | 0.833 | 0.850 |
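The searches above can be sketched with scikit-learn's `GridSearchCV`. Only the LR grid values come from the table; the cross-validation scheme, the synthetic data, and `max_iter` are assumptions made here to keep the sketch runnable, so the resulting AUCs will not match the cohort values reported above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Stand-in data; the real study used clinical training/test cohorts.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# LR hyperparameter space, copied from the table.
param_grid = {
    "C": [0.001, 0.01, 0.1, 1, 10, 100],
    "penalty": ["l2"],
    "solver": ["newton-cg", "lbfgs", "liblinear", "sag", "saga"],
}

search = GridSearchCV(
    LogisticRegression(max_iter=5000),  # max_iter raised so sag/saga converge
    param_grid,
    scoring="roc_auc",  # select on AUC, the metric reported in the table
    cv=5,               # assumed fold count; not stated in the table
)
search.fit(X_train, y_train)

train_auc = roc_auc_score(y_train, search.predict_proba(X_train)[:, 1])
test_auc = roc_auc_score(y_test, search.predict_proba(X_test)[:, 1])
print(search.best_params_)
print(round(train_auc, 3), round(test_auc, 3))
```

The same pattern applies to the XGBoost, NN, and Ridge rows by swapping in the corresponding estimator and grid; GNB has no grid row because Gaussian naive Bayes has no hyperparameters tuned here.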