
Table 4 Hyper-parameters of Gradient Boosting (XGBoost), Maxout networks, and DUNs

From: A machine learning model to predict the risk of 30-day readmissions in patients with heart failure: a retrospective analysis of electronic medical records data

GRADIENT BOOSTING (XGBOOST)

| Parameter name | Distribution and search range | Best parameter |
| --- | --- | --- |
| learning_rate | Log-uniform [−5.0, −0.5] | 0.007 |
| max_depth | Discrete uniform [3, 25] | 5 |
| min_child_weight | Discrete uniform [1, 10] | 1 |
| n_estimators | Discrete uniform [100, 1000] | 398 |
| gamma | Log-uniform [−10, 0] | 0.042 |
| alpha | Log-uniform [−10, 0] | 0.0003 |
| lambda | Log-uniform [−10, 0] | 0.116 |
| subsample | Discrete uniform (units of 0.05) [0.5, 1.0] | 0.70 |
| colsample_bytree | Discrete uniform (units of 0.05) [0.5, 1.0] | 0.80 |
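
For concreteness, here is a minimal sketch (not the authors' code) of instantiating XGBoost with the best parameters above, using the library's scikit-learn wrapper. The log-uniform ranges are read here as natural-log exponents (e.g., 0.007 ≈ e^−4.96 falls in [e^−5.0, e^−0.5]); that reading, the binary objective, and the `X_train`/`y_train` placeholders are assumptions.

```python
# Minimal sketch: XGBoost with the tuned values from Table 4.
# Note: the table's "alpha" and "lambda" map to reg_alpha and
# reg_lambda in the scikit-learn wrapper.
from xgboost import XGBClassifier

model = XGBClassifier(
    learning_rate=0.007,
    max_depth=5,
    min_child_weight=1,
    n_estimators=398,
    gamma=0.042,
    reg_alpha=0.0003,             # L1 penalty ("alpha" above)
    reg_lambda=0.116,             # L2 penalty ("lambda" above)
    subsample=0.70,
    colsample_bytree=0.80,
    objective="binary:logistic",  # assumed: 30-day readmission yes/no
)
# model.fit(X_train, y_train)    # X_train / y_train are placeholders
```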
MAXOUT NETWORKS and DUNs

| Parameter name | Distribution and search range | Best parameter (Maxout networks) | Best parameter (DUNs) |
| --- | --- | --- | --- |
| Number of epochs | Discrete uniform [20, 100] | 22 | 100 |
| Number of inner layers | Discrete uniform [2, 5] | 3 | 5 |
| Number of inner neurons | Discrete uniform [100, 1000] | 914 | 759 |
| Number of maxout units | Discrete uniform [2, 5] | 5 | – |
| Activation function | Random choice from: sigmoid, tanh, softplus, softsign | Sigmoid | Sigmoid |
| Dropout rate, input layer | Uniform [0.001, 0.5] | 0.446 | 0.397 |
| Dropout rate, inner layers | Uniform [0.001, 0.5] | 0.394 | 0.433 |

(–: not applicable; maxout units apply only to Maxout networks.)
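
For the Maxout column, a maxout unit takes the element-wise maximum over k parallel linear pieces. Below is a minimal PyTorch sketch (not the authors' implementation) wiring the tuned Maxout-network values: 3 inner layers of 914 neurons with k = 5 pieces each, plus the listed dropout rates. The framework, the input width `n_features`, and the placement of the sigmoid (here on the output) are assumptions.

```python
# Minimal sketch: a Maxout network with the tuned values from Table 4.
import torch
import torch.nn as nn

class Maxout(nn.Module):
    """One maxout layer: k parallel linear maps, element-wise max over pieces."""
    def __init__(self, in_features, out_features, k):
        super().__init__()
        self.out_features, self.k = out_features, k
        self.linear = nn.Linear(in_features, out_features * k)

    def forward(self, x):
        z = self.linear(x)                         # (batch, out_features * k)
        z = z.view(-1, self.out_features, self.k)  # (batch, out_features, k)
        return z.max(dim=2).values                 # max over the k pieces

n_features = 128  # hypothetical input width (EMR feature count)
model = nn.Sequential(
    nn.Dropout(0.446),             # input-layer dropout
    Maxout(n_features, 914, k=5),  # inner layer 1 of 3
    nn.Dropout(0.394),             # inner-layer dropout
    Maxout(914, 914, k=5),         # inner layer 2
    nn.Dropout(0.394),
    Maxout(914, 914, k=5),         # inner layer 3
    nn.Linear(914, 1),
    nn.Sigmoid(),                  # assumed placement of the sigmoid activation
)
```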