| Models | Parameters | Values | Parameter Meaning |
|---|---|---|---|
| LR | penalty | L1 | penalty function |
| SVM | kernel | linear | kernel function |
| | C | 5 | penalty parameter of the error term |
| ANN | kernel initializer | uniform | kernel initializer function |
| | activation1 | relu | activation of the hidden layer |
| | activation2 | sigmoid | activation of the output layer |
| | optimizer | Adam | training optimization algorithm |
| | epochs | 300 | number of complete passes through the training data |
| | batch size | 20 | batch size |
| | dropout | 0.0 | dropout rate |
| RF | n estimators | 695 | number of trees |
| | max depth | 4 | maximum depth of variable interactions |
| | max features | 7 | number of features considered for the best split |
| XGBoost | learning rate | 0.1 | learning rate |
| | n estimators | 100 | number of boosting iterations |
| | eta | 0.01 | step-size shrinkage controlling the learning rate |
| | max depth | 3 | maximum depth of variable interactions |
| | gamma | 0.6 | minimum loss reduction required to make a further partition on a leaf node of the tree |
| | subsample | 0.7 | subsample ratio of the training instances |
| | colsample bytree | 0.6 | subsample ratio of columns when constructing each tree |
| | min child weight | 2 | minimum sum of instance weights needed in a child node |
| LightGBM | learning rate | 0.1 | learning rate |
| | n estimators | 100 | number of boosting iterations |
| | max depth | 8 | maximum depth of variable interactions |
| | num leaves | 10 | number of leaves in each tree |
| | bagging fraction | 0.7 | fraction of data sampled in each iteration |
| | feature fraction | 0.9 | fraction of features used to build each tree |
| | min data in leaf | 5 | minimum number of records in a leaf |
| | min split gain | 0.0 | minimum gain required to make a split |
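
For concreteness, the table above can be mapped onto model constructors. The following is a minimal sketch assuming the standard scikit-learn, Keras, xgboost, and lightgbm Python APIs, not the authors' original code; the ANN's hidden-layer width and the binary-classification loss are assumptions, since the table does not specify them.

```python
# Hypothetical instantiation of the tuned models; parameter names in
# comments refer to the rows of the table above.
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from tensorflow import keras

# LR: L1 penalty (liblinear is one solver that supports L1)
lr = LogisticRegression(penalty="l1", solver="liblinear")

# SVM: linear kernel, C = 5
svm = SVC(kernel="linear", C=5)

# ANN: uniform initializer, relu hidden layer, sigmoid output,
# Adam optimizer, dropout 0.0; fit with epochs=300, batch_size=20
def build_ann(n_features):  # n_features is a placeholder
    model = keras.Sequential([
        keras.layers.Input(shape=(n_features,)),
        keras.layers.Dense(32, activation="relu",           # width assumed
                           kernel_initializer="random_uniform"),
        keras.layers.Dropout(0.0),
        keras.layers.Dense(1, activation="sigmoid",
                           kernel_initializer="random_uniform"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# RF: 695 trees, max depth 4, 7 features per split
rf = RandomForestClassifier(n_estimators=695, max_depth=4, max_features=7)

# XGBoost: eta is an alias of learning_rate in xgboost, so only one of
# the two values from the table (0.1 here) can actually take effect
xgb = XGBClassifier(learning_rate=0.1, n_estimators=100, max_depth=3,
                    gamma=0.6, subsample=0.7, colsample_bytree=0.6,
                    min_child_weight=2)

# LightGBM: the sklearn-style API uses aliases for the table's names
lgbm = LGBMClassifier(learning_rate=0.1, n_estimators=100, max_depth=8,
                      num_leaves=10,
                      subsample=0.7,         # bagging fraction
                      subsample_freq=1,      # bagging requires freq > 0
                      colsample_bytree=0.9,  # feature fraction
                      min_child_samples=5,   # min data in leaf
                      min_split_gain=0.0)
```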