Table 3 List of given grids of hyper-parameters and selected hyper-parameters used in the machine learning models for the prediction of buprenorphine treatment discontinuation

From: A machine learning based two-stage clinical decision support system for predicting patients’ discontinuation from opioid use disorder treatment: retrospective observational study

Columns: treatment stage for making prediction; machine learning model; given hyperparameter grid; selected hyperparameters. Each line below reads "parameter: grid -> selected: value".

First stage models with baseline predictors

Logistic regression
  solver: newton-cg, lbfgs, liblinear -> selected: newton-cg
  penalty: l1, l2, elasticnet, none -> selected: none
  regularization strength C: 0.001, 0.01, 0.1, 1, 10 -> selected: 0.001

Decision tree
  criterion: gini, entropy -> selected: gini
  min_samples_leaf: 10, 20, 30, 40, 50 -> selected: 20

Random forest
  criterion: gini, entropy -> selected: gini
  min_samples_leaf: 5, 10, 20, 30, 40 -> selected: 20
  n_estimators: 100, 200, 300, 400, 500 -> selected: 100

Extreme gradient boosting
  learning_rate: 0.0001, 0.001, 0.01, 0.1, 1 -> selected: 1
  max_depth: 10, 20, 30, 40, 50 -> selected: 40

Neural network
  activation: relu, tanh, sigmoid, hard_sigmoid, linear -> selected: relu
  neurons: 10, 50, 100 -> selected: 100
  optimizer: SGD, RMSprop, Adagrad, Adadelta, Adam, Adamax, Nadam -> selected: Nadam
  epochs: 1, 10 -> selected: 10
  batch_size: 1000, 2000 -> selected: 1000

Support vector machine
  degree: 3, 4, 5, 6 -> selected: 3
  gamma: 0.001, 0.01, 0.1 -> selected: 0.1
  C: 1, 10, 100 -> selected: 10

Second stage models including 2 months PDC as continuous measure

Logistic regression
  solver: newton-cg, lbfgs, liblinear -> selected: liblinear
  penalty: l1, l2, elasticnet, none -> selected: l2
  regularization strength C: 0.001, 0.01, 0.1, 1, 10 -> selected: 0.1

Decision tree
  criterion: gini, entropy -> selected: entropy
  min_samples_leaf: 10, 20, 30, 40, 50 -> selected: 40

Random forest
  criterion: gini, entropy -> selected: gini
  min_samples_leaf: 5, 10, 20, 30, 40 -> selected: 10
  n_estimators: 100, 200, 300, 400, 500 -> selected: 200

Extreme gradient boosting
  learning_rate: 0.0001, 0.001, 0.01, 0.1, 1 -> selected: 1
  max_depth: 10, 20, 30, 40 -> selected: 20

Neural network
  activation: relu, tanh, sigmoid, hard_sigmoid, linear -> selected: linear
  neurons: 10, 50, 100 -> selected: 100
  optimizer: SGD, RMSprop, Adagrad, Adadelta, Adam, Adamax, Nadam -> selected: RMSprop
  epochs: 1, 10 -> selected: 10
  batch_size: 1000, 2000 -> selected: 1000

Support vector machine
  degree: 3, 4, 5, 6 -> selected: 3
  gamma: 0.001, 0.01, 0.1 -> selected: 0.1
  C: 1, 10, 100 -> selected: 100

Second stage models including 3 months PDC as continuous measure

Logistic regression
  solver: newton-cg, lbfgs, liblinear -> selected: liblinear
  penalty: l1, l2, elasticnet, none -> selected: l2
  regularization strength C: 0.001, 0.01, 0.1, 1, 10 -> selected: 0.01

Decision tree
  criterion: gini, entropy -> selected: gini
  min_samples_leaf: 10, 20, 30, 40, 50 -> selected: 40

Random forest
  criterion: gini, entropy -> selected: gini
  min_samples_leaf: 5, 10, 20, 30, 40 -> selected: 10
  n_estimators: 100, 200, 300, 400, 500 -> selected: 100

Extreme gradient boosting
  learning_rate: 0.0001, 0.001, 0.01, 0.1, 1 -> selected: 1
  max_depth: 10, 20, 30, 40 -> selected: 30

Neural network
  activation: relu, tanh, sigmoid, hard_sigmoid, linear -> selected: tanh
  neurons: 10, 50, 100 -> selected: 100
  optimizer: SGD, RMSprop, Adagrad, Adadelta, Adam, Adamax, Nadam -> selected: Adam
  epochs: 1, 10 -> selected: 10
  batch_size: 1000, 2000 -> selected: 1000

Support vector machine
  degree: 3, 4, 5, 6 -> selected: 3
  gamma: 0.001, 0.01, 0.1 -> selected: 0.1
  C: 1, 10, 100 -> selected: 100

Second stage models including 2 months PDC as categorical measure

Logistic regression
  solver: newton-cg, lbfgs, liblinear -> selected: liblinear
  penalty: l1, l2, elasticnet, none -> selected: l2
  regularization strength C: 0.001, 0.01, 0.1, 1, 10 -> selected: 0.01

Decision tree
  criterion: gini, entropy -> selected: gini
  min_samples_leaf: 10, 20, 30, 40, 50 -> selected: 30

Random forest
  criterion: gini, entropy -> selected: gini
  min_samples_leaf: 5, 10, 20, 30, 40 -> selected: 10
  n_estimators: 100, 200, 300, 400, 500, 600 -> selected: 500

Extreme gradient boosting
  learning_rate: 0.0001, 0.001, 0.01, 0.1, 1 -> selected: 1
  max_depth: 10, 20, 30, 40 -> selected: 10

Neural network
  activation: relu, tanh, sigmoid, hard_sigmoid, linear -> selected: tanh
  neurons: 10, 50, 100 -> selected: 100
  optimizer: SGD, RMSprop, Adagrad, Adadelta, Adam, Adamax, Nadam -> selected: RMSprop
  epochs: 1, 10 -> selected: 10
  batch_size: 1000, 2000 -> selected: 2000

Support vector machine
  degree: 3, 4, 5, 6 -> selected: 3
  gamma: 0.001, 0.01, 0.1 -> selected: 0.1
  C: 1, 10, 100 -> selected: 10

Second stage models including 3 months PDC as categorical measure

Logistic regression
  solver: newton-cg, lbfgs, liblinear -> selected: liblinear
  penalty: l1, l2, elasticnet, none -> selected: l2
  regularization strength C: 0.001, 0.01, 0.1, 1, 10 -> selected: 0.1

Decision tree
  criterion: gini, entropy -> selected: gini
  min_samples_leaf: 10, 20, 30, 40, 50 -> selected: 40

Random forest
  criterion: gini, entropy -> selected: entropy
  min_samples_leaf: 5, 10, 20, 30, 40 -> selected: 10
  n_estimators: 100, 200, 300, 400, 500, 600 -> selected: 500

Extreme gradient boosting
  learning_rate: 0.0001, 0.001, 0.01, 0.1, 1 -> selected: 1
  max_depth: 10, 20, 30, 40 -> selected: 40

Neural network
  activation: relu, tanh, sigmoid, hard_sigmoid, linear -> selected: relu
  neurons: 10, 50, 100 -> selected: 50
  optimizer: SGD, RMSprop, Adagrad, Adadelta, Adam, Adamax, Nadam -> selected: RMSprop
  epochs: 1, 10 -> selected: 10
  batch_size: 1000, 2000 -> selected: 2000

Support vector machine
  degree: 3, 4, 5, 6 -> selected: 3
  gamma: 0.001, 0.01, 0.1 -> selected: 0.1
  C: 1, 10, 100 -> selected: 10
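The hyperparameter names above follow scikit-learn (and Keras, for the neural network) conventions. As a minimal sketch of how one grid in this table could be searched, not the authors' actual code or data, the decision-tree grid can be tuned with scikit-learn's GridSearchCV; the synthetic dataset and the AUC scoring choice here are assumptions for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data; the study used claims-derived patient predictors.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Decision-tree grid as listed in the table above.
param_grid = {
    "criterion": ["gini", "entropy"],
    "min_samples_leaf": [10, 20, 30, 40, 50],
}

# Exhaustive search over the grid with 5-fold cross-validation,
# scored by area under the ROC curve.
search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid,
    scoring="roc_auc",
    cv=5,
)
search.fit(X, y)
print(search.best_params_)
```

The same pattern applies to the other scikit-learn models in the table; the neural-network parameters (optimizer, epochs, batch_size) would instead be tuned through a Keras model wrapper.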