Fig. 2 | BMC Medical Informatics and Decision Making

From: Confidence-based laboratory test reduction recommendation algorithm


Model Architecture Framework. In the LSTM network module, the shared LSTM layer received all input features and output hidden features containing general information derived from the original data. The attention-based LSTM layers augmented the input embeddings by concatenating the hidden features with duplicates of the original features. One attention-based layer learned a subset of features for the following stability predictor; the other learned the entire feature vectors to capture more complex information for the following normality, value, and selection predictors. In the selective network module, four 2-layer MLP predictors made task-specific predictions in parallel for Hgb stability, Hgb normality, Hgb value, and selection probability. The stability and normality predictors were treated as primary predictions focused on the selected Hgb samples, while the value predictor served as an auxiliary prediction covering all Hgb samples, including the non-selected ones
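The selective network module described above can be sketched as four parallel 2-layer MLP heads that share one feature vector from the upstream LSTM layers. This is a minimal NumPy illustration, not the authors' implementation: the dimensions, initialization, and head names are assumptions, and the LSTM/attention stages are abstracted into a single precomputed feature vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the paper does not specify them here.
feat_dim, hidden = 32, 16


def mlp_head(x, w1, b1, w2, b2):
    """A 2-layer MLP head: linear -> ReLU -> linear."""
    h = np.maximum(0.0, x @ w1 + b1)
    return h @ w2 + b2


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def init_head(out_dim=1):
    """Randomly initialize one 2-layer MLP head."""
    return (rng.normal(scale=0.1, size=(feat_dim, hidden)),
            np.zeros(hidden),
            rng.normal(scale=0.1, size=(hidden, out_dim)),
            np.zeros(out_dim))


# Four task-specific predictors, as in the selective network module.
heads = {name: init_head() for name in
         ("stability", "normality", "value", "selection")}


def selective_module(features):
    """Run the four parallel predictors on one shared feature vector.

    Stability, normality, and selection are squashed to probabilities;
    value is left as a raw regression output.
    """
    out = {}
    for name, params in heads.items():
        logit = mlp_head(features, *params)
        out[name] = logit if name == "value" else sigmoid(logit)
    return {k: float(v) for k, v in out.items()}


# Stand-in for the feature vector produced by the attention-based LSTM layers.
preds = selective_module(rng.normal(size=feat_dim))
```

In a selective-prediction setup of this kind, the selection head's probability would gate whether the primary (stability/normality) predictions are trusted, while the value head is trained on all samples to regularize the shared representation.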
