Table 2 Quality assessment of the CDSSs

From: Laboratory test ordering in inpatient hospitals: a systematic review on the effects and features of clinical decision support systems

Column groups and questions (each study row lists the reference followed by answers to columns 1–11, in this order):
CDSS design: 1. Is it integrated with CPOE? 2. Does it give real-time feedback at the point of care? 3. Does the CDSS suggest a recommended course of action? 4. CDSS classification*
Data entry source: 5. Is it automated through the EHR? 6. Do clinical staff enter data specifically for the intervention?
Implementation characteristics: 7. Was it pilot tested, or was an iterative process of development/implementation used? 8. Was there any user training/clinician education? 9. Are the authors also the developers and part of the user group for the CDSS? 10. Was audit-and-feedback (or another internal incentive) used? 11. Are there any other implementation components not already discussed?

References | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11
Bates et al. [28] | Yes | Yes | No | C | Yes | No | NM** | NM | Yes | No | 50% of the tests with a computer order were not screened for redundancy because they were ordered as part of an order set
BoonFalleur et al. [31] | No | No | No | B | No | No | Yes | NM | NM | No | No
Bridges et al. [34] | Yes | Yes | No | B | NM | No | Yes | NM | No | No | Clinicians likely experienced an "adjustment" period once they became familiar with the alert
Dalal et al. [35] | Yes | Yes | No | D | Yes | No | Yes | Yes | Yes | No | No
Eaton et al. [36] | Yes | Yes | No | B | No | NM | No | NM | NM | No | No
Gottheil et al. [30] | Yes | Yes | No | C | NM | No | Yes | NM | Yes | Yes | The importance of stakeholder engagement prior to the intervention and having decision leaders in each department to champion our cause
Klatte et al. [37] | Yes | Yes | No | D | Yes | Yes | Yes | NM | NM | No | No
Levick et al. [38] | Yes | Yes | No | B | No | No | NM | NM | Yes | No | Alerts should be used judiciously and in the appropriate environment
Lippi et al. [32] | Yes | Yes | No | B | NM | No | NM | NM | Yes | Yes | No
Nicholson et al. [39] | Yes | Yes | No | C | NM | Yes | NM | NM | Yes | No | No
Niès et al. [33] | Yes | Yes | No | C | Yes | No | Yes | NM | Yes | No | Testing options were constrained by unbundling serum metabolic panel tests into single components and reducing the ease of repeating targeted tests
Quan et al. [40] | Yes | Yes | No | D | NM | No | NM | NM | NM | Yes | No
Procop et al. [41] | Yes | Yes | No | D | NM | No | Yes | Yes | No | Yes | No
Rosenbloom et al. [42] | Yes | Yes | No | C | NM | Yes | NM | NM | Yes | Yes | Designers of CDS interventions should take into account the paradoxical prompting that such interventions might generate
Rudolf et al. [43] | Yes | Yes | No | C | NM | Yes | NM | NM | NM | Yes | Providers could use workarounds to place daily orders, entering them in a manner that would not trigger the audits; for example, placing staggered sets of orders to occur every other day, or writing in daily orders on templates, could have circumvented the auditing process and the accounting of daily testing in this analysis
Samuelson et al. [44] | Yes | Yes | No | C | NM | Yes | NM | NM | NM | NM | No
Sum (Yes/No/NM per column) | 15/1/0 | 15/1/0 | 0/16/0 | A: 0, B: 5, C: 7, D: 4 | 4/3/9 | 5/10/1 | 8/1/7 | 2/0/14 | 8/2/6 | 6/9/1 |
 
  1. *Intervention classification: "A" interventions provided information only. "B" interventions presented information on appropriateness or guidelines specifically tailored to the individual patient, often as a pop-up or alert; some of these interventions also recommended alternatives, but did not include any barrier to the clinician ordering the test. "C" interventions were generally similar to "B" interventions, but required the ordering clinician to justify with free text why they were overriding the decision support recommendation that a study was inappropriate (i.e., a "soft stop"). "D" interventions included a "hard stop," meaning the intervention prevented the clinician from ordering a test contrary to the CDS determination of inappropriateness until additional discussion with, or permission obtained from, another clinician or pathologist.
  2. **Not Mentioned