
Table 2 Coding criteria in the taxonomy and inter-rater reliability

From: A systematic review of theoretical constructs in CDS literature

Performance expectancy

(IRR: 0.79)

High

(Researchers conducted interview(s) before designing the CDS tool

OR Researchers performed usability testing with at least five users and more than 75% of participants agreed the CDS tool was useful

OR Clinicians agreed that the CDS tool provided patient-specific recommendations.)

AND The CDS tool was context sensitive (e.g., CDS triggered by workflow events and targeted to the right providers)

 

Medium

(Researchers performed usability testing with at least five users and 50–75% of participants agreed the CDS tool was useful

OR The CDS tool was context sensitive.)

AND Local users were involved in developing the CDS tool [33]

 

Low

Researchers performed usability testing and less than 50% of participants agreed the CDS tool was useful

OR Local users were involved in developing the CDS tool [33]

OR The CDS tool was context sensitive

 

Unknown

Not enough relevant information mentioned in the article
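
Read as decision logic, the Performance expectancy criteria above form a small rule set that can be checked from High downward. The Python sketch below is purely illustrative: every field and function name is invented, and the flags stand in for evidence a coder would extract from an article.

```python
# Minimal sketch of the Performance expectancy rubric as boolean logic.
# All field names are hypothetical coder-extracted flags, not from the review.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PerformanceEvidence:
    interviewed_before_design: Optional[bool] = None  # interviews before designing the CDS tool
    usability_tested: bool = False                    # any usability testing performed
    usability_users: int = 0                          # number of usability-test users
    pct_agreed_useful: Optional[float] = None         # % of participants who agreed the tool was useful
    patient_specific: Optional[bool] = None           # clinicians agreed recommendations were patient-specific
    context_sensitive: Optional[bool] = None          # triggered by workflow events, targeted to right providers
    local_users_involved: Optional[bool] = None       # local users helped develop the tool

def code_performance_expectancy(e: PerformanceEvidence) -> str:
    tested5 = e.usability_tested and e.usability_users >= 5
    useful = e.pct_agreed_useful
    # High: (interviews OR >75% of >=5 tested users agreed OR patient-specific) AND context-sensitive
    if ((e.interviewed_before_design
         or (tested5 and useful is not None and useful > 75)
         or e.patient_specific)
            and e.context_sensitive):
        return "High"
    # Medium: (50-75% of >=5 tested users agreed OR context-sensitive) AND local users involved
    if (((tested5 and useful is not None and 50 <= useful <= 75)
         or e.context_sensitive)
            and e.local_users_involved):
        return "Medium"
    # Low: <50% agreed OR local users involved OR context-sensitive
    if ((e.usability_tested and useful is not None and useful < 50)
            or e.local_users_involved
            or e.context_sensitive):
        return "Low"
    return "Unknown"
```

In this sketch the levels are evaluated in order, so an article that satisfies both the High and Low rows is coded High; the review does not state a tie-breaking rule, so this ordering is an assumption.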

Effort expectancy

(IRR: 0.85)

High

The CDS tool interrupted the workflow

OR The CDS tool required a lot of effort to use (e.g., providing the dashboard, requesting documentation of the reason for any non-compliance [33], and at least some manual input of values [34])

OR The CDS tool required clinician initiative to use [33]

OR The estimated user interaction time was greater than 3 s

OR 75% or more of users reported that the CDS tool required high effort to use

 

Medium

The CDS tool interrupted the workflow but required only moderate cognitive load (e.g., users needed to click one or a few buttons to accept or dismiss it, or a pop-up window displayed the message)

OR The estimated user interaction time was 1–3 s

OR 50–74% of users reported that the CDS tool required high effort to use

 

Low

The CDS tool was quick to use (less than 1 s), and users did not need to find information elsewhere

OR 50% or more of users reported that the CDS tool was easy to use

 

Unknown

Not enough relevant information mentioned in the article
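
The Effort expectancy rows can be sketched the same way. Note that the High row (workflow interruption alone) and the Medium row (interruption with moderate cognitive load) overlap; the sketch below resolves the overlap by coding interruption with only moderate cognitive load as Medium, which is one possible reading rather than the authors' stated rule. All names are invented for illustration.

```python
# Minimal sketch of the Effort expectancy rubric, checked from High downward.
# All parameter names are hypothetical coder-extracted flags, not from the review.
from typing import Optional

def code_effort_expectancy(
    interrupts_workflow: Optional[bool] = None,  # tool interrupts the clinical workflow
    heavy_interaction: Optional[bool] = None,    # e.g., manual input of values, documenting non-compliance
    needs_initiative: Optional[bool] = None,     # clinician must take the initiative to use the tool
    moderate_load: Optional[bool] = None,        # one or a few clicks, pop-up message
    interaction_secs: Optional[float] = None,    # estimated user interaction time
    pct_high_effort: Optional[float] = None,     # % of users reporting high effort
    pct_easy: Optional[float] = None,            # % of users reporting the tool easy to use
    no_external_lookup: Optional[bool] = None,   # no need to find information elsewhere
) -> str:
    # High: heavy interaction, clinician initiative, hard interruption, >3 s, or >=75% report high effort
    if (heavy_interaction or needs_initiative
            or (interrupts_workflow and not moderate_load)
            or (interaction_secs is not None and interaction_secs > 3)
            or (pct_high_effort is not None and pct_high_effort >= 75)):
        return "High"
    # Medium: interruption with moderate cognitive load, 1-3 s, or 50-74% report high effort
    if ((interrupts_workflow and moderate_load)
            or (interaction_secs is not None and 1 <= interaction_secs <= 3)
            or (pct_high_effort is not None and 50 <= pct_high_effort < 75)):
        return "Medium"
    # Low: under 1 s with no external lookup, or >=50% report the tool easy to use
    if ((interaction_secs is not None and interaction_secs < 1 and no_external_lookup)
            or (pct_easy is not None and pct_easy >= 50)):
        return "Low"
    return "Unknown"
```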

Social influence

(IRR: 0.71)

High

Peer pressure existed during the implementation (e.g., users' performance was rated, or the patient was encouraged to be involved in the decision)

OR Participants knew that they were being monitored on an individual basis (e.g., written justification was visible in the EHR [35], or one-to-one feedback was given)

OR CDS was a part of a larger program implementation (e.g., falls program)

OR CDS was part of a regulatory measure (e.g., electronic clinical quality improvement)

 

Medium

Researchers provided an education program for all users

OR Users received communication from a supervisor about use of the CDS tool

OR CDS was targeted toward more than one provider or practice group

 

Low

No tracking or monitoring existed in the program

OR Researchers provided an education program for some users

OR Researchers did not provide an education program

OR CDS was targeted toward patients

 

Unknown

Not enough relevant information mentioned in the article
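
A sketch of the Social influence rows, again with invented flag names and High-first precedence as an assumption:

```python
# Minimal sketch of the Social influence rubric, checked from High downward.
# All parameter names are hypothetical coder-extracted flags, not from the review.
from typing import Optional

def code_social_influence(
    peer_pressure: Optional[bool] = None,          # rated performance, patient involvement
    individual_monitoring: Optional[bool] = None,  # visible written justification, one-to-one feedback
    larger_program: Optional[bool] = None,         # CDS part of a larger program (e.g., falls program)
    regulatory_measure: Optional[bool] = None,     # CDS part of a regulatory measure
    education_all: Optional[bool] = None,          # education program for all users
    supervisor_comms: Optional[bool] = None,       # supervisor communicated about CDS use
    multi_provider_target: Optional[bool] = None,  # targeted at more than one provider or practice group
    no_tracking: Optional[bool] = None,            # no tracking or monitoring in the program
    education_some: Optional[bool] = None,         # education program for only some users
    no_education: Optional[bool] = None,           # no education program at all
    targets_patients: Optional[bool] = None,       # CDS targeted toward patients
) -> str:
    if peer_pressure or individual_monitoring or larger_program or regulatory_measure:
        return "High"
    if education_all or supervisor_comms or multi_provider_target:
        return "Medium"
    if no_tracking or education_some or no_education or targets_patients:
        return "Low"
    return "Unknown"
```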

Facilitating conditions

 

A. IT infrastructure or user customization (e.g., tailored timing and frequency of prompts, tailored text, highlighted text, links to supporting information, publicly visible justification, and an interactive dashboard)

B. IT support for users (e.g., technical assistance to address hardware and software issues)

C. Training (e.g., specialized instruction concerning the system)

D. Other facilitating factors (e.g., economic factors such as decreasing the cost of unnecessary tests)
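
Unlike the graded constructs, Facilitating conditions is a checklist of four categories rather than a High/Medium/Low scale, so combinable flags are a natural representation. A minimal sketch, with all names invented for illustration:

```python
# Sketch: Facilitating conditions as combinable category flags (names invented).
from enum import Flag, auto

class FacilitatingCondition(Flag):
    NONE = 0
    INFRASTRUCTURE_OR_CUSTOMIZATION = auto()  # A: tailored prompts, links, dashboards
    IT_SUPPORT = auto()                       # B: technical assistance for hardware/software issues
    TRAINING = auto()                         # C: specialized instruction on the system
    OTHER = auto()                            # D: e.g., economic factors

# An article reporting both user training and IT support would carry both flags:
coded = FacilitatingCondition.TRAINING | FacilitatingCondition.IT_SUPPORT
assert FacilitatingCondition.IT_SUPPORT in coded
```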

Motivational control

(IRR: 0.87)

High

Users could simply ignore the CDS tool (e.g., low control over their behavior, passive alerts, and a changeable alert threshold)

 

Medium

Users had to respond to the CDS tool, but responding involved low effort (e.g., a pop-up with an easy exit)

 

Low

Users were forced to use the CDS tool with at least medium effort (e.g., a pop-up with text input that interrupted the workflow and could not be ignored)

 

Unknown

Not enough relevant information mentioned in the article
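
Finally, the Motivational control rows reduce to two questions: could users ignore the tool, and if not, how much effort did responding take? A minimal sketch, assuming invented parameter names:

```python
# Minimal sketch of the Motivational control rubric (names invented).
from typing import Optional

def code_motivational_control(
    can_ignore: Optional[bool] = None,      # passive alert / changeable threshold: no forced response
    response_effort: Optional[str] = None,  # "low" (easy exit) vs "medium"/"high" (text input, hard stop)
) -> str:
    if can_ignore:
        return "High"    # users keep full control over their behavior
    if can_ignore is False and response_effort == "low":
        return "Medium"  # must respond, but e.g. a pop-up with an easy exit
    if can_ignore is False and response_effort in ("medium", "high"):
        return "Low"     # forced use: text input, workflow interruption, no dismissal
    return "Unknown"
```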

IRR, inter-rater reliability; CDS, clinical decision support; EHR, electronic health record; IT, information technology