The impact of a diagnostic decision support system on the consultation: perceptions of GPs and patients

Abstract

Background

Clinical decision support systems (DSS) aimed at supporting diagnosis are not widely used, mainly because of usability issues and a lack of integration into clinical work and the electronic health record (EHR). In this study, we examined the usability and acceptability of a diagnostic DSS prototype integrated with the EHR, in comparison with the EHR alone.

Methods

Thirty-four General Practitioners (GPs) consulted with 6 standardised patients (SPs) using only their EHR system (baseline session); on another day, they consulted with 6 different SPs, matched for difficulty, using the EHR with the integrated DSS prototype (DSS session). GPs were interviewed at the end of each session, and completed the Post-Study System Usability Questionnaire at the end of the DSS session. The SPs completed the Consultation Satisfaction Questionnaire after each consultation.

Results

The majority of GPs (74%) found the DSS useful: it helped them consider more diagnoses and ask more targeted questions. They considered three user interface features to be the most useful: (1) integration with the EHR; (2) suggested diagnoses to consider at the start of the consultation; and (3) the checklist of symptoms and signs for each suggested diagnosis. There were also criticisms: half of the GPs felt that the DSS changed their consultation style by requiring them to code symptoms and signs while interacting with the patient. SPs sometimes commented that GPs were looking at their computer more than at them; this comment was made more often in the DSS session (15% of consultations) than in the baseline session (3%). Nevertheless, SP ratings on the satisfaction questionnaire did not differ between the two sessions.

Conclusions

To use the DSS effectively, GPs would need to adapt their consultation style, so that they code more information during rather than at the end of the consultation. This presents a potential barrier to adoption. Training GPs to use the system in a patient-centred way, as well as improvement of the DSS interface itself, could facilitate coding. To enhance patient acceptability, patients should be informed about the potential of the DSS to improve diagnostic accuracy.

Background

Computerised clinical decision support systems (DSS) are increasingly important in primary care for providing patient-specific, evidence-based advice for General Practitioners (GPs) [1,2,3]. GPs in the UK are family physicians with a gatekeeping role, controlling access to specialist services. They deal with a wide range of disease areas and have the difficult task of detecting uncommon but potentially serious diseases among common non-serious complaints.

Despite evidence that DSS improve GPs’ performance [4,5,6], their adoption in clinical practice is very limited [7, 8] and consists mainly of alerts and reminders designed to support prescribing, treatment and disease management decisions [9, 10]. Measuring the performance and quantifiable benefits of a DSS is necessary, but good results on these measures do not necessarily predict adoption in practice [2].

There are several reasons GPs may be reluctant to adopt a DSS [1, 3, 8]. Usability issues, including lack of integration into clinical work, are cited as a main barrier to broad adoption [9, 11,12,13]. This includes lack of integration with the EHR, which is important for triggering relevant patient information at appropriate points in the cognitive workflow and for preventing double entry of data into both the DSS and the EHR [14, 15].

Other concerns specifically related to diagnostic DSS are perceived need and the perceived challenge to one’s authority as a knowledgeable professional. It may be easier for GPs to acknowledge the need for support with memory-based tasks (e.g., prescribing, screening) than with judgment-based tasks, such as diagnosis. They may, for example, not believe that their expert judgment can be reduced to a few rules, despite substantial evidence that the “actuarial” method (using a clinical prediction rule or formula that is based on and combines the evidence) performs better than unaided clinical judgment [16].

GPs may also be concerned that their patients and/or colleagues would think less of them if they used a diagnostic DSS. Finally, they may think that it will be time-consuming and will detract from the doctor-patient relationship [17]. There is indeed evidence that, in hypothetical clinical scenarios, GPs who do not use decision aids are judged to have higher diagnostic ability than those who do [18, 19], though this difference may be attenuated if the decision aid has been “developed at a prestigious institution” [18]. However, the evidence that patients may derogate GPs who use decision aids comes from studies in which the participants were students reading hypothetical medical scenarios. To our knowledge, this is the first study to examine patients’ perceptions of GPs using a DSS in a naturalistic environment with standardised patients.

As part of the Translational Medicine and Patient Safety in Europe (TRANSFoRm) project (www.transformproject.eu) we designed, developed and evaluated a diagnostic DSS prototype for use in primary care [6, 20,21,22].

The DSS prototype was designed to support GPs’ cognitive requirements in the diagnostic process, with the aim of producing a usable system that integrates with GPs’ clinical work [22]. We employed cognitive engineering methods [23, 24] to identify key decisions and uncover decision requirements in the diagnostic process. These requirements then guided the design of the system, which specifically aims to help GPs generate more diagnostic hypotheses (to reduce a narrow focus on a single diagnosis formed early in the clinical encounter) and to remind GPs of the key questions they need to ask. Key features of the tool are described in Table 1. The evaluation of the prototype in a high-fidelity simulation found that it improved diagnostic accuracy and management without increasing consultation time or the number of investigations ordered [6]. In this paper, we report findings on the users’ perceived usability and acceptability of the DSS and on the patients’ satisfaction with the consultation, as measured during the evaluation study using interviews and standardised questionnaires. We aimed to identify facilitators of and barriers to future DSS adoption.

Table 1 Key features of the diagnostic DSS prototype

Methods

Design and procedure

A detailed description of the DSS evaluation study is provided elsewhere [6]. In summary, 34 GPs who were using the Vision EHR at their practice diagnosed 12 standardised patients (SPs) in simulated consultations at King’s College London. Each GP first consulted with 6 SPs using only their usual EHR system, Vision (‘baseline session’), and on another day with 6 different SPs, matched for difficulty, using the EHR with the integrated DSS prototype (‘DSS session’). Before the DSS session, participants were introduced to the DSS prototype and its functionality, and performed training scenarios with it (total training time 20–30 min). GPs were interviewed after both the baseline and the DSS sessions.

The first question enabled participants to comment about the session in general:

  • “Do you have any comments about today’s session? Feel free to comment on anything you want”.

At the end of the baseline session, GPs were asked about the likely usefulness of a diagnostic decision support tool in clinical practice:

  • “Do you think that a computerised diagnostic support tool, integrated with the patient EHR, would have helped with any of the patients today? Which patients?”

At the end of the DSS session, GPs were interviewed about their experience using the DSS prototype, and were asked whether it had helped them:

  • “Do you think that the diagnostic support tool helped with any of the patients today? Which patients?”

GPs were also asked if they had suggestions for improving the tool and the data collection process. They then completed the Post-Study System Usability Questionnaire (PSSUQ). The IBM PSSUQ [25] consists of 19 questions, answered on 7-point Likert scales from 1 “strongly disagree” to 7 “strongly agree”, with a “not applicable” option. The PSSUQ evaluates 4 dimensions: overall satisfaction, system usefulness, information quality and interface quality (see Additional file 1). It is accepted as a valid and reliable instrument and is used frequently in research [26].

At the end of each consultation, after leaving the room, the SPs completed the standardised Consultation Satisfaction Questionnaire (CSQ) [27]. The CSQ is a validated and reliable questionnaire [28,29,30] consisting of 18 statements about how the respondent felt about the consultation, each rated on a 5-point Likert scale from ‘strongly agree’ to ‘strongly disagree’. The CSQ evaluates 4 dimensions: general satisfaction, professional care, depth of relationship and length of consultation (Additional file 2).

Analysis

A thematic analysis approach was used to identify themes and subthemes [31] related to the usability of the DSS and suggestions for improving the tool.

The qualitative data from the interviews and questionnaires were transcribed in full, stored, coded and analysed using NVivo Version 10. Of the 68 interviews (34 GPs, each interviewed twice: after the baseline session and after the DSS session) and 34 completed PSSUQ questionnaires, 10 interviews (5 from each session) and 5 questionnaires were pilot-coded separately by TP and OK to develop an initial coding framework. Differences were discussed until consensus about the coding was reached. Based on the coding framework, TP coded the remaining interviews and questionnaires. The final coding was then reviewed and discussed by all authors, and minor changes were made. Statistical analyses of the questionnaire data are reported in [6].

Results

Thirty-four GPs from Greater London using the Vision (www.inps4.co.uk/vision) EHR system participated in the study (17 males). The average number of years practising family medicine was 12.6 (SD 12.57, range 1 month to 40 years), and the average number of years using the Vision EHR system was 7.2 (SD 5.6, range 5 months to 17 years).

Perceptions of GPs

Thematic analysis resulted in 3 major themes: perceived usefulness of the DSS, impact of the DSS on the consultation, and suggestions for improving the DSS. These are described below.

Theme 1. Perceived usefulness of the DSS

Perceived usefulness relates to the degree to which users believe that using a technology will improve their work and problem-solving performance [32, 33]. We identified three sub-themes relating to GPs’ perceived usefulness of the DSS prototype: supporting GPs in the diagnostic process, useful user interface features, and perceived usability.

Supporting GPs in the diagnostic process

At the end of the baseline session, when participants were asked whether a diagnostic support system could have helped them with the patients they had just seen, 8 (23.5%) gave positive responses, 17 (50%) gave neutral answers (‘do not know’, ‘it depends’), and 9 (26.5%) did not think that such a system could have helped.

After using the diagnostic support system, at the end of the DSS session, participants were asked if the DSS had helped them diagnose, and which patients it had helped them with. We categorised their responses into ‘always helpful’, ‘sometimes helpful’, ‘unsure’ and ‘not helpful’. Eight GPs (23.5%) found the diagnostic tool helpful in all the cases and seventeen GPs (50%) found the tool helpful in some of the cases - mainly those considered to be complex:

“[The tool helped] in all of them to an extent. The prompts were useful, organising your own mind. In general, it widened up more things to consider.” (GP 24)

“Yes, the tool helps in elaborating the symptoms more thoroughly, you analyse the different symptoms.” (GP 22)

“I think in straightforward cases it doesn’t really help, like in the UTI… Only when uncertainty increases [it helps]. In the last case [colorectal cancer] it helped. I used it as a checklist, could it be cancer?” (GP3)

Four GPs (11.8%) said that they were unsure if it helped them and five (14.7%) said that the DSS did not help them:

“I personally don’t think tools help me too much, interfere with my process of thought, would have done better without it, it was unnatural.” (GP 5)

“It hindered me, it slowed me down. I already had my working diagnosis in mind. The system had 20 items, I had about 5 in my head.” (GP 15)

“[The DSS] didn’t help but did not distract me from the way I thought.” (GP 14)

GPs elaborated on how the diagnostic tool had helped them: 16 (47%) said that it reminded them to ask important questions that otherwise they might have forgotten, and to ask more targeted questions; 10 GPs (29%) said that it helped them consider more differentials, especially less common ones.

“Reminded me to ask questions. Served as a prompt - reminded me to ask about blood in stool which I would have forgotten.” (GP6)

“You start to think about the unusual things more, compared to without the system. You ask more questions. You start wide and then focus on it, sometimes you need to open up again.” (GP3)

“Helpful to refresh your memory about things that are rare…we mainly stick to the phrase: ‘common things are common’.” (GP7)

“I think it widens your diagnosis, like the guy that had COPD and then ended up with AS. I would get to the diagnosis, but it would be longer without the tool.” (GP16)

Four participants (12%) claimed that they felt more uncertain and reflective with the tool:

“I am more reflective with this: have I missed something here?” (GP8)

“You ask more questions, more uncertainty even if you were certain.” (GP3)

We note that certainty about the diagnosis, measured on a 0–10 visual analogue scale (VAS), significantly improved when the DSS was used [6].

Useful user interface features

GPs explicitly mentioned the following user interface (UI) features as the most useful:

  • Integration with Vision – nine GPs (26%) mentioned the advantage of integrating the DSS with Vision, commenting either that it was well integrated or that it could be integrated further (see suggestions for improvements).

    “Useful, good to have it in Vision, you can access it quickly.” (GP 10)

    “It integrates very well with Vision.” (GP6)

    “[The tool] should be integrated better with Vision, should be a part of Vision like an added box [part of Vision main screen].” (GP 35)

  • Initial list of suggested diagnoses – four GPs indicated that the initial list was useful.

    “The first bit when you put the reason for encounter and receive the list is a good idea.” (GP14)

    “I did get used to it and it had benefits, especially the list in the beginning, makes you think.” (GP25)

  • Symptoms associated with each diagnosis – eight GPs (23.5%) found it useful to be able to click on a suggested diagnosis, view its associated symptoms and signs, and indicate their presence or absence.

    “There were a lot of questions I wrote in free-text but the system didn’t get it. I then ticked the checkboxes under the diagnosis - which was very useful.” (GP33)

    “It was really easy and efficient clicking in the list and asking the relevant symptoms.” (GP7)

    “It was helpful in one of the cases actually, a symptom I forgot to ask. In the last case also – ‘being comfortable when lying flat’ - even if it was negative.” (GP 17)

    Most GPs (31/34, 91%) clicked on a suggested diagnosis to view the associated symptoms and signs at least once in the DSS session.

Perceived usability

Table 2 presents mean agreement with each of the 19 statements of the PSSUQ. Ratings were provided on 7-point Likert scales from 1 (strongly disagree) to 7 (strongly agree), with higher ratings indicating greater satisfaction. The DSS scored well, with average ratings above 4 (the scale midpoint) on most questions. GPs were satisfied with how easy (Q1) and simple (Q2) the tool was to use and how easy it was to learn (Q7); the information provided was easy to understand (Q13), clear (Q11) and well organised (Q15); and they liked using the interface (Q17) and found it pleasant (Q16). However, the average ratings on Q4 (timely task completion) and Q9 (informative error messages) were below 4, suggesting low satisfaction. GPs’ responses to Q9 concerned program bugs, as the interface did not provide any error messages.

Table 2 Post-study system usability questionnaire (PSSUQ) results
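For readers who want to reproduce this questionnaire analysis, the sketch below shows one way to compute per-dimension PSSUQ scores from a single respondent's 7-point ratings while ignoring "not applicable" answers. The item-to-dimension mapping follows Lewis [25] (system usefulness Q1–8, information quality Q9–15, interface quality Q16–18, overall Q1–19), but it is stated here as an assumption about this study's scoring, and the ratings in the example are invented, not study data.

```python
import numpy as np

# Item-to-dimension mapping for the 19-item PSSUQ, per Lewis [25];
# treated here as an assumption about this study's scoring.
# Indices are zero-based (Q1 -> index 0).
DIMENSIONS = {
    "system_usefulness":   range(0, 8),    # Q1-Q8
    "information_quality": range(8, 15),   # Q9-Q15
    "interface_quality":   range(15, 18),  # Q16-Q18
    "overall":             range(0, 19),   # Q1-Q19
}

def pssuq_scores(responses):
    """responses: 19 values in 1..7, or None for 'not applicable'."""
    r = np.array([np.nan if v is None else v for v in responses], dtype=float)
    # nanmean skips 'not applicable' items within each dimension
    return {dim: round(float(np.nanmean(r[list(idx)])), 2)
            for dim, idx in DIMENSIONS.items()}

# Invented ratings for one GP, with Q9 marked 'not applicable'
# (recall that the interface showed no error messages).
ratings = [6, 6, 5, 3, 5, 5, 6, 4, None, 5, 6, 4, 6, 5, 6, 5, 5, 4, 5]
print(pssuq_scores(ratings))
```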

Theme 2: Impact of the DSS on the consultation

We identified two sub-themes relating to the impact of the DSS on the consultation: impact on consultation style and GP-patient interaction, and time concerns.

Impact on consultation style and GP-patient interaction

During the baseline session, 73.5% of the participants recorded information only at the end of the consultation. It is thus not surprising that half of the participants felt that the diagnostic tool, which required data entry during the consultation, influenced their consultation style:

“It’s completely different to how I normally work, unsettling and confusing, I’m not quite sure where I am, to what extent I’m following a template and how I take history.” (GP8)

“You need to get used to it…I do my consultations in a different way, but it works quite quick.” (GP12)

“It felt different, I definitely used different questions. From a doctor’s point of view it’s good, from the patient’s point of view it’s weird…I would have asked more open questions relative to closed ones.” (GP16)

“It throws my standard history examination style, stop me from doing unnecessary investigations.” (GP 9)

Eight GPs (23%) were concerned that typing into the computer during the consultation would interfere with the doctor-patient interaction and communication, because it might reduce eye contact with the patient:

“I normally chat and look at the patients, it throws my normal thing.” (GP9)

“I usually don’t code during the consultation, less contact with patient, my style is to listen for a long time.” (GP3)

Time concerns

Thirteen GPs (38%) felt that the consultation took longer with the tool than without, mainly because they had to search for and select the right code for each symptom from a drop-down menu driven by predictive text over the underlying knowledge base (a minimal sketch of this mechanism follows the quotes below). Using only the EHR, GPs mainly wrote free text, which they perceived to be faster:

“Using the tool in the present format is time consuming, need to speed it up.” (GP7)

“It will be hard to use it in a 10-min. consultation.” (GP22)

“It was too time consuming selecting all the symptoms, you need to minimise interaction during consultation, I would group relevant symptoms together.” (GP13)
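The coding step these GPs describe, searching a predictive-text drop-down for each individual symptom, can be illustrated with a minimal prefix-matching sketch. The vocabulary and function below are our own illustration; the prototype's actual knowledge-base lookup is not specified in this paper.

```python
# Minimal sketch of predictive-text symptom coding: as the GP types,
# candidate codes from the knowledge base are filtered by prefix.
# The code list below is illustrative, not the prototype's vocabulary.
SYMPTOM_CODES = [
    "abdominal pain", "blood in sputum", "blood in stool",
    "comfortable lying flat", "cough", "dysuria", "weight loss",
]

def suggest(typed: str, vocabulary=SYMPTOM_CODES, limit: int = 5):
    """Return up to `limit` candidate codes matching the typed prefix."""
    prefix = typed.strip().lower()
    return [code for code in vocabulary if code.startswith(prefix)][:limit]

print(suggest("blo"))  # ['blood in sputum', 'blood in stool']
```

Each symptom requires one such search-and-select cycle; this per-item interaction cost is what the grouping and free-text suggestions reported later aim to reduce.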

Across GPs, the average consultation time did not differ significantly between the baseline and DSS sessions [6]. The 13 GPs who expressed concerns about time took slightly longer when using the DSS (mean 15.45 min) than in the baseline session (mean 13.53 min); paired-samples t-test: t(12) = 2.13, p = 0.055.
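The paired comparison reported above can be reproduced with a standard paired-samples t-test. The sketch below uses invented per-GP times for the 13 GPs concerned, purely to show the computation; these are placeholders, not the study data.

```python
import numpy as np
from scipy import stats

# Invented per-GP mean consultation times (minutes) for the 13 GPs who
# raised time concerns -- placeholders, not the study data.
baseline = np.array([12.1, 13.0, 14.2, 13.8, 12.9, 14.5, 13.1,
                     13.7, 14.0, 13.3, 13.9, 13.5, 13.9])
dss      = np.array([14.8, 15.2, 16.1, 15.7, 14.9, 16.4, 15.0,
                     15.6, 15.8, 15.1, 15.9, 15.3, 15.0])

t, p = stats.ttest_rel(dss, baseline)  # paired test; df = n - 1 = 12
print(f"t(12) = {t:.2f}, p = {p:.3f}")
```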

Despite their concerns about time, GPs believed that they could become better at using the DSS (see Table 2, Q8, mean rating 4.27), and nine GPs (26%) explicitly mentioned that they got used to the DSS and improved with time.

“Obviously familiarity over time will improve.” (GP19)

“I can see its use. It’s a new thing, with any new system the initial use will affect the customer relationship. As it was going on, it was getting better.” (GP26)

“I did get used to it and it had benefits.” (GP25)

Theme 3: Suggestions for improving the DSS

Participants suggested a number of ways in which they believed the DSS could be improved:

  1. Advanced technology, such as predictive texting, to enable a more natural way of entering data, whether coded or free text.

    “One comment box, write everything you want, using a smart technology, the system could read everything you wrote and come up with a diagnosis on its own.” (GP 15)

    “Why can’t the system analyse free text?” (GP 6)

  2. Additional functionalities, such as adding investigations (e.g., X-rays, blood tests, ECG) to the relevant symptom/sign list for each suggested diagnosis, or alerting the GP if the patient came for the same reason for encounter in the last 6 months.

    “Advising what tests one should do; if for example there is blood in sputum - suggest chest x-ray - possible investigations.” (GP 11)

    “If the patient came for the same reason in the last 6 months - indicate it in the system.” (GP 6)

  3. Complete integration with EHR systems, including integration with the latest National Institute for Health and Care Excellence (NICE) guidelines and the Quality and Outcomes Framework (QoF). For example, after selecting a diagnosis, provide the most up-to-date guidelines describing the appropriate next steps.

    “After you select a diagnosis, for example, asthma the system should ask have you done this, a table at the end, for example, saturation, BP, Pulse and what is missing. What legally I have to do in Asthma.” (GP 9)

    “[The DSS] should integrate with QoF data, for example, COPD performance management domain instead of entering again [the data] to the QoF system.” (GP 13)

  4. Changes to the interface design. Suggestions included: highlighting serious diagnoses on the initial list; displaying the precise likelihood of each suggested diagnosis; reducing the length of the suggested diagnoses list; and grouping examination results together to facilitate the coding of findings (e.g., selecting ‘abdominal examination’ could display relevant attributes such as tenderness, guarding and rebound, for the user to mark as present or absent; the same could be done for a basic examination, such as temperature, blood pressure and pulse). A data-structure sketch of this grouping idea follows the quotes below.

    “Group examinations, for example respiratory symptoms, same for abdominal. abdo pain – yes/no, if yes then mass/guarding/PR examination and then the diagnoses on the right will change and display.” (GP 33)

    “Add prevalence, probabilities. The tool didn’t help in that, not enough to flash - it can be cancer, but add probabilities, it could be cancer above 30%. Move them up the list and make you do something.” (GP 29)
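The grouping idea in suggestion (4) amounts to a simple data structure: each examination group expands into a checklist of findings, each of which can be marked present or absent with one click. A minimal sketch follows, with illustrative group and finding names; the prototype's actual grouping, if any, is not specified in this paper.

```python
from enum import Enum

class Finding(Enum):
    PRESENT = "present"
    ABSENT = "absent"
    NOT_EXAMINED = "not examined"

# Illustrative examination groups -- names are assumptions, not the
# prototype's actual knowledge base.
EXAM_GROUPS = {
    "abdominal examination": ["tenderness", "guarding", "rebound", "mass"],
    "basic examination": ["temperature", "blood pressure", "pulse"],
}

def open_group(group: str) -> dict:
    """Expand an examination group into an editable checklist."""
    return {finding: Finding.NOT_EXAMINED for finding in EXAM_GROUPS[group]}

checklist = open_group("abdominal examination")
checklist["guarding"] = Finding.ABSENT    # one click instead of a code search
checklist["tenderness"] = Finding.PRESENT
print({k: v.value for k, v in checklist.items()})
```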

Patients’ satisfaction with the consultation

The standardised patients (SPs) completed the CSQ at the end of each consultation and wrote free-text comments. Satisfaction did not differ between the baseline and DSS sessions, either overall or in any of the CSQ dimensions: professional care, depth of the doctor-patient relationship, length of consultation and general satisfaction [6].

All comments were analysed thematically [31] and divided broadly into comments about GP characteristics (e.g., “very kind and professional doctor”), and comments about the use of the computer (e.g., “he was constantly looking at the computer”).

SPs commented on the GP looking at the computer in 6 out of 204 consultations at baseline (3%) and in 30 out of 204 consultations (15%) in the DSS session (an illustrative comparison of these proportions appears at the end of this subsection):

“There were times when the doctor was talking to me, not needing to check anything in the computer, but still looking at it.” (SP2)

“The technology was in the way of his communication with me.” (SP1)

The SPs did not make any comments about the DSS specifically or about changes to the GP’s consultation style and way of questioning when using the DSS.
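The comment frequencies above (6/204 at baseline vs 30/204 with the DSS) are reported descriptively in the paper. Purely for illustration, the difference between the two proportions can be compared with a chi-square test; this analysis is ours, not one reported in [6].

```python
import numpy as np
from scipy import stats

# Contingency table: comment made vs not made, per session type.
table = np.array([[6,  204 - 6],    # baseline session
                  [30, 204 - 30]])  # DSS session
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```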

Discussion

It is encouraging that GPs recognised the usefulness of the DSS, since perceived usefulness is an important facilitator and driver of adoption [8]. Before experiencing its use, the majority of GPs (77%) were ‘agnostic’ about its usefulness. This is in line with a recent Nesta report examining GPs’ responses to a survey about adopting innovations in practice [34]: almost 40% said that they would like to adopt IT innovations, but only 7% said that they would like to adopt diagnostic technologies. After using our DSS, the majority of GPs (74%) found it useful, because it helped them consider more diagnoses and ask more targeted questions.

GPs identified three user interface features as helpful in the diagnostic process: (1) the EHR integration; (2) the initial patient-tailored list of potential diagnoses; and (3) the option to expand a diagnosis into its list of relevant symptoms and signs. When designing the DSS, we considered the first feature necessary for ease of use and future adoption; it is critical for almost any DSS to trigger relevant patient information at appropriate points in the physician’s workflow. For example, as part of the EPSRC CONSULT project (EP/P010105/1), we are integrating a DSS with the EHR to promote holistic care in stroke patients. The second feature was designed and tested extensively in studies with large samples of GPs in two European countries [20, 21]; it is the most defining feature of the DSS, setting it apart from other existing diagnostic support systems. The third feature was elicited during an extensive user and decision requirements elicitation process that involved multiple analytical methods and data sources [22].

As reported elsewhere, the DSS did not influence perceptions of the GPs’ professionalism and care, or general patient satisfaction with the consultation [6]. There was some indication that GPs tended to look at their computer more when using the DSS: the SPs commented on this more often when the DSS was used than when it was not. Nevertheless, such comments were made in only 15% of the consultations in which the DSS was used. We should also take into account that this was the first time the GPs had used the DSS, and after only brief training. Longitudinal studies have shown that the impact of health information technologies on physician-patient interactions (e.g., patient satisfaction, communication about medical issues) improves significantly after the technology has been used for a period of time [35]. More extensive training and practice with the DSS, together with an improved interface design, could result in a more seamless integration with the consultation. Furthermore, educating patients about the benefits of the DSS is likely to enhance patient acceptance.

We recognise that a significant barrier to the adoption of the DSS would be the change to many GPs’ style of consultation and documentation that its effective use requires: coding individual symptoms and signs during the consultation, so that the order of suggested diagnoses is updated according to the accumulated evidence. When using their usual EHR (Vision), most study participants (73.5%) recorded patient information only at the end of the consultation [6]. To the extent that this is representative of how GPs document the clinical consultation, and not specific to Vision, it presents a substantial challenge to adoption. Pitted against this challenge are the GPs’ perceived usefulness of the DSS and the lack of any measurable effect on patient satisfaction when GPs coded during the consultation (i.e., when using the DSS). Furthermore, the dynamic updating of the diagnostic list as information is coded into the EHR provides an incentive for GPs to code information during the consultation – currently, they do not have to code.
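The dynamic updating described above can be pictured as likelihood-based re-scoring: each symptom coded as present or absent shifts the relative scores of the candidate diagnoses. The sketch below uses a generic naive-Bayes-style update with invented priors and likelihoods; it illustrates the principle only, and is not the prototype's actual inference mechanism, which is described in [20,21,22].

```python
import math

# Invented priors and symptom likelihoods, purely for illustration.
PRIOR = {"UTI": 0.30, "IBS": 0.20, "colorectal cancer": 0.02}
P_PRESENT = {  # P(symptom present | diagnosis)
    "blood in stool": {"UTI": 0.02, "IBS": 0.05, "colorectal cancer": 0.60},
    "dysuria":        {"UTI": 0.80, "IBS": 0.05, "colorectal cancer": 0.02},
}

def rank(evidence: dict) -> list:
    """evidence maps symptom -> True (present) / False (absent)."""
    scores = {}
    for dx, prior in PRIOR.items():
        logp = math.log(prior)
        for symptom, present in evidence.items():
            p = P_PRESENT[symptom][dx]
            logp += math.log(p if present else 1.0 - p)
        scores[dx] = logp
    return sorted(scores, key=scores.get, reverse=True)

print(rank({"blood in stool": True}))   # cancer rises to the top of the list
print(rank({"dysuria": True, "blood in stool": False}))  # UTI leads
```

Coding during the consultation is what makes this re-ordering possible: information entered only at the end of the encounter cannot influence the list while diagnostic decisions are being made.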

A recent report identified time constraints as the main barrier to the adoption of innovations by GPs [34]. In our DSS evaluation study, a substantial minority of GPs (38%) found using the DSS time-consuming. Although the GPs who expressed concerns about time took slightly longer when using the DSS, this was not the case for the GP sample as a whole. The time the GP spends interacting with the computer could be reduced in a number of ways: for example, by improving the integration of the DSS with the EHR and other related systems; by grouping examination results together to reduce the number of clicks required to enter information; and by finding more natural ways to record data, such as predictive texting and voice-to-text. Another solution could involve patients entering their data prior to the consultation, which could also enhance a shared understanding of the health record.

We evaluated the DSS prototype in a high-fidelity simulation with 34 GPs, whereas most system evaluations are performed with 5 to 10 users [36]. This relatively large number of users enabled us to report quantitative results [6] in addition to the qualitative findings.

To maintain some control over the study design, we employed actors rather than real patients. These actors are specifically trained to portray patients in order to assess clinical communication skills and quality of care, and are accustomed to rating satisfaction with the consultation (e.g., [37]). An extensive medical-education literature describes the successful use of SPs and reports that SPs can capture variation in clinical practice [38,39,40].

Nevertheless, the use of actors rather than real patients is a limitation, as there may be a concern that, since they are not experiencing the specific health complaint, their assessments may not be a valid reflection of patient perceptions. However, the actors have been patients in their own right and have consulted GPs; they have expectations of the consultation that would influence their assessments. Real patients would still provide only their own perspective, influenced by their own experiences and expectations, and would not represent the whole range of patient expectations. This is the first study to examine patients’ perceptions of GPs using a DSS in a naturalistic environment, during simulated consultations with standardised patients. Previous research employed students as participants dealing with hypothetical scenarios [e.g., 18, 19]. While hypothetical scenarios (vignettes) appear to be a valid and comprehensive method for measuring physicians’ quality of care [41], they may have limited external validity compared to a naturalistic environment with SPs [18]. In addition, SPs are likely to have more experience of interactions with physicians than students [19].

Conclusions

GPs reported that the DSS helped them in the diagnostic process, specifically in considering more diagnoses and asking more targeted questions. However, to use the DSS effectively, GPs would need to adapt their consultation style so that they code more information during, rather than at the end of, the consultation. This is a potential barrier to adoption. Training GPs to use the system in a patient-centred way, as well as improving the DSS interface itself, could facilitate coding. To enhance patient acceptability and satisfaction, patients should be informed about the potential of the DSS to improve diagnostic accuracy. The feasibility and acceptability of engaging the patient with the DSS before, during and after the consultation could also be explored. Future work includes updating the DSS prototype based on the feedback we received from the GPs, and evaluating its usability and acceptability in real practice.

Abbreviations

CSQ: Consultation Satisfaction Questionnaire

DSS: Decision support system

EHR: Electronic health record

GP: General Practitioner

PSSUQ: Post-Study System Usability Questionnaire

RfE: Reason for encounter

SP(s): Standardised patient(s)

References

  1. Varonen H, Kortteisto T, Kaila M, EBMeDS Study Group. What may help or hinder the implementation of computerized decision support systems (CDSSs): a focus group study with physicians. Fam Pract. 2008;25(3):162–7.

  2. Short D, Frischer M, Bashford J. Barriers to the adoption of computerised decision support systems in general practice consultations: a qualitative study of GPs’ perspectives. Int J Med Inform. 2004;73(4):357–62.

  3. Kawamoto K, Del Fiol G. Clinical decision support systems in healthcare. In: Nelson R, Staggers N, editors. Health informatics: an interprofessional approach. Elsevier; 2017. p. 170–83.

  4. Ramnarayan P, Kapoor RR, Coren M, Nanduri V, Tomlinson AL, Taylor PM, et al. Measuring the impact of diagnostic decision support on the quality of clinical decision making: development of a reliable and valid composite score. J Am Med Inform Assoc. 2003;10(6):563–72.

  5. van Rosse F, Maat B, Rademaker CM, van Vught AJ, Egberts AC, Bollen CW. The effect of computerized physician order entry on medication prescription errors and clinical outcome in pediatric and intensive care: a systematic review. Pediatrics. 2009;123(4):1184–90.

  6. Kostopoulou O, Porat T, Corrigan D, Mahmoud S, Delaney BC. Diagnostic accuracy of GPs when using an early intervention decision support system: a high-fidelity simulation. Br J Gen Pract. 2017. doi: 10.3399/bjgp16X688417.

  7. Bright TJ, Wong A, Dhurjati R, Bristow E, Bastian L, et al. Effect of clinical decision-support systems: a systematic review. Ann Intern Med. 2012;157(1):29–43.

  8. Shibl R, Lawley M, Debuse J. Factors influencing decision support system acceptance. Decis Support Syst. 2013;54:953–61.

  9. Musen MA, Middleton B, Greenes RA. Clinical decision-support systems. In: Shortliffe EH, Cimino JJ, editors. Biomedical informatics. London: Springer; 2014. p. 643–74.

  10. Chana N, Porat T, Whittlesea C, Delaney B. Improving specialist drug prescribing in primary care using task and error analysis. Br J Gen Pract. 2017;67:e157.

  11. Sidebottom AC, Collins B, Winden TJ, Knutson A, Britt HR. Reactions of nurses to the use of electronic health record alert features in an inpatient setting. Comput Inform Nurs. 2012;30(4):218–26.

  12. Wu HW, Davis PK, Bell DS. Advancing clinical decision support using lessons from outside of healthcare: an interdisciplinary systematic review. BMC Med Inform Decis Mak. 2012;12(1):90.

  13. Tawfik H, Anya O, Nagar AK. Understanding clinical work practices for cross-boundary decision support in e-health. IEEE Trans Inf Technol Biomed. 2012;16(4):530–41.

  14. Nurek M, Kostopoulou O, Delaney BC, Esmail A. Reducing diagnostic errors in primary care. A systematic meta-review of computerized diagnostic decision support systems by the LINNEAUS collaboration on patient safety in primary care. Eur J Gen Pract. 2015;21(sup1):8–13.

  15. Jaspers MW, Smeulers M, Vermeulen H, Peute LW. Effects of clinical decision-support systems on practitioner performance and patient outcomes: a synthesis of high-quality systematic review findings. J Am Med Inform Assoc. 2011;18(3):327–34.

  16. Dawes RM, Faust D, Meehl PE. Clinical versus actuarial judgment. Science. 1989;243(4899):1668–74.

  17. Mollon B, Chong JJ, Holbrook AM, Sung M, Thabane L, Foster G. Features predicting the success of computerized decision support for prescribing: a systematic review of randomized controlled trials. BMC Med Inform Decis Mak. 2009;9(1):11.

  18. Arkes HR, Shaffer VA, Medow MA. Patients derogate physicians who use a computer-assisted diagnostic aid. Med Decis Making. 2007;27:189–202.

  19. Shaffer VA, Probst CA, Merkle EC, Arkes HR, Medow MA. Why do patients derogate physicians who use a computer-based diagnostic support system? Med Decis Making. 2013;33(1):108–18.

  20. Kostopoulou O, Lionis C, Angelaki A, Ayis S, Durbaba S, Delaney BC. Early diagnostic suggestions improve accuracy of family physicians: a randomized controlled trial in Greece. Fam Pract. 2015;32(3):323–8. doi:10.1093/fampra/cmv012.

  21. Kostopoulou O, Rosen A, Round T, Wright E, Douiri A, Delaney BC. Early diagnostic suggestions improve accuracy of GPs: a randomised controlled trial using computer-simulated patients. Br J Gen Pract. 2015;65(630):e49–54.

  22. Porat T, Kostopoulou O, Woolley A, Delaney BC. Eliciting user decision requirements for designing computerized diagnostic support for family physicians. J Cogn Eng Decis Making. 2016;10(1):57–73.

  23. Miller A, Militello L. The role of cognitive engineering in improving clinical decision support. In: Bisantz AM, Burns CM, Fairbanks RJ, editors. Cognitive systems engineering in health care. Taylor & Francis Group; 2014. p. 7–26.

  24. Hettinger AZ, Roth E, Bisantz AM. Cognitive engineering and health informatics: applications and intersections. J Biomed Inform. 2017;67:21.

  25. Lewis JR. IBM computer usability satisfaction questionnaires: Psychometric evaluation and instructions for use. Int J Hum Comput Interact. 1995;7(1):57–78.

  26. Lewis JR. Psychometric evaluation of the PSSUQ using data from five years of usability studies. Int J Hum Comput Interact. 2002;14(3&4):463–88.

  27. Baker R. Development of a questionnaire to assess patients’ satisfaction with consultations in general practice. Br J Gen Pract. 1990;40(341):487–90.

  28. Baker R, Whitfield M. Measuring patient satisfaction: a test of construct validity. Qual Saf Health Care. 1992;1(2):104–9.

  29. Poulton BC. Use of the consultation satisfaction questionnaire to examine patients’ satisfaction with general practitioners and community nurses: reliability, replicability and discriminant validity. Br J Gen Pract. 1996;46(402):26–31.

  30. Kinnersley P, Stott N, Peters T, Harvey I, Hackett P. A comparison of methods for measuring patient satisfaction with consultations in primary care. Fam Pract. 1996;13(1):41–51. doi:10.1093/fampra/13.1.41.

  31. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101.

  32. Lu HP, Gustafson DH. An empirical study of perceived usefulness and perceived ease of use on computerized support system use over time. Int J Inf Manag. 1994;14(5):317–29.

  33. Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS Q. 2003;27(3):425–78.

  34. Stokes K, Barker R, Pigott R. Which doctors take up promising ideas? New insights from open data. Nesta, 2014 online: https://www.nesta.org.uk/sites/default/files/which_doctors_take_up_promising.pdf. Accessed 29 Jan 2017.

  35. Hsu J, Huang J, Fung V, Robertson N, Jimison H, Frankel R. Health information technology and physician-patient interactions: impact of computers on communication during outpatient primary care visits. J Am Med Inform Assoc. 2005;12(4):474–80.

  36. Nielsen J. Why you only need to test with 5 users. Alertbox; 2000. https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/. Accessed 29 Jan 2017.

  37. Das J, Kwan A, Daniels B, Satyanarayana S, Subbaraman R, et al. Use of standardised patients to assess quality of tuberculosis care: a pilot, cross-sectional study. Lancet Infect Dis. 2015;15(11):1305–13.

  38. Badger LW, DeGruy F, Hartman J, Plant MA, Leeper J, Ficken R, et al. Stability of standardized patients’ performance in a study of clinical decision making. Fam Med. 1995;27(2):126–31.

  39. Pieters HM, Touw-Otten FWWM, De Melker RD. Simulated patients in assessing consultation skills of trainees in general practice vocational training: a validity study. Med Educ. 1994;28(3):226–33.

  40. Rethans JJ, Van Boven CP. Simulated patients in general practice: a different look at the consultation. Br Med J (Clin Res Ed). 1987;294(6575):809–12.

  41. Evans SC, Roberts MC, Keeley JW, Blossom JB, Amaro CM, Garcia AM, et al. Vignette methodologies for studying clinicians’ decision-making: validity, utility, and application in ICD-11 field studies. Int J Clin Health Psychol. 2015;15(2):160–70.

Acknowledgements

We would like to thank In Practice Systems Ltd, and particularly Dr Mike Robinson, former medical director, for his support during the integration of the DSS with Vision EHR. We would also like to thank Dr Samhar Mahmoud who programmed the DSS prototype and integrated it with the Vision3 EHR system, and Derek Corrigan who developed and implemented the ontology for the DSS.

Funding

This project has received funding from the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement no 247787 [TRANSFoRm]. The funder had no role in the design and conduct of the study; collection, management, analysis and interpretation of the data; preparation, review or approval of the manuscript; and decision to submit the manuscript for publication.

Availability of data and materials

The datasets used and analysed during the current study are available from the corresponding author on reasonable request.

Authors’ contributions

TP contributed to the study design, collected, analysed and interpreted the data and drafted the manuscript. BD obtained the funding for TRANSFoRm, contributed to the study design and provided critical comments on the manuscript. OK helped to obtain the funding, designed the DSS evaluation study, and contributed to the analysis of the data and the writing of the manuscript. All authors approved the submitted manuscript.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Not applicable.

Ethics approval and consent to participate

Ethical approval for the study was granted by the Proportionate Review Sub-committee of the Health and Social Care REC B, reference 14/NI/1043. Written informed consent was obtained from participants.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author information

Correspondence to Talya Porat.

Additional files

Additional file 1:

Post-Study System Usability Questionnaire (PSSUQ). Consists of 19 questions, answered on 7-point Likert scales (from 1 “strongly disagree” to 7 “strongly agree”). (PDF 62 kb)

Additional file 2:

Consultation Satisfaction Questionnaire (CSQ). Consists of 18 questions, answered on 5-point Likert scales (from “strongly agree” to “strongly disagree”). (PDF 57 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Porat, T., Delaney, B. & Kostopoulou, O. The impact of a diagnostic decision support system on the consultation: perceptions of GPs and patients. BMC Med Inform Decis Mak 17, 79 (2017). https://doi.org/10.1186/s12911-017-0477-6
