
Human-centered design of clinical decision support for management of hypertension with chronic kidney disease

Abstract

Background

Primary care providers face challenges in recognizing and controlling hypertension in patients with chronic kidney disease (CKD). Clinical decision support (CDS) has the potential to aid clinicians in identifying patients who could benefit from medication changes. In this study, we used an iterative human-centered design process to design an alert supporting hypertension control in patients with CKD.

Methods

In this study, we present a human-centered design process employing multiple methods for gathering user requirements and feedback on design and usability. Initially, we conducted contextual inquiry sessions to gather user requirements for the CDS. This was followed by group design sessions and one-on-one formative think-aloud sessions to validate requirements, obtain feedback on the design and layout, uncover usability issues, and validate changes.

Results

This study included 20 participants. The contextual inquiry produced 10 user requirements which influenced the initial alert design. The group design sessions revealed issues related to several themes, including recommendations and clinical content that did not match providers' expectations and extraneous information on the alerts that did not provide value. Findings from the individual think-aloud sessions revealed that participants disagreed with some recommended clinical actions, requested additional information, and had concerns about the placement in their workflow. Following each step, iterative changes were made to the alert content and design.

Discussion

This study showed that participation from users throughout the design process can lead to a better understanding of user requirements and optimal design, even within the constraints of an EHR alerting system. While raising awareness of design needs, it also revealed concerns related to workflow, understandability, and relevance.

Conclusion

The human-centered design framework using multiple methods for CDS development informed the creation of an alert to assist in the recognition and treatment of hypertension in patients with CKD.


Background

There is a need to improve recognition and control of hypertension in patients with chronic kidney disease (CKD). Among patients with CKD, 52% have diagnosed hypertension, 19% have pre-hypertension, and 16% have undiagnosed hypertension [1]. Of the patients with CKD and uncontrolled hypertension, just 40% are prescribed anti-hypertensive medications [2]. Additionally, less than 10% of patients with an estimated glomerular filtration rate (eGFR) under 60 mL/min/1.73 m² are aware that they have CKD, and just 15% of patients with CKD have a documented diagnosis [2]. One method to assist physicians in the recognition and management of these patients is to provide decision support for the primary care provider within the electronic health record.

Clinical decision support (CDS) provides “clinicians or patients with clinical knowledge and patient-related information, intelligently filtered or presented at appropriate times, to enhance patient care.” [3] While rapid adoption of electronic health records (EHR) has led to increased CDS use, some studies suggest that primary care providers (PCPs) are resistant to its implementation [4,5,6,7]. In practice, CDS can lead to unintended consequences like alert fatigue, workflow obstruction, increased workload, and alert dismissal [8,9,10]. These issues are more prevalent when the CDS fits workflow poorly, has a low alert severity level, lacks informational sources, has a poor layout, fires multiple times per encounter, fires on multiple patients a day, or does not match providers’ mental models of disease [11,12,13,14,15,16]. Given that the success of novel CDS depends on whether providers use it, development and design should include feedback and input from intended users to create a usable system [17]. This feedback should aim to improve understanding of clinical tasks, workflows, physician use of the EHR, and organizational culture [18,19,20].

Previous goals of CDS design have centered on efficiency, error rates, and guideline adherence rather than the overall usability of CDS features [21, 22]. Even when the CDS is efficient, there is a greater possibility of adverse clinical outcomes when formal requirements gathering and usability testing have not focused on user preferences. These outcomes include inappropriate prescriptions, under- and overprescribing, medical errors, and CDS dismissal [23,24,25,26,27,28]. While many studies have addressed the layout, timing and efficiency, or informativeness of CDS, usability issues may still persist [29].

In this study, we followed a human-centered design (HCD) process that solicits feedback from clinicians at multiple stages, using multiple methods. Human-centered design can be defined as “an approach to interactive systems that aims to make systems usable and useful by focusing on the users, their needs and requirements, and by applying human factors/ergonomics, and usability knowledge and techniques. This approach enhances effectiveness and efficiency, improves human well-being, user satisfaction, accessibility and sustainability; and counteracts possible adverse effects of use on human health, safety and performance.” [30] Human-centered design has become a more common design and development process for provider- and patient-facing health IT applications in the last 10 years [31,32,33,34,35]. Following this approach has been shown to result in systems that are easier to use and lead to greater adoption [36,37,38,39].

The use of CDS has promise in the context of CKD, with recent studies showing that it leads to an increase in the rate of urine albumin monitoring [23, 40]. However, prior studies of CDS for CKD did not show a significant decrease in blood pressure (BP) [40]. These studies focused primarily on CKD itself, alerting clinicians to refer patients to nephrology or to order additional urine tests. Our study aimed to develop and validate CDS that synthesized existing EHR data (laboratory tests, medication orders, and vital signs) to increase recognition of CKD and uncontrolled hypertension in CKD patients and deliver evidence-based, personalized CKD and hypertension management recommendations. Specifically, we designed alerts for three overarching categories of CDS that recommend (1) initiation of a specific class of anti-hypertensive medication [Angiotensin Converting Enzyme Inhibitor (ACE)/Angiotensin Receptor Blocker (ARB)], (2) increasing the dose of ACE/ARB medications, and (3) initiation of a diuretic (hydrochlorothiazide) in patients already prescribed the maximum dose of an ACE/ARB. The objective of the study was to improve the CDS content and design using an iterative human-centered design strategy. Through this process, we aimed to create CDS that fits the providers’ workflow, presents relevant data and recommendations, and promotes higher quality care for patients with CKD.
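
To make the three alert categories concrete, the sketch below shows one way such triggering logic could be expressed over existing EHR data. It is a minimal illustration only: the thresholds (eGFR below 60 mL/min/1.73 m² as a CKD proxy, BP at or above 130/80 mmHg as uncontrolled hypertension) and the maximum-dose flag are assumptions for the example, not the study's actual trigger criteria.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientSnapshot:
    egfr: float                 # most recent eGFR, mL/min/1.73 m^2
    systolic_bp: int            # most recent office BP, mmHg
    diastolic_bp: int
    on_ace_or_arb: bool         # active ACE inhibitor or ARB prescription
    at_max_ace_arb_dose: bool   # current dose is the maximum recommended dose
    on_diuretic: bool

def select_alert_category(p: PatientSnapshot) -> Optional[int]:
    """Return the alert category (1-3) to fire, or None if no alert applies.

    Illustrative thresholds only: eGFR < 60 as a CKD proxy and
    BP >= 130/80 as uncontrolled hypertension.
    """
    has_ckd = p.egfr < 60
    uncontrolled_bp = p.systolic_bp >= 130 or p.diastolic_bp >= 80
    if not (has_ckd and uncontrolled_bp):
        return None
    if not p.on_ace_or_arb:
        return 1  # recommend initiating an ACE inhibitor or ARB
    if not p.at_max_ace_arb_dose:
        return 2  # recommend increasing the ACE/ARB dose
    if not p.on_diuretic:
        return 3  # recommend adding a diuretic (e.g., hydrochlorothiazide)
    return None
```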

Methods

To design and develop the alerts, we followed a human-centered design process focusing on the involvement of users at each stage of design and development, an understanding of user needs and an iterative process (Fig. 1). Prior to creating prototypes of the alerts, we conducted contextual inquiry sessions to help gather user requirements for the alerts. Group design sessions along with iterative design were used to validate existing requirements and gather additional requirements and feedback on alert design and layout. Finally, we conducted one-on-one formative usability sessions to uncover additional issues and evaluate existing design decisions. All sessions were recorded using Morae (TechSmith Corporation, Okemos, Michigan) screen recording software and a backup digital audio recorder. This study was approved by the Mass General Brigham Institutional Review Board under the Human Research Protection Program.

Fig. 1 Human-centered design process for design of a best practice advisory

Recruitment

Primary care providers (PCPs) from Brigham and Women’s 15 affiliated primary care clinics were emailed a recruitment letter and a frequently asked questions document by the principal investigator. Based on the simplicity of the system interface and the iterative process, the recruitment goal was a minimum of 18 total participants, with 6–8 per activity, until we reached saturation in our findings [41, 42]. A research team member followed up to schedule sessions with interested participants. Participants who responded after we reached our expected number of participants for an activity were invited to participate in subsequent activities. Informed consent was obtained from all participants.

Alert design

Brigham and Women’s affiliated primary care clinics use Epic Hyperspace (Epic Systems Corporation, Verona, Wisconsin) as the EHR. Best Practice Advisories (BPAs) were used as the format of CDS for alerting providers and giving treatment recommendations. Our decision support was intended to address three categories of antihypertensive treatment in CKD: initiation of ACE/ARB treatment, increasing the dose, and adding a diuretic [43,44,45,46]. Providers have the option of accepting the alert or overriding it. Acknowledge reasons allow the provider to give a reason for overriding the alert, either from a coded list or as a free-text description [47]. The CDS is described in further detail in other publications [46, 48].
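
As a rough illustration of the accept/override interaction described above, the sketch below models a BPA response in which an override must carry an acknowledge reason chosen from a coded list or supplied as free text. The field names and reason codes are hypothetical and do not reflect Epic's actual BPA data model.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical coded override reasons; the deployed alert used a site-specific list.
CODED_ACKNOWLEDGE_REASONS = {
    "BP_REMEASURED_NORMAL",
    "PATIENT_DECLINED",
    "WILL_ORDER_DIFFERENT_DOSE",
    "DEFER_TO_NEXT_VISIT",
}

@dataclass
class BpaResponse:
    accepted: bool                       # True = provider accepted the recommendation
    coded_reason: Optional[str] = None   # required on override unless free text is given
    free_text_reason: Optional[str] = None

    def validate(self) -> None:
        """Overrides must carry either a coded or a free-text acknowledge reason."""
        if self.accepted:
            return
        if self.coded_reason is None and not self.free_text_reason:
            raise ValueError("An acknowledge reason is required to override the alert.")
        if self.coded_reason is not None and self.coded_reason not in CODED_ACKNOWLEDGE_REASONS:
            raise ValueError(f"Unknown coded reason: {self.coded_reason}")
```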

Contextual inquiry sessions

We gathered user requirements through contextual inquiry sessions. The goal of the inquiry sessions was to understand the different activities, steps, and thinking processes involved in managing uncontrolled blood pressure using the EHR, to generate user requirements for CDS.

The contextual inquiry sessions were conducted at the participant’s office or virtually, using their own computer with the software that they use for their daily work. The moderator provided the participant with a short introduction that included information about the structure of the session and demographic questions. Participants were interviewed about their use of the EHR to manage chronic disease, the overall structure of their visits, and any challenges or issues important to them with regard to hypertension. During the second part of the session, participants were asked to show and explain how they prepare for and then go through an encounter with a patient with CKD and uncontrolled hypertension, describing their workflow and how they use the EHR to support their activities. Clinical scenarios were used to understand how providers interact with the EHR to support their decision making related to the patient’s management of CKD and hypertension.

Group design sessions

Based on the user requirements elucidated in the contextual inquiry sessions, we designed several mockups of the alerts in categories 1 and 2 (initiation of an anti-hypertensive medication and increased dose of an anti-hypertensive medication, respectively). Not all user requirements could be incorporated due to limitations of the EHR. The structure of each mock-up included the rationale and relevant statistics, guidelines, order options and acknowledge reasons (Fig. 2). To validate user requirements and learn more about provider preferences, we presented mockups to focus group participants, with several options for each category of alert. Participants were asked open-ended questions, such as “What is the first thing you notice?”, “Tell us what you like about what you are seeing here”, and “Tell us what you don’t like.” Due to the COVID-19 pandemic, group design sessions were conducted via videoconferencing using Zoom (Zoom Video Communications, San Jose, California).

Fig. 2 Alert structure. 1: justification; 2: relevant data; 3: guidelines/additional information; 4: order options; 5: acknowledge reasons

Individual think aloud sessions

Based on the results of the design sessions, we refined the alert design. We also created a mockup for a third category of alert: initiation of a diuretic (hydrochlorothiazide, HCTZ) in patients already prescribed the maximum dose of an ACE/ARB. We developed working versions of the alerts that would trigger in the sandbox environment of the EHR to use during the next phase of usability testing.

We conducted individual formative think aloud usability testing sessions to uncover additional usability issues and validate design decisions. We developed five scenarios in which one of the five alert variations would trigger (Additional file 1: Appendix A). A research team member logged into the environment and passed control of the mouse and keyboard to the participant so that they could interact with the EHR. We made changes to the mockups after conducting sessions with the first four participants and tested the newer versions with the remaining four participants.

Analysis

The human factors specialist (PG) reviewed the recordings of the contextual inquiry sessions to identify common themes across participants. The themes were reviewed with the broader research team. Themes were then translated into user requirements. Potential solutions to address these requirements in the CDS were brainstormed and subsequently investigated for feasibility within the EHR by the research team.

Group design sessions were transcribed. Transcriptions were reviewed by two members of the research team (SA, EW) who independently coded participant comments by feature/functionality and design element and came together with the human factors specialist to review discrepancies, revise codes, and reach consensus. The human factors specialist (PG) reviewed the coded comments and grouped them into 7 major themes.

Two research team members (SA, EW) reviewed the transcripts and video of the usability tests to identify usability issues encountered by the participants during the testing sessions. The research team members documented the individual user feedback, an associated quote, and contextual information describing the usability issue [49]. A content analysis was conducted by the research team to group individual participant usability issues and comments into a set of usability findings which were reviewed by the human factors specialist.

Throughout the design process, we held standing meetings with the research team and 1–2 additional subject matter experts in nephrology and informatics. Subject matter expert meetings were used to review user feedback, discuss requirements, and make preliminary decisions about content and design. Build meetings were used to discuss the limitations of Epic and technical issues. Full team meetings, which included multiple highly experienced informatics researchers and clinical trial experts, served as the final decision-making body.

Results

Recruitment emails were sent to 212 PCPs. Of the respondents who indicated interest in participating in the human-centered design process, 6 took part in the contextual inquiry sessions, 9 in the group design sessions, and 8 in the individual think aloud sessions. Three of the contextual inquiry participants also participated in the individual think aloud sessions, for a total of 20 participants. Eight of the 20 participants were female and 12 were male. All were PCPs, consisting of 18 physicians, 1 nurse practitioner, and 1 physician assistant (Table 1).

Table 1 Participant characteristics and stage(s) of involvement

Contextual inquiry sessions

The sessions were conducted from May 2019 to October 2019. Six participants agreed to participate; 5 of the 6 were male. Participants averaged more than 17 years in clinical practice and had used the current EHR for 2–7 years.

The contextual inquiry sessions resulted in several insights related to how physicians use the EHR to structure their visits with patients, retrieve and document patient data specifically related to hypertension and CKD, and determine a plan for these patients.

We found that providers review labs and vitals with the patient in the exam room, as they work through each condition. Most providers preferred to view data graphically or in tabular format when available to identify trends and correlations between labs and medications. They also said that during the visit, they scan the data for abnormal values or other things that stand out. All providers used the note as the focal point of their visit by reviewing their previous notes and capturing important information in the current note. Providers also described typically talking with the patient first and addressing their reason for visit, then addressing other issues if time allowed.

Multiple providers described specific challenges faced when making decisions about adding or changing a medication for hypertension. These included difficulty accessing relevant historical data, such as medication history and side effects. In addition, knowledge of how adherent a patient is to their current medication regimen can be difficult to attain. Other providers highlighted difficulty capturing accurate BP values due to the availability of appropriate cuffs or other factors affecting a patient’s BP. Providers often repeat BP measurements during the visit. In addition, home BP monitoring is captured only in provider notes and can therefore be difficult to track over time.

Multiple observations and comments from providers centered on accessing information about the context of a specific BP measurement. Information about where and how the BP was taken, when it was taken in relation to the time of the visit, and the patient’s physical, emotional, and mental state at the time is often considered in the provider’s decision making.

Finally, the issue of alert fatigue was raised: many providers ignored alerts and expressed concerns about receiving multiple alerts that often do not have clear and actionable messages. Providers also described difficulty clearly identifying high-priority alerts and knowing how to interact with them.

The above insights were translated into 10 user requirements (Table 2) by the human factors specialist (PG).

Table 2 User requirements and potential solutions based on the insights from the contextual inquiry sessions

Group design sessions

These sessions were held in April 2020. We recruited 9 participants in total, divided into 3 focus groups. After the transcripts were categorized and coded, comments were grouped into seven categories of usability findings (Table 3). Changes to the mock-ups were made based on this feedback (Figs. 3, 4).

Table 3 User feedback themes from the group design sessions and changes made to address the feedback
Fig. 3 Changes in alert content and formatting through each iteration

Fig. 4 Specific changes to the acknowledge reasons. These were made to better fit provider needs and workflow

Individual think aloud sessions

The individual think aloud sessions were conducted from May to June 2020. We recruited eight PCPs from Brigham and Women’s Hospital-affiliated primary care practices. Participants had an average of 6 years of experience using the Epic EHR, and their time in practice ranged from 1 to 40 years. All participants identified as intermediate or expert users of technology. The most common findings were that users disagreed with recommended clinical actions, requested additional clinical information, did not have consensus on informing patients of CKD and hypertension in the after-visit summary, and had concerns about the alert placement within their workflow. Some also required additional explanation of the function and behavior of acknowledge reasons (Table 4).

Table 4 User feedback on the alerts during individual think-aloud sessions

Based on the findings from the individual sessions, we implemented several changes (Figs. 2, 3). Users had different preferences for starting doses of medications, so we included an acknowledge reason allowing them to indicate that they intend to order a different dose. The deferral period for the BPA was changed from 1 month to the next visit to make it clearer. Options to discuss with the patient and to review the chart were added to account for provider workflow, since there were constraints on the available trigger points in the EHR. To provide enough clinical context, we included dates for all the lab values related to the diagnosis of CKD presented in the alert, as well as the most recent potassium value. Because participants expressed conflicting views on including information about CKD for the patient in the After Visit Summary, we decided to remove it. We were unable to address the issue of modifying an existing order from the alert, rather than discontinuing the existing order and adding a new one, since this is a constraint within Epic. In addition, we were not able to address the interaction between the order buttons and acknowledge reasons that some users had difficulty with. When the “Do Not Order” button is selected for each of the orders, an acknowledge reason is required. Once an acknowledge reason is selected, all the order buttons automatically switch to “Do Not Order,” even if the user previously chose to order one of them, such as a basic metabolic panel. In some cases, the provider did not notice this and believed they had placed an order for a basic metabolic panel when it had in fact been removed.
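
To illustrate the order-button and acknowledge-reason coupling that confused some participants, the sketch below reproduces the described behavior in simplified form: selecting any acknowledge reason flips every order button back to “Do Not Order”, silently discarding an order (such as a basic metabolic panel) the provider had already selected. This is a hypothetical reconstruction of the interaction, not the EHR's actual implementation.

```python
class AlertOrderPanel:
    """Simplified model of the alert's order buttons and acknowledge reason."""

    def __init__(self, order_names):
        # Each order starts unselected; True = "Order", False = "Do Not Order".
        self.orders = {name: False for name in order_names}
        self.acknowledge_reason = None

    def choose_order(self, name):
        self.orders[name] = True

    def set_acknowledge_reason(self, reason):
        # Reproduces the problematic behavior: picking a reason resets
        # every order button to "Do Not Order", even ones already selected.
        self.acknowledge_reason = reason
        for name in self.orders:
            self.orders[name] = False

panel = AlertOrderPanel(["lisinopril 10 mg", "basic metabolic panel"])
panel.choose_order("basic metabolic panel")       # provider intends to order the BMP
panel.set_acknowledge_reason("Patient declined")  # BMP order is silently cleared
assert panel.orders["basic metabolic panel"] is False
```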

Discussion

This usability study designing CDS for medication management of hypertension in CKD revealed the importance of: (1) integrating the CDS into both the clinical workflow (i.e., after the BP has been checked) and the clinical decision-making process, (2) providing actionable and clear recommendations, (3) including relevant contextual information, and (4) providing a simple and efficient interface. Our study showed that participation from users throughout the design process results in feedback that can be translated into user requirements and validation of design decisions.

Challenges related to the adoption of CDS such as alert fatigue and provider burden are well known. Our findings, along with those identified in other studies, highlight improvements to CDS that include removing extra clicks, allowing deferral to a later date, adding more labs and their specific dates, and making the alerts more concise and visually appealing [41, 42]. Moreover, specific challenges with providing guidance to primary care providers regarding CKD and hypertension include variation in provider thresholds for prescribing, and access to rapidly changing and often conflicting guidelines for prevention and management [50]. Our results highlight some of these challenges. Providers expressed the need for enough relevant clinical data within the alert; they also expressed a reluctance to make a prescribing decision without reviewing additional clinical data, talking to the patient or repeating measurements to ensure they consider the current context and all of the patient’s individual factors. This aligns with research that found that managing hypertension for patients with CKD is not a “one size fits all” approach but rather requires a targeted approach [43, 51]. Discovering this user need early in our design process led to design decisions that allowed the providers flexibility in deferring the alert until a later time.

In addition, some of the decisions we made addressed clinical concerns that are not official contraindications but are common in practice. We added disclaimers about the acceptability of an increase in serum creatinine after starting certain recommended medications and a warning about teratogenicity risk for women of childbearing age.

In some cases, the provider preferences we discovered differed from previous research. For example, we previously learned that PCPs wanted a reminder to consider a referral or e-consult to nephrology, and this was confirmed by primary care stakeholders during our design process [35]. However, during our testing we found that providing both an option to order a nephrology referral and an option to order a nephrology e-consult with each alert was not necessary. Some research suggests that there are barriers to co-managing patients with CKD between primary care providers and nephrologists, specifically around roles and responsibilities, which could potentially explain the reluctance to seek an e-consult [35, 36]. In addition, we found that our providers did not feel that showing a picture of the department head in the alert or having a separate link to lab data would be helpful, as was found in prior research [52, 53].

We were not able to fully address some of the more frequently identified issues due to constraints of the EHR system. Ideally, we would be able to create a system that met all of the users’ requirements. Providers have diverse encounter workflows. The alert firing at the start of an encounter may be a significant roadblock for some providers and lead them to ignore the alert [54]. However, providers indicated this was preferable given the EHR constraints. For providers who rarely interact with the patient chart in the room or who work closely with other clinicians, managing when and by whom the alert is addressed is challenging. The constraints on the design of the order buttons and acknowledge reasons could significantly impact the user’s ability to complete the task and lead to considerable frustration [55]. Allowing the provider to edit an existing order could improve the user’s efficiency with the system by saving several mouse clicks and add value to using the BPA. While we did have some challenges balancing the user requirements with the available solutions within the EHR, the system did offer the flexibility to turn off alerts for any individual provider. It also allowed flexibility in presenting whether the current prescription was an ACE or an ARB for the third category of BPA, which recommends initiating a diuretic in patients currently on the maximum dose of an ACE or ARB. Throughout the design process, the ability to edit the display text and add links allowed for appropriate updates following feedback on relevant data, information, and guidelines.

While the human-centered design process outlined in this study can apply to a wide range of settings, there may be some limitations to the final alert design due to the context of testing. Based on local prescription trends, we included a diuretic rather than a calcium channel blocker as the next-line agent, and we chose hydrochlorothiazide over chlorthalidone. These trends may vary by region or provider. The testing process was limited by the COVID-19 outbreak, which required all sessions to be conducted remotely. As a result, we were unable to perform near-live simulation or to replicate circumstances such as being behind schedule in clinic, which limited in-person user requirements gathering and could affect how providers interact with the alert. Provider understanding of a real patient’s history may also impact their treatment plan. In addition, we were limited by our own institution’s governance structure in terms of what technology solutions were allowed. Finally, the providers who agreed to participate in this testing could represent a group that would be more willing to spend the time to read through the information included in the alerts.

Conclusion

By involving the user at multiple stages of design, we were able to refine our alerts to make them better suited for clinical decision making by providing relevant data, justification, treatment options, and acknowledge reasons. While we used this HCD framework to design CDS tailored to PCPs using Epic in the context of CKD recognition and treatment, this method can be an effective way to discover user needs for CDS regardless of the EHR or clinical scenario.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

References:

  1. Crews DC, Plantinga LC, Miller ER 3rd, Saran R, Hedgeman E, Saydah SH, Williams DE, Powe NR. Prevalence of chronic kidney disease in persons with undiagnosed or prehypertension in the United States. Hypertension. 2010;55(5):1102–9.


  2. Plantinga LC, Tuot DS, Powe NR. Awareness of chronic kidney disease among patients and providers. Adv Chronic Kidney Dis. 2010;17(3):225–36.


  3. Osheroff JA, Pifer EA, Sittig DF, Jenders RA, Teich JM. Clinical decision support implementers’ workbook. Chicago: HIMSS; 2004. p. 68.


  4. Khalifa M, Zabani I. Improving utilization of clinical decision support systems by reducing alert fatigue: strategies and recommendations. Stud Health Technol Inform. 2016;226:51–4.


  5. Phansalkar S, Zachariah M, Seidling HM, Mendes C, Volk L, Bates DW. Evaluation of medication alerts in electronic health records for compliance with human factors principles. J Am Med Inform Assoc. 2014;21(e2):e332-340.


  6. Shah NR, Seger AC, Seger DL, Fiskio JM, Kuperman GJ, Blumenfeld B, Recklet EG, Bates DW, Gandhi TK. Improving acceptance of computerized prescribing alerts in ambulatory care. J Am Med Inform Assoc. 2006;13(1):5–11.


  7. Edrees H, Amato MG, Wong A, Seger DL, Bates DW. High-priority drug-drug interaction clinical decision support overrides in a newly implemented commercial computerized provider order-entry system: override appropriateness and adverse drug events. J Am Med Inform Assoc. 2020;27(6):893–900.


  8. Jankovic I, Chen JH. Clinical decision support and implications for the clinician burnout crisis. Yearb Med Inform. 2020;29(1):145–54.


  9. Sutton RT, Pincock D, Baumgart DC, Sadowski DC, Fedorak RN, Kroeker KI. An overview of clinical decision support systems: benefits, risks, and strategies for success. NPJ Digit Med. 2020;3:17–17.


  10. Trinkley KE, Blakeslee WW, Matlock DD, Kao DP, Van Matre AG, Harrison R, Larson CL, Kostman N, Nelson JA, Lin C-T, et al. Clinician preferences for computerised clinical decision support for medications in primary care: a focus group study. BMJ Health Care Inform. 2019;26(1):e000015.


  11. Ancker JS, Edwards A, Nosal S, Hauser D, Mauer E, Kaushal R. Effects of workload, work complexity, and repeated alerts on alert fatigue in a clinical decision support system. BMC Med Inform Decis Mak. 2017;17(1):36.


  12. Campbell EM, Sittig DF, Ash JS, Guappone KP, Dykstra RH. Types of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc. 2006;13(5):547–56.


  13. Curran RL, Kukhareva PV, Taft T, Weir CR, Reese TJ, Nanjo C, Rodriguez-Loya S, Martin DK, Warner PB, Shields DE, et al. Integrated displays to improve chronic disease management in ambulatory care: a SMART on FHIR application informed by mixed-methods user testing. J Am Med Inform Assoc. 2020;27(8):1225–34.


  14. Gregory ME, Russo E, Singh H. Electronic health record alert-related workload as a predictor of burnout in primary care providers. Appl Clin Inform. 2017;8(3):686–97.


  15. Kesselheim AS, Cresswell K, Phansalkar S, Bates DW, Sheikh A. Clinical decision support systems could be modified to reduce “alert fatigue” while still minimizing the risk of litigation. Health Aff (Millwood). 2011;30(12):2310–7.


  16. Phansalkar S, van der Sijs H, Tucker AD, Desai AA, Bell DS, Teich JM, Middleton B, Bates DW. Drug-drug interactions that should be non-interruptive in order to reduce alert fatigue in electronic health records. J Am Med Inform Assoc. 2013;20(3):489–93.


  17. Westerbeek L, Ploegmakers KJ, de Bruijn G-J, Linn AJ, van Weert JCM, Daams JG, van der Velde N, van Weert HC, Abu-Hanna A, Medlock S. Barriers and facilitators influencing medication-related CDSS acceptance according to clinicians: a systematic review. Int J Med Informatics. 2021;152:104506.


  18. Bates DW, Kuperman GJ, Wang S, Gandhi T, Kittler A, Volk L, Spurr C, Khorasani R, Tanasijevic M, Middleton B. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc. 2003;10(6):523–30.


  19. Neri PM, Redden L, Poole S, Pozner CN, Horsky J, Raja AS, Poon E, Schiff G, Landman A. Emergency medicine resident physicians’ perceptions of electronic documentation and workflow: a mixed methods study. Appl Clin Inform. 2015;6(1):27–41.


  20. Lam Shin Cheung J, Paolucci N, Price C, Sykes J, Gupta S. A system uptake analysis and GUIDES checklist evaluation of the Electronic Asthma Management System: a point-of-care computerized clinical decision support system. J Am Med Inform Assoc. 2020;27(5):726–37.


  21. Arts DL, Abu-Hanna A, Medlock SK, van Weert HC. Effectiveness and usage of a decision support system to improve stroke prevention in general practice: a cluster randomized controlled trial. PLoS ONE. 2017;12(2):e0170974.


  22. Sim LL, Ban KH, Tan TW, Sethi SK, Loh TP. Development of a clinical decision support system for diabetes care: a pilot study. PLoS ONE. 2017;12(2):e0173021.


  23. Abdel-Kader K, Fischer GS, Li J, Moore CG, Hess R, Unruh ML. Automated clinical reminders for primary care providers in the care of CKD: a small cluster-randomized controlled trial. Am J Kidney Dis. 2011;58(6):894–902.


  24. Bright TJ, Wong A, Dhurjati R, Bristow E, Bastian L, Coeytaux RR, Samsa G, Hasselblad V, Williams JW, Musty MD, et al. Effect of clinical decision-support systems: a systematic review. Ann Intern Med. 2012;157(1):29–43.


  25. Harrison MI, Koppel R, Bar-Lev S. Unintended consequences of information technologies in health care—an interactive sociotechnical analysis. J Am Med Inform Assoc. 2007;14(5):542–9.


  26. Koppel R, Metlay JP, Cohen A, Abaluck B, Localio AR, Kimmel SE, Strom BL. Role of computerized physician order entry systems in facilitating medication errors. JAMA. 2005;293(10):1197–203.


  27. Lopez-Rodriguez JA, Rogero-Blanco E, Aza-Pascual-Salcedo M, Lopez-Verde F, Pico-Soler V, Leiva-Fernandez F, Prados-Torres JD, Prados-Torres A, Cura-González I. Potentially inappropriate prescriptions according to explicit and implicit criteria in patients with multimorbidity and polypharmacy. MULTIPAP: a cross-sectional study. PLoS ONE. 2020;15(8):e0237186.


  28. Orenstein EW, Boudreaux J, Rollins M, Jones J, Bryant C, Karavite D, Muthu N, Hike J, Williams H, Kilgore T, et al. Formative usability testing reduces severe blood product ordering errors. Appl Clin Inform. 2019;10(5):981–90.


  29. Russ AL, Chen S, Melton BL, Johnson EG, Spina JR, Weiner M, Zillich AJ. A novel design for drug-drug interaction alerts improves prescribing efficiency. Jt Comm J Qual Patient Saf. 2015;41(9):396–405.


  30. International Organization for Standardization. ISO 9241-210: Ergonomics of human-system interaction—Part 210: Human-centred design for interactive systems. 2010.

  31. Health IT Usability. https://www.nist.gov/programs-projects/health-it-usability.

  32. Karsh BT, Weinger MB, Abbott PA, Wears RL. Health information technology: fallacies and sober realities. J Am Med Inform Assoc. 2010;17(6):617–23.


  33. Kushniruk AW, Patel VL. Cognitive and usability engineering methods for the evaluation of clinical information systems. J Biomed Inform. 2004;37(1):56–76.


  34. Melles M, Albayrak A, Goossens R. Innovating health care: key characteristics of human-centered design. Int J Qual Health Care. 2020;33(Supplement_1):37–44.


  35. Shahmoradi L, Safdari R, Ahmadi H, Zahmatkeshan M. Clinical decision support systems-based interventions to improve medication outcomes: a systematic literature review on features and effects. Med J Islam Repub Iran. 2021;35:27–27.


  36. Brunner J, Chuang E, Goldzweig C, Cain CL, Sugar C, Yano EM. User-centered design to improve clinical decision support in primary care. Int J Med Inform. 2017;104:56–64.


  37. Keniston A, McBeth L, Pell J Sr, Bowden K, Ball S, Stoebner K, Scherzberg E, Moore SL, Nordhagen J, Anthony A, et al. Development and implementation of a multidisciplinary electronic discharge readiness tool: user-centered design approach. JMIR Hum Factors. 2021;8(2):e24038.


  38. Nguyen KA, Patel H, Haggstrom DA, Zillich AJ, Imperiale TF, Russ AL. Utilizing a user-centered approach to develop and assess pharmacogenomic clinical decision support for thiopurine methyltransferase. BMC Med Inform Decis Mak. 2019;19(1):194–194.


  39. Toni E, Pirnejad H, Makhdoomi K, Mivefroshan A, Niazkhani Z. Patient empowerment through a user-centered design of an electronic personal health record: a qualitative study of user requirements in chronic kidney disease. BMC Med Inform Decis Mak. 2021;21(1):329–329.


  40. Litvin CB, Hyer JM, Ornstein SM. Use of clinical decision support to improve primary care identification and management of chronic kidney disease (CKD). J Am Board Fam Med. 2016;29(5):604–12.


  41. Faulkner L. Beyond the five-user assumption: benefits of increased sample sizes in usability testing. Behav Res Methods Instrum Comput. 2003;35(3):379–83.


  42. Macefield R. How to specify the participant group size for usability studies: a practitioner’s guide. J Usability Stud. 2009;5(1):34–5.


  43. Pugh D, Gallacher PJ, Dhaun N. Management of hypertension in chronic kidney disease. Drugs. 2019;79(4):365–79.


  44. Sinha AD, Agarwal R. Clinical pharmacology of antihypertensive therapy for the treatment of hypertension in CKD. Clin J Am Soc Nephrol. 2019;14(5):757–64.


  45. Cheung AK, Chang TI, Cushman WC, Furth SL, Hou FF, Ix JH, Knoll GA, Muntner P, Pecoits-Filho R, Sarnak MJ, Tobe SW, Tomson CRV, Mann JFE. KDIGO 2021 clinical practice guideline for the management of blood pressure in chronic kidney disease. Kidney Int. 2021;99(3):S1–87.


  46. Samal L, D'Amore JD, Gannon MP, Kilgallon JL, Charles J, Mann DM, Siegel LC, Burdge K, Shaykevich S, Waikar SS, et al. Impact of kidney failure risk prediction clinical decision support on monitoring and referral in primary care management of chronic kidney disease: a randomized pragmatic clinical trial. Kidney Med (in press).

  47. Wright A, McEvoy DS, Aaron S, McCoy AB, Amato MG, Kim H, Ai A, Cimino JJ, Desai BR, El-Kareh R, et al. Structured override reasons for drug-drug interaction alerts in electronic health records. J Am Med Inform Assoc. 2019;26(10):934–42.


  48. Kilgallon JL, Gannon M, Burns Z, McMahon G, Dykes P, Linder J, Bates DW, Waikar S, Lipsitz S, Baer HJ, et al. Multicomponent intervention to improve blood pressure management in chronic kidney disease: a protocol for a pragmatic clinical trial. BMJ Open. 2021;11(12):e054065.


  49. McDonald N, Schoenebeck S, Forte A. Reliability and inter-rater reliability in qualitative research: norms and guidelines for CSCW and HCI practice. Proc ACM Hum Comput Interact. 2019;3(12):1–23.


  50. Cha R-H, Lee H, Lee JP, Song YR, Kim SG, Kim YS. Physician perceptions of blood pressure control in patients with chronic kidney disease and target blood pressure achievement rate. Kidney Res Clin Pract. 2017;36(4):349–57.


  51. Sobrinho A, da Silva LD, Perkusich A, Pinheiro ME, Cunha P. Design and evaluation of a mobile application to assist the self-monitoring of the chronic kidney disease in developing countries. BMC Med Inform Decis Mak. 2018;18(1):7.


  52. Chattopadhyay D, Verma N, Duke J, Bolchini D. Design and evaluation of trust-eliciting cues in drug-drug interaction alerts. Interact Comput. 2018;30(2):85–98.


  53. Kunstler BE, Furler J, Holmes-Truscott E, McLachlan H, Boyle D, Lo S, Speight J, O’Neal D, Audehm R, Kilov G, et al. Guiding glucose management discussions among adults with type 2 diabetes in general practice: development and pretesting of a clinical decision support tool prototype embedded in an electronic medical record. JMIR Form Res. 2020;4(9):e17785.


  54. Ramirez M, Maranon R, Fu J, Chon JS, Chen K, Mangione CM, Moreno G, Bell DS. Primary care provider adherence to an alert for intensification of diabetes blood pressure medications before and after the addition of a “chart closure” hard stop. J Am Med Inform Assoc JAMIA. 2018;25(9):1167–74.


  55. Zheng K, Hanauer DA, Padman R, Johnson MP, Hussain AA, Ye W, Zhou X, Diamond HS. Handling anticipated exceptions in clinical care: investigating clinician use of “exit strategies” in an electronic health records system. J Am Med Inform Assoc JAMIA. 2011;18(6):883–9.



Acknowledgements

Adam Wright, Allison McCoy, Gearoid McMahon, and Hojjat Salmasian provided input and insight in the Subject Matter Expert (SME) meetings. John Kilgallon assisted with data collection and manuscript submission.

Funding

This work was supported by the National Institute of Diabetes and Digestive and Kidney Diseases Grant Number R01DK116898.

Author information

Contributions

PG, SA, EW, ZB, and LS designed the study and collected data. MG, PG, SA wrote the main manuscript text. MG and PG prepared the figures. All authors reviewed the manuscript.

Corresponding author

Correspondence to Pamela M. Garabedian.

Ethics declarations

Ethics approval and consent to participate

All subjects were provided with information about the study, and all methods were carried out in accordance with relevant guidelines. Informed consent was obtained from all research subjects. This study was approved by the ethics committee of the Mass General Brigham Human Research Protection Program (Protocol #: 2018P000692), which approves research at Brigham and Women’s Hospital.

Consent for publication

Not applicable.

Competing interests

The authors do not have competing interests.


Supplementary Information

Additional file 1. Appendix A

Usability Test Script.

