  • Research article
  • Open access

Examining clinician choice to follow-up (or not) on automated notifications of medication non-adherence by clinical decision support systems



Maintaining medication adherence can be challenging for people living with mental ill-health. Clinical decision support systems (CDSS) based on automated detection of problematic patterns in Electronic Health Records (EHRs) have the potential to enable early intervention into non-adherence events (“flags”) by suggesting evidence-based courses of action. However, extant literature identifies multiple barriers to clinician follow-up in real-world settings—perceived lack of benefit in following up low-risk cases, concerns about data veracity, and human-centred design shortcomings, among others. This study examined patterns in clinician decision-making behaviour related to follow-up of non-adherence prompts within a community mental health clinic.


The prompts for follow-up, and the recording of clinician responses, were enabled by CDSS software (AI2). De-identified clinician notes recorded after reviewing a prompt were analysed using a thematic synthesis approach—starting with descriptions of clinician comments, then sorting into analytical themes related to design and, in parallel, a priori categories describing follow-up behaviours. Hypotheses derived from the literature about the follow-up categories’ relationships with client and medication-subtype characteristics were tested.


The majority of clients were Not Followed-up (n = 260; 78%; Followed-up: n = 71; 22%). The analytical themes emerging from the decision notes suggested contextual factors—the clients’ environment, their clinical relationships, and medical needs—mediated how clinicians interacted with the CDSS flags. Significant differences were found between medication subtypes and follow-up, with Anti-depressants less likely to be followed up than Anti-psychotics and Anxiolytics (χ² = 35.196, 44.825; p < 0.001; v = 0.389, 0.499); and between the time taken to action Followed-up (M = 31.78 days) and Not Followed-up (M = 45.55 days) flags (U = 12,119; p < 0.001; η² = .05).


These analyses encourage actively incorporating the input of consumers and carers, non-EHR data streams, and better incorporation of data from parallel health systems and other clinicians into CDSS designs to encourage follow-up.



Medication adherence—that is, a person consistently and correctly following a mutually agreed upon, collaborative plan made with their clinician for using medication to manage a condition [1]—is core to successfully managing chronic conditions across a variety of populations [2,3,4,5,6,7].

Medications are often an essential component of plans to reduce the risk of relapse for many people diagnosed with complex mental illnesses; as such, they are core parts of the lives of many people living with schizophrenia [3, 8], bipolar disorder [9, 10], and major depressive disorder [11]. Investigating means of promoting and maintaining adherence to these medications is, therefore, a priority in clinical mental health research [4, 5]. Indeed, between 1 in 4 and 1 in 2 people who take antipsychotics become non-adherent during the course of their illness, and half of people diagnosed with bipolar disorder are estimated to become non-adherent to their medications at least once during the long-term course of their illness [10, 12,13,14]. This can occur for a variety of reasons. Supporting one perspective, a systematic review exploring the identification of “potentially modifiable” (i.e., feasible bases for developing interventions) reasons for non-adherence to anti-psychotics identified poor insight, substance abuse, negative attitudes toward medication, and side-effects—among others [6]. However, other studies have argued that non-adherence is not solved by policing or sidelining concerns and beliefs about medication [10]. First, people—regardless of the perceived severity of their illnesses—can and do discontinue medication for valid and well thought out reasons [15]. Additionally, generalising to individuals the population-level outcomes used to justify common pharmacotherapeutic treatments may delay recovery for some [16]. Regardless of perspective, core to protecting the long-term mental and physical health outcomes of people with complex mental illnesses is ongoing support from their clinicians and community [16,17,18]; and, where non-adherence is identified as problematic, that support should be provided in a context that encourages candour, trust, and transparency from and between all parties [15, 19, 20].

However, complexity of illness is not the sole factor predicting non-adherence, nor do “lower-risk” diagnoses and psychiatric medications necessarily pose less risk to people’s health if abruptly discontinued; antidepressants [7, 13, 17, 18, 21, 22], which are prescribed to an increasing share of the population, are a notable example. Indeed, two Selective Serotonin Reuptake Inhibitors (SSRIs)—Escitalopram (5.47 million prescriptions) and Sertraline (5.12 million prescriptions)—appeared in the top ten drugs by prescription count in Australia in 2020–21 [23]. A naturalistic study of a sample from the United States of America found that—when participants were asked about their medication adherence in the year prior to the study—22% of antidepressant users had discontinued antidepressants without clinician advice or approval [13]. Another study found that the rate of discontinuation also increases over time in antidepressant users, showing adherence rates of only 37.6% at three months and 18.9% at six months [21]. These data, alone, are not necessarily cause for concern—but they are important to keep in mind when contrasting the relatively low risk assigned to antidepressant discontinuation effects in policy [17] with recent literature [7, 18, 24]. For example, a recent systematic review found 56% of people discontinuing antidepressants experienced withdrawal effects, 46% of whom described them as severe and longer-lasting than outlined in current UK and US guidelines [18].
These potentially urgent grounds for intervention are complicated further by these events often coinciding with the termination of the clinical relationship [22], making important clinical scaffolds for discontinuation—identifying facilitators for successful discontinuation, co-designing a personalised plan with the person discontinuing, relapse planning, involving a family member or trusted other, and setting up continuity of care provision with the person discontinuing [25, 26]—a virtual impossibility.

Clinical decision support systems

Given the adverse consequences for many people who discontinue their medication [26,27,28], early intervention is key [29]. Digital tools offer potential means for health services to proactively provide care and support in these contexts. Clinical decision support systems (CDSS) are an example of such a tool. CDSS first curate data from sources including but not limited to: electronic health records/clinical information systems [30, 31], sensing technologies ranging from consumer products like mattress sensors to therapeutic devices like continuous glucose monitoring systems [32, 33], SMS surveys of clients [34], and self-monitoring apps [35, 36], amongst others. These data are then presented to users in a manner that informs a clinical decision—either through algorithmic interpretation, using decision rules based on a pre-existing knowledgebase to suggest a course of action [37], or simply through a more intuitive presentation of the raw data [38]. These systems assist in making sense of sometimes vast data, transforming individuals’ patterns in service use, medication adherence, and in some cases elements of their day-to-day life into a form more immediately legible to clinicians [39]. This making-legible of raw data in turn enables the development and delivery of interventions with, theoretically, highly granular levels of client-specificity that would not be feasibly achievable at scale and within the time constraints of a human agent; both augmenting human-delivered support at the point of care and potentially enabling tailored, automated follow-up independent of traditional in-person contact with a clinician [37]. In the context of medication adherence this latter consideration is particularly important, with multiple authors emphasising the lack of a “one-size-fits-all” intervention, and the need to tailor any approach on a client-by-client basis using nuanced, ecological insights into clients’ lives [19, 20, 40].
Finally, where these data streams are real-time or close to real-time, clinical teams can monitor and evaluate the success of these interventions and respond to any changes in the client’s state, should they arise, in a proactive and timely manner.

While the systems described above certainly have the potential to reduce the burden of relapse and deterioration of mental health associated with non-adherence on clients and services, the evidence in the literature is ambivalent [31, 41,42,43,44,45]. Indeed, reviews have consistently noted the low quality of evidence, risk of bias, and need for further research in this field [42, 45, 46]. Regardless, CDSS are already in use for the management of some high-risk medications within mental health services in Australia—for example, in clozapine management to enable proactive intervention into non-adherence triggered relapse and early detection of adverse events and side effects [47, 48]. Authors of a recent, 5-year database study of antipsychotic utilisation and persistence in a large Australian sample conclude that oral Clozapine’s significant persistence in comparison to both other oral anti-psychotics and Long-Acting Injectable antipsychotics could be attributable not only to efficacy but intensity of follow-up [49].

Contextual barriers to decision support

Thorough and multifaceted work on a variety of fronts is required when designing these tools and their associated interventions. Proficiently executing facets of CDSS development like user interfaces, user experience, and balancing alert fatigue with under-prompting are rightly identified by many as important for success [46, 50,51,52]. However, equally important is the manner in which a CDSS integrates into both the workflows and self-perceptions of its future users [53, 54].

Regarding the latter, clinicians have shown resistance to the use of algorithms in healthcare—both “analogue”, in the early days of guideline-based care [55, 56], and digital [41, 53]. This resistance stems from clinicians’ strengths in adaptive expertise [57], but can also limit acceptance of other experts’ opinions [53, 55]. For researchers committed to actualising the potential for CDSS to enable proactive and evidence-based care, knowledge of these complexities and their effect on behaviour is crucial to success but can be elusive—emerging more prominently in the naturalistic, day-to-day work performed by clinicians than under controlled circumstances [41, 49, 53, 56, 58]. This is important to consider in the context of study designs for evaluating CDSS. The results of Randomised Controlled Trials (RCTs), where clinician actions are strictly protocolised, may not fully reflect the behavioural and practical realities of clinical practice [41, 58, 59]. Time-limited, protocolised workflows introduce an artificial “order” to clinical work for trial durations, masking the biases and work practices that may occur automatically, as a result of external pressures, or for any other reason in day-to-day practice outside of the trial [51, 52, 60].

Outside of these external factors, it is important to note that—for many clinicians—it is preferable for a variety of reasons to rely on their experience and judgments rather than that of a system [53]. These factors can severely impact the efficacy of CDSS interventions, regardless of trial and software design quality [41]. As such, it is important not only to understand at a systemic and organisational cultural level why CDSS implementations face challenges, but also to develop methods and design tools that can collect data from which we can establish how and why individual clinicians make decisions using these systems, naturalistically and in the moment [53, 61].

The current study

This study presents results from the pilot of a real-time, CDSS-integrated technique to gather data showing how, why, and when clinicians acted on automated medication non-adherence flags, aiming to: (1) describe patterns of follow-up behaviour within these non-adherence data; (2) identify areas for design intervention within CDSS; and (3) identify any relationship between client and medication subtype characteristics and the likelihood of follow-up. These flags were generated by a CDSS—Actionable Intime Insights (AI2), a web-based medication and appointment adherence CDSS—using data from the Australian Medicare claims databases [29, 30, 39]. Free-text justifications of decisions to follow-up or not follow-up were input by clinicians throughout the trial, and extracted in parallel with other flag metadata, including medication subtype, client ID, and days taken to action the flag.

Descriptive data outlining decision behaviours—alongside medication-subtype and client characteristics—were extracted from the raw AI2 flags. These data were then analysed and synthesised using parallel qualitative and mixed methods [62,63,64,65]: first, through thematic synthesis, with analytical themes generated through qualitative synthesis of the descriptive codes [63]; second, hypothesised relationships between medication-subtype and client characteristics with follow-up were tested using inferential statistical techniques [62, 65]. The discussion presents a summary and synthesis of these findings, focussing on the implications for future CDSS design and implementation studies.


Ethics and consent to participate

The AI2 study protocol was approved by the Southern Adelaide Local Health Network Clinical Research Ethics Committee (AK03478) and published prospectively [29]. Informed consent was obtained from the clinicians participating in this study. As per the My Health Records Act (2012) legislation, all consenting clinicians have rights to use the AI2 CDSS to access health records of patients for the purposes of care provision without requiring explicit consent. The extraction and analysis of de-identified AI2 CDSS data for this study was in accordance with the guidelines approved by the ethics committee.

Abridged primary trial procedure

As this study analyses data from the AI2 implementation, relevant details of the design of that study have been included here to contextualise this analysis.



Two clinical monitors used the AI2 decision support software prospectively with 354 clients under their care management, choosing to follow-up or not follow-up on flags as they were raised:

  1. A Social Worker and Team Leader within the service; and

  2. A Senior Consulting Psychiatrist, Author JS.

Clients seen by clinical monitors had: (1) attended the community mental health clinic associated with this study at least once in the six months prior to the study; (2) been prescribed medication for their mental health condition; (3) a My Health Record (Australia’s national digital health record); and (4) been registered in the clinic’s client information systems and subsequently enrolled in AI2 and monitored for non-adherence between 1 July 2019 and 28 February 2020.

Materials: AI2

Figure 1 illustrates the interactive flow with AI2 experienced by clinicians in the trial in more detail; more detail about the software and primary trial is reported elsewhere [29, 30, 38, 39, 66]. The pilot trial studied the implementation and impact of AI2 by incorporating it into the usual provision of care at the pilot site. As such, the protocol included no specifications about when to follow up, worked within the team structures and staffing resources available at the site, and within the day-to-day working norms of the clinical monitors [29, 30]. This approach was chosen to allow for insights closer to the naturalistic conditions facing implementations in practice.

Fig. 1
figure 1

Flow diagram demonstrating interactive patterns with AI2 alerts

The procedure for clinicians using AI2 involves the following steps:

  1. Reviewing non-adherence flags on the dashboard (Fig. 2).

    Fig. 2
    figure 2

    AI2 Alerts Dashboard using primary trial investigator names for example purposes

  2. If a non-adherence flag, in the reviewing clinician’s judgment, warrants further investigation, they examine the client’s records—including:

     2a) The timeline within AI2 (Fig. 3), which visualises patterns in medication pick-up and appointment attendance data collected in near real-time [38];

      Fig. 3
      figure 3

      Timeline Detail, with example “pop-up” information-boxes associated with clicking on flags, MBS, and PBS events shown

     2b) Relevant data in the implementing service’s clinical information system;

  3. Based on these data, the clinician then chooses to follow up (or not) on the flag.

  4. Finally, they record this choice in AI2, and provide brief comments in the follow-up notes form (Fig. 4).

Following this, specific to the trial site, the clinician emailed a coordinating Registered Nurse appointed to oversee the follow-up and data entry. Clients who were followed-up were contacted by their case manager, clinical monitors associated with the study, or their GPs. In the case of clients being uncontactable, contact was made with their GP, housing provider, pharmacist, and other contacts to determine their clinical status [30].

The current study

Data collection technique: theoretical background and design

The objective of the AI2 pilot implementation study was to establish a more comprehensive, naturalistic evidence base from which the multi-faceted requirements of full-scale implementations of CDSS can be elucidated [29]. Extracting patterns in clinician-user behaviour was identified as a key adjunct to the primary quantitative analysis in answering the research questions of the trial—with the aim of iteratively improving the fit of the intervention to clinician workflows [67]. However, these naturalistic conditions also necessitated careful design: researchers needed to balance the risk of adversely impacting clinical workflows against the need to encourage action on flags. To the former, researchers risked either discouraging use of the tool or—conversely—creating an artificial level of adherence to a protocolised version of our imagined usage of the tool, with the result that key implementation barriers in the day-to-day work of health services would be missed. To the latter, the researchers equally did not want this naturalism to inadvertently undermine the primary aim of the trial—to encourage action on flags and understand the effect of this proactive care on health services and individual clients.

This design problem is common to guideline-engagement interventions more broadly, and solutions to encourage adherence without compromising workflows or naturalism have proven less than intuitive. For example, while integration with clinical information systems (CIS) seems an intuitive option, this work is technically difficult [37]. Additionally, CIS are often misused—in a benign way—to sidestep time-consuming or poorly designed features. An example of this can be found in Förberg and colleagues’ CIS-integrated decision support study, where nurse participants’ use of a more generic template to log data—rather than the template intended to record outcomes for the clinical action of interest—inadvertently resulted in participants missing the guideline reminders that formed the core of the intervention [41]. Within and outside of CIS-based interventions, factors such as alert fatigue and a reduction in the perceived “seriousness” of alerts amid an overwhelming amount of data can also challenge designers and implementation scientists in this space [41, 68,69,70]. One method with a more established history of success in addressing tendencies to avoid or dismiss advice is requiring users to enter a reason when overriding advice, which one systematic review found resulted in higher adherence to CDSS advice [68]. However, the authors note that highly insistent systems can either frustrate clinicians into underuse or encourage uncritical acceptance of automatically generated advice [68]. Additionally, the same review notes that systems using structured data collection techniques can inadvertently bias responses through priming [68].

To balance these concerns, we settled on a simple data-collection technique—a non-compulsory, free text field at the time of alert actioning—in which we asked clinicians to briefly note their reasoning behind following-up or not following-up on an alert (Fig. 4). This was operationalised within a concurrent-nested mixed methods design, collecting these data in parallel to the primary non-adherence flag metadata—such as medication subtype, the time lag for actioning the flag, and so on—of interest to the primary AI2 implementation study.

Fig. 4
figure 4

AI2 Alert actioning and decision data collection interface

We begin with the extraction of descriptive data—inductive and descriptive coding of the decision notes, descriptive statistics, and sorting of these combined data into a priori categories (Followed-Up, or Not Followed-up)—from the raw flags. This was followed by parallel analyses—beginning with Thematic Synthesis of the descriptive codes exploring design insights, and followed by mixed-methods hypothesis testing exploring relationships between client and medication-subtype characteristics and follow-up behaviour. Table 1 outlines the aims, objectives, hypotheses (where appropriate), and outcomes (including measures and tests, where appropriate) of this study (Table 2).

Table 1 Aims, objectives and hypotheses, and outcomes
Table 2 Follow-up categories and descriptive codes × number of flags

Analysis plan

De-identified decision note and flag metadata from AI2 were analysed using NVivo R1 (QSR Software, 2021) and SPSS v.27 (IBM Corp, 2020). The data included 771 medication and appointment non-adherence flags across 304 clients. The trial occurred between 11 January 2020 and 14 November 2020. Importantly, a client may have multiple flags on the same date based on different algorithms. On these occasions, the assessing clinician duplicated their notes across both flags, as they were actioned in the same manner at the same time in all cases. For the purposes of analysis, these duplicates were coded identically so as not to unbalance the numbers of flags across categories.
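The duplicate-flag handling described above can be sketched as follows. This is a minimal pandas illustration only; the column names and values are hypothetical, not the trial’s actual data schema:

```python
import pandas as pd

# Hypothetical flags: one client can raise multiple flags on the same date
# via different algorithms; the clinician's note is shared across them.
flags = pd.DataFrame({
    "client_id": ["C1", "C1", "C2"],
    "flag_date": ["2020-03-01", "2020-03-01", "2020-03-05"],
    "algorithm": ["gap_in_pickup", "missed_refill", "gap_in_pickup"],
    "decision_code": ["confirmed_adherent", None, "followed_up"],
})

# Propagate the first recorded decision code to same-day duplicates, so
# duplicates are coded identically and category counts stay balanced.
flags["decision_code"] = (
    flags.groupby(["client_id", "flag_date"])["decision_code"].transform("first")
)
print(flags["decision_code"].tolist())
```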

Coder details and risk of bias

For the initial descriptive analysis of the codes this study utilised a single coder with clinical oversight—provided by Author JS, one of the clinical monitors in the trial—to ensure closeness of the analysis to the clinical context on which it reports [63]. While a single coder is not preferable in most qualitative approaches, there are mitigating circumstances in the case of this study. First, the small scale and reduced scope of this pilot analysis, and the focus on design insights of the findings tempers the potential generalisability of these findings clinically, mitigating the risks of publishing these data. Second, the relative simplicity and brevity of the qualitative data included for analysis in this study (examples are given within the results section of this paper) reduce the potential for different categorical interpretations of the flags. Third, authors JS and NB contributed to the generation and refinement of the more inferential, perspective-driven analytical themes—meaning the core generalisable design findings of this study represent consensus between multiple authors and mitigating single-coder bias.

Finally, the risk of bias associated with a single coder was also managed by engaging a researcher external to the project. Author DT, who conducted the analyses in this study, began work at Flinders after the cessation of the trial, has no prior relationships with any participants or clinicians involved in the trial, is not involved in the AI2 project, and his pay and role originate from an entirely separate project. His role in clinical mental health services—as a peer practitioner—is separate to that of both clinical monitors, but has also encompassed triage, assessment, and intervening in non-adherence.

Descriptive data analyses

Flags relating to medication non-adherence with clinical note data were included for analysis. First, research questions were set aside, and decision notes associated with medication non-adherence flags were inductively coded based on the behavioural justifications for follow-up they described [63]. These flags were then sorted into the deductive, a priori categories embedded in research question one and the study protocol [29]—Followed-up and Not Followed-up—and further subcategories inductively derived following a framework method approach [62, 65].

Following this, quantitative data, matched to the decision notes, were extracted from AI2. These data—flag ID, system-generated client ID, “flag raised” time stamp, “flag actioned” time stamp, and medication subtype—were imported into IBM SPSS Statistics (version 27, 2021). Data analyses were conducted by author DT and reviewed by author NB. Time stamps were used to compute a days-to-action variable: for each flag, the number of days between the flag being raised and actioned. Shapiro–Wilk Tests of Normality were used to determine the normality of the resulting distributions. Descriptive statistics were produced by mixing the qualitatively derived framework method categories and flag metadata of interest.
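The days-to-action computation and normality screening can be sketched in Python, with pandas and SciPy standing in for SPSS; the flag dates below are illustrative only:

```python
import pandas as pd
from scipy import stats

# Hypothetical flag metadata; timestamps are illustrative, not study data.
flags = pd.DataFrame({
    "flag_raised": pd.to_datetime(
        ["2020-01-11", "2020-02-01", "2020-03-15", "2020-04-02", "2020-05-10", "2020-06-01"]),
    "flag_actioned": pd.to_datetime(
        ["2020-02-20", "2020-02-10", "2020-05-01", "2020-04-30", "2020-05-20", "2020-08-14"]),
})

# Days-to-action: elapsed days between a flag being raised and actioned.
flags["days_to_action"] = (flags["flag_actioned"] - flags["flag_raised"]).dt.days

# Shapiro–Wilk test: a small p-value indicates departure from normality,
# motivating the non-parametric tests (Mann–Whitney U, Kruskal–Wallis)
# used in the inferential analyses.
w_stat, p_value = stats.shapiro(flags["days_to_action"])
print(flags["days_to_action"].tolist(), round(w_stat, 3))
```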

Inferential analyses

The qualitative element of these analyses continued the thematic synthesis approach initiated in the descriptive analyses through the generation of analytical themes—that is, inferential, generative, and exploratory themes that “go beyond” the implications of the raw data and identify sites for CDSS design intervention [63]. These themes were derived through both individual and consensus exploration of patterns between and within descriptive codes by the authors of the study. These insights, along with their respective descriptive code bases, were reported.

This work was augmented utilising a mixed-methods approach, mixing the qualitatively derived Follow-Up categories with quantitative data derived from the metadata—relating to medication subtype, patterns in client adherence, and time taken to follow-up. Hypothesised relationships between these metadata and their impact on clinicians’ follow-up behaviour were derived from the literature presented in the background to this paper; the nature of these relationships and how they were tested is outlined in Table 1. This mixing is justified; combining these data provides a coherent integration of longitudinal and rich mixed data, augmenting the standalone quantitative and qualitative data. What constitutes “follow-up” is deeply contextual to both the type of clinician and service under investigation, as are the behaviours that inform these decisions. This mixed-methods approach flexibly allows for high-level comparisons (at the follow-up level) and nuanced exploration of variations in how services and clinicians conceptualise and operationalise these constructs. This means the approach is, ultimately, reusable—allowing for replications that reflect the nuances of new contexts and clinicians while still accommodating comparisons and syntheses between contexts.

Reporting guidelines

This paper reports data conformant with APA-JARS MMARS standards [71, 72]. See Additional file 1: Appendix 1 for an annotated copy of these guidelines with section references for relevant data.


Descriptive analyses

Following these initial analyses, 331 flags for 179 clients met the inclusion criteria for further analysis. Clinical decision related notes fell into two top-level categories: Followed-up (n = 71; 22%) and Not Followed-up (n = 260; 78%). The Followed-up category was further subcategorised into: (1) Adverse outcomes (n = 20); (2) Confirmed evidence of non-adherence (n = 12); and (3) Confirmed adherence (n = 46), either from the client, their family or their GP. The Not Followed-up category was further subcategorised into: (4) Unclear without further investigation (n = 133), where there was evidence of non-adherence, action was deemed unnecessary by the clinician, and the clinical notes did not specify or minimally specified the evidence for their decision; and (5) Likely to be Clinician Supported (n = 140), where there were multiple data-points supporting the hypothesis that the client was being well-managed. Subcategories, frequencies, and example codes for this analysis are provided in Table 3.

Table 3 Flags × medication type

Qualitative analysis: design insights from thematic synthesis of decisions notes

Three major themes, two with sub-themes, were identified across follow-up categories; Table 4 shows associated descriptive codes and case frequencies for each.

Table 4 Analytical Themes and Descriptive Codes × Number of Flags

A.1 Access to contextual information enables decision making

This theme contained two subthemes: A.1.1) Data gathered from other record-keeping systems; and A.1.2) Data gathered person-to-person. Beginning with the former, in 56 cases the screening clinician was able to determine the status of people flagged for non-adherence through querying other record-keeping systems. Regarding the other codes in this category: while medications and prescriptions issued in residential care, as long-acting injectables, with the support of a case manager, or as part of the clozapine protocol would be visible on PBS records, this version of AI2 operated on a weekly refresh—so it is reasonable that the clinician, upon confirming any of the latter, would not spend time following up on data that may be superseded by the next system refresh. In both cases, these insights would either have been requested from other systems and databases or noted within the trialling service’s clinical information system. Accessing these data constitutes a form of follow-up; while the person flagged as non-adherent was not directly contacted, non-AI2 data provided veracity for the clinician’s decision. This code also highlights the impact of the lack of integration between the Australian contexts in which medical support is provided on attempts to monitor adherence and, indeed, on the maintenance of comprehensive records for people with complex interactions with health and carceral systems [73].

In 32 cases, person-to-person data (A.1.2) were an important part of confirming adherence status; sources included family members, case managers, other clinicians, or the clients themselves.

A.2 Deferral of action to closer clinical contacts of the non-adherent person

In 123 cases, the screening clinician deferred to the judgment of the clinician who most recently saw the person flagged as non-adherent. Most regularly cited were general practitioners, sometimes in combination with AI2 showing compliance with other medications (n = 13), or a new medication in place of the medication ceased (n = 2), but in the majority of cases with no other justification (n = 68).

A.3 Rules don’t always meet the contextual needs of prescribers and clients

This theme contained two subthemes: A.3.1) Medications are prescribed and taken in more than one way; and A.3.2) This style of follow-up is not always warranted or appropriate. To the former, people often take medications in patterns that differ from the most common use. Medication taken pro re nata—or, as needed—is a course of action undertaken regularly in mental health services [74]. Additionally, changing the dosage of a medication on a relatively fixed schedule—such as in some presentations of premenstrual dysphoric disorder [75]—also does not translate into a set-dose-per-day usage easily detected algorithmically. As such, the clinical monitor determined in 115 cases that the medication had been prescribed outside of the use cases monitored by the AI2 algorithm—but in line with what they might expect in practice for that drug. While additional sources for making this determination were not cited in any of these cases, this code reinforces the potential utility of the data captured in A.1 for verifying these assertions.

In terms of the latter subtheme, in 51 cases the screening clinician determined that this style of follow-up was not warranted or appropriate (A.3.2). On the face of it, the two descriptive codes in this category contain radically different categories of risk: people known to be non-adherent, and those who pick up their prescriptions in an irregular manner. For the irregular pick-ups, the labour costs involved in following up may outweigh the benefits. For repeatedly non-adherent clients, on the other hand, a phone call to follow up may have minimal impact on their behaviour, or may even adversely affect the therapeutic alliance with the service.

Mixed-methods analysis: preliminary insights into client and medication subtype characteristics’ impact on follow-up behaviours

H1: Differences between medication subtypes and their likelihood to be followed-up

The proportion of flags that were not followed up and provided insufficient evidence, on review by the research team, to assume adherence (see Table 2) differed significantly between medication types (χ2 = 67.37; p < 0.001). Pair-wise Chi-squared tests between the four largest medication subtypes showed Anti-depressants were significantly less likely to be followed up than Antipsychotics (χ2 = 35.196, p < 0.001, v = 0.389), and Anxiolytics (χ2 = 44.825, p < 0.001, v = 0.499), but not Mood Stabilisers (χ2 = 1.455; p = 0.228). The other medication subtypes were excluded from this analysis due to their small sample sizes limiting reliable reporting of results (Tables 5, 6, 7).

Table 5 Medication type × follow-up status chi-squared test of homogeneity
Table 6 Pairwise Fisher’s exact tests of independence for follow-up status between-medication subtypes
Table 7 Normality of distributions—days to action × medication subtypes
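A minimal sketch of the pairwise comparison above (a chi-squared statistic with Cramér’s V as effect size), using invented counts rather than the study data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative 2x2 contingency table (invented counts, NOT the study data):
# rows = medication subtypes, columns = (followed up, not followed up).
table = np.array([[10, 80],    # e.g. anti-depressants
                  [35, 55]])   # e.g. antipsychotics

# chi2_contingency applies Yates' continuity correction by default for 2x2 tables.
chi2, p, dof, expected = chi2_contingency(table)

# Cramer's V effect size: sqrt(chi2 / (n * (k - 1))), k = smaller table dimension.
n = table.sum()
v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(f"chi2={chi2:.3f}, p={p:.4g}, V={v:.3f}")
```

On these invented counts the test is significant with a moderate V; the study’s reported v values (0.389, 0.499) arise the same way from its own contingency tables.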

H2: Differences between medication subtypes and timeliness of follow-up

The distributions of days to action were not normally distributed for all medication subtypes (Table 7). A Kruskal–Wallis H-test showed no significant differences in the distribution of days to respond between medication subtypes (H = 12.825; p = 0.077).

H3: Differences in time-to-action between follow-up categories

The days taken to action Not Followed-up (0) and Followed-up (1) flags were compared (Fig. 5). The normality of the distributions was tested using Shapiro–Wilk tests, which showed significant deviation from normality (p0 < 0.001; p1 = 0.026). A Mann–Whitney U test found a significant difference, albeit with a modest effect size, between the distributions of response times across follow-up categories (M0 = 31.78; Range0 = 116; IQR0 = 38; M1 = 45.55; Range1 = 129; IQR1 = 30; U = 6341; p < 0.001; Z = 4.043; η2 = .05).

Fig. 5 Follow-up status × Days to Action
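The η² reported for H3 can be obtained from the Mann–Whitney U statistic’s normal approximation via the common conversion η² = Z²/N. A sketch with synthetic days-to-action samples (the data, group sizes, and seed are invented; this is not the study’s analysis):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
# Synthetic days-to-action samples for the two follow-up categories (invented).
not_followed = rng.integers(1, 100, size=200)   # category 0
followed = rng.integers(20, 130, size=60)       # category 1

u, p = mannwhitneyu(not_followed, followed, alternative="two-sided")

# Normal approximation Z for U (ignoring tie correction), then eta^2 = Z^2 / N.
n0, n1 = len(not_followed), len(followed)
mu_u = n0 * n1 / 2
sigma_u = np.sqrt(n0 * n1 * (n0 + n1 + 1) / 12)
z = (u - mu_u) / sigma_u
eta_sq = z**2 / (n0 + n1)
print(f"U={u:.0f}, p={p:.4g}, eta^2={eta_sq:.3f}")
```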

H4: Time × Event differences within-clients with mixed-follow up status flags

Data for 179 clients were included in this analysis. Most clients were flagged once, although some received up to six flags (Fig. 6). Thirty-nine clients were exclusively followed up, 133 were exclusively not followed up, and 9 had flags in both categories. These data were insufficient for further quantitative analysis.

Fig. 6 Frequency of number of flags per client
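The within-client categorisation above (exclusively followed up, exclusively not followed up, or mixed) reduces to a simple grouping pass. A sketch over hypothetical flag records (client IDs and values are invented):

```python
from collections import Counter, defaultdict

# Hypothetical (client_id, followed_up) flag records; illustrative only.
flags = [
    ("c1", True), ("c1", False),   # mixed status
    ("c2", False),                 # exclusively not followed up
    ("c3", True), ("c3", True),    # exclusively followed up
]

# Collect the set of follow-up outcomes each client received.
statuses = defaultdict(set)
for client_id, followed in flags:
    statuses[client_id].add(followed)

only_followed = sum(s == {True} for s in statuses.values())
only_not = sum(s == {False} for s in statuses.values())
mixed = sum(s == {True, False} for s in statuses.values())

flags_per_client = Counter(cid for cid, _ in flags)  # e.g. for a frequency plot
print(only_followed, only_not, mixed)   # → 1 1 1
```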


Discussion

This study provides insights into CDSS design and clinician behaviours, from which researchers and services can identify sites of intervention to improve adherence to medication guidelines in real-world clinical mental health services.

Summary of findings

The majority of clients who were flagged were not followed-up. In those that were, qualitative analysis showed that contextual information enabled decision making. Where there was no follow-up, there was a tendency to defer—where contact had been made recently—to the judgment and monitoring of more recent clinical contacts (usually primary care) of the client. Additionally, the clinical monitors in many cases determined that either the rules of the algorithm or the intervention itself did not meet the context of the client. These findings indicate, overall, that contextually aware CDSS designs—that is, design that can take into account the person’s environment, clinical relationships and medical needs, executed with or without automation—show potential for enabling naturalistic follow-up interventions.

These qualitative findings were further elaborated by the mixed-methods results, which indicate—preliminarily—that time and effort costs associated with following up lower-risk non-adherence events (such as anti-depressants) may be perceived to outweigh the benefits (H1). Additionally, the quantitative results indicate more broadly the lack of faith in the veracity of prompts to follow-up generated from EHR data alone; indeed, the finding that Followed-up flags took significantly longer than Not Followed-Up flags may indicate that more clear-cut non-adherence data were a key ingredient, at least in this trial, for encouraging action (H3). Finally, that there were insufficient data to test within-client changes in follow-up status (H4) indicates the complexities of what repeated non-adherence—either actual, or as an artefact of algorithms—can represent. Indeed, these findings further affirm the Thematic Synthesis finding that the inflexibility of algorithmic “rule-breaking” inherently produces a reliance on clinical judgment of non-adherence, the general lack of veracity indicated in both the thematic synthesis and results for H3, and the importance of integrating non-EHR data into CDSS to address these limitations.

Implications for further research

Encouraging client and caregiver engagement and autonomy

The majority of clients who were flagged for medication non-adherence in this study were not followed up, with a significant lack of follow-up for anti-depressant non-adherence (H1)—a class of drugs, as established in the background to this study, prescribed for many conditions [76, 77], with potentially severe discontinuation effects [18], but considered low risk due to both their sometimes short-term use and guidelines indicating minimal discontinuation effects [17]. Regardless of risk, these clients are difficult to identify in the data currently collected by AI2, may have severed their relationship with their clinician [13, 22, 78], and the costs (time and effort) associated with the current intervention may outweigh the impact on the possibly small proportion of people who would benefit—an assertion backed by Analytical Themes A.1 and A.3. In the context of the finding that followed-up flags took significantly longer to action (H3)—indicating that a longer period of time since the flag was first raised and, therefore, a more clear-cut indication of non-adherence gave clinicians more impetus to act—it is further indicated that follow-up, if it were to happen, would likely happen outside of the window where discontinuation effects and/or encouraging restarting medication were feasible outcomes. Analytical Theme A.1 offers an inroad for design insights into these findings which, when synthesised, highlight a need for increased veracity of data within the CDSS—which A.1 indicates may be achieved through the incorporation of different data streams between both different record-keeping systems and between human actors.

One avenue of achieving this is through incorporating clients and their caregivers as both empowered actors and data sources within systems. Indeed, a systematic review found CDSS studies that incorporated input and follow-up from clients and caregivers to be more effective, potentially through the empowering, engaging and, therefore, clinician accountability building effect handing consumers these data can have [68]. If well designed and implemented, these methods also have the potential to provide more actionable insights into the experiences of people who abruptly discontinue “lower-risk” drugs, such as anti-depressants [31, 42, 79]; addressing the cost–benefit dilemma of the current intervention. Finally, this approach could also be utilised in clients who identify themselves as struggling with adherence to provide motivational, health-promoting, or supportive content— an important and potentially efficacious adjunct for this group [79, 80].

Interoperability with, or automated data collection and follow-up between services and systems

Adherence is not a monolithic category [1]. Non-adherence can appear as: (a) clients simply not picking up a prescription (non-fulfilment); (b) clients picking up a prescription but then stopping the medication after initially taking it (non-persistence), which can be deliberate or due to a lack of capacity or resources on the client’s part; or (c) clients taking medication, but not in the manner in which it was prescribed (non-conforming) [1, 4]. Considering that all three of these categories can occur simultaneously without the client informing their GP or clinician of their non-adherence; that adherence to all prescribed medications within complex drug regimes is low among clients with chronic and complex conditions [7, 81,82,83]; the finding in A.2 that clinical monitors working with AI2 tended to defer to closer clinical contacts of the person flagged as non-adherent; and the volume of Not Followed-Up flags categorised in the Framework analysis as Unclear Without Further Investigation, it is clear that further development and evaluation of communications between clinical monitors and other clinicians involved in the care of people flagged by AI2 is necessary.

Indeed, adherence approaches at their best are collaborative [1, 4]—between clinicians, services, and clients—and the development of automated notification and data collection systems for further implementations should, therefore, also aim to integrate data from and follow-up with other services and systems. Data collected from other systems, services and clinicians could be feasibly extracted from other, yet-to-be-implemented areas of MyHealthRecord—such as prescription and dispense records, shared health summaries, and event summaries—using techniques such as natural language processing, or careful presentation of raw data to enable further insights. Additionally, automated email contact initiations could be utilised. Automated follow-up of other services, systems and clinicians could involve interventions such as prompting clinicians—via email or other methods—to consider the potential impacts of different follow-up paths or, more simply, reminding the clinician of the value of the intervention [84]—design patterns from other industries that have been suggested as being potentially applicable to health [61].

The continuing importance of human factors

In addition to our provocations to consider automated follow-up, it is important to continue to stress the contextual and human factors within services that facilitate or block CDSS use [50, 53, 61]. This study provides a nuanced set of initial insights, using novel data, for the interaction design literature seeking to address this.

First, in all clinical decision support systems it is necessary to balance the impulse to notify against the actionability of the notification, both to avoid alert fatigue and to minimise the risk of adverse outcomes or legal ramifications [69, 70, 85]. In the context of flags that are difficult, or unnecessary, to action, filters in AI2 and similar systems could be used to narrow the use case to target specific client groups, allowing for a greater sense of specificity, the optimisation of which may facilitate adherence to CDSS use [69]. For example, at this study site, focussing on follow-up of anti-psychotics may have been preferable when considering, in hindsight, the quantity of data generated by AI2 and the service’s priorities for follow-up. This methodology provides an option for clinics to ease into proactive care while balancing existing duties, or scoping the resources required to expand coverage as they arise. Combined with other methods of automated follow-up, this may improve timeliness, clinician workload concerns, and client and clinician outcomes more broadly.
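As a sketch of what such a filter might look like (the records and field names below are invented; AI2’s actual data model may differ), a service could surface only its priority subtype, oldest flags first:

```python
# Hypothetical flag records; field names are invented for illustration.
flags = [
    {"client_id": "c1", "subtype": "antipsychotic", "days_since_flag": 3},
    {"client_id": "c2", "subtype": "anti-depressant", "days_since_flag": 10},
    {"client_id": "c3", "subtype": "antipsychotic", "days_since_flag": 21},
]

# Narrow the use case: only surface flags for the service's priority subtype,
# oldest first, so the notification queue stays small and actionable.
priority = [f for f in flags if f["subtype"] == "antipsychotic"]
queue = sorted(priority, key=lambda f: f["days_since_flag"], reverse=True)
print([f["client_id"] for f in queue])   # → ['c3', 'c1']
```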


Limitations

This study demonstrates the potential of Medicare data for monitoring and following up on non-adherence. These data do not include services sought from private mental healthcare; however, because living with a mental health condition in Australia is often chronic and high-cost, Medicare-funded services are widely utilised by people with mental illnesses. Additionally, while Medicare allows state-based acute services to view federally regulated and funded GP activities and occasions of pathology, radiology, and so on, acute care services funded by state governments, such as hospitals, are not visible in these data. Finally, as these are pilot findings, collected from a small number of clinicians and analysed by a single coder, their generalisability should be considered cautiously.


Conclusions

This study highlights the interaction design challenges facing health services and researchers implementing proactive care processes using CDSS. In particular, these results point towards the importance of addressing perceptions of: (1) the risks associated with non-adherence to different medication types; (2) the veracity of non-adherence data provided by CDSS; and (3) the person’s environment, clinical relationships and medical needs, and how associated biases relate to their adherence. We suggest considering context in increasingly automated follow-up interventions as a priority for future research.

Data availability

The datasets generated and/or analysed during the current study are not publicly available due to the potentially identifying information they contain, given the study’s location in a small community. These data can, however, be made available from the corresponding author on reasonable request and after consultation with the Southern Adelaide Local Health Network Clinical Research Ethics Committee.



Abbreviations

CDSS:

Clinical decision support system(s)

AI2 :

The Actionable Intime Insights study and software [30]


H(n):

Hypothesis (n)


MyHealthRecord:

My Health Record, the Australian National eHealth Record


PRN:

As needed (Latin: pro re nata)


RCT(s):

Randomised Controlled Trial(s)


EHR(s):

Electronic Health Record(s)


CIS:

Clinical Information System(s)


References

  1. Jimmy B, Jose J. Client medication adherence: measures in daily practice. Oman Med J. 2011;26:155–9.

  2. Shahin W, Kennedy GA, Stupans I. The consequences of general medication beliefs measured by the beliefs about medicine questionnaire on medication adherence: a systematic review. Pharmacy (Basel). 2020;8:147.

  3. Lo Wak‐Lam, Ki‐Yan Mak D, Ming‐Cheuk Wong M, Chan O, Mo‐Ching Chui E, Wai‐Sau Chung D, et al. Achieving better outcomes for schizophrenia clients in Hong Kong: strategies for improving treatment adherence. CNS Neurosci Ther. 2021;27 Suppl 1:12–9.

  4. Masand PS, Roca M, Turner MS, Kane JM. Partial adherence to antipsychotic medication impacts the course of illness in clients with schizophrenia: a review. Prim Care Companion J Clin Psychiatry. 2009;11:147–54.

  5. Novick D, Montgomery W, Treuer T, Aguado J, Kraemer S, Haro JM. Relationship of insight with medication adherence and the impact on outcomes in clients with schizophrenia and bipolar disorder: results from a 1-year European outclient observational study. BMC Psychiatry. 2015;15:189.

  6. Velligan DI, Sajatovic M, Hatch A, Kramata P, Docherty JP. Why do psychiatric clients stop antipsychotic medication? A systematic review of reasons for nonadherence to medication in clients with serious mental illness. Client Prefer Adherence. 2017;11:449–68.

  7. Walsh CA, Cahir C, Tecklenborg S, Byrne C, Culbertson MA, Bennett KE. The association between medication non-adherence and adverse health outcomes in ageing populations: a systematic review and meta-analysis. Br J Clin Pharmacol. 2019;85:2464–78.

  8. Leucht S, Tardy M, Komossa K, Heres S, Kissling W, Salanti G, et al. Antipsychotic drugs versus placebo for relapse prevention in schizophrenia: a systematic review and meta-analysis. Lancet. 2012;379:2063–71.

  9. Vieta E, Günther O, Locklear J, Ekman M, Miltenburger C, Chatterton ML, et al. Effectiveness of psychotropic medications in the maintenance phase of bipolar disorder: a meta-analysis of randomized controlled trials. Int J Neuropsychopharmacol. 2011;14:1029–49.

  10. Chakrabarti S. Treatment-adherence in bipolar disorder: a client-centred approach. World J Psychiatry. 2016;6:399–409.

  11. Sim K, Lau WK, Sim J, Sum MY, Baldessarini RJ. Prevention of relapse and recurrence in adults with major depressive disorder: systematic review and meta-analyses of controlled trials. Int J Neuropsychopharmacol. 2015;19.

  12. Nosé M, Barbui C, Tansella M. How often do clients with psychosis fail to adhere to treatment programmes? A systematic review. Psychol Med. 2003;33:1149–60.

  13. Samples H, Mojtabai R. Antidepressant self-discontinuation: results from the collaborative psychiatric epidemiology surveys. PS. 2015;66:455–62.

  14. Verdoux H, Lengronne J, Liraud F, Gonzales B, Assens F, Abalan F, et al. Medication adherence in psychosis: predictors and impact on outcome: a 2-year follow-up of first-admitted subjects. Acta Psychiatrica Scand. 2000;102:203–10.

  15. Salomon C, Hamilton B. “All roads lead to medication?” Qualitative responses from an Australian first-person survey of antipsychotic discontinuation. Psychiatr Rehabil J. 2013;36:160–5.

  16. Zarate CA, Tohen M. Double-blind comparison of the continued use of antipsychotic treatment versus its discontinuation in remitted manic clients. AJP. 2004;161:169–71.

  17. Malhi GS, Bell E, Bassett D, Boyce P, Bryant R, Hazell P, et al. The 2020 Royal Australian and New Zealand College of Psychiatrists clinical practice guidelines for mood disorders. Aust N Z J Psychiatry. 2021;55:7–117.

  18. Davies J, Read J. A systematic review into the incidence, severity and duration of antidepressant withdrawal effects: are guidelines evidence-based? Addict Behav. 2019;97:111–21.

  19. Patel T. Medication nonadherence: time for a proactive approach by pharmacists. Can Pharm J. 2021;:17151635211034216.

  20. Kim J, Combs K, Downs J, Tilman F. Medication adherence: the elephant in the room. US Pharmacist. 2018;43:30–4.

  21. Ereshefsky L, Saragoussi D, Despiégel N, Hansen K, François C, Maman K. The 6-month persistence on SSRIs and associated economic burden. J Med Econ. 2010;13:527–36.

  22. Olfson M. Bringing antidepressant self-discontinuation into view. PS. 2015;66:449–449.

  23. Top 10 drugs 2020–21. Australian Prescriber. 2021;44.

  24. Maund E, Stuart B, Moore M, Dowrick C, Geraghty AWA, Dawson S, et al. Managing antidepressant discontinuation: a systematic review. Ann Fam Med. 2019;17:52–60.

  25. Schweitzer I, Maguire K. Stopping antidepressants. Aust Prescr. 2001;24:51–5.

  26. Donald M, Partanen R, Sharman L, Lynch J, Dingle GA, Haslam C, et al. Long-term antidepressant use in general practice: a qualitative study of GPs’ views on discontinuation. Br J Gen Pract. 2021;71:e508–16.

  27. Hui CLM, Honer WG, Lee EHM, Chang WC, Chan SKW, Chen ESM, et al. Long-term effects of discontinuation from antipsychotic maintenance following first-episode schizophrenia and related disorders: a 10 year follow-up of a randomised, double-blind trial. Lancet Psychiatry. 2018;5:432–42.

  28. Luykx JJ, Tiihonen J. Antipsychotic discontinuation: mind the client and the real-world evidence. Lancet Psychiatry. 2021;8:555–7.

  29. Oakey-Neate L, Schrader G, Strobel J, Bastiampillai T, van Kasteren Y, Bidargaddi N. Using algorithms to initiate needs-based interventions for people on antipsychotic medication: implementation protocol. BMJ Health Care Inform. 2020;27.

  30. Bidargaddi N, Schrader G, Myles H, Schubert KO, van Kasteren Y, Zhang T, et al. Demonstration of automated non-adherence and service disengagement risk monitoring with active follow-up for severe mental illness. Aust N Z J Psychiatry. 2021;:0004867421998800.

  31. Moja L, Passardi A, Capobussi M, Banzi R, Ruggiero F, Kwag K, et al. Implementing an evidence-based computerized decision support system linked to electronic health records to improve care for cancer clients: the ONCO-CODES study protocol for a randomized controlled trial. Implement Sci. 2016;11:153.

  32. Heinemann L. Future of Diabetes Technology. J Diabetes Sci Technol. 2017;11:863–9.

  33. Edouard P, Campo D, Bartet P, Yang R-Y, Bruyneel M, Roisman G, et al. Validation of the Withings Sleep Analyzer, an under-the-mattress device for the detection of moderate-severe sleep apnea syndrome. J Clin Sleep Med. 2021.

  34. Berrouiguet S, Baca-García E, Brandt S, Walter M, Courtet P. Fundamentals for future mobile-health (mHealth): a systematic review of mobile phone and web-based text messaging in mental health. J Med Internet Res. 2016;18:e5066.

  35. Morgiève M, Genty C, Azé J, Dubois J, Leboyer M, Vaiva G, et al. A digital companion, the emma app, for ecological momentary assessment and prevention of suicide: quantitative case series study. JMIR Mhealth Uhealth. 2020;8:e15741.

  36. Perry R, Oakey-Neate L, Fouyaxis J, Boyd-Brierley S, Wilkinson M, Baigent M, et al. MindTick: case study of a digital system for mental health clinicians to monitor and support clients outside clinics. Telehealth Innovations in Remote Healthcare Services Delivery. 2021;:114–23.

  37. Sutton RT, Pincock D, Baumgart DC, Sadowski DC, Fedorak RN, Kroeker KI. An overview of clinical decision support systems: benefits, risks, and strategies for success. npj Digit Med. 2020;3:1–10.

  38. Ledesma A, Bidargaddi N, Strobel J, Schrader G, Nieminen H, Korhonen I, et al. Health timeline: an insight-based study of a timeline visualization of clinical data. BMC Med Inform Decis Mak. 2019;19:170.

  39. Bidargaddi N, van Kasteren Y, Musiat P, Kidd M. Developing a third-party analytics application using Australia’s National Personal Health Records System: case study. JMIR Med Inform. 2018;6:e28.

  40. Stirratt MJ, Curtis JR, Danila MI, Hansen R, Miller MJ, Gakumo CA. Advancing the science and practice of medication adherence. J Gen Intern Med. 2018;33:216–22.

  41. Förberg U, Unbeck M, Wallin L, Johansson E, Petzold M, Ygge B-M, et al. Effects of computer reminders on complications of peripheral venous catheters and nurses’ adherence to a guideline in paediatric care: a cluster randomised study. Implement Sci. 2016;11:10.

  42. Garg AX, Adhikari NKJ, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, et al. Effects of computerized clinical decision support systems on practitioner performance and client outcomes: a systematic review. JAMA. 2005;293:1223–38.

  43. Klimis H, Chow CK. Are digital health services the key to bridging the gap in medication adherence and optimisation? Heart Lung Circ. 2021;30:943–6.

  44. Steinkamp JM, Goldblatt N, Borodovsky JT, LaVertu A, Kronish IM, Marsch LA, et al. Technological interventions for medication adherence in adult mental health and substance use disorders: a systematic review. JMIR Ment Health. 2019;6:e12493.

  45. Bright TJ, Wong A, Dhurjati R, Bristow E, Bastian L, Coeytaux RR, et al. Effect of clinical decision-support systems. Ann Intern Med. 2012;157:29–43.

  46. Moja L, Kwag KH, Lytras T, Bertizzolo L, Brandt L, Pecoraro V, et al. Effectiveness of computerized decision support systems linked to electronic health records: a systematic review and meta-analysis. Am J Public Health. 2014;104:e12-22.

  47. SA Health Office of the Chief Psychiatrist. SA Health Clozapine Management Clinical Guideline. 2017.

  48. Wilson B, McMillan SS, Wheeler AJ. Implementing a clozapine supply service in Australian community pharmacies: barriers and facilitators. J Pharm Policy Pract. 2019;12:19.

  49. Taylor M, Dangelo-Kemp D, Liu D, Kisely S, Graham S, Hartmann J, et al. Antipsychotic utilisation and persistence in Australia: a nationwide 5-year study. Aust N Z J Psychiatry. 2021;:00048674211051618.

  50. Miller K, Mosby D, Capan M, Kowalski R, Ratwani R, Noaiseh Y, et al. Interface, information, interaction: a narrative review of design and functional requirements for clinical decision support. J Am Med Inform Assoc. 2018;25:585–92.

  51. Heselmans A, Aertgeerts B, Donceel P, Geens S, Van de Velde S, Ramaekers D. Family physicians’ perceptions and use of electronic clinical decision support during the first year of implementation. J Med Syst. 2012;36:3677–84.

  52. Patterson ES, Doebbeling BN, Fung CH, Militello L, Anders S, Asch SM. Identifying barriers to the effective use of clinical reminders: bootstrapping multiple methods. J Biomed Inform. 2005;38:189–99.

  53. Liberati EG, Ruggiero F, Galuppo L, Gorli M, González-Lorenzo M, Maraldi M, et al. What hinders the uptake of computerized decision support systems in hospitals? A qualitative study and framework for implementation. Implement Sci. 2017;12:113.

  54. Murray E, Burns J, May C, Finch T, O’Donnell C, Wallace P, et al. Why is it difficult to implement e-health initiatives? A qualitative study. Implement Sci. 2011;6:6.

  55. Woolf SH, Grol R, Hutchinson A, Eccles M, Grimshaw J. Potential benefits, limitations, and harms of clinical guidelines. BMJ. 1999;318:527–30.

  56. Feder G, Eccles M, Grol R, Griffiths C, Grimshaw J. Using clinical guidelines. BMJ. 1999;318:728–30.

  57. Mylopoulos M, Woods NN. When I say … adaptive expertise. Med Educ. 2017;51:685–6.

  58. Haddad PM, Tiihonen J, Haukka J, Taylor M, Patel MX, Korhonen P. The place of observational studies in assessing the effectiveness of depot antipsychotics. Schizophr Res. 2011;131:260–1.

  59. Cutrer WB, Ehrenfeld JM. Protocolization, standardization and the need for adaptive expertise in our medical systems. J Med Syst. 2017;41:200.

  60. Christensen M, Welch A, Barr J. Husserlian descriptive phenomenology: a review of intentionality, reduction and the natural attitude. JNEP. 2017;7:113.

  61. Wu HW, Davis PK, Bell DS. Advancing clinical decision support using lessons from outside of healthcare: an interdisciplinary systematic review. BMC Med Inform Decis Mak. 2012;12:90.

  62. Gale NK, Heath G, Cameron E, Rashid S, Redwood S. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol. 2013;13:117.

  63. Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol. 2008;8:45.

  64. Mayoh J, Onwuegbuzie AJ. Toward a conceptualization of mixed methods phenomenological research. J Mixed Methods Res. 2015;9:91–107.

  65. Lacey A, Luff D. Qualitative data analysis. Qualitative data analysis. 2009;47.

  66. Knight A, Jarrad GA, Schrader GD, Strobel J, Horton D, Bidargaddi N. Monte Carlo Simulations demonstrate algorithmic interventions over time reduce hospitalisation in clients with schizophrenia and bipolar disorder. Biomed Inform Insights. 2018;10.

  67. Cheng KKF, Metcalfe A. Qualitative methods and process evaluation in clinical trials context: where to head to? Int J Qual Methods. 2018;17:1609406918774212.

  68. Roshanov PS, Fernandes N, Wilczynski JM, Hemens BJ, You JJ, Handler SM, et al. Features of effective computerised clinical decision support systems: meta-regression of 162 randomised trials. BMJ. 2013;346: f657.

  69. Phansalkar S, van der Sijs H, Tucker AD, Desai AA, Bell DS, Teich JM, et al. Drug—drug interactions that should be non-interruptive in order to reduce alert fatigue in electronic health records. J Am Med Inform Assoc. 2013;20:489–93.

  70. Kesselheim AS, Cresswell K, Phansalkar S, Bates DW, Sheikh A. Clinical decision support systems could be modified to reduce “alert fatigue” while still minimizing the risk of litigation. Health Aff (Millwood). 2011;30:2310–7.

  71. Appelbaum M, Cooper H, Kline RB, Mayo-Wilson E, Nezu AM, Rao SM. Journal article reporting standards for quantitative research in psychology: the APA Publications and Communications Board task force report. Am Psychol. 2018;73:3–25.

  72. Levitt HM, Bamberg M, Creswell JW, Frost DM, Josselson R, Suárez-Orozco C. Journal article reporting standards for qualitative primary, qualitative meta-analytic, and mixed methods research in psychology: the APA Publications and Communications Board task force report. Am Psychol. 2018;73:26–46.

  73. Hampton S, Blomgren D, Roberts J, Mackinnon T, Nicholls G. Prescribing for people in custody. Aust Prescr. 2015;38:33–4.

  74. Roughead L, Proctor N, Westaway K, Sluggett J, Alderman C. Medication safety in mental health. Sydney, NSW: Australian Commission on Safety and Quality in Health Care; 2017.

  75. Yonkers KA, Kornstein SG, Gueorguieva R, Merry B, Van Steenburgh K, Altemus M. Symptom-onset dosing of sertraline for the treatment of premenstrual dysphoric disorder: a multi-site, double-blind, randomized, Placebo-Controlled Trial. JAMA Psychiat. 2015;72:1037–44.

  76. Cascade EF, Kalali AH, Thase ME. Use of antidepressants. Psychiatry (Edgmont). 2007;4:25–8.

  77. Piek E, van der Meer K, Hoogendijk WJG, Penninx BWJH, Nolen WA. Most antidepressant use in primary care is justified; results of the netherlands study of depression and anxiety. PLoS ONE. 2011;6:e14784.

  78. Malhi GS, Bassett D, Boyce P, Bryant R, Fitzgerald PB, Fritz K, et al. Royal Australian and New Zealand college of psychiatrists clinical practice guidelines for mood disorders. Aust N Z J Psychiatry. 2015;49:1087–206.

  79. Haga SB. Toward digital-based interventions for medication adherence and safety. Expert Opin Drug Saf. 2020;19:735–46.

  80. Furber G, Jones GM, Healey D, Bidargaddi N. A comparison between phone-based psychotherapy with and without text messaging support in between sessions for crisis clients. J Med Internet Res. 2014;16: e219.

  81. Col N, Fanale JE, Kronholm P. The role of medication noncompliance and adverse drug reactions in hospitalizations of the elderly. Arch Intern Med. 1990;150:841–5.

    Article  CAS  Google Scholar 

  82. Haynes RB, McDonald HP, Garg AX. Helping clients follow prescribed treatment: clinical applications. JAMA. 2002;288:2880–3.

    Article  Google Scholar 

  83. Holvast F, Oude Voshaar RC, Wouters H, Hek K, Schellevis F, Burger H, et al. Non-adherence to antidepressants among older clients with depression: a longitudinal cohort study in primary care. Fam Pract. 2019;36:12–20.

    Article  Google Scholar 

  84. Armstrong MJ, Gronseth GS, Dubinsky R, Potrebic S, Penfold Murray R, Getchius TSD, et al. Naturalistic study of guideline implementation tool use via evaluation of website access and physician survey. BMC Med Inform Decis Mak. 2017;17:9.

    Article  Google Scholar 

  85. Ash JS, Sittig DF, Poon EG, Guappone K, Campbell E, Dykstra RH. The extent and importance of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc. 2007;14:415–23.

    Article  Google Scholar 

Download references


Acknowledgements

The authors would like to acknowledge the workers at the intervention site for their service and commitment to this study, and Lydia Oakey-Neate for her contribution to the protocolisation and trial design of AI2. Author DT also acknowledges the Kaurna people, upon whose unceded country of Kawantilla—the "flat north place" known in English as the Adelaide plains—this study was conducted and the researchers live and work.

Contributions to the literature

  • People experiencing mental ill-health who stop taking their medication without support can be at risk. We tested a method for capturing how clinicians justified their decisions to follow up, or not, when notified of non-adherence by an automated system.

  • Our results indicated the importance of incorporating data from outside health records to encourage follow-up. They also showed that anti-depressant flags were less likely to be followed up, and that flags not followed up were actioned later.

  • We encourage designers to incorporate data from other record-keeping systems, from clinicians, and directly from clients and their caregivers into their systems.


Funding

The AI2 trial was supported by the Medical Research Future Fund (MRFF) Rapid Applied Research Translation Program, undertaken by Health Translation South Australia.

Author information

Contributions

Author DT contributed to descriptive and inductive coding, generation of analytical themes, formulation of research questions, mixed-methods analysis, and writing the first draft. Authors JS and NB contributed to formulation of research questions, data interpretation, and editing drafts of the paper. All authors have read and approved the manuscript.

Corresponding author

Correspondence to Niranjan Bidargaddi.

Ethics declarations

Ethics approval and consent to participate

The protocol for this study was approved by the Southern Adelaide Local Health Network Clinical Research Ethics Committee (AK03478).

Consent for publication

Not applicable.

Competing interests

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1. Appendix one: APA MMARS Reporting Standards Conformance Table.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Thorpe, D., Strobel, J. & Bidargaddi, N. Examining clinician choice to follow-up (or not) on automated notifications of medication non-adherence by clinical decision support systems. BMC Med Inform Decis Mak 23, 22 (2023).
