
In pursuit of certainty: can the systematic review process deliver?

Abstract

Background

There has been increasing emphasis on evidence-based approaches to improving patient outcomes through rigorous, standardised and well-validated methods. Clinical guidelines drive this process and are largely developed from the findings of systematic reviews (SRs). This paper discusses the capacity of the SR process to provide decisive information to shape and guide clinical practice, using a purpose-built review database, the Cochrane reviews, and focussing on a highly prevalent medical condition: hypertension.

Methods

We searched the Cochrane database and identified 25 relevant SRs incorporating 443 clinical trials. Reviews with the terms ‘blood pressure’ or ‘hypertension’ in the title were included. Once selected for inclusion, the abstracts were assessed independently by two authors for their capacity to inform and influence clinical decision-making. The inclusions were independently audited by a third author.

Results

Of the 25 SRs that formed the sample, 12 provided conclusive findings to inform a particular treatment pathway. The evidence-based approaches offer the promise of assisting clinical decision-making through clarity, but in the case of management of blood pressure, half of the SRs in our sample highlight gaps in evidence and methodological limitations. Thirteen reviews were inconclusive, and eight, including four of the 12 conclusive SRs, noted the lack of adequate reporting of potential adverse effects or incidence of harm.

Conclusions

These findings emphasise the importance of distillation, interpretation and synthesis of information to assist clinicians. This study questions the utility of evidence-based approaches as a uni-dimensional means of improving clinical care and underscores the importance of standardised approaches that include adverse events, incidence of harm, patients’ needs and preferences, and clinicians’ expertise and discretion.


Background

Much has been written about evidence-based medicine (EBM) and its potential to inform practice. The terminology surrounding the aggregation of evidence in a systematic way implies that there is a practical import to this work, and the terms “evidence-based practice” (EBP) and “evidence-based approaches” (EBA) have arisen from this concept. The EBA are among the most significant contemporary ways of conceptualising health care, both theoretically and practically [1], and are viewed by advocates as the favoured means of integrating research findings into clinical practice [2].

Despite the appeal of EBA, and the widespread recognition of this approach in driving care, they present a number of challenges, and there has been a steady undercurrent of concern about their pervasive acceptance and promotion. Debate has focussed on several key factors, including the nature of evidence, particularly the privileging of some forms of evidence and the marginalising and devaluing of others [2–5]. Concerns that the domination of the EBA has fostered (and will continue to foster) ‘a very rational, traditional, biomedical approach to research use/evidence-based practice’ [5] form a crucial plank of the critique. Additional major critiques of EBA highlight concerns that they do not meet the challenges of rare diseases or unusual presentations, and may foster a “cookbook” approach to practice [6]. A further view is that they may have a deleterious effect on patient/physician relationships, including the charge that they create or contribute to an environment of paternalism and reductionism that can render patient preference and experience invisible and irrelevant [1–3, 7]. According to this critique, EBA can foster a therapeutic milieu in which patients are positioned as passive recipients in the medical encounter [1], and clinicians are limited in their ability to draw on intuitive and other forms of knowledge [7].

Notwithstanding the theoretical critique, however, the aim of EBA is to facilitate the integration of the best available research evidence into practice by transparently assimilating information and aggregating data in ways that can help guide clinical decision-making [6]. Indeed, the EBA are attractive because they promote the idea of research-in-practice while presenting an alternative to each clinician having to locate, read and assimilate the plethora of contemporary research literature on any clinical issue [6]. However, we argue that in some areas, even where a substantial body of literature exists, there is a lack of evidence to demonstrate that the EBA in fact provide the information necessary to effectively guide clinicians.

Purpose of this paper

The purpose of this paper is to explore the efficacy of the systematic review (SR) process in providing decisive information to shape and guide clinical practice. To do this we examined SRs on the management of blood pressure. We chose this because hypertension is a well-researched, common and persistent condition with a significant effect on health outcomes [8]. The Cochrane reviews on reducing blood pressure became the lens through which we explored the efficacy of the SR process in providing contemporary and clear information upon which clinical practice decisions could be made.

Methods

Independent manual searches of the reviews published on the Cochrane Database of Systematic Reviews (http://www.cochrane.org/cochrane-reviews) were undertaken on 1/12/11 to identify SRs on interventions to lower blood pressure. Reviews with the terms ‘blood pressure’ or ‘hypertension’ in the title were included. Once selected for inclusion, the abstracts were assessed independently by two authors. SRs were excluded if they pertained to blood pressure targets, pulmonary hypertension, hypertensive emergencies, acute cardiac events, or gestational, postpartum, infant, neonatal and child hypertension. SRs were also excluded if the number of included clinical trials was not reported, or if the SR did not include any completed studies. Inclusions were independently audited by a third author.
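For illustration only, the selection logic described above can be summarised as a simple screening rule. The sketch below is not the authors’ actual workflow; the record fields (title, topic, n_trials, n_completed_studies) are hypothetical names chosen for demonstration.

```python
# Minimal sketch of the title-based inclusion filter and exclusion criteria
# described in the Methods. Field names are assumptions, not the authors' tooling.

INCLUDE_TERMS = ("blood pressure", "hypertension")
EXCLUDE_TOPICS = {
    "blood pressure targets", "pulmonary hypertension", "hypertensive emergencies",
    "acute cardiac events", "gestational hypertension", "postpartum hypertension",
    "infant hypertension", "neonatal hypertension", "child hypertension",
}

def screen_review(record: dict) -> bool:
    """Return True if a review record meets the stated inclusion criteria."""
    title = record["title"].lower()
    if not any(term in title for term in INCLUDE_TERMS):
        return False                        # title must mention blood pressure or hypertension
    if record.get("topic") in EXCLUDE_TOPICS:
        return False                        # excluded clinical contexts
    if record.get("n_trials") is None:
        return False                        # number of included trials must be reported
    if record.get("n_completed_studies", 0) == 0:
        return False                        # must include at least one completed study
    return True

# Hypothetical usage
example = {"title": "Relaxation therapies for essential hypertension",
           "topic": "essential hypertension", "n_trials": 25, "n_completed_studies": 25}
print(screen_review(example))  # True
```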

Results

Twenty-five SRs representing a combined total of 443 clinical trials were identified (see Table 1). Of these SRs, the majority reported pharmaceutical-based interventions. Salt intake, weight-reducing diets, timing of medication and relaxation therapies were each the subject of a single SR. Twelve SRs made conclusive assertions, that is, they used language that denoted a degree of certainty. The remaining SRs (n = 13) were inconclusive (see Table 2).

Table 1 Included systematic reviews
Table 2 Authors’ conclusions

Discussion

In eight SRs, including four of the 12 SRs that were considered to be conclusive, the lack of adequate reporting of potential adverse effects or incidence of harm was specifically noted (see Table 2). These factors would exert an influence on clinicians and on the confidence they can have in selecting specific treatments as a result of any SR. There has long been concern that clinicians can be slow to take up research findings into their practice, with a common focus being on accessibility of current information and the most efficacious means of dissemination [9, 10]. However, findings from this exercise suggest there needs to be more of an onus on researchers to produce findings that are able to inform practice in a more useful way.

Attention is also drawn to the use and effect of qualifying words such as ‘appears to’ in reaching conclusions. Use of this ambiguous language could be read as presenting results cautiously, but it can also create doubt, or suggest an unwillingness or inability to form an opinion either way. Language is a problematic area in research [11], but much of the scrutiny of how language is used to present research focuses on jargon or exclusionary language. When considering ambiguous language, which creates doubt and jeopardises meaning, several questions arise. How essential is the word that causes the doubt to the significance and meaning of the message? If we took those words out, would we have clear-cut messages? If the words create the doubt, what is the point of the process? If results cannot be interpreted with confidence, how can conclusions be reached through the SR process?

The inconclusive nature of several SRs raises a number of additional issues for consideration. The point of EBA as a systematic, transparent process is to allow for the aggregation of data so that statistical significance may be achieved. In our sample, some of the included studies were judged unreliable because of potential bias, questionable rigour or other factors, and were therefore deemed to be of poor quality. The EBA have also been the catalyst for the development of a ranking system, known as the hierarchy of evidence [12, 13].
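As background only (the paper does not specify a pooling model), the aggregation that SRs rely on is typically an inverse-variance weighted average of trial effects. In a standard fixed-effect formulation:

$$\hat{\theta} = \frac{\sum_{i=1}^{k} w_i\,\hat{\theta}_i}{\sum_{i=1}^{k} w_i}, \qquad w_i = \frac{1}{\operatorname{SE}(\hat{\theta}_i)^2}, \qquad \operatorname{SE}(\hat{\theta}) = \left(\sum_{i=1}^{k} w_i\right)^{-1/2}$$

where $\hat{\theta}_i$ is the effect estimate (for example, mean reduction in systolic blood pressure) from trial $i$ of $k$ trials. Pooling shrinks the standard error, which is how aggregation can reach statistical significance that individual trials cannot; but the pooled estimate inherits any bias or poor quality in the contributing trials, which is precisely the limitation noted above.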

Darlenski and Neykov [6] make the point that good clinicians will be able to draw on EBA, and we concur, even where findings are inconclusive. Furthermore, it is recognised and understood that information drawn from EBA is only one of a number of factors that inform clinical judgement. But there is clearly a need to provide some guidance about interventions that may never be provable in the current evidentiary way. At the moment this is left to institutions, through mechanisms such as clinical pathways, and to regulators such as the National Institute for Health and Clinical Excellence (NICE) and the National Health and Medical Research Council. Some progress is being made as groups both in the community and within organisations attempt to develop methodologies, such as modelling, to assist this process.

Systematic reviews are only as good as the sum of their parts. Strategies such as standardised reporting of clinical trials through the CONSORT statement offer some promise for increasing the rigour and usefulness of research (http://www.consort-statement.org/). The recognition of complex interventions in clinical care is another important consideration [14]. In this instance, some interventions may have high internal validity yet lower external validity.

Conclusions

Though the EBA offer the promise of direction through clarity, in the case of reducing blood pressure, and despite the enormous amount of research that has been undertaken, that clarity offers little direction for the future; in the main, it aggregates the past. Rather, it highlights evidentiary gaps and weaknesses. Indeed, one of the real values of the SR process is that it summarises what we know, which is potentially not what we need to know: what should I do for the 85-year-old woman sitting in my clinic?

Our results highlight a potential lack of efficacy of the SR process in driving future practice, which may leave future practice open to other, less evidence-based directions. Krumholz [15] has recently highlighted the importance of method in generating clinical research that can effectively inform practice. Through this exercise we have shown that even in areas where the evidence might be thought to be strong, such as the aggregation of drug trials, the results were often not clear, and their utility for informing practice is therefore compromised. This may, in part, be a function of the analytic techniques used. To be useful in clinical practice, there is a need to develop more robust methodologies and to ensure that research processes are well described and sufficiently transparent to allow data aggregation to occur with confidence.

Furthermore, it was noted that some SRs presented inadequate information about the prevalence of harm, complications and potential adverse effects. For results to be as clinically meaningful and useful as possible, the presence or absence of these factors needs to be reported.

This review also emphasises the need for expertise and interpretation when developing clinical practice guidelines, particularly in the absence of robust Level 1 evidence and where clinical trial populations differ from patient groups. Indeed, it is certainly important to identify where the evidence does not support a particular line of practice. We argue, however, that the corollary is not disproven by this approach; that is, practice may still be improved by a particular intervention despite the null hypothesis, even though the evidence is not strong enough in a normative sense. There are many possible reasons why the level of evidence to support EBA will not always be strong enough to inform practice: assessing change in practice is difficult and robust methodologies have yet to be derived; the process is long-term and it will take some time for robust evidence to emerge; and the evidence provided to support clinical decision-making may no longer be useful in practical settings. Based on this review, data from SRs are only one part of the picture. Using systematic guideline development processes, such as the ADAPTE framework (http://www.adapte.org/www/), is an important part of this process.

Clearly there is a desire for contemporary and rigorous information upon which clinical treatment can be guided and individual clinical decisions made. The demand for this is evidenced in Cochrane’s own usage data, which states that “every day someone, somewhere searches The Cochrane Library every second, reads an abstract every two seconds and downloads a full-text article every three seconds” (http://www.cochrane.org: accessed 16/1/12).

The EBA have been viewed as a way of incorporating research into practice. However, they do not always have high utility or relevance for consistently assisting clinicians, particularly with groups not included in clinical trials. While the use of a single health issue may be viewed as a limitation of this paper, in using this common health issue, blood pressure management, as a lens we have raised questions about whether SRs, as the core of clinical guideline development, can reliably influence practice in an interventional way, and about the strength of evidence available to inform clinical practice in some key areas.

References

  1. Mykhalovskiy E, Weir L: The problem of evidence-based medicine: directions for social science. Soc Sci Med. 2004, 59 (5): 1059-1069. 10.1016/j.socscimed.2003.12.002.

  2. Cohen AM, Stavri PZ: A categorization and analysis of the criticisms of evidence-based medicine. Int J Med Informatics. 2004, 73 (1): 35-43. 10.1016/j.ijmedinf.2003.11.002.

  3. Little M: ‘Better than numbers…’ a gentle critique of evidence-based medicine. ANZ J Surg. 2003, 73 (4): 177-182. 10.1046/j.1445-1433.2002.02563.x.

  4. Rycroft-Malone J, Seers K: What counts as evidence in evidence-based practice?. J Adv Nurs. 2004, 47 (1): 81-90. 10.1111/j.1365-2648.2004.03068.x.

  5. Wall S: A critique of evidence-based practice in nursing: challenging the assumptions. Soc Theory Health. 2008, 6 (1): 37-53. 10.1057/palgrave.sth.8700113.

  6. Darlenski RB, Neykov NV: Evidence-based medicine: facts and controversies. Clin Dermatol. 2010, 28 (5): 553-557. 10.1016/j.clindermatol.2010.03.015.

  7. Broom A, Adams J: Evidence-based healthcare in practice: a study of clinician resistance, professional de-skilling, and inter-specialty differentiation in oncology. Soc Sci Med. 2009, 68 (1): 192-200. 10.1016/j.socscimed.2008.10.022.

  8. Egan BM, Zhao Y: US trends in prevalence, awareness, treatment, and control of hypertension, 1988–2008. JAMA. 2010, 303 (20): 2043-2050. 10.1001/jama.2010.650.

  9. Bero LA, Grilli R: Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings. The Cochrane Effective Practice and Organization of Care Review Group. BMJ. 1998, 317 (7156): 465-468. 10.1136/bmj.317.7156.465.

  10. Grol R, Grimshaw J: From best evidence to best practice: effective implementation of change in patients’ care. Lancet. 2003, 362 (9391): 1225-1230. 10.1016/S0140-6736(03)14546-1.

  11. Cordingley P: Research and evidence-informed practice: focusing on practice and practitioners. Cambridge J Educ. 2008, 38 (1): 37-52. 10.1080/03057640801889964.

  12. Evans D: Hierarchy of evidence: a framework for ranking evidence evaluating healthcare interventions. J Clin Nurs. 2003, 12 (1): 77-84. 10.1046/j.1365-2702.2003.00662.x.

  13. Mantzoukas S: A review of evidence-based practice, nursing research and reflection: levelling the hierarchy. J Clin Nurs. 2008, 17 (2): 214-223.

  14. Craig P, Dieppe P: Developing and evaluating complex interventions: new guidance. 2007, Oxford: Medical Research Council.

  15. Krumholz HM: Documenting the methods history: would it improve the interpretability of studies?. Circ Cardiovasc Qual Outcomes. 2012, 5: 418-419. 10.1161/CIRCOUTCOMES.112.967646.



Author information

Correspondence to Debra Jackson.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

DS conceptualized the project, analysed the data, and contributed to literature and internet searches and drafted parts of the manuscript. DJ analysed the data, and contributed to literature and internet searches and drafted parts of the manuscript. PN audited the data analysis, contributed to literature and internet searches and drafted parts of the manuscript. PMD reviewed the data analysis, contributed to literature and internet searches and drafted parts of the manuscript. DS, DJ, PN, PMD critically reviewed and revised many versions of the drafted manuscript. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Saltman, D., Jackson, D., Newton, P.J. et al. In pursuit of certainty: can the systematic review process deliver?. BMC Med Inform Decis Mak 13, 25 (2013). https://doi.org/10.1186/1472-6947-13-25
