Open Access
Open Peer Review

This article has Open Peer Review reports available.


Checklist for Early Recognition and Treatment of Acute Illness (CERTAIN): evolution of a content management system for point-of-care clinical decision support

  • Amelia Barwise1Email author,
  • Lisbeth Garcia-Arguello1,
  • Yue Dong1,
  • Manasi Hulyalkar1,
  • Marija Vukoja2,
  • Marcus J. Schultz3,
  • Neill K. J. Adhikari4,
  • Benjamin Bonneton5,
  • Oguz Kilickaya7,
  • Rahul Kashyap1,
  • Ognjen Gajic1 and
  • Christopher N. Schmickl1, 6
BMC Medical Informatics and Decision Making (BMC series – open, inclusive and trusted) 2016, 16:127

DOI: 10.1186/s12911-016-0367-3

Received: 17 May 2016

Accepted: 21 September 2016

Published: 3 October 2016

Abstract

Background

The Checklist for Early Recognition and Treatment of Acute Illness (CERTAIN) is an international collaborative project with the overall objective of standardizing the approach to the evaluation and treatment of critically ill patients world-wide, in accordance with best-practice principles. One of CERTAIN’s key features is clinical decision support providing point-of-care information about common acute illness syndromes, procedures, and medications in an index card format.

Methods

This paper describes 1) the process of developing and validating the content for point-of-care decision support, and 2) the content management system that facilitates frequent peer review and allows rapid updates of content across different platforms (CERTAIN software, mobile apps, PDF booklet) and different languages.

Results

Content was created based on survey results from acute care providers and validated using an open peer-review process. Over a 3-year period, CERTAIN content expanded to include 67 syndrome cards, 30 procedure cards, and 117 medication cards; 127 cards (59 %) have been peer-reviewed so far. Initially, MS Word® and Dropbox® were used to create, store, and share content for peer review. Recently, Google Docs® was used to make the peer-review process more efficient. However, neither of these approaches met our security requirements or had the capacity to instantly update the different CERTAIN platforms.

Conclusion

Although we were able to successfully develop and validate a large inventory of clinical decision support cards in a short period of time, commercially available software solutions for content management are suboptimal. Novel custom solutions are necessary for efficient management of a global point-of-care content system.

Keywords

Point-of-care, Decision-support tool, Checklist, Content, Infrastructure, Critical care, Software, Technology

Background

Checklists are a simple way to reduce errors in complex, high-risk environments. Widely used in aeronautics for decades, checklists were recently publicized in medicine by Gawande et al. [1, 2]. These observational studies demonstrated improvements in safety and outcomes when checklists were integrated into operating-room routines, in both high-resource and low-resource countries [1, 3]. A simulation study by the same group further suggests that checklists also significantly improve surgical care during emergencies, where rapid and correct decision-making is crucial to good patient outcomes [4].

Building upon these experiences and advances in informatics and human factor engineering, a novel electronic tool, the Checklist for Early Recognition and Treatment of Acute Illness and Injury (CERTAIN), is being developed by a large international collaboration with the overall objective of standardizing the approach world-wide to the evaluation and treatment of critically ill patients, in accordance with best-practice principles [5]. Similar to surgical checklists, CERTAIN may be particularly beneficial in low-resource settings with a scarcity of formally trained personnel [6].

During the evaluation of acutely decompensating patients, CERTAIN guides health-care providers through a structured approach, starting with a primary survey (ABCDE) followed by a secondary patient survey consisting of the reason for admission, past medical history, and the patient’s problem list. The latter is the gateway to the clinical decision support embedded into CERTAIN. In CERTAIN software, selection of a syndrome on the problem list leads to on-demand display of point-of-care key information in an index card format in the center of the screen, with recommendations regarding further diagnostic and therapeutic steps (Fig. 1). Similarly, point-of-care key information is readily available or “just one click away” for selected procedures and medications. The process of creating and maintaining concise, accurate and up-to-date information in index card format (“cards”) for clinical decision support is thus the cornerstone of CERTAIN’s success. Our experiences and lessons learned regarding this process are the subject of the remainder of this paper (information about other aspects of the CERTAIN project, including more analytical evaluations, is available elsewhere [5, 7–9]).
Fig. 1

Panel a shows CERTAIN’s main display facilitating a structured approach to acutely decompensating patients. Panel b shows the integrated on-demand clinical decision support for the syndrome card “shock” in the center of the screen

Methods

A priori we postulated that an ideal content management system for CERTAIN should have the following characteristics:

Content

The content should cover a wide variety of clinically important topics, be easy to read, contain useful point of care information based on up-to-date evidence and be validated by expert reviewers. The content would initially be produced and deployed in English but translated into local languages as the project develops. The content should also be applicable in care environments which may have resource limitations and adaptable to local circumstances. The information provided should be supported by key references, web-links, and videos (e.g. demonstrating procedures) as appropriate. Given our objective to provide structured bedside decision support based on existing guidance and evidence, we deliberately decided against a process involving the development of new guidelines.

Infrastructure

The data management system that supports the clinical content must restrict access to authorized personnel and allow frequent, automated back-ups. Information should be stored centrally with the capability to instantly update all the different CERTAIN platforms (software, mobile application, PDF-booklet available for download and print). The data management system must further provide ease of access for authors and reviewers to facilitate development and validation of the content.

Results

We developed content for the software using the following steps. First, Bonneton et al. undertook an international survey of critical care professionals to identify the most common medical syndromes, medications, and procedures in acute care [7]. This survey also assessed the types of information considered to be most pertinent by bedside care providers (e.g. diagnostic tests vs epidemiologic data) [7]. Based on these survey results, we defined three different card categories – syndromes, medications, procedures – and created templates for each (for examples see Additional files 1, 2 and 3). A Mayo Clinic-based research physician, the “content lead editor”, organized the development and validation process for all cards (see Additional file 4: Figure S1 and Additional file 5: Figure S2).

Creation

Specific cards are drafted by content experts using primary literature and published guidelines identified via multiple databases (e.g. Medline, EMBASE, National Guideline Clearinghouse, Cochrane Library, UpToDate). Both authors and reviewers are selected from the “expert panel”, a convenience sample of (currently) 81 physicians and 6 pharmacists from various backgrounds (e.g. critical care, anesthesia, emergency medicine), including many physicians involved in an ongoing study implementing CERTAIN in clinical practice [5].

Review and proofreading

Completed drafts of cards are then validated using an open peer-review process (Additional file 5: Figure S2): each new card is assigned to 2–3 reviewers chosen from the expert panel. Reviewers have the option to request reassignment to other cards based on their preference or expertise. Minor reviewer comments (e.g. grammar or spelling issues) are directly corrected by the content lead editor. Major comments are resolved by open discussion involving the original author, the other assigned reviewers, and the content lead editor who is supported by, and can seek further advice from, the “content management panel” at any time. This latter group consists of five senior physicians who are also part of the expert panel. Although all major comments to date have been resolved using this process, for content issues which remain unresolved, a mechanism exists to facilitate final arbitration via a modified Delphi process involving the entire expert panel [10].
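The review workflow described above can be sketched as a small state machine. The state and event names below are paraphrased from the text and are purely illustrative; they are not part of any actual CERTAIN implementation.

```python
# Hedged sketch of the card review lifecycle described in the text.
# State and event names are illustrative, not CERTAIN's actual vocabulary.
TRANSITIONS = {
    ("draft", "submitted_for_review"): "in_review",
    ("in_review", "minor_comments_fixed"): "finalized",        # fixed directly by the content lead editor
    ("in_review", "major_comments_resolved"): "finalized",     # resolved by open discussion
    ("in_review", "discussion_deadlocked"): "delphi_arbitration",  # escalation to the entire expert panel
    ("delphi_arbitration", "consensus_reached"): "finalized",
}

def next_state(state: str, event: str) -> str:
    """Advance a card through the review lifecycle; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```

For example, `next_state("in_review", "discussion_deadlocked")` yields `"delphi_arbitration"`, mirroring the modified Delphi escalation path described in the text.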

Publication and updates

After a card is proofed and finalized, it serves as the blueprint for updating the different CERTAIN platforms. To ensure that the content stays up to date, authors receive a request after one year to update their card with regard to changes in current evidence and guidelines, followed by the same review process as for a new card. In addition, this updating process can be triggered at any time by any CERTAIN user via an embedded feedback button in the software. While the main workflow for development and validation is based on the English content, cards are currently being translated simultaneously into other languages, including Spanish, Chinese, Turkish, Croatian, Serbian, and Polish. We currently have 60 Spanish, 176 Chinese, 152 Turkish, 20 Serbo-Croatian, and 65 Polish cards. The translation process is similar to the general validation process: experienced bilingual clinicians translate the cards, which are then reviewed by bilingual peers prior to incorporation into the various CERTAIN platforms (for more details see Additional file 6).
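The annual update trigger described above amounts to a simple expiry check. A minimal sketch, assuming a hypothetical card record with a `last_reviewed` date (the field name is our invention, not CERTAIN’s schema):

```python
from datetime import date, timedelta

def cards_due_for_update(cards, today, validity_days=365):
    """Return cards whose last validated revision is at least one year old."""
    cutoff = today - timedelta(days=validity_days)
    return [c for c in cards if c["last_reviewed"] <= cutoff]

# Hypothetical card records, for illustration only.
cards = [
    {"title": "Shock", "category": "syndrome", "last_reviewed": date(2015, 6, 1)},
    {"title": "Central line placement", "category": "procedure", "last_reviewed": date(2016, 9, 1)},
]
due = cards_due_for_update(cards, today=date(2016, 10, 3))  # only the "Shock" card is due
```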

Ownership

The coordinating center overseeing content management across all platforms is the Mayo Clinic, Rochester. This role encompasses the development, review, updating, and translation of content and infrastructure; it involves organizing the system, reminding authors and reviewers about update deadlines, recruiting authors, reviewers, and translators for new content, and working with programmers to find technical solutions for data-management issues. Individual card “ownership” and authorship are shared between the author and reviewers and acknowledged on all CERTAIN output (software, mobile app, PDF). If any original authors or reviewers cannot maintain “ownership” of a card, the coordinating center assigns new individuals to adopt that card.

Discussion

Card authorship

While the general process outlined above has remained constant over time, many practical details were refined in response to changing circumstances and needs, as summarized in Table 1. For example, one major change was the increase in the number of authors (see Appendix 6 for a list of authors and reviewers). Initially, authorship was restricted to a few members of our research group to facilitate rapid growth of the card inventory and consistent standards. However, given the need for annual revisions, it has not been feasible for one author to take ownership of more than a few cards. The current inventory of cards, created by 36 authors, includes 67 syndromes, 117 medications, and 30 procedures (for a full list of cards see Appendix 5). Thus, over time we recruited more collaborators, often colleagues of the original authors and reviewers, as well as fellows in the pulmonary and critical care department at our institution. So far, 127 cards (59 %) have been reviewed, with each reviewer assigned an average of two cards (range 1–7).
Table 1

Summary of the evolution of the content management system

aMETRIC = Multidisciplinary Epidemiology and Translational Research in Intensive Care

Software infrastructure

Cards were initially created, stored, and modified in Microsoft Word®, using Dropbox® links for sharing. Reviewers could download the Word® files via those links, revise and comment on the documents, and return them to the content lead editor as e-mail attachments. Unfortunately, Dropbox® did not support real-time collaboration among multiple authors. While technologically simple, this method became impractical because of the large amount of time needed to organize the collaboration so that reviewers could see each other’s comments and thereby hold a virtual discussion. The potential for missed e-mails, slow file updates, and the lack of version control became unworkable for the content lead editor. We improved this process by switching to Google Docs®, which allows sharing of text documents via links and simultaneous editing of the source files by multiple reviewers. Drawbacks of this approach included loss of much of the initial formatting and the inability to easily create and update a PDF booklet; Microsoft Word® had allowed all single documents to be linked together in one master file with an automated table of contents, creating a booklet which could be updated within minutes. In addition, although freely available in general, access to Google Docs® is restricted in certain countries (e.g. China) where some of our authors and reviewers live, thus limiting some collaborative opportunities.

Although the various open-source content management systems reviewed by Mooney et al. are geared towards creators of wikis or blogs, and thus not directly applicable to our project, we agree with their conclusion that when choosing an appropriate tool “first and foremost, security should be evaluated as well as the aptitude, availability and coverage of the user support community” [11]. Neither Google Docs® nor Dropbox® met our security needs, which have several aspects. First, cards must be editable only by invited, qualified individuals, to ensure the high quality of content that may influence clinical decision making and thus health outcomes. Second, local clusters (e.g. a specific hospital) or individual users should be able to customize content to reflect local or individual preferences, with those modifications editable and visible only by that cluster or user, respectively. Third, the data must be backed up automatically on local servers at frequent intervals to ensure restorability and offline work in case of a disrupted internet connection or cloud-server failure. Of note, our content management system raises no patient-confidentiality issues: while personal data can be entered into CERTAIN software (to facilitate charting and debriefing), the cards themselves contain no patient information.

Furthermore, both Google Docs® and Dropbox® required excessive work for simple tasks, including manually updating the software and mobile applications whenever content changed, sending e-mail reminders to reviewers to ensure compliance with peer-review deadlines, and keeping track of the cards’ expiration dates. In the absence of readily available software meeting our requirements, we have therefore started developing a customized software solution that will allow authors and reviewers to co-create, edit, and review content directly on a secure cloud server, with the capability to automatically update the different platforms, including the PDF booklet (see Table 2 and Additional file 7: Figure S3).
Table 2

Infrastructure of the customized content management system

Infrastructure services from AWS

Function

EC2

EC2 and an Amazon Machine Image were used to create the virtual machines; the Apache web server and a PHP programming environment were then installed inside each instance as our web/application server environment.

S3 Storage

Used for saving our application development files

CloudFront

Used as a Content Delivery Network (CDN) to distribute content to end users with low latency and high data transfer speeds.

MongoLab

This document-oriented database service, hosted on top of AWS EC2, provided the persistence layer for our content storage and back-up service.

VPC

Our EC2 servers and RDS database were placed into a VPC in order to provide a more secure, isolated private network for all our cloud services.

IAM and Trusted Advisor

Security is always a top priority for an application related to a clinical study. By adopting these two services, we can create a security strategy that enables us to securely control access to AWS services and resources for our users. Using IAM, we can create and manage AWS users and groups, and use permissions to allow or deny their access to our CERTAIN CMS application resources.

Elastic load balancer

This AWS on-demand scaling load balancer and monitoring system ensures that our application can be elastically scaled to support global usage efficiently.

Software components:

CERTAIN CMS web admin

An HTML5-based web administration portal that manages all the medical cards.

CMS contents APIs

The web service server used by the different CERTAIN clients (Flash software, CMS admin, and mobile app).

MongoDB

MongoDB is the persistence layer for our content management system

To build this scalable application platform for our study, we selected Amazon Web Services (AWS) as our Infrastructure-as-a-Service (IaaS) provider. AWS is a global leader in this area, currently providing more than 40 cloud services to IT developers across 11 geographical regions world-wide. After assessing quality, security risk, time, and cost factors, we used the infrastructure services from AWS listed in Table 2 to build our CERTAIN CMS platform.
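Because a document store such as MongoDB keeps each card as a self-contained record, a single document can drive every output platform. The sketch below shows one plausible card document; every field name is our assumption, not CERTAIN’s actual schema.

```python
import json

# Illustrative card document; all field names here are assumptions,
# not CERTAIN's actual MongoDB schema.
card = {
    "card_id": "syndrome-shock",
    "category": "syndrome",        # one of: syndrome | medication | procedure
    "language": "en",
    "status": "peer_reviewed",     # e.g. draft | in_review | peer_reviewed
    "authors": ["Author A"],
    "reviewers": ["Reviewer B", "Reviewer C"],
    "sections": {
        "differential": ["septic", "cardiogenic", "hypovolemic", "obstructive"],
        "initial_management": ["secure IV access", "fluid challenge", "vasopressors if refractory"],
    },
}

# Serializing one document per card is what makes instant cross-platform
# updates (software, mobile app, PDF booklet) straightforward.
payload = json.dumps(card)
```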

While we have struggled somewhat to find the right infrastructure, we have made much progress in developing and validating the content itself, despite essentially no funding. Within three years we were able to create more than 200 cards, more than half of which have been validated. In large part this was only possible by directly involving many of the CERTAIN end-users in the content-creation process, thus reducing the workload per person while increasing the users’ “buy-in” into the CERTAIN concept.

While our open peer-review approach may increase the risk of “herding” (i.e. reviewers being more likely to be influenced by, and to agree with, their peers’ potentially incorrect opinions) [12], we intentionally adopted this process because open peer review is generally thought to increase accountability, fairness, and transparency, with some evidence showing that it leads to better-quality reviews, while being preferred by authors [13–16]. Despite the theoretical risk that identifiable reviewers may be more hesitant to criticize their peers, in practice the loss of anonymity does not appear to significantly affect reviewers’ decisions to request major revisions or reject manuscripts [15]. While “open peer review” generally denotes only that reviewers’ identities are revealed to the authors (and possibly to the readers, if the manuscript under review is eventually accepted), we further enabled reviewers to be aware of each other’s identities and opinions as well. This may further increase the risk of “herding”, but at a time when medical knowledge doubles approximately every 3 to 4 years, we feel it is best to have an open and maximally transparent discourse involving as many content experts as possible [17]. This principle of maximal inclusion of content experts also underlies the mechanism for resolving complex issues using a modified Delphi process involving the entire expert panel, as well as our efforts to encourage each CERTAIN user to simultaneously function as a peer reviewer via the feedback option within the software.

A crucial challenge in developing globally applicable decision support is to provide best-practice recommendations that remain relevant in low-resource practice settings, where some tests and interventions may not be available. We tried to solve this conundrum by crafting cards based on best evidence assuming a resource-rich setting, while allowing users to add permanent contextualizing notes to each card. In the future we plan to extend this feature to clusters, so that, for instance, a hospital can add a specific recommendation regarding antibiotics for pneumonia that takes into account local resistance patterns and drug availability, visible to all providers affiliated with that particular hospital or cluster.
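The per-user card notes (and planned per-cluster notes) amount to layering local content over the shared base card. A minimal sketch of that overlay idea, with invented field names:

```python
def render_card(base_card, local_notes):
    """Combine the shared evidence-based card with cluster- or user-specific notes.

    The base card is never modified, so the canonical shared content stays intact.
    """
    rendered = dict(base_card)           # shallow copy of the shared card
    rendered["local_notes"] = list(local_notes)
    return rendered

# Hypothetical example: a hospital appends a local antibiotic caveat.
base = {"title": "Pneumonia", "initial_antibiotics": "per current guidelines"}
hospital_notes = ["Local resistance: avoid macrolide monotherapy"]
view = render_card(base, hospital_notes)  # base itself remains unchanged
```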

There have been many other attempts to harness technologic progress to improve and standardize health care in under-developed and under-served regions. For example, within the United States about 6 % of all intensive care patients are now (co-)managed by a telemedicine intensivist, who remotely monitors patients’ condition in real time and provides support to the onsite personnel [18]. This remote expertise and standardization of care appears to improve outcomes, but major barriers include the need for, and cost of, 24/7 remote experts, variable acceptance by onsite personnel (who may feel monitored rather than supported), and variable integration and interoperability with the local software environment [18]. In an interesting extension of the telemedicine concept to resource-poor countries, Celi et al. recognized that a major infrastructure asset is the widespread availability of mobile phones, which allow access to and exchange of information even in settings devoid of wireless internet and computers, and thus created a “cell phone-facilitated clinical system” [19]. This system, which is integrated into OpenMRS (an open-source medical records system), allows users (e.g. patients or local health workers) to send medical information, including images and voice messages, to remote specialists, who in turn can provide live decision support. While we encountered similar software issues, as described above, CERTAIN alleviates the acceptance issue by providing on-demand, interactive best-practice advice to onsite providers, empowering them to use, ignore, or modify the information at their own discretion. Additionally, and similar to the project by Celi et al., CERTAIN tries to address the challenge of bringing technologic support to areas with minimal infrastructure and/or internet capability by offering decision support via various platforms, including a paper version as well as a mobile phone app.

Another major challenge at the intersection of technology and health care, which we encountered with CERTAIN itself, is the difficulty of balancing the complexity of the environment the tool is designed for (i.e. evaluation of critically ill patients) against the need for a simple interface, since in these situations time is generally of the essence. For example, during a recent simulation study participants largely provided positive feedback about CERTAIN in general, but felt that the software should become somewhat more intuitive. It is reassuring, though, that in this simulation study CERTAIN improved health-care providers’ performance [20]. It is unclear, however, whether the benefit is due to its embedded decision support or to other components, such as teaching a structured approach to patient care, safety culture, and closed-loop communication strategies. A before-after quality-improvement study is underway to assess the impact of CERTAIN on care processes and patient outcomes when implemented in clinical practice in multiple intensive care units (ICUs) across five continents, after training local personnel remotely via live video stream [5, 9]. This study will significantly increase the number of users who can provide feedback about how well the decision support aligns with frontline providers’ needs. Based on this feedback, we are continuously improving the decision support system, with a special focus on workflow integration, data entry and output, standards and transferability, and knowledge maintenance [21]. If CERTAIN is shown to improve processes of care or patient outcomes, future research will be needed to determine the relative contribution of its different components.

Conclusions

Although we were able to successfully develop and validate a large inventory of clinical decision support cards in a short period of time, readily available software products are suboptimal for use as content management platforms, requiring us to pursue a customized software solution.

Notes

Abbreviations

ABCDE: 

Airway, breathing, circulation, disability, exposure

CERTAIN: 

Checklist for early recognition and treatment of acute illness

Google docs: 

Google documents

ICU: 

Intensive care unit

Mobile apps: 

Mobile applications

MS Word: 

Microsoft word

pdf: 

Portable document format

Declarations

Acknowledgements

We would like to thank the many people who contributed to CERTAIN over the years. In particular we would like to thank Lei Fan for the fantastic work on developing and maintaining the CERTAIN software. Below is a list of authors and reviewers who have been involved in CERTAIN since its inception. Thank you to all who contributed their time and talent over the years and are still currently doing so. Thank you also to Midhat Mujic for his hard work with content uploading and support.

Funding

This work was supported in part by the Mayo Clinic Critical Care Research Committee, the Laerdal Foundation, the Mayo Clinic Endowment for Education Research Award for CERTAIN training, and the Mayo Clinic Department of Medicine Innovation Award for CME Education.

Availability of data and materials

This is a descriptive paper. To access CERTAIN go to www.icertain.org/. Please contact the corresponding author for further details if necessary.

Authors’ contributions

AB, LG, YD, RK, NA, MS, MH, CS: acquisition, analysis, and interpretation of data. OG, MV, BB, OK: study design and acquisition of data. All authors were involved in drafting and revising the manuscript and gave final approval of the version to be published.

Competing interests

Lisbeth Garcia-Arguello, Yue Dong, Manasi Hulyalkar, Marija Vukoja, Marcus J Schultz, Neill KJ Adhikari, Benjamin Bonneton and Christopher N Schmickl have no actual or potential conflicts of interest to disclose.

Mayo Clinic, Dr. Ognjen Gajic and Dr. Rahul Kashyap have a potential financial conflict of interest related to this research. CERTAIN software has been licensed to Ambient Clinical Analytics. Dr. Amelia Barwise has potential financial conflict of interest due to spousal connections with Ambient Clinical Analytics. The research was reviewed by the Mayo Clinic Conflict of Interest Review Board and conducted in compliance with Mayo Clinic Conflict of Interest policies.

Consent for publication

NA.

Ethics approval and consent to participate

NA.

Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Authors’ Affiliations

(1)
Multidisciplinary Epidemiology and Translational Research in Intensive Care (M.E.T.R.I.C.), Division of Pulmonary and Critical Care Medicine, Mayo Clinic
(2)
The Institute for Pulmonary Diseases of Vojvodina, Sremska Kamenica, Faculty of Medicine, University of Novi Sad
(3)
Academisch Medisch Centrum, Universiteit van Amsterdam
(4)
Department of Critical Care Medicine, Sunnybrook Health Sciences Centre and University of Toronto
(5)
Emergency Department, René Arbeltier Hospital
(6)
Department of Medicine, Boston Medical Center, Boston University School of Medicine
(7)
Department of Anesthesiology and Reanimation, Gulhane Military Medical Faculty

References

  1. Haynes AB, Weiser TG, Berry WR, Lipsitz SR, Breizat AH, Dellinger EP, et al. A surgical safety checklist to reduce morbidity and mortality in a global population. N Engl J Med. 2009;360(5):491–9.
  2. Gawande A. The Checklist Manifesto: How to Get Things Right. New York: Picador; 2011.
  3. Kwok AC, Funk LM, Baltaga R, Lipsitz SR, Merry AF, Dziekan G, et al. Implementation of the World Health Organization surgical safety checklist, including introduction of pulse oximetry, in a resource-limited setting. Ann Surg. 2013;257(4):633–9.
  4. Arriaga AF, Bader AM, Wong JM, Lipsitz SR, Berry WR, Ziewacz JE, et al. Simulation-based trial of surgical-crisis checklists. N Engl J Med. 2013;368(3):246–53.
  5. Vukoja M, Kashyap R, Gavrilovic S, Dong Y, Kilickaya O, Gajic O. Checklist for early recognition and treatment of acute illness: international collaboration to improve critical care practice. World J Crit Care Med. 2015;4(1):55–61.
  6. Vukoja M, Riviello E, Gavrilovic S, Adhikari NK, Kashyap R, Bhagwanjee S, et al. A survey on critical care resources and practices in low- and middle-income countries. Glob Heart. 2014;9(3):337–42.e1-5.
  7. Bonneton B, Adhikari N, Schultz M, Kilickaya O, Senkal S, Gavrilovic S, et al. Development of bedside decision support cards based on the information needs of acute care providers. Crit Care Med. 2013;41(12):A30–A1.
  8. Sevilla Berrios R, O’Horo J, Schmickl C, Erdogan A, Chen X, Garcia Arguello L, et al. Evaluation of clinician performance in the assessment and management of acutely decompensated patients with and without electronic checklist: a simulation study. Poster presented at ESICM LIVES, Barcelona; October 2014.
  9. CERTAIN official website. http://www.icertain.org/ (last accessed 1/15/2016).
  10. Jones J, Hunter D. Consensus methods for medical and health services research. BMJ. 1995;311(7001):376–80.
  11. Mooney SD, Baenziger PH. Extensible open source content management systems and frameworks: a solution for many needs of a bioinformatics group. Brief Bioinform. 2008;9(1):69–74.
  12. Park IU, Peacey MW, Munafo MR. Modelling the effects of subjective and objective decision making in scientific peer review. Nature. 2014;506(7486):93–6.
  13. Godlee F, Gale CR, Martyn CN. Effect on the quality of peer review of blinding reviewers and asking them to sign their reports: a randomized controlled trial. JAMA. 1998;280(3):237–40.
  14. McNutt RA, Evans AT, Fletcher RH, Fletcher SW. The effects of blinding on the quality of peer review. A randomized trial. JAMA. 1990;263(10):1371–6.
  15. van Rooyen S, Godlee F, Evans S, Black N, Smith R. Effect of open peer review on quality of reviews and on reviewers’ recommendations: a randomised trial. BMJ. 1999;318(7175):23–7.
  16. van Rooyen S, Godlee F, Evans S, Smith R, Black N. Effect of blinding and unmasking on the quality of peer review: a randomized trial. JAMA. 1998;280(3):234–7.
  17. Densen P. Challenges and opportunities facing medical education. Trans Am Clin Climatol Assoc. 2011;122:48–58.
  18. Kumar S, Merchant S, Reynolds R. Tele-ICU: efficacy and cost-effectiveness of remotely managing critical care. Perspect Health Inf Manag. 2013;10:1f.
  19. Celi LA, Sarmenta L, Rotberg J, Marcelo A, Clifford G. Mobile Care (Moca) for remote diagnosis and screening. J Health Inform Dev Ctries. 2009;3(1):17–21.
  20. Berrios RS, O’Horo J, Schmickl C, Erdogan A, Chen X, Arguello LG, et al. Prompting with electronic checklist improves clinician performance in medical emergencies. Crit Care Med. 2014;42(12):A1424.
  21. Berner ES. Clinical decision support systems: state of the art. AHRQ Publication No. 09-0069-EF. Rockville, MD: Agency for Healthcare Research and Quality; June 2009. Available at https://healthit.ahrq.gov/search/09-0069 (last accessed 1/15/2016).

Copyright

© The Author(s). 2016