
Establishing an expert consensus for the operational definitions of asthma-associated infectious and inflammatory multimorbidities for computational algorithms through a modified Delphi technique

Abstract

Background

A subgroup of patients with asthma has been reported to have an increased risk of asthma-associated infectious and inflammatory multimorbidities (AIMs). To systematically investigate the association of asthma with AIMs in a large patient cohort, it is desirable to leverage a broad range of electronic health record (EHR) data sources to identify AIMs automatically, accurately, and efficiently.

Methods

We established an expert consensus on an operational definition for each AIM from the EHR through a modified Delphi technique. A series of questions about the operational definitions of 19 AIMs (11 infectious diseases and 8 inflammatory diseases) was generated by a core team of experts who considered feasibility, the balance between sensitivity and specificity, and generalizability. Eight internal and 5 external expert panelists were invited to individually complete a series of online questionnaires and provide judgement and feedback throughout three sequential internal rounds and two external rounds. Panelists’ responses were collected, descriptive statistics tabulated, and results reported back to the entire group. Following each round, the core team made iterative edits to the operational definitions until a moderate (≥ 60%) or strong (≥ 80%) level of consensus among the panel was achieved.

Results

Response rates for each Delphi round were 100% in all 5 rounds with the achievement of the following consensus levels: (1) Internal panel consensus: 100% for 8 definitions, 88% for 10 definitions, and 75% for 1 definition, (2) External panel consensus: 100% for 12 definitions and 80% for 7 definitions.

Conclusions

The final operational definitions of AIMs established through a modified Delphi technique can serve as a foundation for developing computational algorithms to automatically identify AIMs from EHRs to enable large scale research studies on patient’s multimorbidities associated with asthma.


Background

Asthma is the most common chronic illness of childhood and one of the five most burdensome chronic diseases in US adults [1,2,3]. Our group and others have demonstrated asthma’s impact on the risks of a broad range of infectious and inflammatory diseases, namely asthma-associated infectious and inflammatory multimorbidities (AIMs), an under-recognized health threat to adults and children with asthma [4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26]. The Centers for Disease Control and Prevention identifies asthma as an independent risk factor for invasive pneumococcal diseases. At present, the 2008 US and 2014 Canadian pneumococcal vaccine policies recommend a single dose of 23-valent pneumococcal polysaccharide vaccine (PPV-23) for adults with asthma aged 19–64 years [27,28,29]. A recent prospective cohort study showed that asthma’s impact on the risk of any infection, measured as a population attributable risk percent, was similar to that of diabetes mellitus (DM) (2.2% vs. 2.9%, respectively) [8]. Impairment of both innate and adaptive immunity is the currently proposed mechanism for this association [9]. To mitigate the risks and outcomes of AIMs through clinical care and research, it is necessary to develop innovative strategies for identifying and characterizing the subgroup of children with asthma at risk of AIMs at a population level, especially for studies requiring precision, such as mechanistic studies.

For example, to study the nature of the association of asthma with the risk of pneumonia, one must identify pneumonia by more than the inherently limited International Statistical Classification of Diseases (ICD) codes. The additional use of practical, predetermined criteria derived from guidelines and expert panels, leveraging a broad range of EHR data sources (including free text of physician diagnoses, key terms referring to pneumonia in chest x-ray reports, and clinical notes), is crucial for accurate and efficient identification in EHR-based studies. This type of manual chart review is feasible for smaller studies but becomes burdensome when applied to large, population-based studies investigating numerous AIMs. Our group has developed and applied multiple computational algorithms (e.g., natural language processing [NLP] algorithms) to automate the chart review process with regard to the two existing asthma criteria, asthma prognosis, and other asthma outcomes [30,31,32,33]. These algorithms fully leverage EHRs to phenotype asthma effectively and efficiently, making asthma care and research scalable [30,31,32]. On the other hand, despite the significance of AIMs as a major health threat to people with asthma, computational algorithms for identifying AIMs are largely unavailable because the operational definitions critical for developing such algorithms are lacking.

Deriving operational definitions based on the consensus of expert panels facilitates transparency and interpretability of downstream computational algorithms. Computational algorithms for a broad range of health conditions developed by different institutions using different approaches are available through eMERGE [34] and PheKB [35, 36]. However, very few studies report the development process of a rigorous operational definition for computational algorithms that can be reused systematically in a new area [37].

Herein we demonstrate the process and results of establishing an operational definition for each AIM for the pediatric population through consensus building among internal and external experts using a modified Delphi method. The Delphi method relies on a group of experts who provide sequential levels of anonymous responses and feedback through a series of questionnaires to reduce the range of responses and arrive at a predetermined level of consensus. This consensus-building method is becoming a popular means of generating operational definitions and practice guidelines, as evidenced by recent work published in the fields of Otolaryngology, Pediatric Critical Care, and Geriatrics, to name but a few [38,39,40,41,42].

Methods

The Delphi method has a myriad of applications and is often used in health services research, providing controlled feedback and systematic progression toward consensus among a panel of experts over a series of voting rounds [43,44,45,46]. A modified process replaces the initial open-ended questionnaire phase with a pre-populated set of statements that are reviewed and voted on by a panel of experts [47,48,49]. This modified Delphi method for consensus building serves as the methodological framework for researchers to develop and apply computational algorithms for AIMs.

Participants

A core team of five clinicians and scientists drafted the initial set of operational definitions. The internal expert panel consisted of eight physicians from a single institution practicing in the fields of Rheumatology (TM, AO), Infectious Disease (CH, ER), Gastroenterology (IA, MG) and Allergy (MP, AR). The five-member external panel of experts consisted of physicians practicing in the fields of Family Medicine/Epidemiology, Infectious Disease, Rheumatology, Pediatric Critical Care, and Gastroenterology all within the United States (see the Acknowledgement section). Criteria for panel inclusion included national recognition as an expert as evidenced through multiple peer reviewed publications in their field, presentations at national conferences, and leadership roles in professional societies. Participants were informed that participation in the project would require them to complete multiple Delphi rounds during which they would complete an online questionnaire, which would take 5–7 min. See Table 1 for detailed demographics of panel members.

Table 1 Participants’ demographic information

Questionnaire development

To generate the list of 19 potential AIMs and their operational definitions, the core team conducted (1) literature reviews for each AIM and (2) a manual review of all ICD-9/10 codes for 24,003 electronic health records of children born in Olmsted County, Minnesota, between 1997 and 2016. These definitions were categorized as infectious disease conditions (11) and inflammatory disease conditions (8) (Fig. 1).

Fig. 1 Overall process for identifying 19 AIMs and their operational definitions

The definitions were presented via an online questionnaire distributed through email and administered using Qualtrics software (version 2017, Provo, UT). Each operational definition included one or more proposed criteria statements linked by conjunctions such as “and” or “or”. Respondents were asked to respond Yes or No to the prompt “Do you agree with this?” for each of the operational definitions (Fig. 2). Respondents were instructed to keep the following questions in mind:

  1. Is the operational definition of sufficient clinical value to warrant inclusion?

  2. Is the wording of the operational definition clear and precise enough to avoid misinterpretation?

Fig. 2 Example of the Delphi process and iterative submission of operational definitions for AIMs (e.g., pneumonia). Three internal and two external rounds were completed sequentially

If the response was No, respondents were prompted to suggest specific changes in the subsequent free text space. All responses and comments were reviewed, and edits were incorporated into the next iteration of definitions. Figure 2 shows an example of the survey process. Moderate consensus was defined as 60–79% agreement among respondents, and strong consensus was defined as ≥ 80% agreement among respondents for each AIM.
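The agreement arithmetic behind these thresholds is straightforward. As an illustrative sketch (not part of the study’s software; the function name and vote encoding are assumptions for this example), a round’s Yes/No votes for one definition can be scored against the moderate (60–79%) and strong (≥ 80%) cutoffs as follows:

```python
def consensus_level(votes):
    """Classify one Delphi round's Yes/No votes for a single definition.

    votes: list of booleans, True meaning the panelist answered "Yes"
    (agrees with the operational definition).
    Returns (agreement percentage, category), where the category uses the
    study's thresholds: "strong" (>= 80%), "moderate" (60-79%), or "none".
    """
    agreement = 100.0 * sum(votes) / len(votes)
    if agreement >= 80:
        category = "strong"
    elif agreement >= 60:
        category = "moderate"
    else:
        category = "none"
    return agreement, category

# Example: 7 of the 8 internal panelists agree -> 87.5%, strong consensus.
pct, cat = consensus_level([True] * 7 + [False])
```

A definition scoring "none" would be revised by the core team and resubmitted in the next round, mirroring the iteration described above.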

Internal expert panel (IEP) voting rounds

The IEP members were given 7 days to complete the online questionnaire. Three follow-up emails were sent to non-respondents on the 3rd, 4th and 7th day which resulted in completion of the survey by all 8 IEP members. Core team members reviewed all response data and drafted modified operational definitions as necessary. These modified definitions were then resubmitted for review through an online questionnaire during the next Delphi round until consensus (≥ 60%) was achieved. Rounds two and three followed an identical process including the 3 subsequent reminders.

External expert panel (EEP) voting rounds

A five-member external expert panel (EEP) was enlisted to confirm consensus on the 19 operational definitions, which had achieved moderate or greater consensus among the IEP members. Two Delphi voting rounds were conducted following the same process as with the IEP until ≥ 60% agreement was achieved. Additional file 1: Fig. 1 shows the questionnaire used in the 2nd round for the EEP.

This project was approved by the Mayo Clinic Institutional Review Board (#14-009934).

Results

Definitions of AIMs and response rate

In this study, 19 operational definitions were generated by a core team and then updated based on three sequential rounds with 8 IEP members, and two rounds with 5 EEP members from various institutions (Table 2). Response rate from participants was 100% for each round.

Table 2 Final operational definition of 19 asthma-associated infectious and inflammatory comorbidities

Percentage agreement rate for each AIM

Percentage agreement rate for each definition of AIMs is summarized in Table 3.

Table 3 The consensus level attained for each of the 19 operational definitions of asthma associated infectious and inflammatory multimorbidities (AIMs)

Consensus levels achieved were as follows: (1) internal panel consensus: 100% for 8 definitions, 88% for 10 definitions, and 75% for 1 definition; (2) external panel consensus: 100% for 12 definitions and 80% for 7 definitions. In total, 18 of 19 definitions achieved strong consensus (≥ 80%) in both internal and external rounds. The single definition that did not reach strong consensus concerned Kawasaki disease (KD), which achieved a consensus rating of 75% by the final internal round. Insight into this lower consensus comes from the comments of the 2 panelists who did not agree with the definition: “How will you handle a “possible” or “rule out” diagnosis? I think the IVIG (Intravenous immunoglobulin) treatment requirement is helpful” and “I agree with the definition, but I am not sure if this happens in practice (e.g. I don’t believe specialists are always consulted in the inpatient setting)”. A treatment condition (e.g. IVIG) was initially included in the definition, but the panel proposed removing it, and the core team accepted and updated the definition after the second internal round. Thus, 7 of 8 panelists actually agreed with the definition; one panelist selected “No” solely because of the electronic format (the panelist had a comment to offer, and the free-text box was accessible only after selecting “No”).

Feedback from the panels and process of modifications

The reasons for the disagreement from expert panelists were summarized as follows: (1) requiring more confirmative methods (53%)—e.g. specialist diagnosis and laboratory findings, (2) definition is too narrow (22%), (3) wording of the operational definition needs clarification (19%), (4) clarifying the condition of “period” by stating a definitive span of time, in the cases of recurrent or chronic disease (4%), and (5) others—e.g. editorial error. The core team reviewed the suggestions made by the internal and/or external panelists with consideration given to the following points: (1) existing literature on the operational definition of each AIM, (2) balance between sensitivity and specificity of each condition, (3) generalizability when used at other institutions, and (4) feasibility of development of computational algorithm.

Discussion

Response rates were 100% in all 5 Delphi rounds. Using a modified Delphi technique, we achieved strong consensus (≥ 80%) for the operational definitions of all AIMs except one (75% at the final internal round); these definitions can be applied to computational algorithms. Overall, this process adds accuracy and reliability to studies of the association between asthma and AIMs.

At the end of three sequential rounds, 7 to 8 of the eight internal panelists (depending on the specific AIM being evaluated) reached agreement on the final operational definition of each AIM, resulting in 88–100% consensus. For example, at the end of the first round, 4 AIMs had achieved a consensus of less than 70%, but through the iterative process of feedback and modification, we reached moderate to strong consensus (≥ 75%) for all AIMs. Upon sending the updated definitions to the external panelists, 5 AIMs still had a consensus of less than 70% at the end of the second and final round, including recurrent otitis media, recurrent sinusitis, autoimmune thyroiditis, and diabetes types 1 and 2. Of these 5, we modified the definitions of the three inflammatory diseases to make them more specific, but proposed keeping the definitions of the two infectious diseases with further clarifications. As a result, 4 to 5 of the 5 panelists (80–100%) reached agreement on the final definition of each AIM. Given their different specialties and practice settings, the feedback from the group of external experts was very helpful for ascertaining the generalizability of these operational definitions in other study settings.

This novel study is the first to use a modified Delphi method to construct operational definitions for each AIM, enabling us to develop computational algorithms to identify AIMs from EHRs. After considering the literature, feasibility (data accessibility), specificity (for mechanistic studies), and generalizability (for implementation at other institutions), the core team decided the balance between specificity and sensitivity of each definition in response to panel feedback, updating definitions as deemed reasonable.

The main concern of the panelists was the need for other diagnostic methods in the operational definitions. For example, the operational definitions of six inflammatory diseases, including Kawasaki disease, autoimmune thyroiditis, diabetes types 1 and 2, inflammatory bowel disease, and the arthritis group (juvenile rheumatoid arthritis, juvenile idiopathic arthritis, or rheumatoid arthritis), were defined only by physician diagnosis, including a specialist’s diagnosis at least once. The initial operational definition required more than two diagnoses by a physician, but in accordance with feedback from the panelists, it was modified to require a specialist’s diagnosis at least once. This is deemed reasonable for confirmation, as these diseases are generally confirmed by specialists who see the patient after referral from a primary care physician or a clinician at another medical institution. Furthermore, the precision of some of the AIM definitions was improved via the panelists’ suggestions of adding a laboratory test or treatment condition (e.g. urinary tract infection (UTI)). For example, historically, a varicella diagnosis was made clinically. Since the introduction of the varicella vaccine, varicella symptoms have become milder, with fewer lesions, which has made clinicians more conservative in diagnosing varicella. Evaluation and management of varicella cases in outpatient settings is still primarily based on clinical grounds; unless there is a public health concern, viral testing is infrequently ordered. Thus, we proposed the operational definition as physician diagnosis of varicella AND/OR positive laboratory test. Having said that, our group has a project focused on understanding the immune mechanisms of AIMs in relation to asthma, and we therefore took a generally conservative stance, keeping the definition even though it increases specificity at the cost of potentially reducing sensitivity.
Depending on the scope of the study, other investigators can revise these definitions for each AIM, tailoring them to their particular study (e.g. balancing sensitivity vs. specificity).
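To illustrate how such a criterion could feed a downstream computational algorithm, here is a hypothetical sketch of the varicella-style rule “physician diagnosis AND/OR positive laboratory test”. The record layout and field names are assumptions for this example, not the study’s actual implementation:

```python
def meets_varicella_definition(record):
    """Return True if a patient's EHR-derived facts satisfy the rule
    "physician diagnosis of varicella AND/OR positive laboratory test".

    record: dict of boolean flags extracted upstream (e.g., by an NLP
    algorithm over clinical notes and by structured lab queries).
    Field names here are illustrative only.
    """
    has_diagnosis = record.get("physician_dx_varicella", False)
    positive_lab = record.get("varicella_lab_positive", False)
    # "AND/OR" in the published definition is an inclusive OR:
    # either criterion alone, or both together, qualifies.
    return has_diagnosis or positive_lab

meets_varicella_definition({"physician_dx_varicella": True})  # True
meets_varicella_definition({"varicella_lab_positive": True})  # True
meets_varicella_definition({})                                # False
```

Encoding each consensus definition as an explicit boolean rule like this keeps the downstream algorithm transparent and easy to audit against the published criteria.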

For some AIMs, the core team initially proposed using only structured data, which was agreed upon by the panelists, such as tympanostomy tube placement as a surrogate marker for recurrent or persistent otitis media. Since a child can have persistent or recurrent otitis media without undergoing tympanostomy, the proposed definition identifies only those children who had tympanostomy tubes placed, which may increase specificity but lower sensitivity. However, since clinicians use various diagnostic terms when referring to recurrent or persistent ear infections, it is challenging to identify children with true recurrent or persistent ear infections, especially with a computational approach such as a natural language processing (NLP) algorithm. A previous study addressed this problem by using Current Procedural Terminology (CPT) codes for tympanostomy tube placement to identify children with recurrent or persistent ear infections [50]. As ear infections are a common childhood malady and their diagnosis and documentation in medical records are variable, we chose to be conservative in identifying recurrent or persistent ear infections. It is worth noting that differential data sources and data fragmentation markedly impact the performance of computational phenotyping algorithms. Also, structured data might be more susceptible to misclassification biases depending on the clinical characteristics of the disease [51].
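A minimal sketch of the CPT-code surrogate strategy described above (not the study’s or the cited study’s actual code; the specific code values and record layout are assumptions for illustration) could look like this:

```python
# Hypothetical CPT codes for tympanostomy tube placement used only to
# illustrate the surrogate-marker strategy; verify against the current
# CPT code set before any real use.
TYMPANOSTOMY_CPT = {"69433", "69436"}

def had_tympanostomy(procedure_codes):
    """True if any billed CPT code indicates tympanostomy tube placement.

    This surrogate trades sensitivity for specificity: children with
    recurrent or persistent otitis media who never received tubes are
    missed by design.
    """
    return bool(TYMPANOSTOMY_CPT & set(procedure_codes))

had_tympanostomy(["99213", "69436"])  # True
had_tympanostomy(["99213"])           # False
```

Because it relies only on structured billing data, a rule like this avoids the terminology variability of free-text notes, at the cost of the sensitivity limitation noted above.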

Our group previously developed automated chart review algorithms for asthma ascertainment with 97% sensitivity and 95% specificity [31]. This time-saving automated chart review has improved the recognition and care of childhood asthma in the clinical setting and enables large-scale clinical studies [52].

A major strength of this study is the establishment of operational definitions for multiple AIMs, which enables the development of computational algorithms to identify AIMs and, in turn, large-scale EHR-based studies of the association of asthma with AIMs.

A limitation of the study is the sample size: the number of expert panelists was relatively small compared with other Delphi studies, although frequent reminders yielded a 100% response rate to the online questionnaire. As the panel of experts was selected by convenience sampling, the sub-specialties of the core team, internal expert panel, and external panel were not anonymous; however, the answers given on the online questionnaire were anonymous. In addition, the core team and internal expert panel members were all from one institution. To partially ameliorate this, we performed the external rounds to gain as much generalizability for each definition as possible.

Conclusion

The objective establishment of consistent, robust, and practical operational definitions of multiple AIMs through a modified Delphi technique is a key step towards developing reliable computational algorithms for automated chart review to mitigate the risks and poor outcomes of AIMs through asthma research and care.

Availability of data and materials

The datasets generated and/or analyzed during the current study are not publicly available as they include protected health information. Access to the data may be discussed in accordance with institutional policy after approval by the Mayo Clinic IRB (contact: Juhn.young@mayo.edu).

Abbreviations

AI: Artificial intelligence

AIMs: Asthma-associated infectious and inflammatory multimorbidities

EHR: Electronic health record

EEP: External expert panel

ICD: International Statistical Classification of Diseases

IEP: Internal expert panel

References

1. Centers for Disease Control and Prevention. Vital signs: asthma prevalence, disease characteristics, and self-management education: United States, 2001–2009. MMWR Morb Mortal Wkly Rep. 2011;60(17):547–52.

2. Lethbridge-Cejku M, Vickerie J. Summary of health statistics for US adults: national health interview survey, 2003. National Center for Health Statistics; 2005. p. 10225.

3. Stanton MW. The high concentration of U.S. health care expenditures. Agency for Healthcare Research and Quality; 2006 [cited Jan 23, 2018]. Available from: https://meps.ahrq.gov/data_files/publications/ra19/ra19.pdf.

4. Hasassri ME, et al. Asthma and risk of appendicitis in children: a population-based case-control study. Acad Pediatr. 2017;17(2):205–11.

5. Bang DW, et al. Asthma status and risk of incident myocardial infarction: a population-based case-control study. J Allergy Clin Immunol. 2016;4(5):917–23.

6. Sheen YH, et al. Association of asthma with rheumatoid arthritis: a population-based case-control study. J Allergy Clin Immunol. 2018;6(1):219–26.

7. Klemets P, et al. Risk of invasive pneumococcal infections among working age adults with asthma. Thorax. 2010;65(8):698–702.

8. Helby J, et al. Asthma, other atopic conditions and risk of infections in 105 519 general population never and ever smokers. J Intern Med. 2017;282(3):254–67.

9. Juhn YJ. Risks for infection in patients with asthma (or other atopic conditions): is asthma more than a chronic airway disease? J Allergy Clin Immunol. 2014;134(2):247–57.

10. Busse WW, Gern JE. Asthma and infections: is the risk more profound than previously thought? J Allergy Clin Immunol. 2014;134(2):260–1.

11. Hartert TV. Are persons with asthma at increased risk of pneumococcal infections, and can we prevent them? J Allergy Clin Immunol. 2008;122(4):724–5.

12. Robinson KA, et al. Epidemiology of invasive Streptococcus pneumoniae infections in the United States, 1995–1998: opportunities for prevention in the conjugate vaccine era. JAMA. 2001;285(13):1729–35.

13. Juhn YJ, et al. Increased risk of serious pneumococcal disease in patients with asthma. J Allergy Clin Immunol. 2008;122(4):719–23.

14. Capili CR, et al. Increased risk of pertussis in patients with asthma. J Allergy Clin Immunol. 2012;129(4):957–63.

15. Umaretiya PJ, et al. Asthma and risk of breakthrough varicella infection in children. Allergy Asthma Proc. 2016;37(3):207–15.

16. Frey D, et al. Assessment of the association between pediatric asthma and Streptococcus pyogenes upper respiratory infection. Allergy Asthma Proc. 2009;30(5):540–5.

17. Kim BS, et al. Increased risk of herpes zoster in children with asthma: a population-based case-control study. J Pediatr. 2013;163(3):816–21.

18. Kwon HJ, et al. Asthma as a risk factor for zoster in adults: a population-based case-control study. J Allergy Clin Immunol. 2016;137(5):1406–12.

19. Bang DW, et al. Asthma and risk of non-respiratory tract infection: a population-based case-control study. BMJ Open. 2013;3(10):e003857.

20. Karki S, et al. Risk factors for pertussis hospitalizations in Australians aged 45 years and over: a population based nested case-control study. Vaccine. 2015;33(42):5647–53.

21. Forbes HJ, et al. Quantification of risk factors for herpes zoster: population based case-control study. BMJ. 2014;348:g2911.

22. Esteban-Vasallo MD, et al. Sociodemographic characteristics and chronic medical conditions as risk factors for herpes zoster: a population-based study from primary care in Madrid (Spain). Hum Vaccin Immunother. 2014;10(6):1650–60.

23. Harlak A, et al. Atopy is a risk factor for acute appendicitis? A prospective clinical study. J Gastrointest Surg. 2008;12(7):1251–6.

24. Yun HD, et al. Asthma and proinflammatory conditions: a population-based retrospective matched cohort study. Mayo Clin Proc. 2012;87(10):953–60.

25. Yoo KH, et al. Asthma status and waning of measles antibody concentrations after measles immunization. Pediatr Infect Dis J. 2014;33(10):1016–22.

26. Talbot TR, et al. Asthma as a risk factor for invasive pneumococcal disease. N Engl J Med. 2005;352(20):2082–90.

27. CDC. ACIP provisional recommendations for use of pneumococcal vaccines. 2008.

28. Okapuu JM, et al. How many individuals with asthma need to be vaccinated to prevent one case of invasive pneumococcal disease? Can J Infect Dis Med Microbiol. 2014;25(3):147–50.

29. Government of Canada. Pneumococcal vaccine: Canadian immunization guide. Public Health Agency of Canada; 2016.

30. Juhn Y, Liu H. Artificial intelligence approaches using natural language processing to advance EHR-based clinical research. J Allergy Clin Immunol. 2020;145(2):463–9.

31. Wi CI, et al. Application of a natural language processing algorithm to asthma ascertainment. An automated chart review. Am J Respir Crit Care Med. 2017;196(4):430–7.

32. Kaur H, et al. Automated chart review utilizing natural language processing algorithm for asthma predictive index. BMC Pulm Med. 2018;18(1):34.

33. Sohn S, et al. Ascertainment of asthma prognosis using natural language processing from electronic medical records. J Allergy Clin Immunol. 2018;141(6):2292-2294.e3.

34. Lemke AA, et al. Community engagement in biobanking: experiences from the eMERGE network. Genom Soc Policy. 2010;6(3):50.

35. Kirby JC, et al. PheKB: a catalog and workflow for creating electronic phenotype algorithms for transportability. J Am Med Inform Assoc. 2016;23(6):1046–52.

36. PheKB. PheKB: a knowledgebase for discovering phenotypes from electronic medical records. 2019 [cited 2020 August 31]. Available from: https://phekb.org/.

37. Fu S, et al. Clinical concept extraction: a methodology review. J Biomed Inform. 2020;109:103526.

38. Zanker J, et al. Establishing an operational definition of sarcopenia in Australia and New Zealand: Delphi method based consensus statement. J Nutr Health Aging. 2019;23(1):105–10.

39. Rodriguez-Manas L, et al. Searching for an operational definition of frailty: a Delphi method based consensus statement: the frailty operative definition-consensus conference project. J Gerontol A Biol Sci Med Sci. 2013;68(1):62–7.

40. Pediatric Acute Lung Injury Consensus Conference Group. Pediatric acute respiratory distress syndrome: consensus recommendations from the pediatric acute lung injury consensus conference. Pediatr Crit Care Med. 2015;16(5):428–39.

41. Messner AH, et al. Clinical consensus statement: ankyloglossia in children. Otolaryngol Head Neck Surg. 2020;162(5):597–611.

42. Carlson ML, et al. Working toward consensus on sporadic vestibular schwannoma care: a modified Delphi study. Otol Neurotol. 2020;41(10):E1360–71.

43. Eubank BH, et al. Using the modified Delphi method to establish clinical consensus for the diagnosis and treatment of patients with rotator cuff pathology. BMC Med Res Methodol. 2016;16(1):1–15.

44. Jones J, Hunter D. Consensus methods for medical and health services research. BMJ. 1995;311(7001):376–80.

45. Morita T, et al. Development of a clinical guideline for palliative sedation therapy using the Delphi method. J Palliat Med. 2005;8(4):716–29.

46. Gurrera RJ, et al. An international consensus study of neuroleptic malignant syndrome diagnostic criteria using the Delphi method. J Clin Psychiatry. 2011;72(9):1222–8.

47. Hasson F, Keeney S, McKenna H. Research guidelines for the Delphi survey technique. J Adv Nurs. 2000;32(4):1008–15.

48. Fink A, et al. Consensus methods: characteristics and guidelines for use. Am J Public Health. 1984;74(9):979–83.

49. Humphrey-Murto S, et al. The use of the Delphi and other consensus group methods in medical education research: a review. Acad Med. 2017;92(10):1491–8.

50. Bjur KA, et al. Assessment of the association between atopic conditions and tympanostomy tube placement in children. Allergy Asthma Proc. 2012;33(3):289–96.

51. Wang L, et al. Impact of diverse data sources on computational phenotyping. Front Genet. 2020;11:556.

52. Wu ST, et al. Automated chart review for asthma cohort identification using natural language processing: an exploratory study. Ann Allergy Asthma Immunol. 2013;111(5):364–9.

Acknowledgements

We would like to thank all panelists who participated in this study and provided expert opinions on the operational definitions of AIMs: Drs. Imad Absah, Pediatric Gastroenterology (Mayo Clinic, MN); Mir Ali, Pediatric Critical Care (Sanford Health, SD); Amir Orandi, Pediatric Rheumatology (Mayo Clinic, MN); Warren Bishop, Pediatric Gastroenterology (University of Iowa, IA); W. Charles Huskins, Pediatric Infectious Diseases (Mayo Clinic, MN); Michelle Gonzalez, Pediatric Gastroenterology (Mayo Clinic, MN); Charles Grose, Pediatric Infectious Diseases (University of Iowa, IA); Sandy Hong, Pediatric Rheumatology (University of Iowa, IA); Thomas Mason, Pediatric Rheumatology (Mayo Clinic, MN); Miguel Park, Allergic Diseases (Mayo Clinic, MN); Anupama Ravi, Pediatric Allergy & Immunology (Mayo Clinic, MN); Elizabeth Ristagno, Pediatric Infectious Diseases (Mayo Clinic, MN); and Barbara Yawn, Family and Community Medicine (University of Minnesota, MN). We also thank Mrs. Kelly Okeson for her administrative assistance and Mrs. Julie Porcher for proofreading the manuscript.

Funding

This work was supported by National Institutes of Health grants R01 HL126667, R21AI142702 and Mayo Foundation. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Author information

Affiliations

Authors

Contributions

YJ, HB, CW, EH, SS, JK, ER, PS, HL, and JY were involved in the conception and design. YJ, HB, CW, EH, SS, JK, PS, and HL were involved in the implementation of the described project. All authors contributed to drafting or critically reviewing the content of the manuscript. YJ was involved in the conception and design of the work and the acquisition, analysis, and interpretation of data, and has drafted the work. HB was involved in the conception and design of the work and the interpretation of data, and has drafted the manuscript. CW was involved in the conception and design of the work and the acquisition, analysis, and interpretation of data, and has substantively revised the manuscript. EH was involved in the conception and design of the work and the interpretation of data, and has substantively revised the manuscript. SS was involved in the conception and design of the work and the acquisition, analysis, and interpretation of data, and has substantively revised it. JK was involved in the conception and design of the work and the acquisition of data, and has substantively revised the manuscript. ER was involved in the conception and design of the work and the interpretation of data, and has substantively revised it. PS was involved in the analysis and interpretation of data and has substantively revised the manuscript. HL was involved in the conception and design of the work and has substantively revised it. JY was involved in the conception and design of the work and the acquisition, analysis, and interpretation of data, and has substantively revised the manuscript. All authors have approved the submitted version and have agreed both to be personally accountable for their own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature.
All authors read and approved the final manuscript.

Corresponding author

Correspondence to Young J. Juhn.

Ethics declarations

Ethics approval and consent to participate

This project was approved by the Mayo Clinic Institutional Review Board (#14-009934), and written informed consent was obtained from all participants.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1

. A series of online questionnaires that eight internal and five external expert panelists were invited to complete individually to provide judgement and feedback throughout three sequential internal rounds and two external rounds. This questionnaire was sent to the external panelists for the final round.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Yoon, J., Billings, H., Wi, CI. et al. Establishing an expert consensus for the operational definitions of asthma-associated infectious and inflammatory multimorbidities for computational algorithms through a modified Delphi technique. BMC Med Inform Decis Mak 21, 310 (2021). https://doi.org/10.1186/s12911-021-01663-y

Keywords

  • Delphi
  • Asthma
  • Multimorbidities
  • Electronic health records
  • Natural language processing