Complexities, variations, and errors of numbering within clinical notes: the potential impact on information extraction and cohort-identification

Background: Numbers and numerical concepts appear frequently in free text clinical notes from electronic health records. Knowledge of the frequent lexical variations of these numerical concepts, and their accurate identification, is important for many information extraction tasks. This paper describes an analysis of the variation in how numbers and numerical concepts are represented in clinical notes.

Methods: We used an inverted index of approximately 100 million notes to obtain the frequency of various permutations of numbers and numerical concepts, including the use of Roman numerals, numbers spelled as English words, and invalid dates, among others. Overall, twelve types of lexical variants were analyzed.

Results: We found substantial variation in how these concepts were represented in the notes, including multiple data quality issues. We also demonstrate that not considering these variations could have substantial real-world implications for cohort identification tasks, with one case missing > 80% of potential patients.

Conclusions: Numbering within clinical notes can be variable, and not taking these variations into account could result in missing or inaccurate information for natural language processing and information retrieval tasks.


Background
Much of medicine is quantitative, so it is no surprise that numbers and other numerical concepts are found throughout clinical notes. These numbers can appear in information for ages, dates, laboratory results, temporal constraints of clinical events, severity, risk prediction (e.g., odds ratios), rankings, and other expressions of quantity. As more and more hospitals, health systems, and clinics adopt electronic health records (EHRs) [1], there has been a concurrent interest in finding ways to make better and more meaningful use of the data [2], including those embedded within the free text clinical notes derived from EHRs. This has led to substantial work in the areas of information extraction, natural language processing [3], and information retrieval [4][5][6].
There are many challenges for accurately processing and extracting meaning from clinical notes, details of which have been described elsewhere [7,8]. These challenges include spelling errors [9], ambiguous abbreviations and acronyms [10][11][12], temporal relationships [13][14][15], and the use of hedge phrases [16]. While prior authors have noted that variations exist in how numbers and other numerical concepts are recorded, the literature is lacking in illustrative examples of how these may be represented in clinical notes, which is important for developing targeted solutions when constructing robust information extraction systems. As information extraction tasks become more mainstream, ensuring that all relevant data are accurately identified will become increasingly important. Therefore, it is essential to understand the types of variability and mistakes that can appear in EHR clinical notes.
In this work, we sought to characterize and highlight several unusual characteristics of clinical notes that may be overlooked in typical information extraction tasks. Namely, we sought to quantify the variability in how numbers and numerical concepts are represented in the clinical notes, focusing primarily on deviations from typical Arabic number usage as well as other ways in which numbers were used inappropriately or described invalid scenarios such as biologically implausible ages.
Many illustrative examples are provided to highlight the magnitude of the issue. We also quantified the impact of these variations on cohort identification tasks using 10 scenarios in which patient cohorts were identified using Arabic or Roman numerals. The results of this work may be of interest to those who need to extract numeric expressions from clinical notes, and especially to those who work in the area of clinical research informatics for EHR phenotyping and cohort identification [17][18][19][20][21].

Clinical setting
This study took place at Michigan Medicine, an integrated, tertiary care provider comprising 3 hospitals and 40 outpatient locations in Southeastern Michigan. Michigan Medicine implemented a homegrown EHR in 1998, which was used until its replacement by a vendor system (Epic, Epic Systems, Verona, WI). Epic was implemented in the ambulatory care setting in August 2012, followed by the inpatient setting in June 2014. Approaches to creating clinical notes (i.e., clinical documents) in both systems include typing as well as dictation/transcription. The clinical notes (e.g., progress notes, discharge summaries, pathology reports, radiology reports, etc.) are primarily free text. Notes are created by various clinicians and health professionals including physicians, nurses, pharmacists, and social workers. Because Michigan Medicine is a teaching institution, notes are also created by hundreds of clinicians-in-training, including residents and fellows.

Document index
As part of a larger Michigan Medicine-wide initiative to support improved access to the free text clinical notes for clinical care, operations, and research we developed a free text search engine, EMERSE [5], based on the open source Apache Lucene (https://lucene.apache.org) and Solr projects (http://lucene.apache.org/solr/). Solr creates an inverted index which makes it easy to identify all documents that contain specific words. Unlike some search engines, the index for EMERSE contains traditional stop words because many of these are also valid medical acronyms (e.g., IS: incentive spirometry; AND: axillary node dissection; OR: operating room). The standard Lucene tokenizer (StandardTokenizer) was used to tokenize the documents. As of December 2015 the index contained approximately 98.7 million documents and 12.7 billion words. In addition to the front-end user interface that EMERSE provides for standard users, the underlying Solr software includes a basic Query Screen interface that was used for the current analysis. This allowed us to search for single words and phrases, and quickly retrieve document counts without displaying any protected health information. Because no clinical notes were viewed by the team, this study was determined to be 'not regulated' by the University of Michigan Medical School Institutional Review Board.
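The term-to-document-count lookups used throughout this analysis rely on the inverted index that Lucene/Solr maintains. A minimal sketch of that core idea in Python follows; this is an illustration only, not the EMERSE/Solr implementation, and the naive whitespace tokenizer here (which leaves punctuation attached to words) is far cruder than Lucene's StandardTokenizer:

```python
from collections import defaultdict

def build_inverted_index(docs):
    """docs: dict of doc_id -> note text. Returns token -> set of doc_ids.

    Lowercasing mirrors the case-insensitive, lower-case index described
    above; a real tokenizer would also strip punctuation and normalize more.
    """
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def document_count(index, term):
    """Number of distinct documents containing the (case-insensitive) term."""
    return len(index.get(term.lower(), set()))

notes = {
    1: "Type II diabetes mellitus",
    2: "type 2 diabetes well controlled",
    3: "Stage IV disease",
}
idx = build_inverted_index(notes)
print(document_count(idx, "type"))  # 2
print(document_count(idx, "II"))    # 1
```

Because the index stores sets of document identifiers, counts are always of distinct documents, matching how results are reported in this paper.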

Search strategy
Using Solr, we obtained document counts for multiple variations in how numbers and other numerical concepts were expressed in the clinical notes, including the 12 types of lexical variants shown in Table 1. This included both Roman and Arabic numbers, as well as variations of numbers spelled out in words. Other numerical aspects that were explored included fractions, negative numbers, extremely large numbers, dimensions, dates, ages, tuples, and others. These lexical variants were not intended to be exhaustive of all possibilities, but were rather meant to represent common occurrences in the EHR based on clinical experience. We specifically included in our searches variations on commonly used numerical expressions and concepts that could be challenging to extract from the notes while preserving the meaning and context. All searches were case-insensitive and conducted using a lower-case index. Unless specified, the exact search strings used are those displayed in the tables in the Results section. Finally, to determine the potential impact of these numerical variations on tasks such as cohort identification, we used the EMERSE interface to obtain patient counts for 10 disorders and clinical findings that included either Roman or Arabic numerals. We compared the overlap between cohorts to determine how many patients would have been missed by searching for only one of the numeric variations but not the other (e.g., 3 vs III).
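The paired Arabic/Roman searches (e.g., '3' vs. 'III') can be thought of as a simple query-expansion step. A sketch is shown below; the helper name and the 1-10 mapping are illustrative assumptions, not part of the actual search tooling, and the whole-phrase `replace` is deliberately naive:

```python
# Hypothetical helper: expand a search phrase so both Arabic and Roman
# numeral forms are queried, e.g. "class 3" <-> "class III".
ARABIC_TO_ROMAN = {"1": "I", "2": "II", "3": "III", "4": "IV", "5": "V",
                   "6": "VI", "7": "VII", "8": "VIII", "9": "IX", "10": "X"}
ROMAN_TO_ARABIC = {v.lower(): k for k, v in ARABIC_TO_ROMAN.items()}

def expand_numeral_variants(phrase):
    """Return the set of phrase variants with numerals swapped (Arabic <-> Roman)."""
    variants = {phrase}
    for token in phrase.split():
        if token in ARABIC_TO_ROMAN:
            variants.add(phrase.replace(token, ARABIC_TO_ROMAN[token]))
        elif token.lower() in ROMAN_TO_ARABIC:
            variants.add(phrase.replace(token, ROMAN_TO_ARABIC[token.lower()]))
    return variants

print(sorted(expand_numeral_variants("class 3 malocclusion")))
# ['class 3 malocclusion', 'class III malocclusion']
```

Issuing both variants and comparing the patient sets they return is the essence of the overlap comparison described above.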

Results
The results from our number and numerical concept searches are presented in Tables 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, and 18. All counts are presented as the number of distinct documents in which the terms appeared. Overall, we found substantial variation in how these numbers and concepts were expressed. The following is a brief overview of some notable findings from the tables. Table 2 demonstrates that negative numbers were represented in forms where the expression was completely spelled out (e.g., 'minus five') or with the spelled-out 'minus' combined with Arabic numerals (e.g., 'minus 5'). Fractions (e.g., 'one-fifth'; Table 3), dimensions (e.g., 'one by five'; Table 4), and ranges (e.g., 'one to five'; Table 5) all appeared in spelled-out forms. Invalid dates such as 'January 39' (Table 6) appeared with low frequency, but were still present for nearly all of the combinations for which we searched. Roman numerals (Table 7) were also present in the documents, although the frequency trailed off substantially beyond 30 ('XXX'). There were a small number of documents that also contained incorrectly formed Roman numerals such as 'IIII' rather than 'IV'. Tables 8 and 9 show variations in how some concepts related to medical scoring, staging, grading, and other clinical classifications were recorded, including variations using both Roman and Arabic numbers. Differences were noted in the frequency with which these numbers were used. For example, with 'type' (e.g., 'type 2' vs. 'type II'), use of the Arabic numeral was more frequent than use of the Roman numeral. By contrast, with 'class' (e.g., 'class 2' vs. 'class II'), the Roman numerals were more common than the Arabic numerals, except for 'Class 5'. Table 10 displays similar examples of variations for diabetes. Table 10 also illustrates some of the typographic errors that exist in the notes (e.g., 'type 21 diabetes'), albeit at low frequencies.
Table 11 shows biologically implausible ages, starting at '123 year old'. Note that the oldest living person in recorded history lived to 122 years [22]. Table 12 reports on ages described by decades. The most commonly used term was 'octogenarian', followed by 'septuagenarian'. Table 13 shows how ranking is sometimes represented, including variations that were both correct (e.g., '1st' and '3rd') and incorrect (e.g., '1rd' and '3st'). These suffixes also existed with dates, including 'June 31st', which appeared 29 times, and 'November 31st', which appeared 11 times, neither of which is a valid date. Table 14 displays very large and very small quantities, expressed as spelled out words. While no document included 'googolplex', a finite number of documents (n = 6325) used 'infinity', and a very small number (n = 2) included the very small number 'negative infinity'. Imprecise and informal expressions of quantity are reported in Table 15. Terms and phrases that appeared in a small subset of documents included 'gobs of', 'gazillion', and 'bazillion'. Other ordering and ranking variations are listed in Table 16, and tuples such as 'doubled' and 'quadruplets' are reported in Table 17. Table 18 displays examples showing the real-world implications of not considering the numeric variations in the clinical notes. This table reports on the number of patients having phrases in their notes representing diagnoses and clinical findings that could be used for cohort identification.

Discussion
This work demonstrates the substantial variability in how numbers and other numerical concepts are represented in clinical notes derived from both a home-grown and a vendor EHR system. This variability was a result not only of normal English language variations, but also of typographic errors [23] and incorrect usage. Our findings highlight data quality issues that could impact the performance of information retrieval and extraction systems, and demonstrate the complexity of medical information containing numbers and numerical concepts.
Importantly, this study also shows how much these variations could impact research endeavors such as cohort identification. Among the 10 examples shown in Table 18, eight resulted in more than 50% of the patients being missed under the scenario of searching for a phrase with only the Arabic or Roman numerals but not both variations. For the case of 'class 3 malocclusion', more than 80% of cases would have been missed if 'class III malocclusion' was excluded from the search. Interestingly, a search for 'grade 3 anaplastic astrocytoma' revealed a patient count of 69, whereas a similar search for 'grade III anaplastic astrocytoma' revealed a count of 67. This might lead one to conclude that approximately 68 such patients existed in the data set. However, our analysis revealed little overlap (n = 27) between these two sets, with 109 total patients identified when both variations were included. In many real-life cohort identification tasks, structured data such as International Classification of Diseases, Tenth Revision (ICD-10) codes may also be used in addition to, or even instead of, the free text, but such codes are known to be unreliable in certain contexts [24].
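The arithmetic behind the anaplastic astrocytoma example is simple inclusion-exclusion, and is easy to verify with the counts reported above:

```python
# Counts taken from the 'grade 3' vs. 'grade III anaplastic astrocytoma'
# example in the text: 69 patients matched the Arabic phrase, 67 the Roman
# phrase, and only 27 matched both.
arabic_count = 69
roman_count = 67
overlap = 27

# Inclusion-exclusion: |A union B| = |A| + |B| - |A intersect B|
total = arabic_count + roman_count - overlap
missed_if_arabic_only = 1 - arabic_count / total  # fraction of cohort missed

print(total)                            # 109
print(round(missed_if_arabic_only, 2))  # 0.37
```

That is, a search using only the Arabic variant would have missed roughly 37% of the full cohort, even though its raw count (69) looks reassuringly close to the Roman variant's (67).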
The frequencies reported in this paper were not meant to provide insights about whether they were the 'expected' number of instances, but rather to show how many of these exist in the clinical notes. Any count above zero means that an information extraction process would have to consider that variation or it could be missed. However, one insight that can be drawn from the frequencies involves cases in which some counts were higher than might otherwise be expected. Since 'IV' is a commonly used abbreviation for 'intravenous', this is a likely explanation for that observation. Many of the abnormal and unusual representations were rare considering how many documents were included in the full dataset. While this is reassuring for those conducting research or surveillance at a population level, the invalid or inappropriate use of numbering could have a more meaningful impact at an individual patient level, where a mistakenly interpreted or overlooked numerical concept could result in improper treatment decisions.
These findings also highlight the importance of taking into account the potential for both predictable and non-standard variations with tasks such as natural language processing, information extraction, or query expansion in information retrieval systems. It is also worth noting that the low frequency of some findings may mean that comparable examples do not exist in the document corpora used for NLP training tasks such as those used for the i2b2 challenge competitions [25]. This work could also inform ways in which data entry systems could be designed to identify these errors or variants to encourage users to enter more appropriate or standard terms.
It is possible that some of these complexities could be resolved by 'normalizing' the variations to a common form in a pre-processing step (e.g., converting 'VI' to 6). Indeed, some tools such as cTAKES [26] already do some of this work. Yet disambiguation may also be necessary, since many of the concepts can appear in contexts beyond standard numbers. For example, 'I' could be the Roman numeral 1 or the common pronoun. The phrase '2/2' could be '2 out of 2', 'secondary to', or even 'February 2'. Word sense disambiguation continues to be an active area of NLP research [10,27,28]. Information extraction system designers must also consider how to handle values that are invalid, such as out-of-range ages (e.g., '135 year old'), rather than simply ignoring them. Terms like 'octogenarian', and especially 'nonagenarian', can reveal a patient's approximate age and thus should be taken into consideration when building or customizing de-identification systems.
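For the Roman numeral portion of such a pre-processing step, strict normalization might be sketched as follows. This is an illustrative sketch under our own assumptions, not the cTAKES approach; the regular expression accepts only canonically formed numerals, so a malformed string like 'IIII' is flagged rather than silently read as 4:

```python
import re

# Canonical Roman numerals up to 3999 (thousands, hundreds, tens, units).
CANONICAL_ROMAN = re.compile(
    r"^M{0,3}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})$")
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s):
    """Convert a canonical Roman numeral to an int; return None if malformed."""
    s = s.upper()
    if not s or not CANONICAL_ROMAN.match(s):
        return None
    total = 0
    for ch, nxt in zip(s, s[1:] + " "):
        v = VALUES[ch]
        # Subtractive pairs (IV, IX, XL, ...): subtract when a larger value follows.
        total += -v if VALUES.get(nxt, 0) > v else v
    return total

print(roman_to_int("VI"))    # 6
print(roman_to_int("IV"))    # 4
print(roman_to_int("IIII"))  # None (malformed)
```

Returning None for malformed input surfaces the data quality issue for review instead of hiding it, in line with the recommendation above not to simply ignore invalid values.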
Invalid dates (e.g., 'March 35') also represent a challenge. Many programming languages (e.g., Java) by default handle invalid dates in a lenient manner, meaning that a date such as 'March 35' would be converted to April 4. Care must also be taken when considering the interpretation of negative numbers. Depending on tokenization, a system might identify a number '1' or 'one' but miss the 'negative' qualifier in front of it if it is written as 'negative 1' or 'minus one' as opposed to '-1'. Tools do exist to help with number normalization [29,30], and these should be considered when processing clinical text. Other tools have been developed to identify various concepts related to numbering, including time (MedTime) [31] as well as cancer staging (e.g., 'Stage III lung cancer') and dimensions (MedKATp) [32]. Tokenization may also be important. A technical report about tokenization of MEDLINE abstracts briefly discusses how various tokenizers handle text including fractions [33]. A more recent paper noted the lack of focus on biomedical tokenization [34]. The issues described here are related to both semantic and syntactic heterogeneity, and are contributing factors limiting the widespread semantic interoperability of EHR data [35][36][37]. In some cases, simple normalization to a canonical form should be easily achievable. In other cases, however, the complexities of natural language introduce challenges that will require additional work, including disambiguation, intelligent tokenization, and sophisticated processing (e.g., machine learning). It will be important for those working with the free text data to understand the text being analyzed and to have plans for how outlier situations (e.g., invalid dates) will be handled. It will also be important to utilize vocabularies or ontologies with broad coverage of synonyms, near synonyms, and lexical variants.
For example, 'TIIDM' appeared in nearly 1000 notes in our dataset but that term variant for 'type 2 diabetes mellitus' is not present in the Unified Medical Language System (UMLS), whereas 'T2DM' is in UMLS.
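The strict versus lenient date handling mentioned above is easy to demonstrate: Python's datetime module, unlike a lenient parser that would roll 'March 35' forward to April 4, rejects out-of-range dates outright. In the sketch below, the month lookup table and the placeholder year are illustrative assumptions:

```python
from datetime import date

MONTHS = {"january": 1, "february": 2, "march": 3, "april": 4, "may": 5,
          "june": 6, "july": 7, "august": 8, "september": 9, "october": 10,
          "november": 11, "december": 12}

def is_valid_date(month_name, day, year=2001):
    """Return True if, e.g., ('March', 35) names a real calendar date.

    The default year 2001 is a non-leap placeholder for notes that omit
    the year; February 29 therefore validates as False unless a leap
    year is supplied explicitly.
    """
    try:
        date(year, MONTHS[month_name.lower()], day)
        return True
    except (KeyError, ValueError):
        return False

print(is_valid_date("March", 35))     # False
print(is_valid_date("June", 31))      # False
print(is_valid_date("November", 30))  # True
```

Flagging such dates explicitly, rather than letting a lenient library silently reinterpret them, keeps the original documentation error visible for downstream review.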
Additional complexities not analyzed in the current work included variations in units, which can further complicate information extraction. For example, weights can be written as "pounds", "lbs", "lb", "#", and sometimes no unit might be provided, meaning that additional work would be needed to determine if English (pounds) or metric (kg) weights were being described.
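A sketch of the extra work that unit variation entails follows. The unit spellings come from the weight example above; the function name, regular expression, and the choice to treat a missing unit as unresolvable are illustrative assumptions:

```python
import re

POUND_UNITS = {"pounds", "pound", "lbs", "lb", "#"}
LB_TO_KG = 0.45359237  # exact international definition of the pound

def weight_to_kg(text):
    """Parse strings like '150 lbs' or '68 kg'; return weight in kg, else None."""
    m = re.match(r"^\s*(\d+(?:\.\d+)?)\s*([a-z#]+)?\s*$", text.lower())
    if not m:
        return None
    value, unit = float(m.group(1)), m.group(2)
    if unit in POUND_UNITS:
        return round(value * LB_TO_KG, 2)
    if unit in {"kg", "kgs", "kilograms"}:
        return value
    # No unit (or an unrecognized one): ambiguous, so defer to further context.
    return None

print(weight_to_kg("150 lbs"))  # 68.04
print(weight_to_kg("68 kg"))    # 68.0
print(weight_to_kg("150"))      # None (unit missing -> ambiguous)
```

Returning None for the unit-less case reflects the point above: without additional context, a bare number cannot safely be assumed to be English or metric.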
It is also worth noting that these data quality and normalization issues are not unique to clinical notes derived from EHRs. For example, the incorrect '3nd' (as opposed to the correct '3rd') appears in PubMed abstracts [38,39] as well as in clinical trial descriptions listed on ClinicalTrials.gov [40,41]. Even terms such as 'octogenarian' [42] and 'nonagenarian' [43] appear on ClinicalTrials.gov. Indeed, recent work has suggested formal representations for numeric data in clinical trial reports to aid in interpretation of the results [44]. Variability can also be found when identifying concepts within the UMLS Terminology Services Metathesaurus Browser (https://uts.nlm.nih.gov/metathesaurus.html). For example, as of July 2018, searching for the term 'stage 3' yields 233 results whereas searching for 'stage III' yields 803 results. Even 'type IIII' (an invalid form of the Roman numeral 'IV') appears in a UMLS entry (CUI C2612864), which is likely a typographic error.
Our work has several limitations. First, this study was conducted at a single site, and other medical centers or EHRs may contain different types or frequencies of variations that we did not detect. Second, we quantified only a subset of possible variations. For example, we did not explore the frequency of spelling errors such as 'sevin', and there are other types of variations which were not included due to space limitations. Third, the frequency of some of the term variants we identified could be falsely elevated due to copy-pasting of text between notes. Nevertheless, the tables we present in this work show a wide variety of possible ways in which numbers and numerical concepts are actually represented in the clinical EHR notes. Fourth, it may be the case that many of these variations would have no clinical significance with information extraction tasks. We believe, however, that it is difficult to generalize about what types of information are clinically significant versus insignificant, as this may depend heavily on the specific information needs of users.

Conclusions
As precision medicine and personalized healthcare become more prevalent, computers might be tasked with making automatic decisions or recommendations on an individual patient basis using the information found within EHR notes. Thus, there could be a direct effect on patient outcomes if information is interpreted incorrectly or overlooked. Further, the present study shows that these variations could have a direct impact on cohort identification tasks unless care is taken to ensure that search strings are inclusive of the existing variations. In the meantime, clinicians and informaticians seeking to use these data should consider the variations described in this paper when designing strategies to ensure that information extraction tasks and systems are as accurate as possible.
Abbreviations
EHR: Electronic health record; NLP: Natural language processing; UMLS: Unified Medical Language System

Table 18 Results from a cohort identification exercise for 10 diagnoses and clinical findings in the clinical notes, including counts of the number of patients identified by searching for phrases containing either the Arabic or Roman numeral variants, or both. The percentage of patients potentially missed by searching for only one of the variants is displayed. b Cells with percentages > 50%