 Research article
 Open Access
Combining population-based administrative health records and electronic medical records for disease surveillance
BMC Medical Informatics and Decision Making volume 19, Article number: 120 (2019)
Abstract
Background
Administrative health records (AHRs) and electronic medical records (EMRs) are two key sources of population-based data for disease surveillance, but misclassification errors in the data can bias disease estimates. Methods that combine information from error-prone data sources can build on the strengths of AHRs and EMRs. We compared bias and error for four data-combining methods and applied them to estimate hypertension prevalence.
Methods
Our study included the rule-based OR and AND methods, which identify disease cases from either or both data sources, respectively; the rule-based sensitivity-specificity adjusted (RSSA) method, which corrects for inaccuracies using a deterministic rule; and the probabilistic-based sensitivity-specificity adjusted (PSSA) method, which corrects for error using a statistical model. Computer simulation was used to estimate relative bias (RB) and mean square error (MSE) under varying conditions of population disease prevalence, correlation amongst data sources, and amount of misclassification error. AHRs and EMRs for Manitoba, Canada were used to estimate hypertension prevalence using validated case definitions and multiple disease markers.
Results
The OR method had the lowest RB and MSE when population disease prevalence was 10%, and the RSSA method had the lowest RB and MSE when population prevalence increased to 20%. As the correlation between data sources increased, the OR method resulted in the lowest RB and MSE. Estimates of hypertension prevalence for AHRs and EMRs alone were 30.9% (95% CI: 30.6–31.2) and 24.9% (95% CI: 24.6–25.2), respectively. The estimates were 21.4% (95% CI: 21.1–21.7) for the AND method, 34.4% (95% CI: 34.1–34.8) for the OR method, and 32.2% (95% CI: 31.8–32.6) for the RSSA method, and ranged from 34.3% (95% CI: 34.1–34.5) to 35.9% (95% CI: 35.7–36.1) for the PSSA method, depending on the statistical model.
Conclusions
The OR and AND methods are influenced by correlation amongst the data sources, while the RSSA method depends on the accuracy of prior sensitivity and specificity estimates. The PSSA method performed well when population prevalence was high and the average correlation amongst disease markers was low. This study will guide researchers in selecting a data-combining method that best suits their data characteristics.
Background
Prevalence and incidence are essential measures for disease surveillance, used to describe the burden of disease in a population and to compare health status across populations and over time. Routinely collected electronic health databases, such as administrative health records (AHRs), which are captured for healthcare system management and remuneration, are important sources for estimating disease prevalence and incidence because they provide information for the entire population and can therefore be used for surveillance of both common and rare conditions [1,2,3,4,5]. As well, they systematically capture information over time, which enables monitoring of trends. Electronic medical records (EMRs), digital versions of patient medical charts, are also increasingly being used for disease surveillance because they have many of the same advantages as AHRs and also capture clinical information such as body mass index, smoking, and alcohol use [6,7,8,9].
However, both AHRs and EMRs are prone to misclassification errors [5, 9,10,11,12,13], including false negative cases, in which individuals are incorrectly classified as not having a disease, and false positive cases, in which individuals are incorrectly classified as having a disease [14]. The magnitude and types of errors in each of these data sources may not be the same [15,16,17]; therefore, one source should not be routinely recommended over the other for population-based disease surveillance.
Combining information from EMRs and AHRs is an alternative to using one error-prone source over the other; data-combining methods capitalize on the strengths of each source for ascertaining cases to estimate chronic disease incidence and prevalence, and therefore help to reduce the impact of error. Data-combining methods based on both deterministic (i.e., rule-based) approaches and probabilistic models have been proposed [18,19,20,21,22,23,24,25,26,27]. However, there have been few comparisons of these methods [28,29,30]. Moreover, there have been limited investigations of the factors that may influence their accuracy.
The purpose of this study was to compare several methods, both rule-based and model-based, for combining information from two error-prone data sources to estimate disease prevalence. The objectives were to: (1) compare the bias and precision of data-combining methods, and (2) estimate hypertension prevalence from AHRs and EMRs alone as well as from four data-combining methods. We selected hypertension because it is a common measure of health status included in national and international disease surveillance reports [4, 31].
Methods
The first objective relied on computer simulation techniques. The second objective was achieved using population-based AHR and EMR data from the province of Manitoba, Canada.
Computer simulation
The computer simulation generated data from two sources using a model in which multiple disease markers are associated with the probability of disease presence/absence [32]. Specifically, we used copulas to generate multiple binary disease markers [33] for each data source. Copulas are constructed by specifying the joint distribution of correlated random variables that follow a standardized uniform distribution. The disease markers were assumed to be error-free, with complete information. True disease status for each member of the population was generated from a Bernoulli distribution via a logistic regression model. To obtain the specified prevalence estimates, values of the regression coefficients and marker prevalence were selected based on previous epidemiological studies about hypertension [34, 35].
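The data-generating step above can be sketched as follows. The study was conducted in R; this is an illustrative Python translation that uses a Gaussian copula (one common choice of copula) with an exchangeable correlation pattern, and all parameter values shown are hypothetical, not the paper's.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(42)

def norm_cdf(z):
    """Standard normal CDF, elementwise (avoids a SciPy dependency)."""
    return np.vectorize(lambda v: 0.5 * (1.0 + erf(v / sqrt(2.0))))(z)

def simulate_population(n, marker_prev, rho, beta0, beta):
    """Correlated binary disease markers via a Gaussian copula, then true
    disease status D from a logistic regression model (illustrative sketch)."""
    k = len(marker_prev)
    corr = np.full((k, k), rho)          # exchangeable correlation pattern
    np.fill_diagonal(corr, 1.0)
    z = rng.multivariate_normal(np.zeros(k), corr, size=n)
    u = norm_cdf(z)                      # standardized uniforms (the copula)
    x = (u < np.asarray(marker_prev)).astype(int)   # binary markers
    # True disease status via a logistic model on the markers
    p = 1.0 / (1.0 + np.exp(-(beta0 + x @ np.asarray(beta))))
    d = rng.binomial(1, p)
    return x, d
```

In the actual study, the regression coefficients and marker prevalences would be tuned so that the mean of `d` matches the target population prevalence (10 or 20%).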
Subsequently, error-prone measures of disease status were generated based on preselected values of sensitivity (\( {Sn}_{Y_j} \)) and specificity (\( {Sp}_{Y_j} \)) for the jth data source (j = 1, 2) [36]. A conditional Bernoulli process was used [37]:

\( Y_1 = \begin{cases} 1, & \text{if } D = 1 \text{ and } U \le {Sn}_{Y_1}, \text{ or } D = 0 \text{ and } U > {Sp}_{Y_1}, \\ 0, & \text{otherwise,} \end{cases} \)

where \( Y_1 \) is an error-prone measure of disease status from the first data source, \( D \) is the indicator of true population disease status, \( P(Y_1 = 1 \mid D = 1) = {Sn}_{Y_1} \) and \( P(Y_1 = 0 \mid D = 0) = {Sp}_{Y_1} \) are the sensitivity and specificity for the first data source, and \( U \) is a random variable that follows a standard uniform distribution.
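The conditional Bernoulli process can be sketched as follows (an illustrative Python translation; the study itself used R):

```python
import numpy as np

rng = np.random.default_rng(7)

def misclassify(d, sn, sp):
    """Error-prone measure Y from true status D via a conditional Bernoulli
    process: P(Y = 1 | D = 1) = sn and P(Y = 1 | D = 0) = 1 - sp."""
    u = rng.uniform(size=d.shape)        # U ~ Uniform(0, 1)
    return np.where(d == 1, u <= sn, u > sp).astype(int)
```

For example, with true prevalence 0.20, sensitivity 0.72, and specificity 0.95, the observed prevalence should be about 0.20 × 0.72 + 0.80 × 0.05 = 0.184, i.e., the error-prone source underestimates the true prevalence.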
A total of 500 replications of the simulation model were produced for each of 144 combinations of simulation conditions; the four data-combining methods were applied to the data for each replication to estimate prevalence. The simulation conditions included all possible combinations of true population prevalence (prev_{T}) of 10 and 20%, prevalence for each error-prone data source (\( {prev}_{Y_1}, \) \( {prev}_{Y_2} \)) ranging in value from 5 to 18%, correlation between data sources (\( {\rho}_{Y_1{Y}_2} \)) of 0.65 and 0.85, number of disease markers (N_{x}) of 8 and 16, average correlation amongst the disease markers (\( {\overline{\rho}}_x \)) of 0.00, 0.20, and 0.50, and a correlation pattern amongst the disease markers (\( {\overline{\rho}}_{x\ \left(\mathrm{pattern}\right)} \)) that was unstructured or exchangeable. True prevalence of 20% was chosen to reflect the estimated prevalence of hypertension observed in previous studies about population prevalence [38], whereas true prevalence of 10% was chosen to reflect the lower prevalence observed in specific subgroups such as younger adults [39]. We focused on prevalence values for the data sources that were lower than the true population prevalence, since both AHRs and EMRs often underestimate chronic disease cases [9–13]. Data source correlation values were chosen to test the effect of moderate and high associations between data sources [40]. The average correlation and correlation pattern were relevant for investigations about the PSSA method [20]. The data-combining methods were evaluated using percent absolute relative bias (RB) and mean square error (MSE) [41]. Percent absolute RB was calculated as:

\( \mathrm{RB}=\left|\frac{\overline{prev_{\mathrm{m}}}-{prev}_{\mathrm{T}}}{{prev}_{\mathrm{T}}}\right|\times 100, \)
where \( \overline{prev_{\mathrm{m}}} \) is the mean prevalence for a data-combining method across the replications. MSE was calculated as \( \mathrm{MSE}={\sigma^2}_{prev_{\mathrm{m}}}+{\left({prev}_{\mathrm{T}}-\overline{prev_{\mathrm{m}}}\right)}^2 \), where \( {\sigma^2}_{prev_{\mathrm{m}}} \) is the variance of the estimates. The simulation study was conducted using R software version 3.4.4 for Windows [42].
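The two evaluation criteria follow directly from these definitions and can be computed from the replicate estimates as follows (Python sketch; the study used R):

```python
import numpy as np

def percent_absolute_rb(estimates, true_prev):
    """Percent absolute relative bias of replicate prevalence estimates."""
    return abs((np.mean(estimates) - true_prev) / true_prev) * 100.0

def mse(estimates, true_prev):
    """Mean square error: variance of the estimates plus squared bias."""
    est = np.asarray(estimates, dtype=float)
    return est.var() + (true_prev - est.mean()) ** 2
```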
Population-based data sources and study cohort
The study data for Objective 2 were AHRs and EMRs from the Manitoba Population Research Data Repository housed at the Manitoba Centre for Health Policy (MCHP), a research unit at the University of Manitoba. The province of Manitoba has universal healthcare, which means that virtually all health system contacts are captured in AHRs for the entire population of 1.3 million residents. The study observation period was fiscal years 2005/06 to 2008/09 (a fiscal year extends from April 1 to March 31).
AHRs included hospital discharge abstracts, physician billing claims, and Drug Program Information Network (DPIN) records. Hospital discharge abstracts contain records of discharges from acute care facilities; each abstract captures up to 25 diagnosis codes that use the World Health Organization’s International Classification of Diseases (ICD), 10th revision, Canadian version (ICD-10-CA). Physician billing claims are submitted by fee-for-service physicians to the ministry of health for provider remuneration. Each claim includes a single three-digit ICD-9-CM code for the diagnosis best reflecting the reason for the visit. The DPIN is an electronic, online, point-of-sale database that contains information about prescriptions filled by community pharmacies. Each approved drug is assigned a Drug Identification Number (DIN) by Health Canada; DINs can be linked to the World Health Organization’s Anatomical Therapeutic Chemical (ATC) codes [43].
EMRs were obtained from the Manitoba Primary Care Research Network (MaPCReN), a practice-based research network comprised of consenting primary care providers (mostly family physicians). The MaPCReN repository includes information on health problems, billing data, medications, laboratory results, selected risk factors, referrals, and procedures for primary care patients [10]. Manitoba EMRs have previously been evaluated for their quality in measuring hypertension [44]. Approximately 22% of the provincial population is represented in the MaPCReN repository, which covers all geographic regions and various practice configurations within the province [45].
EMRs and AHRs were linked using an encrypted unique personal health identification number (PHIN) available in the population registry; the registry captures information on dates of healthcare coverage, demographic characteristics, and location of residence.
The PHIN is available on each record in all of the data sources. Any identifying data, such as names and addresses, were removed by the provincial ministry of health prior to record linkage. Before linkage, key variables including sex, birth date, postal code, and PHIN were formatted in the same way in each file to account for formatting differences, such as capitalization, justification, and leading zeroes.
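Key-variable standardization of this kind can be sketched as below. The field names, PHIN width, and date format are hypothetical illustrations, not details taken from the study:

```python
def standardize_key(record):
    """Normalize linkage keys so formatting differences (capitalization,
    justification, leading zeroes) do not prevent deterministic matching.
    Field names and the 9-digit PHIN width are assumptions for illustration."""
    return {
        "phin": record["phin"].strip().zfill(9),       # restore leading zeroes
        "sex": record["sex"].strip().upper()[:1],      # e.g., 'M' / 'F'
        "birth_date": record["birth_date"].strip(),    # assume ISO yyyy-mm-dd
        "postal_code": record["postal_code"].replace(" ", "").upper(),
    }
```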
Validated case ascertainment algorithms for hypertension were applied to each data source [9, 12]. Table 1 lists the components of these algorithms, including ICD diagnosis codes and ATC prescription drug codes.
The study cohort included Manitoba residents 18+ years of age with at least one encounter in EMR data during the study observation period. The EMR data were linked to AHR data for all cohort members. To be retained in the cohort, an individual required a minimum of 7 years of health insurance coverage before the study index date and 7 years of coverage after the study index date, in order to implement the EMR case ascertainment algorithm for hypertension [46]. The study index date was the date of the individual’s first record in EMR data.
Model covariates
Sociodemographic and comorbidity measures were used to describe the study cohort and as covariates (i.e., markers) in the statistical model for the probabilistic data-combining method. Sociodemographic measures, which included sex, age group (18–44, 45–64, 65+ years), income quintile, and region of residence, were defined at the study index date. Income quintile is an area-level measure of socioeconomic status defined using Statistics Canada Census data and based on total household income for dissemination areas, the smallest geographic unit for which Census data are publicly released [47]. Postal codes from the population registry were used to assign individuals to income quintiles. Region was based on regional boundaries and was defined as Winnipeg and non-Winnipeg.
Comorbidity measures included the Charlson comorbidity score (CCS) and multiple disease-specific measures. The CCS is a summary measure based on ICD diagnosis codes from hospital discharge abstracts and physician billing claims [48]; it was derived using data for the one-year period prior to the study index date. The CCS was defined as a categorical variable with values of 0, 1 to 2, and 3+. Disease-specific covariates included chronic obstructive pulmonary disease (COPD), diabetes, depression, dementia, obesity, cerebrovascular disease, congestive heart failure, coronary heart disease, renal disease, and substance abuse, all of which have been used in previous research as indicators of hypertension in probabilistic models [49,50,51]. The first five covariates were defined from both AHRs and EMRs; the remaining covariates were defined from AHRs only, because EMR case ascertainment algorithms have not been developed for them. Case ascertainment algorithms for AHRs were based on the two-year period prior to the index date, in accordance with previous recommendations [49], while EMR case ascertainment algorithms did not have a time period requirement. Finally, obesity was defined from EMRs (obese: body mass index > 30.0; not obese: body mass index ≤ 30.0; missing).
Data-combining methods
Four data-combining methods were selected based on previous research [21]. We included the rule-based OR and AND methods, which use a deterministic rule to classify individuals as having or not having the target disease. The OR method identified individuals as hypertension cases if they met the case ascertainment algorithm for either EMRs or AHRs, and the AND method identified individuals as hypertension cases if they met the case ascertainment algorithm for both EMRs and AHRs [24]. The OR and AND methods assume that: (1) observed disease status is 100% sensitive and specific, and (2) observed disease status from the two data sources is conditionally independent given the true disease status.
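The two rule-based classifiers follow directly from these definitions. The sketch below represents each source as a NumPy array of 0/1 case flags (an illustrative representation, not the study's code):

```python
import numpy as np

def or_method(y_ahr, y_emr):
    """Classify as a case if flagged in either source (elementwise OR)."""
    return np.maximum(y_ahr, y_emr)

def and_method(y_ahr, y_emr):
    """Classify as a case only if flagged in both sources (elementwise AND)."""
    return np.minimum(y_ahr, y_emr)
```

Prevalence under each method is then just the mean of the combined flags; by construction, the OR method always yields an estimate at least as high as either source alone, and the AND method one at least as low.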
We also considered a rule-based sensitivity-specificity adjusted (RSSA) method, which uses information about the accuracy of case ascertainment algorithms from prior validation studies to correct the estimated number of true disease cases [25, 26, 52]. The number of individuals ascertained as disease cases was weighted by the average sensitivity and specificity values for each source, identified from three Canadian validation studies about hypertension [5, 9,10,11,12,13]. Specifically, the average sensitivity and specificity values used were 0.72 and 0.95 for AHRs and 0.87 and 0.90 for EMRs. The RSSA method assumes that observed disease status from the two data sources is conditionally independent given the true disease status.
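One widely used deterministic correction of this kind is the Rogan–Gladen estimator, sketched below to illustrate how prior sensitivity and specificity values adjust an observed prevalence. This is offered as an example of the general technique; the paper's exact RSSA weighting rule may differ:

```python
def sn_sp_adjusted_prevalence(observed_prev, sn, sp):
    """Rogan-Gladen-style correction: recover true prevalence from an
    observed prevalence given assumed sensitivity (sn) and specificity (sp).
    Valid when sn + sp > 1."""
    return (observed_prev + sp - 1.0) / (sn + sp - 1.0)
```

For example, with sensitivity 0.72 and specificity 0.95 (the AHR values above), an observed prevalence of 0.184 adjusts back to exactly 0.20, consistent with the misclassification arithmetic in the simulation section.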
The probabilistic-based sensitivity-specificity adjusted (PSSA) method was also considered; it assumes that true disease status is associated with the disease markers [20]. The sensitivities and specificities of the two data sources are modelled via a Bayesian regression model with a probit link function. The model can be decomposed into an outcome model (i.e., true outcome given disease markers) and a reporting model (i.e., reported status given true outcome and disease markers). It was assumed that the joint distribution of the reported (i.e., observed) disease status was conditional on the true disease status and observed markers. Using a Gibbs sampling technique, values of the unobserved true disease status are sampled from the posterior distribution conditional on the disease markers [53]. Model convergence was assessed using diagnostics recommended in previous research [54].
We considered four models for the PSSA method using different subsets of covariates (i.e., markers) based on theory, previous research, and empirical estimates of correlation amongst the covariates. For Model 1, the full model, the covariates included all sociodemographic variables, the CCS, and all disease-specific markers. For Model 2, only EMR-defined measures of COPD, diabetes, dementia, depression, and obesity were selected for inclusion; in addition, because the CCS includes some comorbid conditions already identified as disease-specific markers, it was excluded. For Model 3, we excluded markers with correlations > 0.60. For Model 4, the reduced model, we limited our attention to covariates strongly associated with hypertension prevalence in previous research [50, 51], including age, sex, diabetes, obesity, cardiovascular disease, COPD (a proxy for smoking status) [55], and substance use.
For each of the PSSA models, visual assessment of trace plots demonstrated that convergence was reached after the 500th iteration [see Additional file 1]. We ran a total of 10,000 iterations of the Gibbs sampler for each model. In addition, we used Gelman–Rubin diagnostics to ensure that the potential scale reduction factor (PSRF) of all parameters was close to one [56], suggesting that 10,000 iterations were sufficient for attaining convergence. Having judged that the chain had converged by iteration 500, we discarded the first 500 samples as burn-in and used the remaining 9500 samples for inference.
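The burn-in-and-summarize step is generic for any MCMC output and can be sketched as follows (illustrative Python; the parameter chain here is any one-dimensional sequence of Gibbs draws):

```python
import numpy as np

def posterior_summary(chain, burn_in=500):
    """Discard burn-in draws and summarize the retained Gibbs samples with
    a posterior mean and a 95% equal-tailed credible interval."""
    kept = np.asarray(chain, dtype=float)[burn_in:]
    lo, hi = np.percentile(kept, [2.5, 97.5])
    return kept.mean(), (lo, hi)
```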
Statistical analysis for numeric example
Descriptive analyses were conducted using frequencies and percentages. Associations amongst the covariates and case ascertainment algorithms were estimated using tetrachoric and polychoric correlations [57].
Hypertension prevalence estimates and 95% confidence intervals (95% CIs) were calculated for each data-combining method and for each data source on its own. We also calculated sex- and age-group-stratified estimates and their 95% CIs. For the OR and AND methods, we assumed a normal approximation to the binomial distribution when calculating the 95% CIs. For the RSSA and PSSA methods, we constructed 95% CIs using the percentile bootstrap method; the number of bootstrap samples was set to 999, following previous recommendations [58].
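The percentile bootstrap for a prevalence can be sketched as below: resample individuals with replacement, recompute the prevalence for each resample, and take the 2.5th and 97.5th percentiles of the bootstrap distribution (illustrative Python; the study used R):

```python
import numpy as np

rng = np.random.default_rng(0)

def percentile_bootstrap_ci(case_flags, n_boot=999, level=0.95):
    """Percentile bootstrap CI for a prevalence (mean of 0/1 case flags)."""
    flags = np.asarray(case_flags)
    n = flags.size
    boot = np.array([flags[rng.integers(0, n, n)].mean() for _ in range(n_boot)])
    alpha = (1.0 - level) / 2.0
    return np.percentile(boot, [100 * alpha, 100 * (1 - alpha)])
```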
Model fit was assessed for the PSSA method using the Deviance Information Criterion (DIC) [59], which is a penalized measure of the log of the likelihood function. Smaller values of the DIC indicate a better fitting model [60].
Results
Computer simulation
The simulation results are reported in Table 2; for the PSSA method we reported results for an exchangeable correlation amongst the model covariates; similar results were obtained for an unstructured correlation and are therefore not reported. Absolute RB ranged from 0.2 to 108.8% and MSE ranged from 0.00 to 6.16 across the simulation conditions.
When true prevalence was 20%, the outcome prevalence combination of (18, 10%) for the two data sources resulted in the smallest percent absolute RB and MSE values for the OR method. However, for the AND, RSSA and PSSA methods, the absolute RB and MSE values were smallest for outcome prevalence combination (18, 15%). The RSSA method had the smallest absolute RB when \( {\rho}_{y_1{y}_2} \) = 0.65 and the OR method resulted in average absolute RB that was the smallest when \( {\rho}_{y_1{y}_2} \) = 0.85.
When the average marker correlation was either \( {\overline{\rho}}_x \) = 0.00 or \( {\overline{\rho}}_x \) = 0.20 and true prevalence was 20%, the PSSA method had the smallest absolute RB (3.7%) when \( {\rho}_{y_1{y}_2} \) = 0.85 and outcome prevalence was (15, 15%). As the average marker correlation increased from \( {\overline{\rho}}_x \) = 0.00 to \( {\overline{\rho}}_x \) = 0.50, the absolute RB and MSE values for the PSSA method increased by more than 90%, irrespective of the correlation between the data sources. The absolute RB showed very little variation (less than 7%) when the average marker correlation was \( {\overline{\rho}}_x \) = 0.00 compared to \( {\overline{\rho}}_x \) = 0.20. When the average marker correlation was zero (i.e., independent markers), the PSSA method produced prevalence estimates that were stable. This result suggests that each of the markers was providing unique information to the model.
When true prevalence was 10%, absolute RB ranged from 0.3 to 375.0% and MSE ranged from < 0.01 to 18.41 across the simulation conditions (Table 2). The RSSA method had the smallest percent absolute RB and MSE when the outcome prevalence combination was (8, 7%), regardless of the correlation between data sources. As outcome prevalence went from (8, 7%) to (5, 5%), performance of the RSSA and AND methods worsened. For example, the percent absolute RB and MSE for the RSSA method went from 8.4% and 0.01 to 30.9% and 0.10 when \( {\rho}_{y_1{y}_2} \) = 0.85, while when \( {\rho}_{y_1{y}_2} \) = 0.65, the average absolute RB and MSE went from 1.1% and < 0.01 to 28.8% and 0.08. The OR method resulted in the smallest absolute RB when the outcome prevalence was (8, 5%) and (5, 5%), regardless of the correlation between the data sources. For example, for outcome prevalence (8, 5%), the average absolute RB and MSE were 3.3% and < 0.01 when \( {\rho}_{y_1{y}_2} \) = 0.85, and 12.8% and 0.02 when \( {\rho}_{y_1{y}_2} \) = 0.65.
The PSSA method had the smallest absolute RB (1.1 and 6.3%) for outcome prevalence (8, 5%) and (5, 5%) when \( {\overline{\rho}}_x \) = 0.00 and the correlation between the data sources was \( {\rho}_{y_1{y}_2} \) = 0.85. As the average marker correlation increased, the absolute RB and MSE values of the PSSA method increased substantially. For example, under outcome prevalence (8, 5%) and \( {\rho}_{y_1{y}_2} \) = 0.85, the PSSA method had absolute RB of 1.1, 10.7 and 230.5% when the average marker correlation was 0.00, 0.20 and 0.50, respectively. When N_{x} decreased from 16 to 8, the MSE values for the PSSA method increased; for example, under outcome prevalence (8, 5%) and \( {\overline{\rho}}_x \) = 0.00, the MSE value went from 0.12 to 2.78 when \( {\rho}_{y_1{y}_2} \) = 0.85 and from 0.28 to 4.31 when \( {\rho}_{y_1{y}_2} \) = 0.65. Under all three outcome prevalence conditions, average absolute RB and MSE values of the PSSA method increased as the average marker correlation increased. They also increased substantially as the correlation between the data sources went from \( {\rho}_{y_1{y}_2} \) = 0.85 to \( {\rho}_{y_1{y}_2} \) = 0.65: under outcome prevalence (8, 7%), the absolute RB values were 35.0, 37.8 and 43.3% for \( {\overline{\rho}}_x \) = 0.00, 0.20 and 0.50 when \( {\rho}_{y_1{y}_2} \) = 0.85, but 154.9, 217.1 and 286.5% when \( {\rho}_{y_1{y}_2} \) = 0.65.
The results showed an increase in the absolute RB and MSE for each data-combining method when true prevalence was 10% compared with 20%. In terms of the effect of the correlation between data sources, the absolute RB and MSE for the OR, AND and PSSA methods became smaller as the correlation increased from \( {\rho}_{y_1{y}_2} \) = 0.65 to \( {\rho}_{y_1{y}_2} \) = 0.85. The best results were obtained for the RSSA method when \( {\rho}_{y_1{y}_2} \) = 0.65 and for the OR method when \( {\rho}_{y_1{y}_2} \) = 0.85.
The effect of the average marker correlation on performance of the PSSA method was evident for all simulation conditions. The estimated prevalence became more biased as correlation increased. The percent absolute RB and MSE across all simulation conditions were 46.7% and 1.86 when \( {\overline{\rho}}_x \) = 0.00 and 160.4% and 6.68 when \( {\overline{\rho}}_x \) = 0.50.
Results for numeric example
A total of N = 121,144 individuals had at least one encounter in EMRs that could be linked to AHRs in the study observation period. After exclusions, the study cohort included n = 68,877 individuals (Fig. 1). Close to half of the individuals in the cohort were between 18 and 44 years of age. Slightly more than half of the cohort members were female and the majority were urban residents. Cohort members were equally distributed across most income quintiles, with the exception of the lowest quintile where they tended to be underrepresented. More than 83% of the individuals in the cohort had a CCS score of 0 (Table 3).
In terms of the diseasespecific covariates, individuals with diagnosed depression constituted 10.3% of the study cohort when identified from AHRs and 16.0% when identified from EMRs. A total of 1.9% of the study cohort had COPD when identified from AHRs and 0.3% when identified from EMRs.
The tetrachoric correlation for the AHR and EMR case ascertainment algorithms was 0.90 (95% CI: 0.89–0.90). When stratifying the cohort by sex, the association between the AHR and EMR case ascertainment algorithms was similar for males, with a value of 0.88 (95% CI: 0.88–0.90), and for females, with a value of 0.90 (95% CI: 0.90–0.91). Across age groups, the correlation coefficient had values of 0.89 (95% CI: 0.88–0.90) for ages 18 to 44 years, 0.87 (95% CI: 0.86–0.87) for ages 45 to 64 years, and 0.76 (95% CI: 0.74–0.77) for ages 65+ years.
The estimated hypertension prevalence using each data-combining method for the entire study cohort is shown in Fig. 2; the results stratified by sex and age group are reported in Table 4. The prevalence estimates for the AHR and EMR case ascertainment algorithms had values of 30.9% (95% CI: 30.6–31.2) and 24.9% (95% CI: 24.6–25.2), respectively, which were significantly different. The estimated prevalence using the OR method was close to the estimate for AHRs (34.4%; 95% CI: 34.1–34.8). The AND method produced the lowest estimate. The RSSA method produced an estimate substantially lower than the OR method.
For the PSSA method, the mean absolute correlation values amongst the covariates included in Models 1 through 4 were 0.18, 0.17, 0.13, and 0.16, respectively. Model 1 produced the highest prevalence estimate of 35.9% (95% CI: 35.7–36.1). Model 4 had the lowest estimate at 34.3% (95% CI: 34.1–34.5); these estimates were significantly different. Model 4 resulted in the lowest DIC (Table 5). As Table 4 reveals, similar patterns were observed for the data-combining methods across age groups as well as for males and females. The PSSA model fit statistics also produced consistent results, regardless of the stratification variables.
Discussion
Four data-combining methods that use information from two error-prone data sources for ascertaining chronic disease cases were compared. A simulation study was conducted to evaluate the performance of the methods. Then a numeric example for hypertension prevalence estimation was applied to real-world data. The investigated methods can benefit population health surveillance programs that inform health promotion and chronic disease prevention initiatives.
Under simulation conditions in which the two data sources were highly correlated, the estimated prevalence from the OR method was only slightly biased. For simulation conditions in which the two data sources were not highly correlated, the RSSA method had the lowest absolute RB and MSE of all the data-combining methods. Performance of the PSSA method was influenced by both the number of covariates and the magnitude of their correlation.
In the numeric example, there was a high correlation between the AHR and EMR case ascertainment algorithms for hypertension, which left a limited margin of improvement for the data-combining methods. The high degree of overlap left few individuals classified as disease cases in one data source but not the other. Other studies have found a high degree of association between these two data sources for conditions with well-defined diagnostic criteria, including hypertension and diabetes [40, 61].
In our study cohort, the naïve estimates of hypertension prevalence from AHRs and EMRs were higher than those obtained from three Canadian studies, which had values of 19.6 and 21.3% for AHRs [13, 31, 39] and 22.8% for EMRs [46]. However, our results are consistent with those from another Canadian study that estimated hypertension prevalence to be between 27 and 30% using AHRs [5]. The patterns in terms of sex and age stratified prevalence estimates were consistent with previous studies [5, 39, 49], which lends face validity to our findings.
Amongst the rule-based methods, the AND and RSSA methods produced estimates of prevalence that were significantly lower than the OR method. This was somewhat surprising given the high degree of correlation between the AHR and EMR case definitions. However, it also points to the need for almost complete overlap between the two data sources for the AND method to produce results similar to the OR method. Prevalence estimates for the PSSA method were similar for Models 1 through 3, but were significantly lower for Model 4 than for Model 1. The low variation in prevalence estimates for the first three models might be attributed to the low mean correlation amongst the markers. Our simulation study revealed that when the average correlation amongst the markers was zero (i.e., independent markers), the PSSA method produced prevalence estimates that were unbiased. The low correlation amongst the markers suggests that each marker was providing unique information to the model.
This study has some limitations. First, the simulation study focused on a limited number of simulation conditions; at the same time, we selected scenarios that are representative of real-world data [34, 35, 38]. Another limitation is that we focused on only a single chronic disease in our numeric example, and it had a relatively high prevalence. Greater differences across data-combining methods might be revealed for a chronic disease with lower population prevalence. We selected hypertension in part because a number of prior studies have demonstrated the feasibility of using administrative data for case ascertainment.
The key strength of this study was the use of both computer simulation and a real-world numeric example to investigate data-combining methods. We compared methods using two population-based data sources that are available in many jurisdictions worldwide. Moreover, this research investigated different sets of case ascertainment markers when applying the PSSA method, to assess the utility and feasibility of these markers as proxy measures of hypertension.
Conclusions
Our research demonstrates that the choice of a data-combining method depends on the characteristics of the data. It is important for researchers to carefully consider the expected magnitude of correlation amongst data sources, as well as the accuracy of the individual data sources, when estimating disease prevalence using a data-combining method. When the correlation between data sources is very high, the OR and AND methods will produce comparable estimates of prevalence. When correlation is low, however, we recommend using the OR method. If both data sources tend to poorly capture true non-disease cases, then the AND method is preferable.
In our simulation study, the RSSA method produced larger RB and MSE when the specificity of the case ascertainment algorithms was underestimated than when the true specificity values were used. Therefore, the RSSA method should be used with caution if accurate estimates of sensitivity and specificity of case ascertainment algorithms are not available from published sources. In the simulation, the estimated prevalence from the RSSA method was less biased when true prevalence was 20% than when it was 10%. Thus, we recommend using the RSSA method when true prevalence is higher, as it is less affected by potentially sparse data.
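A deterministic sensitivity-specificity correction of this kind is commonly implemented with the classic Rogan-Gladen estimator. The sketch below assumes the RSSA rule takes that standard form; it is illustrative only and is not taken from the study's code. It also makes plain why a poor specificity estimate is so damaging: specificity enters both the numerator and the denominator of the correction.

```python
def rssa_adjust(observed_prev: float, sensitivity: float,
                specificity: float) -> float:
    """Adjust an observed (apparent) prevalence for misclassification
    using the Rogan-Gladen correction:
        p = (p_obs + specificity - 1) / (sensitivity + specificity - 1).
    Requires sensitivity + specificity > 1 (tests better than chance)."""
    denom = sensitivity + specificity - 1.0
    if denom <= 0:
        raise ValueError("sensitivity + specificity must exceed 1")
    adjusted = (observed_prev + specificity - 1.0) / denom
    # Truncate to [0, 1]: sampling error or misspecified accuracy values
    # can push the corrected estimate outside valid prevalence bounds.
    return min(max(adjusted, 0.0), 1.0)
```

With a perfect algorithm (sensitivity = specificity = 1) the correction leaves the observed prevalence unchanged; as specificity is misstated, the correction shifts accordingly, consistent with the sensitivity of the RSSA method to inaccurate specificity noted above.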
For the PSSA method, we recommend including a rich set of markers to estimate disease prevalence, especially when true prevalence is low. The PSSA method works best when correlation between the two data sources is high, the average correlation amongst the markers is low, and the true prevalence is high.
The methods used in this study can be extended to combine more than two data sources. For example, future research could investigate including survey data as a third source: the population-based Canadian Community Health Survey is used to produce prevalence estimates for many conditions, including hypertension [62], even though it is prone to recall bias. Combining this data source with both AHRs and EMRs might be helpful to epidemiologists and public health staff who routinely use only a single source to report disease prevalence estimates. As well, the PSSA models included only markers with complete information; in practice, markers may contain missing data. Further research could extend this method to account for missingness in the markers [63, 64].
Availability of data and materials
Data used in this article were derived from administrative health data as a secondary source. The data were provided under specific data sharing agreements only for the approved use. The original source data are not owned by the researchers and as such cannot be provided to a public repository. The original data source and approval for use have been noted in the acknowledgments of the article. Where necessary and with appropriate approvals, source data specific to this article or project may be reviewed with the consent of the original data providers, along with the required privacy and ethical review bodies.
Abbreviations
 AHR: Administrative health record
 ATC: Anatomical Therapeutic Chemical
 CCS: Charlson comorbidity score
 CI: Confidence interval
 COPD: Chronic obstructive pulmonary disease
 DIC: Deviance Information Criterion
 DIN: Drug Identification Number
 DPIN: Drug Program Information Network
 EMR: Electronic medical record
 ICD: International Classification of Diseases
 MaPCReN: Manitoba Primary Care Research Network
 MCHP: Manitoba Centre for Health Policy
 MSE: Mean square error
 PHIN: Personal health identification number
 PSRF: Potential scale reduction factor
 PSSA: Probabilistic-based sensitivity-specificity adjusted
 RB: Relative bias
 RSSA: Rule-based sensitivity-specificity adjusted
References
 1.
Mähönen M, Jula A, Harald K, Antikainen R, Tuomilehto J, Zeller T, et al. The validity of heart failure diagnoses obtained from administrative registers. Eur J Prev Cardiol. 2013;20(2):254–9.
 2.
Sundbøll J, Adelborg K, Munch T, Frøslev T, Sørensen HT, Bøtker HE, Schmidt M. Positive predictive value of cardiovascular diagnoses in the Danish National Patient Registry: a validation study. BMJ Open. 2016;6(11):e012832.
 3.
Sung SF, Hsieh CY, Lin HJ, Chen YW, Yang YHK, Li CY. Validation of algorithms to identify stroke risk factors in patients with acute ischemic stroke, transient ischemic attack, or intracerebral hemorrhage in an administrative claims database. Int J Cardiol. 2016;215:277–82.
 4.
Tessier-Sherman B, Galusha D, Taiwo OA, Cantley L, Slade MD, Kirsche SR, Cullen MR. Further validation that claims data are a useful tool for epidemiologic research on hypertension. BMC Public Health. 2013;13(1):51.
 5.
Tu K, Campbell NR, Chen ZL, Cauch-Dudek KJ, McAlister FA. Accuracy of administrative databases in identifying patients with hypertension. Open Med. 2007;1(1):e18.
 6.
Papani R, Sharma G, Agarwal A, Callahan SJ, Chan WJ, Kuo YF, et al. Validation of claims-based algorithms for pulmonary arterial hypertension. Pulm Circ. 2018;8(2):1–8.
 7.
Peng M, Chen G, Kaplan GG, Lix LM, Drummond N, Lucyk K, et al. Methods of defining hypertension in electronic medical records: validation against national survey data. J Public Health. 2016;38(3):e392–9.
 8.
Roberts CL, Bell JC, Ford JB, Hadfield RM, Algert CS, Morris JM. The accuracy of reporting of the hypertensive disorders of pregnancy in population health data. Hypertens Pregnancy. 2008;27(3):285–97.
 9.
Williamson T, Green ME, Birtwhistle R, Khan S, Garies S, Wong ST, et al. Validating the 8 CPCSSN case definitions for chronic disease surveillance in a primary care database of electronic health records. Ann Fam Med. 2014;12(4):367–72.
 10.
Coleman N, Halas G, Peeler W, Casaclang N, Williamson T, Katz A. From patient care to research: a validation study examining the factors contributing to data quality in a primary care electronic medical record database. BMC Fam Pract. 2015;16(1):11.
 11.
KadhimSaleh A, Green M, Williamson T, Hunter D, Birtwhistle R. Validation of the diagnostic algorithms for 5 chronic conditions in the Canadian primary care sentinel surveillance network (CPCSSN): a Kingston practicebased research network (PBRN) report. J Am Board Fam Med. 2013;26(2):159–67.
 12.
Lix L, Yogendran M, Burchill C, Metge C, McKeen N, Moore D, Bond R. Defining and validating chronic diseases: an administrative data approach. Winnipeg: Manitoba Centre for Health Policy; 2006.
 13.
Quan H, Khan N, Hemmelgarn BR, Tu K, Chen G, Campbell N, et al. Validation of a case definition to define hypertension using administrative data. Hypertension. 2009;54(6):1423–8.
 14.
Valle D, Lima JMT, Millar J, Amratia P, Haque U. Bias in logistic regression due to imperfect diagnostic test results and practical correction approaches. Malar J. 2015;14:434.
 15.
Atwood KM, Robitaille CJ, Reimer K, Dai S, Johansen HL, Smith MJ. Comparison of diagnosed, selfreported, and physicallymeasured hypertension in Canada. Can J Cardiol. 2013;29(5):606–12.
 16.
Gini R, Francesconi P, Mazzaglia G, Cricelli I, Pasqua A, Gallina P, et al. Chronic disease prevalence from Italian administrative databases in the VALORE project: a validation through comparison of population estimates with general practice databases and national survey. BMC Public Health. 2013;13(1):15.
 17.
Tang PC, Ralston M, Arrigotti MF, Qureshi L, Graham J. Comparison of methodologies for calculating quality measures based on administrative data versus clinical data from an electronic health record system: implications for performance measures. J Am Med Inform Assoc. 2007;14(1):10–5.
 18.
Bernatsky S, Joseph L, Bélisle P, Boivin JF, Rajan R, Moore A, Clarke A. Bayesian modelling of imperfect ascertainment methods in cancer studies. Stat Med. 2005;24(15):2365–79.
 19.
Dendukuri N, Joseph L. Bayesian approaches to modeling the conditional dependence between multiple diagnostic tests. Biometrics. 2001;57(1):158–67.
 20.
He Y, Landrum MB, Zaslavsky AM. Combining information from two data sources with misreporting and incompleteness to assess hospiceuse among cancer patients: a multiple imputation approach. Stat Med. 2014;33(21):3710–24.
 21.
Reitsma JB, Rutjes AW, Khan KS, Coomarasamy A, Bossuyt PM. A review of solutions for diagnostic accuracy studies with an imperfect or missing reference standard. J Clin Epidemiol. 2009;62(8):797–806.
 22.
Alonzo TA, Pepe MS. Using a combination of reference tests to assess the accuracy of a new diagnostic test. Stat Med. 1998;18(22):2987–3003.
 23.
Martin DH, Nsuami M, Schachter J, Hook EW, Ferrero D, Quinn TC, Gaydos C. Use of multiple nucleic acid amplification tests to define the infected-patient "gold standard" in clinical trials of new diagnostic tests for chlamydia trachomatis infections. J Clin Microbiol. 2004;42(10):4749–58.
 24.
Schiller I, Smeden M, Hadgu A, Libman M, Reitsma JB, Dendukuri N. Bias due to composite reference standards in diagnostic accuracy studies. Stat Med. 2016;35(9):1454–70.
 25.
Couris CM, Polazzi S, Olive F, Remontet L, Bossard N, Gomez F, Trombert B. Breast cancer incidence using administrative data: correction with sensitivity and specificity. J Clin Epidemiol. 2009;62(6):660–6.
 26.
Couris CM, Colin C, Rabilloud M, Schott AM, Ecochard R. Method of correction to assess the number of hospitalized incident breast cancer cases based on claims databases. J Clin Epidemiol. 2002;55(4):386–91.
 27.
Hadgu A, Dendukuri N, Hilden J. Evaluation of nucleic acid amplification tests in the absence of a perfect gold-standard test: a review of the statistical and epidemiologic issues. Epidemiology. 2005;16(5):604–12.
 28.
Baughman AL, Bisgard KM, Cortese MM, Thompson WW, Sanden GN, Strebel PM. Utility of composite reference standards and latent class analysis in evaluating the clinical accuracy of diagnostic tests for pertussis. Clin Vaccine Immunol. 2008;15(1):106–14.
 29.
Dendukuri N, Wang L, Hadgu A. Evaluating diagnostic tests for chlamydia trachomatis in the absence of a gold standard: a comparison of three statistical methods. Stat Biopharm Res. 2011;3(2):385–97.
 30.
Tang S, Hemyari P, Canchola JA, Duncan J. Dual composite reference standards (dCRS) in molecular diagnostic research: a new approach to reduce bias in the presence of imperfect reference. J Biopharm Stat. 2018;28(5):951–65.
 31.
Pace R, Peters T, Rahme E, Dasgupta K. Validity of health administrative database definitions for hypertension: a systematic review. Can J Cardiol. 2017;33(8):1052–9.
 32.
Lewbel A. Identification of the binary choice model with misclassification. Economet Theor. 2000;16(4):603–9.
 33.
Schirmacher D, Schirmacher E. Multivariate dependence modeling using paircopulas. 2008 ERM Symposium; 2008. p. 1–52.
 34.
Kaplan MS, Huguet N, Feeny DH, McFarland BH. Self-reported hypertension prevalence and income among older adults in Canada and the United States. Soc Sci Med. 2010;70(6):844–9.
 35.
Walker RL, Chen G, McAlister FA, Campbell NR, Hemmelgarn BR, Dixon E, et al. Hospitalization for uncomplicated hypertension: an ambulatory care sensitive condition. Can J Cardiol. 2013;29(11):1462–9.
 36.
Gibbons CL, Mangen MJJ, Plass D, Havelaar AH, Brooke RJ, Kramarz P, et al. Measuring underreporting and underascertainment in infectious disease datasets: a comparison of methods. BMC Public Health. 2014;14(1):147.
 37.
Tennekoon V, Rosenman R. Systematically misclassified binary dependent variables. Commun Stat Theory Methods. 2016;45(9):2538–55.
 38.
Padwal RS, Bienek A, McAlister FA, Campbell NR, Outcomes Research Task Force of the Canadian Hypertension Education Program. Epidemiology of hypertension in Canada: an update. Can J Cardiol. 2016;32(5):687–94.
 39.
Robitaille C, Dai S, Waters C, Loukine L, Bancej C, Quach S, et al. Diagnosed hypertension in Canada: incidence, prevalence and associated mortality. Can Med Assoc J. 2012;184(1):E49–56.
 40.
Frank J. Comparing nationwide prevalences of hypertension and depression based on claims data and survey data: an example from Germany. Health Policy. 2016;120(9):1061–9.
 41.
Walther BA, Moore JL. The concepts of bias, precision and accuracy, and their use in testing the performance of species richness estimators, with a literature review of estimator performance. Ecography. 2005;28(6):815–29.
 42.
The R Project for Statistical Computing. 2018. Available from: https://www.r-project.org/.
 43.
World Health Organization. WHO collaborating Centre for Drug Statistics Methodology: ATC classification index with DDDs and guidelines for ATC classification and DDD assignment. Oslo: Norwegian Institute of Public Health; 2006.
 44.
Singer A, Yakubovich S, Kroeker AL, Dufault B, Duarte R, Katz A. Data quality of electronic medical records in Manitoba: do problem lists accurately reflect chronic disease billing diagnoses? J Am Med Inform Assoc. 2016;23(6):1107–12.
 45.
The University of Manitoba. (2018). Manitoba Primary Care Research Network (MaPCReN). Available from: http://umanitoba.ca/faculties/health_sciences/medicine/units/family_medicine/research/mapcren.html. Accessed 12 June 2019.
 46.
Godwin M, Williamson T, Khan S, Kaczorowski J, Asghari S, Morkem R, et al. Prevalence and management of hypertension in primary care practices with electronic medical records: a report from the Canadian primary care sentinel surveillance network. CMAJ Open. 2015;3(1):E76.
 47.
Mustard CA, Derksen S, Berthelot JM, Wolfson M, Roos LL. Agespecific education and income gradients in morbidity and mortality in a Canadian province. Soc Sci Med. 1997;45(3):383–97.
 48.
Quan H, Sundararajan V, Halfon P, Fong A, Burnand B, Luthi JC, et al. Coding algorithms for defining comorbidities in ICD-9-CM and ICD-10 administrative data. Med Care. 2005;43:1130–9.
 49.
Peng M, Chen G, Lix LM, McAlister FA, Tu K, Campbell NR, et al. Refining hypertension surveillance to account for potentially misclassified cases. PLoS One. 2015;10(3):e0119186.
 50.
EchouffoTcheugui JB, Batty GD, Kivimäki M, Kengne AP. Risk models to predict hypertension: a systematic review. PLoS One. 2013;8(7):e67370.
 51.
Sun D, Liu J, Xiao L, Liu Y, Wang Z, Li C, et al. Recent development of riskprediction models for incident hypertension: an updated systematic review. PLoS One. 2017;12(10):e0187240.
 52.
Naaktgeboren CA, Bertens LC, van Smeden M, de Groot JA, Moons KG, Reitsma JB. Value of composite reference standards in diagnostic research. BMJ. 2013;347:1–9.
 53.
Casella G, George EI. Explaining the Gibbs sampler. Am Stat. 1992;46(3):167–74.
 54.
Gelman A, Rubin D. Inference from iterative simulation using multiple sequences. Stat Sci. 1992;7(4):457–72.
 55.
Leslie WD, Berger C, Langsetmo L, Lix LM, Adachi JD, Hanley DA, et al. Construction and validation of a simplified fracture risk assessment tool for Canadian women and men: results from the CaMos and Manitoba cohorts. Osteoporos Int. 2011;22(6):1873–83.
 56.
Brooks SP, Gelman A. General methods for monitoring convergence of iterative simulations. J Comput Graph Stat. 1998;7(4):434–55.
 57.
Juras J, Pasaric Z. Application of tetrachoric and polychoric correlation coefficients to forecast verification. Geofizika. 2006;23(1):59–82.
 58.
Wilcox RR. Fundamentals of modern statistical methods: substantially improving power and accuracy. New York: Springer; 2010.
 59.
Spiegelhalter DJ, Best NG, Carlin BP, Van Der Linde A. Bayesian measures of model complexity and fit. J R Stat Soc Ser B (Stat Methodol). 2002;64(4):583–639.
 60.
Gelman A, Hwang J, Vehtari A. Understanding predictive information criteria for Bayesian models. Stat Comput. 2014;24(6):997–1016.
 61.
Zellweger U, Bopp M, Holzer BM, Djalali S, Kaplan V. Prevalence of chronic medical conditions in Switzerland: exploring estimates validity by comparing complementary data sources. BMC Public Health. 2014;14(1):1157.
 62.
Muggah E, Graves E, Bennett C, Manuel DG. Ascertainment of chronic diseases using population health data: a comparison of health administrative data and patient selfreport. BMC Public Health. 2013;13(1):16.
 63.
Janssen KJ, Donders ART, Harrell FE, Vergouwe Y, Chen Q, Grobbee DE, Moons KG. Missing covariate data in medical research: to impute is better than to ignore. J Clin Epidemiol. 2010;63(7):721–7.
 64.
Rubin DB. Multiple imputation for nonresponse in surveys. New York: Wiley; 1987.
Acknowledgements
The authors acknowledge the Manitoba Centre for Health Policy for use of data contained in the Manitoba Population Research Data Repository under project #2017–038 (HIPC #2017/2018–42). The results and conclusions are those of the authors and no official endorsement by the Manitoba Centre for Health Policy, Manitoba Health, or other data providers is intended or should be inferred. Data used in this study are from the Manitoba Population Research Data Repository housed at the Manitoba Centre for Health Policy, University of Manitoba, and were derived from data provided by Manitoba Health.
Funding
Funding for this study was provided by the Canadian Institutes of Health Research (Funding Reference # 143293). LML was supported by a Research Chair from Research Manitoba during the period of the study and is currently supported by a Tier 1 Canada Research Chair in Methods for Electronic Health Data Quality.
Author information
Affiliations
Contributions
All authors conceived the study and prepared the analysis plan. SA and LML conducted the analysis and prepared the draft manuscript. All authors reviewed and approved the final version of the manuscript.
Corresponding author
Correspondence to Lisa M. Lix.
Ethics declarations
Ethics approval and consent to participate
This study received ethical approval from the University of Manitoba Health Research Ethics Board. Consent was not received from study participants; this was a retrospective populationbased cohort study that used secondary data and therefore obtaining consent was not practicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Additional file
Additional file 1:
Visual Graphical Assessment and Trace Plots Showing Convergence for the Probabilistic Sensitivity-Specificity Adjusted (PSSA) Models. Trace plots, density plots and convergence plots of the posterior distribution of the estimated disease prevalence for the PSSA method. (DOCX 2759 kb)
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
About this article
Received
Accepted
Published
DOI
Keywords
 Administrative data
 Electronic medical records
 Misclassification bias
 Prevalence
 Statistical model