 Research article
 Open Access
Estimating the reidentification risk of clinical data sets
BMC Medical Informatics and Decision Making volume 12, Article number: 66 (2012)
Abstract
Background
Deidentification is a common way to protect patient privacy when disclosing clinical data for secondary purposes, such as research. One type of attack that deidentification protects against is linking the disclosed patient data with public and semipublic registries. Uniqueness is a commonly used measure of reidentification risk under this attack. If uniqueness can be measured accurately then the risk from this kind of attack can be managed. In practice, it is often not possible to measure uniqueness directly, therefore it must be estimated.
Methods
We evaluated the accuracy of uniqueness estimators on clinically relevant data sets. Four candidate estimators were identified either because they had been evaluated in the past and found to have good accuracy or because they were new and had not been evaluated comparatively before: the Zayatz estimator, the slide negative binomial estimator, Pitman’s estimator, and muargus. A Monte Carlo simulation was performed to evaluate the uniqueness estimators on six clinically relevant data sets. We varied the sampling fraction and the uniqueness in the population (the value being estimated). The median relative error and interquartile range of the uniqueness estimates were measured across 1000 runs.
Results
There was no single estimator that performed well across all of the conditions. We developed a decision rule which selects between the Pitman, slide negative binomial, and Zayatz estimators depending on the sampling fraction and the difference between estimates. This decision rule had the most consistently low median relative error across multiple conditions and data sets.
Conclusion
This study identified an accurate decision rule that can be used by health privacy researchers and disclosure control professionals to estimate uniqueness in clinical data sets. The decision rule provides a reliable way to measure reidentification risk.
Background
The public is uncomfortable disclosing their personal information for, or having their personal information processed for, secondary purposes if they do not trust the organization collecting and processing the data. For example, individuals often cite privacy and confidentiality concerns and lack of trust in researchers as reasons for not having their health information used for research purposes [1]. One study found that the greatest predictor of patients’ willingness to share information with researchers was the level of trust they placed in the researchers themselves [2]. A number of US studies have shown that attitudes toward privacy and confidentiality of the census are predictive of people’s participation [3, 4], and also that there is a positive association between belief in the confidentiality of census records and the level of trust one has in the government [5]. These trust effects are amplified when the information collected is of a sensitive nature [5, 6].
There is a risk that the increasing number of medical data breaches is potentially eroding the public’s trust in health information custodians in general [7, 8]. For example, the number of records affected by breaches is already quite high: the U.S. Department of Health and Human Services (HHS) has reported 252 breaches at health information custodians (e.g., clinics and hospitals) each involving more than 500 records from the end of September 2009 to the end of 2010 [9]. In all, the records of over 7.8 million patients have been exposed. At the same time there is increasing pressure to make individual-level health data more generally available, and in some cases publicly available, for research and policy purposes [10–23].
One of the factors that helps make the public more comfortable with their health information being used for research purposes is its deidentification at the earliest opportunity [1, 24–30]. As many as 86% of respondents in one study were comfortable with the creation and use of a health database of deidentified information for research purposes, whereas only 35% were comfortable with such a database that included identifiable information [28]. It is therefore important to ensure that the risk of reidentification is low.
The uniqueness of individuals in the population is often used as a measure of reidentification risk [31–36]. In commentary in the Federal Register about the deidentification standards in the Health Insurance Portability and Accountability Act (HIPAA), HHS referred only to uniqueness as the reidentification risk measure [37, 38]. If an individual is unique in the population then their risk of reidentification can be quite high. For example, unique individuals are easier to correctly reidentify by matching their records in the disclosed database with a population registry, such as a voter registration list [39].
When the data custodian is disclosing the full population of patients then it is easy to just measure uniqueness from the data. However, in practice many data sets are samples from the population, for example, data abstracted from a sample of charts, data from surveys [40, 41], and public use microdata files such as census sample files [42–46]. The population may be all of the patients at a clinic or all people living in a particular geographic area.
The custodian may not have the resources to acquire data on all of the population to measure reidentification risk [47]. Consequently, the custodian needs to estimate uniqueness from the available sample data, and then decide whether the risk of reidentification is acceptable or if further disclosure control actions are required (e.g., generalization of the data or putting in place a data sharing agreement with the data recipient).
A number of different uniqueness estimators have been proposed in the literature. It is important to know which of these works best on clinical data sets. However, many of these estimators have not been compared, and therefore we do not know which ones would provide the most accurate estimates. In this study we use a Monte Carlo simulation to compare four different methods for estimating population uniqueness to determine which is the most accurate, and under what conditions.
Methods
Definitions
Quasiidentifiers
The variables that are going to be included in a risk assessment are called the quasiidentifiers [48]. Examples of common quasiidentifiers are [33, 49–52]: dates (such as birth, death, admission, discharge, visit, and specimen collection), locations (such as postal codes, hospital names, and regions), race, ethnicity, languages spoken, aboriginal status, and gender.
Equivalence classes
All the records that have the same values on the quasiidentifiers are called an equivalence class. For example, all the records in a data set about 17 year old males admitted on 1 January 2008 form an equivalence class.
Uniqueness
A unique record is one that is in an equivalence class of size one. For example, if our quasiidentifiers are age, gender, and postal code, and there is only one 90 year old female in the postal code “N3E 6Y4”, then her record would be unique. Other sensitive variables that are not considered quasiidentifiers are not taken into account in the computation of uniqueness. The term “uniqueness” is used to characterize the number of unique records in a data set. The way it is measured depends on other factors, which are discussed further below.
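As a minimal sketch (using hypothetical records and quasi-identifier values, not data from this study), equivalence classes and unique records can be computed by grouping records on their quasi-identifier tuples:

```python
from collections import Counter

# Hypothetical records with quasi-identifiers (age, gender, postal code).
records = [
    (17, "M", "K1A 0B1"),
    (17, "M", "K1A 0B1"),
    (90, "F", "N3E 6Y4"),
    (45, "F", "K1A 0B1"),
]

# Each distinct quasi-identifier combination is an equivalence class;
# the counter value is the class size.
classes = Counter(records)

# A unique record is one in an equivalence class of size one.
uniques = [qi for qi, size in classes.items() if size == 1]

print(len(classes))  # 3 equivalence classes
print(len(uniques))  # 2 unique records
```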
Threat model and risk measurement
Context
Consider the common situation whereby a data custodian wishes to disclose a data set to a researcher. A condition of the disclosure set by the research ethics board was that the data had to be deidentified. To decide whether the data set is sufficiently deidentified, the data custodian needs to measure reidentification risk.
One of the common threat models that is considered when disclosing health data sets is that an adversary will match against the voter registration list [39], and in the responses to comments on the HIPAA Privacy Rule regulations published in the Federal Register, the Department of Health and Human Services (DHHS) explicitly considers voter registration lists as a key data source that can be used for reidentification [37, 38]. Some legal scholars argue that threat models should only consider public information which an adversary can get access to and not information that may be privately known by the adversary or in private databases [53].
The voter registration list is assumed to represent the whole adult population. Many states in the US make their voter registration lists readily available for a nominal fee or free, and these often include the name, address, date of birth, and gender of individuals [39]. The matching example is shown in Figure 1.
Under this example the data that is being disclosed is considered a sample, and the voter registration list is considered the population. In our analysis we assume that the adversary does not know who is in the sample data set. For instance, the sample may be charts randomly selected for abstraction.
Here we have 14 individuals in the sample data set. An examination of that data set indicates that 9 of the 14 records are unique on the quasiidentifiers (they are highlighted in the figure). Given that they are unique in the data set, the custodian may assume that if an adversary links these records with the voter list they will all match successfully and all 9 can be reidentified: a reidentification rate of approximately 64%, which would be considered high by most standards. The data custodian may then proceed to generalize the year of birth to a decade of birth, such that none of the records in the data set is unique, and to suppress three records in the data set (approximately 21% suppression). This is illustrated in deidentification path (a) in Figure 1. By eliminating uniqueness, the adversary would not be able to match any of the disclosed records with certainty. This deidentification has resulted in a loss of precision in the date of birth variable and 21% suppression.
However, the data custodian did not need to generalize the year of birth at all. For a correct match to occur with certainty, a record needs to be unique in both the disclosed data set and the voter registration list. As shown in Figure 1, only 2 of the 9 records that are unique in the original data set are also unique in the voter registration list (the unique records in the voter registration list are highlighted). Therefore, under our threat model the data custodian could have disclosed the original data with the full year of birth and suppressed only these two records (the male born in 1962 and the female born in 1966). This is illustrated in deidentification path (b) in Figure 1. We are only interested in the records that are unique in the population given that they are unique in the sample data set.
Notation
We will first introduce some notation. Let N and n be the number of records in the voter registration list and the disclosed (sample) data set respectively, let K and u denote the number of nonzero equivalence classes in the voter registration list and the disclosed data set respectively, and let ${F}_{i}$ and ${f}_{i}$ denote the size of the i^{th} equivalence class in the voter registration list and the disclosed data set respectively, where $i\in \left\{1,\dots ,K\right\}$ ($i\in \left\{1,\dots ,u\right\}$ respectively).
Measuring uniqueness
One can measure the conditional probability that a record in the voter registration list is unique given that it is unique in the original data set by [54]:

$${\lambda}_{1}=\frac{\sum_{i=1}^{u}I\left({f}_{i}=1,{F}_{i}=1\right)}{\sum_{i=1}^{u}I\left({f}_{i}=1\right)} \quad (1)$$
where I is the indicator function. For example, $I\left({f}_{i}=1,{F}_{i}=1\right)$ is one if the sample equivalence class is unique and the corresponding population equivalence class is also unique; otherwise it is zero.
However, as a risk metric for the whole data set that will be disclosed, ${\lambda}_{1}$ can be misleading. In our example, 2 out of 9 sample unique records were population unique, giving a risk of ${\lambda}_{1}=0.22$. However, out of the whole data set only 2 out of 14 records are at risk, therefore the data set risk should be 0.14. To give a more extreme example, consider a 1000 record data set where there are only two unique records and they are both also unique in the voter registration list. In this case ${\lambda}_{1}=1$, indicating that all records are at risk, when in fact only 2 out of 1000 records are at risk. A more appropriate risk metric would then be:

$${\lambda}_{2}=\frac{1}{n}\sum_{i=1}^{u}I\left({f}_{i}=1,{F}_{i}=1\right) \quad (2)$$
In the 1000 record example above, this would give a risk of ${\lambda}_{2}=0.002$ and for the example of Figure 1 it would be ${\lambda}_{2}=0.14$ for the original data set, which corresponds to what one would expect intuitively.
The risk metric ${\lambda}_{2}$ approximates the proportion of records in the voter registration list that are unique under an assumption of sampling with equal probabilities [54]. The ${\lambda}_{3}$ measure is the proportion of records in the voter registration list that are unique:

$${\lambda}_{3}=\frac{1}{N}\sum_{i=1}^{K}I\left({F}_{i}=1\right) \quad (3)$$
The value for ${\lambda}_{3}$ in our example of Figure 1 would be 0.15 since six records in the voter registration list are unique.
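When both the sample and the population are available, the three measures can be computed directly. The following Python sketch (illustrative only; the study's estimators were implemented in SAS) takes lists of quasi-identifier tuples as input:

```python
from collections import Counter

def risk_metrics(sample, population):
    """Compute lambda_1, lambda_2, lambda_3 from lists of quasi-identifier tuples."""
    f = Counter(sample)      # sample equivalence class sizes f_i
    F = Counter(population)  # population equivalence class sizes F_i

    sample_uniques = [qi for qi, size in f.items() if size == 1]
    both_unique = sum(1 for qi in sample_uniques if F[qi] == 1)

    # lambda_1: P(population unique | sample unique)
    lam1 = both_unique / len(sample_uniques) if sample_uniques else 0.0
    # lambda_2: proportion of the sample that is unique in both
    lam2 = both_unique / len(sample)
    # lambda_3: proportion of the population that is unique
    lam3 = sum(1 for size in F.values() if size == 1) / len(population)
    return lam1, lam2, lam3
```

For the Figure 1 example, this would return the values 0.22, 0.14, and 0.15 reported in the text.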
To illustrate the relationship between the measures in equations (2) and (3), we empirically computed the expected value $E\left({\lambda}_{2}\right)$ on the state inpatient database for the state of New York for 2007. This data set, which is available from the Agency for Healthcare Research and Quality, consists of discharge abstract data for approximately 1.5 million patients (after removing patients with invalid ZIP codes). We used the following quasiidentifiers: age in years, gender, the first three digits of the ZIP code, the time in days since the last visit, and the length of stay at the hospital in days. In the whole population 0.1815 of the records were unique (i.e., ${\lambda}_{3}=0.1815$). We drew 1000 random samples at varying sampling fractions from that population data set and computed the mean ${\lambda}_{2}$. As can be seen in Figure 2, the $E\left({\lambda}_{2}\right)$ value is very close to the ${\lambda}_{3}$ value across sampling fractions.
Therefore, if we can compute or estimate ${\lambda}_{3}$ directly, then we would get a measure of risk for any sample data set under an assumption of sampling with equal probabilities. This metric would have an intuitive general meaning.
There is evidence in the responses to commentary on HIPAA in the Federal Register by DHHS that they were thinking of ${\lambda}_{3}$ as the reidentification risk metric in the discussion of identifiability, for example, in the references to "At the point of approximately 100,000 population, 7.3% of records are unique" and "4% unique records using the 6 variables", which were based on analyses of census data and in all cases referred to the percentage of all records in the file [37, 38]. Furthermore, the actual reidentification risk of data sets compliant with the HIPAA Safe Harbor standard has been computed empirically and is always presented in terms of a ${\lambda}_{3}$ metric [55–57].
To know in advance the proportion of records in the voter registration list that are unique, the data custodian has two options: (a) obtain a copy of the voter registration list for all areas of the country for which there are patients in the data set and compute the number of records that are unique in the voter registration list on the quasiidentifiers, or (b) estimate uniqueness in the voter registration list using the disclosed data set only. The former can be resource intensive and would require regularly acquiring an updated voter list. The latter is less costly and can be fully automated.
Our objective in this paper then is to evaluate existing uniqueness estimators of the form ${\lambda}_{3}$ and identify one or a combination of estimators that are most accurate. The data custodian can use the estimator with only the disclosed data set to assess reidentification risk. If that number is too high then the custodian can apply various deidentification methods, such as generalization and suppression, to reduce it to an acceptable level. The steps of such a process are described later in the paper.
Estimating uniqueness
Thus far there have been no comprehensive evaluations of existing uniqueness estimators of the type ${\lambda}_{3}$. In this study we will empirically evaluate a set of population uniqueness estimators to determine which ones provide the most accurate estimates.
Various models have been used in the literature to estimate population uniqueness from a sample. The majority are based on the superpopulation model approach, which assumes that the population is generated from a superpopulation by an appropriate distribution. The problem of population uniqueness estimation then becomes a problem of parameter estimation. The superpopulation methods proposed in the literature are: the Poisson-gamma model [31], the Poisson-lognormal model [58], the logarithmic series model [59], the Dirichlet-multinomial model [60], the Ewens model [61], Pitman’s model [62, 63], and the slide negative binomial model [64]. The muargus model [65] has not been used in the context of population uniqueness estimation, but can be extended for that purpose. Furthermore, Zayatz introduced a method which is not dependent on a model for the population equivalence classes [66].
Hoshino [63] compared six superpopulation models: the Poisson-gamma model, the Poisson-lognormal model, the logarithmic series model, the Dirichlet-multinomial model, the Ewens model, and Pitman’s model. He concluded that the Pitman model “provides the most plausible inference” among the models compared. Based on his comparison, we discard the five models that were inferior in estimation accuracy and include only the Pitman model in our evaluation.
Chen and McNulty [64] compared three models: the slide negative binomial (SNB) model, the equivalence class model, and the Poisson-gamma model. They concluded that the SNB model significantly improves population uniqueness estimation. However, the authors assumed that the number of equivalence classes in the population is known, and they employed that fact in assessing the models. In practice, however, the number of population equivalence classes is not known (and must also be estimated), and for that reason these results are not realistic. It is therefore necessary to rerun that comparison, so we include the SNB model and the Zayatz equivalence class model in our evaluation.
In this paper we therefore evaluate the following four models: the Zayatz model [66], the SNB model [64], the Pitman model [62, 63], and muargus [65]. Based on existing evidence, these models are the best candidates for estimating uniqueness, and they have not been compared directly on clinical data sets before.
Empirical evaluation
Simulation
We performed a Monte Carlo simulation to evaluate the accuracy of the four estimators described above. In this simulation we mimic what the adversary would do, and therefore we mimic the reidentification success rate of the adversary. We assume that a disclosed data set is a subset of a population data set. An adversary will match the records in the disclosed data set with the population (as explained in our motivating example). The proportion of records that can be matched with certainty is on average equal to ${\lambda}_{3}$. We could compute ${\lambda}_{3}$ exactly from the population data set, which gave us the actual reidentification success rate of the adversary.
All estimators were implemented by the authors in SAS, and all simulations described here were also performed in SAS. The estimators and the parameter choices, where relevant, are described further in the Additional file 1: Appendix A.
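One simulation draw can be sketched as follows (Python for illustration only; as noted, the study's implementation was in SAS):

```python
import random

def draw_sample(population, fraction, seed=0):
    """Draw a simple random sample of round(fraction * N) records
    without replacement from the population data set."""
    rng = random.Random(seed)
    n = round(fraction * len(population))
    return rng.sample(population, n)
```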
Data sets
The six data sets we used are shown in Table 1. The first three are public and the last three are confidential clinical data sets. They all have the typical kinds of demographic quasiidentifiers that are seen in clinical data sets. These data sets were chosen because of their heterogeneity: since they represent different types of contexts, they increase the generalizability of the results.
Three different versions of each data set were created, with low uniqueness (<10% of the observations), medium uniqueness (between 10% and 50% of the observations), and high uniqueness (greater than 50% of the observations). The three versions of the data sets were created by generalizing the quasiidentifiers in the original data set. For example, a date of birth may be generalized to year of birth, or a six character postal code may be generalized to a three character postal code. The FARS and Adult data sets only had medium uniqueness at the outset, therefore there was no possibility of creating a high uniqueness version of these data sets.
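The generalizations described above can be sketched as simple truncations (assuming, hypothetically, ISO-formatted dates and Canadian six-character postal codes; the actual formats in the study data sets may differ):

```python
def generalize(date_of_birth, postal_code):
    """Generalize quasi-identifiers: a date of birth ('YYYY-MM-DD') to the
    year of birth, and a six-character postal code to its first three
    characters."""
    year = date_of_birth[:4]
    fsa = postal_code.replace(" ", "")[:3]
    return year, fsa

print(generalize("1962-07-15", "N3E 6Y4"))  # ('1962', 'N3E')
```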
Measurement
We treat each data set as a population and draw 1000 simple random samples. For each sampling fraction we compute the median relative bias across the 1000 samples: $\mathrm{median}\left(\left({\widehat{\lambda}}_{3}-{\lambda}_{3}\right)/{\lambda}_{3}\right)$. We also compute the interquartile range, which indicates the dispersion of the relative bias.
The relative bias is suited to this problem because it reflects the importance of the error in decision making better than, say, just the bias $\left({\widehat{\lambda}}_{3}-{\lambda}_{3}\right)$. Because the most common acceptable values for uniqueness are often low (for example, between 0.05 and 0.2 [68–70]), the bias can give misleading results. For example, a bias of 0.1 when ${\lambda}_{3}=0.9$ is not going to influence the decision that the reidentification risk is high. However, a bias of 0.1 when ${\lambda}_{3}=0.11$ could make a difference in deciding whether the risk is acceptable or not. In both cases the bias is the same, but the impact on the decision is quite different. The relative bias, on the other hand, would be quite low in the former case (0.11), and high in the latter (0.91), which more accurately reflects the severity of the error.
An alternative evaluation metric that could have been used was a mean square error (MSE). However, extreme values for some of the estimators under some simulation conditions distorted the MSE significantly. Hence, we chose a robust median to get a more realistic assessment of performance.
Three parameters were varied during this simulation: (a) the data set used to represent the population, (b) the extent of uniqueness in the population, and (c) the sampling fraction.

The sampling fraction was varied for each data set as follows: 0.01, 0.05, 0.1, 0.3, 0.5, 0.7, and 0.9. In total, there were 3 (uniqueness levels) x 7 (sampling fractions) x 4 (estimators) = 84 study points per data set, each simulated 1000 times.

Model combination

Informed by methods for creating ensembles [71, 72], we combined our estimators to try to obtain a more accurate estimate that uses as many of the base estimation methods as possible. A simple ensemble would take the mean of the estimates of all of the estimators. However, we expected that some estimators would work better under different conditions (e.g., for different values of the sampling fraction or the population uniqueness), and we wanted our ensemble strategy to take that into account.
We therefore constructed a regression tree across all study points for each data set [73]. The outcome variable used when constructing the tree was the relative bias of each observation (84,000 observations in total). A regression tree provides a succinct descriptive summary of the factors that affect estimation accuracy and can be helpful in discovering subtle patterns. The input variables for constructing the tree were the sampling fraction, the estimator, and the uniqueness level. The tree construction process attempts to reduce the node deviance, defined as $\sum {\left(y-\overline{y}\right)}^{2}$, where y is the relative bias and $\overline{y}$ is the mean relative bias within a node.
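The node deviance that the tree-growing process minimizes can be written as a one-line sketch:

```python
def node_deviance(y):
    """Sum of squared deviations of the relative-bias values in one tree node."""
    ybar = sum(y) / len(y)
    return sum((v - ybar) ** 2 for v in y)

print(node_deviance([1.0, 2.0, 3.0]))  # 2.0
```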
Because ensembles are usually created for a single data set, we had six trees. We then used a subjective process to combine the regression trees from each data set to create an overall decision rule. In developing this decision rule we assumed that underestimation is worse than overestimation. Underestimation may result in a data custodian inadvertently disclosing data with a high amount of uniqueness, and therefore exposing patient data to a higher reidentification risk than intended. Overestimation leads to a conservative approach to disclosure where data that has been disclosed has a lower reidentification risk than intended.
Ethics
This study was approved by the research ethics board of the Children’s Hospital of Eastern Ontario. The data custodians for the three nonpublic clinical data sets also approved this protocol.
Results
We present the detailed results for the emergency department data set in the main body of the paper, with the results for the other data sets in the Additional file 2: Appendix B. The results were quite consistent across the data sets, and therefore there is no loss in generality in focusing on the emergency department data here.
Figure 3 shows the median relative bias and interquartile ranges of the relative bias for the emergency department data when the population uniqueness is below 10%. Each panel in the figure is for a particular sampling fraction (denoted by π), and shows the results for the four estimators. We see that at low sampling fractions the models tend to have higher relative bias, which approaches zero as the sampling fraction increases. Also, the amount of variation in the relative bias is not high.
In Figure 4 are the results (the median relative bias and interquartile ranges of the relative bias) when the population uniqueness is at a medium level (between 10% and 50%). The general pattern seen for low uniqueness holds, except there are a number of study points for which the SNB model fails. Also, the median relative bias is lower for all sampling fractions compared to the low uniqueness version of the data set.
Figure 5 shows the results when there is high uniqueness in the population data set (greater than 50%). All models perform relatively well in terms of relative bias and variation of relative bias. This is the case even for small sampling fractions.
The regression tree for the emergency department data is given in Figure 6. This shows that for higher sampling fractions (denoted by π) all models tend to perform well, with a mean relative bias of 0.22. For lower sampling fractions the Pitman model and the muargus model have the lowest mean relative bias at 0.013. When the sampling fraction is low (below 30%) the SNB and Zayatz models tend to have high relative bias, irrespective of the uniqueness levels in the data.
In general we found that the Pitman model emerged as the most accurate for low sampling fractions. For higher sampling fractions the most accurate estimate varied between SNB and Zayatz. However, SNB tended to fail to converge in a number of instances, making it an unreliable model in practice and requiring us to have a 'replacement' in our decision rule.
The combined rule from the six data set ensembles is shown below. The performance of that rule compared to the original models is given in the results graphs in Figures 3, 4, and 5 and is labeled as the E1 model. As can be seen, the performance of E1 is superior to any of the original models across the full set of conditions.
If π ≤ 0.1 then
    E1 = Pitman
Else
    If SNB converges then
        If Est(SNB) > Est(Zayatz) then
            E1 = Zayatz
        Else
            E1 = SNB
        Endif
    Else
        E1 = Zayatz
    Endif
Endif
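The E1 rule translates directly into code. The sketch below assumes the three estimators have already been run on the sample, with a failed SNB convergence represented as `None`:

```python
def e1_estimate(pi, pitman, zayatz, snb=None):
    """Combined decision rule E1.

    pi     -- sampling fraction
    pitman -- Pitman model estimate of population uniqueness
    zayatz -- Zayatz estimate
    snb    -- SNB estimate, or None if the SNB model failed to converge
    """
    if pi <= 0.1:
        return pitman
    if snb is None:
        return zayatz  # SNB did not converge, fall back to Zayatz
    # Select Zayatz when the SNB estimate exceeds it, otherwise SNB.
    return zayatz if snb > zayatz else snb
```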
The E1 rule does not use the muargus estimator. The muargus estimator consistently performed worse than the other estimators and was associated with terminal nodes with high relative bias in all of the regression trees. Therefore its inclusion would have resulted in a noticeable deterioration in prediction performance.
Discussion
Summary and implications
Population uniqueness is a commonly used measure of reidentification risk [31–36]. In cases where the disclosed data set is a sample, the population uniqueness must be estimated. In this paper we have evaluated four different uniqueness estimators using a Monte Carlo simulation on clinically relevant data sets.
Informed by methods for creating ensembles, we constructed regression trees that combine the uniqueness estimators to minimize their relative bias for each data set. These trees were then converted to a single decision rule that works across all data sets and performs better than any of the original estimators.
Our decision rule selects among the best three estimators. It has good and consistent accuracy across multiple conditions, often with a small overestimation. Application of the decision rule requires the implementation of three estimators. However, it does not require knowledge of the general uniqueness level in the population a priori (i.e., if it is low, medium, or high), which may be difficult to know in practice, but does require knowledge of whether the sampling fraction is greater than 10% or not.
Future studies that need to estimate uniqueness should consider using the three estimators combined with this decision rule for maximum accuracy.
Applications in practice
The process within which uniqueness estimates would be applied is illustrated by the control flow graph in Figure 7.
The first step is for the custodian to understand the plausible adversaries that may attempt to reidentify the disclosed data. A useful way to categorize adversaries is in terms of how constrained they are. Five types of constraints to be considered are:

- Financial constraints: how much money will the adversary spend on a reidentification attack? Costs will be incurred to acquire databases. For example, the construction of a single profession-specific database using semipublic registries that can be used for reidentification attacks in Canada costs between $150,000 and $188,000 [49]. In the US, the cost of the voter registration list is more than $28,000 for Alabama, $5,000 for Louisiana, more than $8,000 for New Hampshire, $12,000 for Wisconsin, and $17,000 for West Virginia [39].

- Time constraints: how much time will the adversary spend to acquire registries useful for a reidentification attack? For example, say that one of the registries the adversary would use is the discharge abstract database from hospitals. Forty-eight states collect data on inpatients [74], and 26 states make their state inpatient databases (SIDs) available through the Agency for Healthcare Research and Quality (AHRQ) [75]. The SIDs for the remaining states would also be available directly from each individual state, but the process may be more complicated and time consuming. Would an adversary satisfy themselves only with the AHRQ states, or would they put in the time to get the data from the other states as well?

- Willingness to misrepresent themselves: to what extent will the adversary be willing to misrepresent themselves to get access to public or semipublic registries? For example, some states only make their voter registration lists available to political parties or candidates (e.g., California) [39]. Would an adversary be willing to misrepresent themselves to get these lists? Also, some registries are available at a lower cost for academic use than for commercial use. Would a nonacademic adversary misrepresent themselves as an academic to reduce their registry acquisition costs?

- Willingness to violate agreements: to what extent would the adversary be willing to violate data sharing agreements or other contracts that they need to sign to get access to registries? For example, acquiring the SIDs through the AHRQ requires that the recipient sign a data sharing agreement which prohibits reidentification attempts. Would the adversary still attempt a reidentification even after signing such an agreement?

- Willingness to commit illegal acts: to what extent would an adversary break the law to obtain access to registries that can be used for reidentification? For example, privacy legislation and the Elections Act in Canada restrict the use of voter lists to running and supporting election activities [49]. There is at least one known case where a charity allegedly supporting a terrorist group was able to obtain Canadian voter lists through deception for fund-raising purposes [76–78].
It should be noted that most known reidentification attacks were performed by researchers or the media [79]. This type of adversary is likely highly constrained, with limited time and funds, an unwillingness to misrepresent themselves, and an unwillingness to violate agreements and contracts. Alternatively, the custodian may wish to make a worst-case assumption and consider a minimally constrained adversary with unlimited resources and funds who is willing to misrepresent themselves and to violate agreements and laws. This assumption would be suitable if the data will be made publicly available, in which case the data custodian has no control over who obtains the data. The choice of constraints will have an impact on which registries the adversary could plausibly access.
The data custodian then needs to select the quasiidentifiers in the data set. The quasiidentifiers would be the variables that a potential adversary would be able to get using public or semipublic registries. Note that an adversary may combine multiple sources together to construct a database useful for reidentification [50]. It is not necessary for the custodian to acquire all of these registries, but only to know what the variables are in these registries. Examples of public and semipublic registries that can be used for reidentification are:

Voter registration lists, court records, obituaries published in newspapers or online, telephone directories, private property security registries, land registries, and registries of donations to political parties (which often include at least full address).

Professional and sports associations often post information about their members and teams (e.g., lists of lawyers, doctors, engineers, and teachers with their basic demographics, and information about sports teams with their demographics, height, weight and other physical and performance characteristics).

Certain employers often post information about their staff online, for example, at educational and research establishments and at law firms.
For a registry to be useful as a potential source of quasiidentifiers, it must be plausible for the adversary to get access to it. By considering the constraints on the adversary, it is possible to decide how plausible it is for the adversary to acquire each type of registry, and for which states. For example, if the data to be disclosed is for patients in California and the adversary is assumed to be highly constrained, then the voter registration list would not be available for a reidentification attack (it is only available to political parties, candidates, and political committees, or for scholarly or journalistic purposes).
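As a toy sketch of how an adversary might combine multiple registries into a single linkage database, as discussed above, consider the following; all names, field names, and records are invented for illustration:

```python
# Hypothetical sketch: an adversary links two semipublic registries
# (a voter list and a professional directory) on a shared field to
# accumulate quasi-identifiers. All data below are invented.

voter_list = [
    {"name": "A. Smith", "dob": "1970-03-12", "zip": "90210"},
    {"name": "B. Jones", "dob": "1982-07-01", "zip": "90001"},
]
professional_directory = [
    {"name": "A. Smith", "profession": "physician"},
]

# Exact-match join on name: each matched record now carries the union
# of quasi-identifiers from both sources (dob, zip, profession).
by_name = {r["name"]: r for r in professional_directory}
linked = [{**v, **by_name[v["name"]]} for v in voter_list if v["name"] in by_name]
print(linked)
```

The point is that the custodian does not need to acquire these registries, only to know which variables they contain, since the union of those variables defines the quasi-identifiers.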
Because the assumptions made about the adversary would often not apply to the data custodian, it is important for the data custodian to be able to estimate reidentification risk. For example, if it is assumed that the adversary is willing to misrepresent themselves to get a semipublic registry, the data custodian cannot mimic that and misrepresent themselves to acquire that registry for the purpose of reidentification risk assessment. The custodian needs to estimate the risk without acquiring that registry, which is the problem our uniqueness estimators are solving.
The custodian must then select the uniqueness threshold that will be used to decide whether the reidentification risk is acceptable. There are a number of precedents that can be useful for deciding on a threshold. One can, for instance, rely on how HHS classifies health data breaches, whereby breaches affecting fewer than 500 records are not publicized [80]. This effectively sets two tiers of breaches, and one can argue that a reidentification affecting fewer than 500 records would be considered lower risk. Also, previous disclosures of cancer registry data have deemed thresholds of 5% and 20% of the population at risk acceptable for public release and research use, respectively [68–70].
Now the data custodian can use the estimators and decision rule described in this paper to measure the actual uniqueness from the data using the selected quasiidentifiers. If the uniqueness estimate is larger than the threshold then the data custodian can deidentify the data by applying, for example, generalization and suppression [81]. If the uniqueness is below the threshold, then a decision needs to be made about whether the deidentified data is suitable for the purpose of the analysis that will be performed on it. This is a subjective decision that requires consultation with the data recipients. If the data is deemed not suitable for the purpose because there was too much generalization and suppression, then the threshold can be revised upwards.
Revising the threshold upwards implies that the data custodian is taking more risk in disclosing that data. To compensate for that higher risk, the custodian may wish to impose additional constraints or conditions. For example, the custodian may require that regular security audits be performed of the data recipient’s site. A systematic way for making these tradeoffs and the checklists that can be used for that purpose have been detailed elsewhere [35, 82–84].
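The comparison step described above can be sketched as follows; the function name and the threshold values are illustrative assumptions, not part of the paper's method:

```python
def disclosure_decision(estimated_uniqueness: float, threshold: float) -> str:
    """Sketch of the disclosure workflow described above: compare the
    estimated population uniqueness against the chosen threshold."""
    if estimated_uniqueness > threshold:
        # Risk too high: apply generalization/suppression and re-estimate
        return "deidentify further (e.g., generalization and suppression)"
    # Risk acceptable: check utility with the data recipients; if utility
    # is too low, the threshold may be revised upwards with added controls
    return "assess whether the data remain useful for the intended analysis"

# Example: a 5% threshold (as in the public cancer-registry releases
# cited above) with an estimated population uniqueness of 8%.
print(disclosure_decision(0.08, 0.05))
```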
Related work
An alternative mechanism for protecting information that has been proposed in the literature is differential privacy [85, 86]. Generally speaking, differential privacy requires that the answer to any query be “probabilistically indistinguishable” with or without a particular row in the database. Thus differential privacy hides the presence of an individual in the database by making the two output distributions (with or without the row) “computationally indistinguishable” [87]. This is typically achieved by adding Laplace noise to every query output. The noise must be large enough to hide the contribution of any single row in the database. The literature on differential privacy, although extensive, has been mostly theoretical [86, 88]. Moving from theory to practice will require specific limitations and considerations to be addressed [88], and it is proving to be a challenging task [89, 90]. Therefore, for the context we consider in this paper, the disclosure of individual-level data, differential privacy does not yet provide a ready solution, whereas managing uniqueness has been a generally accepted approach for disclosure control over the last two decades.
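The Laplace mechanism referred to above can be sketched in a few lines. The query, sensitivity, and epsilon values below are illustrative only:

```python
import random

def laplace_query(true_answer: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private answer by adding Laplace noise with
    scale = sensitivity / epsilon (the standard Laplace mechanism). The
    difference of two i.i.d. exponential variates is Laplace-distributed."""
    scale = sensitivity / epsilon
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_answer + noise

# Example: a counting query has sensitivity 1, since adding or removing
# one row changes the count by at most 1. Epsilon = 0.5 is illustrative.
noisy_count = laplace_query(true_answer=42.0, sensitivity=1.0, epsilon=0.5)
```

Note that this protects query answers, not the release of individual-level records, which is why it does not directly address the disclosure context considered in this paper.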
There are other criteria for deciding whether the risk of reidentification is too high. The most common is the k-anonymity criterion [91–94]. Uniqueness corresponds to k-anonymity with $k=1$. If a data set has high uniqueness then it will fail the k-anonymity criterion for any value of $k>1$. If a data set has low uniqueness, it may still fail k-anonymity for a higher value of k. Therefore, low uniqueness is a necessary but insufficient condition for achieving k-anonymity with $k>1$.
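The relationship between uniqueness and k-anonymity can be made concrete on a toy data set; the quasi-identifier values below are invented:

```python
from collections import Counter

def uniqueness_and_k(records):
    """Return the proportion of records that are unique on their
    quasi-identifiers, and the k of k-anonymity (the size of the
    smallest equivalence class). Illustrative toy computation."""
    counts = Counter(records)
    n_unique = sum(c for c in counts.values() if c == 1)
    return n_unique / len(records), min(counts.values())

# Toy data set: quasi-identifiers are (age group, gender, region).
data = [("30-39", "F", "east"), ("30-39", "F", "east"),
        ("40-49", "M", "west"), ("40-49", "M", "west"),
        ("50-59", "F", "east")]
uniq, k = uniqueness_and_k(data)
# One record in five is unique, so uniqueness is low (0.2), yet the
# smallest equivalence class has size 1: the data set fails
# k-anonymity for any k > 1 despite its low uniqueness.
```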
Limitations
One assumption in our current threat model, as in almost all threat models used in the disclosure control literature, is that an adversary will use exact matching to reidentify individuals. In reality, data sets have errors, duplicates, and other quality problems. Therefore, contemporary reidentification risk metrics generally err on the conservative side.
We constructed our decision rule from six data sets. These data sets were heterogeneous, covering very different settings, and all were clinically relevant in that they contained quasiidentifiers often seen in clinical data sets that could be used for reidentification. While it would be better to repeat the analysis on more data sets, we found considerable consistency in the trees generated from each data set. Furthermore, the final decision rule performed well across all six heterogeneous data sets. Future work should validate this rule further on other independent data sets.
Conclusions
Accurately measuring reidentification risk is necessary when using and disclosing health data for secondary purposes without patient consent. This allows the data custodian to ensure that patient privacy is protected in a defensible manner. Population uniqueness is a commonly used measure of reidentification risk. However, there are multiple methods for estimating population uniqueness that have been proposed in the literature, and their relative accuracy has not been evaluated on clinical data sets. In this study we performed a simulation to evaluate these estimation methods and based on that developed an accurate decision rule that can be used by health privacy researchers and disclosure control professionals to estimate uniqueness in clinical data sets. The decision rule provides a reliable way to measure reidentification risk.
References
 1.
Beyond the HIPAA Privacy Rule: Enhancing privacy, improving health through research. Edited by: Nass S, Levit L, Gostin L. 2009, Washington, DC: National Academies Press
 2.
Damschroder L, Pritts J, Neblo M, Kalarickal R, Creswell J, Hayward R: Patients, privacy and trust: Patients' willingness to allow researchers to access their medical records. Soc Sci Med. 2007, 64: 223-235. 10.1016/j.socscimed.2006.08.045.
 3.
Mayer TS: Privacy and Confidentiality Research and the US Census Bureau: Recommendations based on a review of the literature. 2002, Washington, DC: US Bureau of the Census
 4.
Singer E, van Hoewyk J, Neugebauer RJ: Attitudes and Behaviour: The impact of privacy and confidentiality concerns on participation in the 2000 census. Public Opin Q. 2003, 67: 368-384. 10.1086/377465.
 5.
National Research Council: Privacy and Confidentiality as Factors in Survey Response. 1979, Washington: National Academy of Sciences
 6.
Martin E: Privacy Concerns and the Census Long Form: Some evidence from Census 2000. Annual Meeting of the American Statistical Association. 2001, Washington, DC
 7.
Robeznieks A: Privacy fear factor arises. Mod Healthc. 2005, 35 (46): 6
 8.
Becker C, Taylor M: Technical difficulties: Recent health IT security breaches are unlikely to improve the public's perception about the safety of personal data. Mod Healthc. 2006, 38 (8): 67.
 9.
Office for Civil Rights: Annual report to congress on breaches of unsecured protected health information for calendar years 2009 and 2010. 2011, US Department of Health and Human Services
 10.
Fienberg S, Martin M, Straf M: Sharing Research Data. 1985, Committee on National Statistics, National Research Council
 11.
Hutchon D: Publishing raw data and real time statistical analysis on ejournals. Br Med J. 2001, 322 (3): 530
 12.
Are journals doing enough to prevent fraudulent publication?. Can Med Assoc J. 2006, 174 (4): 431
 13.
Abraham K: Microdata access and labor market research: The US experience. Allgemeines Stat Archiv. 2005, 89: 121-139.
 14.
Vickers A: Whose data set is it anyway? Sharing raw data from randomized trials. Trials. 2006, 7: 15. 10.1186/1745-6215-7-15.
 15.
Altman D, Cates C: Authors should make their data available. BMJ. 2001, 323: 1069
 16.
Delamothe T: Whose data are they anyway?. BMJ. 1996, 312: 1241-1242. 10.1136/bmj.312.7041.1241.
 17.
Smith GD: Increasing the accessibility of data. BMJ. 1994, 308: 1519-1520. 10.1136/bmj.308.6943.1519.
 18.
Commission of the European Communities: On scientific information in the digital age: Access, dissemination and preservation. 2007
 19.
Lowrance W: Access to collections of data and materials for health research: A report to the Medical Research Council and the Wellcome Trust. 2006, Medical Research Council and the Wellcome Trust
 20.
Yolles B, Connors J, Grufferman S: Obtaining access to data from governmentsponsored medical research. NEJM. 1986, 315 (26): 1669-1672. 10.1056/NEJM198612253152608.
 21.
Hogue C: Ethical issues in sharing epidemiologic data. J Clin Epidemiol. 1991, 44 (Suppl. I): 103S-107S.
 22.
Hedrick T: Justifications for the sharing of social science data. Law Hum Behav. 1988, 12 (2): 163-171.
 23.
Mackie C, Bradburn N: Improving access to and confidentiality of research data: Report of a workshop. 2000, Washington: The National Academies Press
 24.
Pullman D: Sorry, you can't have that information: Stakeholder awareness, perceptions and concerns regarding the disclosure and use of personal health information. eHealth 2006. 2006
 25.
OIPC Stakeholder Survey, 2003: Highlights Report. 2003
 26.
Willison D, Schwartz L, Abelson J, Charles C, Swinton M, Northrup D, Thabane L: Alternatives to projectspecific consent for access to personal information for health research: What is the opinion of the Canadian public?. J Am Med Inform Assoc. 2007, 14: 706-712. 10.1197/jamia.M2457.
 27.
Nair K, Willison D, Holbrook A, Keshavjee K: Patients' consent preferences regarding the use of their health information for research purposes: A qualitative study. J Health Serv Res Policy. 2004, 9 (1): 22-27. 10.1258/135581904322716076.
 28.
Kass N, Natowicz M, Hull S: The use of medical records in research: what do patients want?. J Law Med Ethics. 2003, 31: 429-433. 10.1111/j.1748720X.2003.tb00105.x.
 29.
Whiddett R, Hunter I, Engelbrecht J, Handy J: Patients' attitudes towards sharing their health information. Int J Med Inf. 2006, 75: 530-541. 10.1016/j.ijmedinf.2005.08.009.
 30.
Pritts J: The importance and value of protecting the privacy of health information: Roles of HIPAA Privacy Rule and the Common Rule in health research. 2008, Available from: http://iom.edu/Object.File/Master/53/160/Pritts%20Privacy%20Final%20Draft%20web.pdf. Accessed on: July 15, 2009.
 31.
Bethlehem J, Keller W, Pannekoek J: Disclosure control of microdata. J Am Stat Assoc. 1990, 85 (409): 38-45. 10.1080/01621459.1990.10475304.
 32.
Sweeney L: Uniqueness of Simple Demographics in the US Population. 2000, Carnegie Mellon University, Laboratory for International Data Privacy
 33.
El Emam K, Brown A, Abdelmalik P: Evaluating Predictors of Geographic Area Population Size Cutoffs to Manage Reidentification Risk. J Am Med Inform Assoc. 2009, 16 (2): 256-266. 10.1197/jamia.M2902. [PMID: 19074299].
 34.
Golle P: Revisiting the uniqueness of simple demographics in the US population. 2006, Workshop on Privacy in the Electronic Society
 35.
El Emam K, Brown A, AbdelMalik P, Neisa A, Walker M, Bottomley J, Roffey T: A method for managing reidentification risk from small geographic areas in Canada. BMC Med Inform Decis Mak. 2010, 10: 18. 10.1186/1472-6947-10-18.
 36.
Koot M, Noordende G, de Laat C: A study on the reidentifiability of Dutch citizens. Workshop on Privacy Enhancing Technologies (PET 2010). 2010
 37.
Department of Health and Human Services: Standards for privacy of individually identifiable health information. 2000, Federal Register, Available from: http://aspe.hhs.gov/admnsimp/final/PvcFR06.txt. Archived at: http://www.webcitation.org/5tqU5GyQX.
 38.
Department of Health and Human Services: Standards for privacy of individually identifiable health information. 2000, Federal Register, Available from: http://aspe.hhs.gov/admnsimp/final/PvcFR05.txt. Archived at: http://www.webcitation.org/5tqULb7hT.
 39.
Benitez K, Malin B: Evaluating reidentification risks with respect to the HIPAA privacy rule. J Am Med Inform Assoc. 2010, 17 (2): 169-177. 10.1136/jamia.2009.000026.
 40.
Statistics Canada: Canadian Community Health Survey (CCHS) Cycle 3.1 (2005) Public Use Microdata File (PUMF) User Guide. 2006
 41.
Statistics Canada: Canadian Community Health Survey: Public Use Microdata File. 2009, Available from: http://www.statcan.gc.ca/bsolc/olccel/olccel?catno=82M0013X&lang=eng.
 42.
Statistics Canada: 2001 Census Public Use Microdata File: Individuals file user documentation. 2001
 43.
Dale A, Elliot M: Proposals for the 2001 samples of anonymized records: An assessment of disclosure risk. J R Stat Soc. 2001, 164 (3): 427-447. 10.1111/1467-985X.00212.
 44.
Marsh C, Skinner C, Arber S, Penhale B, Openshaw S, Hobcraft J, Lievesley D, Walford N: The case for samples of anonymized records from the 1991 census. J R Stat Soc A Stat Soc. 1991, 154 (2): 305-340. 10.2307/2983043.
 45.
Marsh C, Dale A, Skinner C: Safe data versus safe settings: Access to microdata from the British census. Int Stat Rev. 1994, 62 (1): 35-53. 10.2307/1403544.
 46.
El Emam K, Paton D, Dankar F, Koru G: Deidentifying a Public Use Microdata File from the Canadian National Discharge Abstract Database. BMC Med Inform Decis Mak. 2011, 11: 53. 10.1186/1472-6947-11-53.
 47.
El Emam K, Dankar F: Protecting privacy using k-anonymity. J Am Med Inform Assoc. 2008, 15: 627-637. 10.1197/jamia.M2716.
 48.
Dalenius T: Finding a needle in a haystack or identifying anonymous census records. J Official Stat. 1986, 2 (3): 329-336.
 49.
El Emam K, Jabbouri S, Sams S, Drouet Y, Power M: Evaluating common deidentification heuristics for personal health information. J Med Internet Res. 2006, 8 (4): e28. 10.2196/jmir.8.4.e28. [PMID: 17213047].
 50.
El Emam K, Jonker E, Sams S, Neri E, Neisa A, Gao T, Chowdhury S: PanCanadian DeIdentification Guidelines for Personal Health Information. 2007, Ottawa: Privacy Commissioner of Canada
 51.
Canadian Institutes of Health Research: CIHR best practices for protecting privacy in health research. 2005, Ottawa: Canadian Institutes of Health Research
 52.
ISO/TS 25237: Health Informatics: Pseudonymization. 2008, Geneva: International Organization for Standardization
 53.
Yakowitz J: Tragedy of the Data Commons. Harvard J Law Technol. 2011, 25 (1): 2-66.
 54.
Skinner C, Elliot M: A measure of disclosure risk for microdata. J R Stat Soc Ser B. 2002, 64 (Part 4): 855-867.
 55.
National Committee on Vital and Health Statistics: Report to the Secretary of the US Department of Health and Human Services on Enhanced Protections for Uses of Health Data: A Stewardship Framework for "Secondary Uses" of Electronically Collected and Transmitted Health Data. 2007
 56.
Sweeney L: Data sharing under HIPAA: 12 years later. Workshop on the HIPAA Privacy Rule's DeIdentification Standard. 2010, Washington: Department of Health and Human Services
 57.
Lafky D: The Safe Harbor method of deidentification: An empirical test. Fourth National HIPAA Summit West. 2010
 58.
Skinner C, Holmes D: Modeling population uniqueness. Proceedings of the International Seminar on Statistical Confidentiality. 1993
 59.
Johnson N, Kotz S, Kemp A: Univariate discrete distributions. 2005, Hoboken: Wiley
 60.
Takemura A: Some superpopulation models for estimating the number of population uniques. Proceedings of the Conference on Statistical Data Protection. 1999
 61.
Ewens W: Population genetics theory - the past and the future. Mathematical and Statistical Developments of Evolutionary Theory. Edited by: Lessard S. 1990, New York: Springer, 177-227.
 62.
Pitman J: Random discrete distributions invariant under size-biased permutation. Adv Appl Probability. 1996, 28: 525-539. 10.2307/1428070.
 63.
Hoshino N: Applying Pitman's sampling formula to microdata disclosure risk assessment. J Official Stat. 2001, 17 (4): 499-520.
 64.
Chen G, KellerMcNulty S: Estimation of identification disclosure risk in microdata. J Official Stat. 1998, 14 (1): 79-95.
 65.
Benedetti R, Franconi L: Statistical and technological solutions for controlled data dissemination. Proceedings of New Techniques and Technologies for Statistics (vol. 1). 1998
 66.
Zayatz L: Estimation of the percent of unique population elements on a microdata file using the sample. 1991, Washington: US Bureau of the Census
 67.
El Emam K, Dankar F, Vaillancourt R, Roffey T, Lysyk M: Evaluating patient reidentification risk from hospital prescription records. Can J Hospital Pharm. 2009, 62 (4): 307-319.
 68.
Howe H, Lake A, Shen T: Method to assess identifiability in electronic data files. Am J Epidemiol. 2007, 165 (5): 597-601.
 69.
Howe H, Lake A, Lehnherr M, Roney D: Unique record identification on public use files as tested on the 1994–1998 CINA analytic file. 2002, North American Association of Central Cancer Registries
 70.
El Emam K: Heuristics for deidentifying health data. IEEE Security and Privacy. 2008, 6 (4): 58-61.
 71.
Seni G, Elder J: Ensemble methods in data mining. 2010, San Rafael: Morgan & Claypool
 72.
Hastie T, Tibshirani R, Friedman J: The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2009, New York: Springer
 73.
Breiman L, Friedman J, Olshen R, Stone C: Classification and Regression Trees. 1984, Belmont: Wadsworth and Brooks/Cole
 74.
ConsumerPurchaser Disclosure Project: The state experience in health quality data collection. 2004, Washington DC: National Partnership for Women & Families, Available from http://healthcaredisclosure.org/links/files/DataCollection.pdf.
 75.
El Emam K, Mercer J, Moreau K, GravaGubins I, Buckeridge D, Jonker E: Physician privacy concerns when disclosing patient data to public health authorities for disease outbreak surveillance. BMC Public Health. 2011, 11: 454. 10.1186/1471-2458-11-454.
 76.
Bell S: Alleged LTTE front had voter lists. National Post. 2006
 77.
Bell S: Privacy chief probes how group got voter lists. National Post. 2006
 78.
Freeze C, Clark C: Voters lists 'most disturbing' items seized in Tamil raids, documents say. Globe and Mail. 2008, Available from: http://www.theglobeandmail.com/servlet/story/RTGAM.20080507.wxtamilssb07/BNStory/National/home. Archived at: http://www.webcitation.org/5Xe4UWJKP.
 79.
Dankar F, El Emam K: The Application of Differential Privacy to Health Data. Proceedings of the 5th International Workshop on Privacy and Anonymity in the Information Society (PAIS). 2012
 80.
Department of Health and Human Services: Office of Civil Rights. Breaches Affecting 500 or More Individuals. 2010, Available from: http://www.hhs.gov/ocr/privacy/hipaa/administrative/breachnotificationrule/postedbreaches.html.
 81.
El Emam K, Dankar F, Issa R, Jonker E, Amyot D, Cogo E, Corriveau JP, Walker M, Chowdhury S, Vaillancourt R, Roffey T, Bottomley J: A Globally Optimal k-Anonymity Method for the Deidentification of Health Data. J Am Med Inf Assoc. 2009, 16 (5): 670-682. 10.1197/jamia.M3144.
 82.
El Emam K: Riskbased deidentification of health data. IEEE Security and Privacy. 2010, 8 (3): 64-67.
 83.
El Emam K: Method and Experiences of RiskBased Deidentification of Health Information. Workshop on the HIPAA Privacy Rule's DeIdentification Standard. 2010, Department of Health and Human Services
 84.
Cavoukian A, El Emam K: A PositiveSum Paradigm in Action in the Health Sector. 2010, Office of the Information and Privacy Commissioner of Ontario
 85.
Dwork C, McSherry F, Nissim K, Smith A: Calibrating Noise to Sensitivity in Private Data Analysis. 3rd theory of cryptography conference. 2006
 86.
Dwork C: Differential privacy: A survey of results. Proceedings of the 5th International Conference on Theory and Applications of Models of Computation. 2008
 87.
Dwork C: Differential Privacy. Automata, Languages and Programming. 2006
 88.
Dankar F, El Emam K: The Application of Differential Privacy to Health Data. The 5th International Workshop on Privacy and Anonymity in the Information Society (PAIS). 2012
 89.
Lee J, Clifton C: How Much Is Enough? Choosing epsilon for Differential Privacy. 2011, Information Security
 90.
Sarathy R, Muralidhar K: Some Additional Insights on Applying Differential Privacy for Numeric Data. 2010, Privacy in Statistical Databases, 210-219.
 91.
Samarati P, Sweeney L: Protecting privacy when disclosing information: k-anonymity and its enforcement through generalisation and suppression. 1998, SRI International
 92.
Samarati P: Protecting respondents' identities in microdata release. IEEE Transactions on Knowledge and Data Engineering. 2001, 13 (6): 1010-1027. 10.1109/69.971193.
 93.
Sweeney L: k-anonymity: a model for protecting privacy. International Journal on Uncertainty, Fuzziness and Knowledge-Based Systems. 2002, 10 (5): 557-570. 10.1142/S0218488502001648.
 94.
Ciriani V, De Capitani di Vimercati S, Foresti S, Samarati P: k-Anonymity. Secure Data Management in Decentralized Systems. 2007, New York: Springer
 95.
Haas P, Stokes L: Estimating the number of classes in a finite population. J Am Stat Assoc. 1998, 93 (444): 1475-1487. 10.1080/01621459.1998.10473807.
Prepublication history
The prepublication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6947/12/66/prepub
Acknowledgements
The work reported here was funded by the Canada Research Chairs program. We wish to thank Luk Arbuckle for reviewing earlier versions of this paper.
Author information
Additional information
Competing interests
The author(s) declare that they have no competing interests.
Authors’ contributions
FD designed the study, interpreted the results, and contributed to writing the paper. KEE designed the study, contributed to the data analysis, interpreted the results, and contributed to writing the paper. AN performed the data analysis and contributed to writing the paper. TR contributed to interpreting the results and writing the paper. All authors read and approved the final manuscript.
Electronic supplementary material
Appendix B.
Additional file 2: Detailed Results. (PDF 211 KB)
Rights and permissions
Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Dankar, F.K., El Emam, K., Neisa, A. et al. Estimating the reidentification risk of clinical data sets. BMC Med Inform Decis Mak 12, 66 (2012). https://doi.org/10.1186/1472-6947-12-66
Keywords
 Relative Bias
 Threat Model
 Differential Privacy
 Uniqueness Estimator
 Data Custodian