Research article · Open access

Influence of data quality on computed Dutch hospital quality indicators: a case study in colorectal cancer surgery



Abstract

Background

Our study aims to assess the influence of data quality on computed Dutch hospital quality indicators, and whether colorectal cancer surgery indicators can be computed reliably based on routinely recorded data from an electronic medical record (EMR).


Methods

Cross-sectional study in a department of gastrointestinal oncology in a university hospital, in which a set of 10 indicators was computed (1) based on data abstracted manually for the national quality register Dutch Surgical Colorectal Audit (DSCA), serving as reference standard, and (2) based on routinely collected data from an EMR. All 75 patients for whom data had been submitted to the DSCA for the reporting year 2011 and all 79 patients who underwent a resection of a primary colorectal carcinoma in 2011 according to structured data in the EMR were included. Results were compared, and the causes of any differences were investigated by a data quality analysis. Main outcome measures were the computability of the quality indicators, absolute percentages of indicator results, and data quality in terms of availability in a structured format, completeness and correctness.


Results

All indicators were fully computable based on the DSCA dataset, but only three based on EMR data, two of which were percentages. For both percentages, the difference between the proportions computed from the two datasets was significant.

All required data items were available in a structured format in the DSCA dataset, with an average completeness of 86%. The average completeness of these items in the EMR was 50%, and their average correctness was 87%.


Conclusions

Our study showed that data quality can significantly influence indicator results, and that our EMR data was not suitable to reliably compute quality indicators. EMRs should be designed so that the data required for audits can be entered directly in a structured and coded format.



Background

Over the last decades, it has become possible and increasingly interesting to measure the quality of health care, in order to implement quality improvement activities and to strengthen both transparency and accountability [1]. In this context, both legally mandatory and voluntary quality indicators [2] for various kinds of diseases and interventions have been released by governments, patient and scientific associations as well as insurance companies. The computed results are used for performance comparisons between health care institutions. As such comparisons have potentially serious implications, including influencing the choices of patients and insurance companies, indicator results should be reliable.

Ideally, clinical quality indicators are computed inside hospitals based on data recorded during the care process and stored in the Electronic Medical Record (EMR). In the United States, the meaningful use [3] of EMRs is put forward as a national goal, which includes the electronic exchange of health information as well as the computation and reporting of clinical quality measures [4]. This meaningful use reduces the registration burden for care providers and furthermore enables the unobtrusive measuring and monitoring of indicators in real-time, allowing for timely intervention.

Next to this development, national and international medical data registries proliferate [5], which are frequently used to quantitatively compare performance between health-care institutions. Due to various barriers that impede the reuse of data [6], many care organisations still collect the data for quality registers manually [7]. This labour-intensive process might lead to the undesirable situation that the data in registers differs from source data in an EMR.

In the Netherlands, “Zichtbare Zorg” [8] developed, amongst others, a set of 11 evidence-based colorectal cancer surgery indicators, which is computed based on the register of the Dutch Surgical Colorectal Audit (DSCA) [9]. The DSCA was set up in 2009 to measure and improve the quality of colorectal cancer surgery, and serves as both a national and an international role model. All Dutch hospitals that perform colorectal cancer surgery submit data to the DSCA register. Ideally, data should be submitted (semi-)automatically, but in practice surgeons often enter it manually via a web form. The data is often submitted at the end of a reporting year, impeding timely feedback.

This study aims to assess whether the set of quality indicators can be computed automatically based on EMR data, and to investigate barriers to doing so. Hence, we compared quality indicators computed based on our EMR data to the same indicators computed based on manually abstracted data for the DSCA register, and performed a data quality analysis to explain any differences.


Methods

Patient data

We used two data sources of a department of colorectal cancer surgery in a university hospital: manually abstracted data for the DSCA register and structured data from the EMR.

The DSCA dataset consists of 212 variables, including demographic information, diagnoses, procedures, results of pathological examinations and clinical outcome. Attending surgeons enter the required data either manually via a web form, which takes 15 to 20 minutes per patient, or via a spreadsheet. In most hospitals the data is entered via the web form. In our hospital, the responsible surgeon preselects the patients for whom to submit data from the database containing all surgical procedures, and then browses structured and unstructured data such as pathology reports for these patients to identify as many of the required variables as possible. All patients of our hospital for whom data had been submitted to the DSCA in 2011 were included.

For this study, we regarded the DSCA dataset as the current reference standard. We deliberately do not refer to it as a gold standard because we cannot exclude all possibility of errors due to manual data entry. However, the surgeons report that they enter the data carefully, and the DSCA monitors the data through an annual comparison with the dataset of the Dutch Cancer Registry. Its reliability appears to be high: a recent comparison showed that data had been submitted to the DSCA register for 94% of the patients in the Dutch Cancer Registry. Most data items correspond well, with discrepancies mainly due to differing interpretations and definitions [10]. For example, anastomotic leakages are only registered in the DSCA if they caused a re-intervention, while the Dutch Cancer Registry uses a broader definition.

Regarding our EMR, several source systems that contain information on patients, diagnoses, operations, admissions, encounters, pathology reports, endoscopies and medications periodically insert data into our data warehouse. Diagnoses are encoded in ICD-9-CM, and surgical procedures in codes from a Dutch procedure classification consisting of nearly 40,000 codes. All patients who had an operation in 2011 have been extracted from the data warehouse. In the following, we refer to this dataset as EMR. All patients from the EMR who seemingly should have been submitted to the DSCA in the reporting year 2011 due to a recorded surgical resection of a primary colorectal carcinoma were included.

Patient matching

In the absence of shared patient identifiers, the patients for whom data had been submitted by our hospital to the DSCA in 2011 were matched with the patients from the EMR based on gender, year of birth and operation date, as well as the sets of procedures they underwent.
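The matching strategy above can be sketched as follows. The record layout and field names are hypothetical, and the overlap between procedure sets is used as a tie-breaker, as in our strategy:

```python
def match_patients(dsca_records, emr_records):
    """Pair DSCA records with EMR records that agree on gender, year of
    birth and operation date, and share at least one recorded procedure."""
    # Index the EMR records by the three exact-match attributes.
    emr_index = {}
    for rec in emr_records:
        key = (rec["gender"], rec["birth_year"], rec["operation_date"])
        emr_index.setdefault(key, []).append(rec)

    matches = []
    for rec in dsca_records:
        key = (rec["gender"], rec["birth_year"], rec["operation_date"])
        for candidate in emr_index.get(key, []):
            # Require overlap in the sets of recorded procedures.
            if rec["procedures"] & candidate["procedures"]:
                matches.append((rec, candidate))
                break
    return matches
```

The deterministic key (gender, birth year, operation date) keeps the candidate set small, so the procedure-set comparison only has to disambiguate patients operated on the same day.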

The Institutional review board of the Academic Medical Centre at the University of Amsterdam waived the need for informed consent, as individual patients were not directly involved. The use of the data is officially registered according to the Dutch Personal Data Protection Act.

Quality indicators and their computation

We used the set of colorectal quality indicators released by a governmental quality of care program called “Zichtbare Zorg” for the reporting year 2011. The set consists of 8 thematic indicators, 3 of which comprise two related indicators denoted as e.g. 8a and 8b, resulting in a total of 11 indicators: 9 process indicators, 1 structure indicator and 1 outcome indicator (see Table 1). The process and outcome indicators are percentages computed based on the definitions for numerators and denominators of each indicator. The structure indicator 8a (“How many surgeons does the team include and how many of these surgeons carry out resections on primary colonic carcinoma patients?”) is not designed to be computable based on the EMR. Therefore, we did not include it in our study. Of the remaining 10 indicators, the DSCA indicator 1 and the circumferential resection margin indicator 6a measure the percentage of patients for whom data has been submitted to the DSCA. As we do not expect submission of data to the DSCA to be recorded in the EMR, we exclude the numerators of these indicators. The 8 fully and 2 partially (i.e. only the denominator) included indicators have been formalised with our previously developed indicator formalisation method CLIF [11] to enable their automated computation, for which the obtained queries are run against the respective datasets. The queries are published on figshare [12].
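Conceptually, each process or outcome indicator reduces to a numerator predicate evaluated within a denominator population. The following is a minimal sketch of that computation, not the actual CLIF formalisation; the example predicates and field names are invented:

```python
def compute_indicator(patients, in_denominator, in_numerator):
    """Compute a process/outcome indicator as a percentage.

    `in_denominator` and `in_numerator` are predicates over patient
    records; the numerator is evaluated only within the denominator
    population, mirroring the numerator/denominator definitions.
    """
    denominator = [p for p in patients if in_denominator(p)]
    if not denominator:
        return None  # not computable for an empty population
    numerator = [p for p in denominator if in_numerator(p)]
    return 100.0 * len(numerator) / len(denominator)


# Hypothetical example: percentage of resection patients discussed in a
# multidisciplinary meeting before the operation.
patients = [
    {"resection": True, "meeting_before_operation": True},
    {"resection": True, "meeting_before_operation": False},
    {"resection": False, "meeting_before_operation": True},
]
result = compute_indicator(
    patients,
    in_denominator=lambda p: p["resection"],
    in_numerator=lambda p: p["meeting_before_operation"],
)
```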

Table 1 Zichtbare Zorg indicators for 2011 translated from Dutch to English

Outcome measures

Quality indicators

The first outcome measure is the computability of quality indicators, and the corresponding results. Numerators and denominators of indicators are computable if all required items are available in a structured format.

As in [13] and [4], we analysed the accuracy of quality indicator results computed based on EMR data by measuring sensitivity and specificity. We also measured the positive predictive value (PPV) and the negative predictive value (NPV), as well as the positive likelihood ratio (PLR) and the negative likelihood ratio (NLR).
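Given the 2×2 agreement table between the EMR-based classification and the reference standard, these measures follow from their standard definitions; a sketch:

```python
def accuracy_measures(tp, fp, fn, tn):
    """Accuracy measures from a 2x2 table, treating the reference-standard
    (here: DSCA) classification as the truth.

    tp/fp/fn/tn: counts of true/false positives and negatives."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "PPV": tp / (tp + fp),                    # positive predictive value
        "NPV": tn / (tn + fn),                    # negative predictive value
        "PLR": sensitivity / (1 - specificity),   # positive likelihood ratio
        "NLR": (1 - sensitivity) / specificity,   # negative likelihood ratio
    }
```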

Whether the difference in proportions was significant was tested with Bland and Butland’s method for comparing proportions in overlapping samples [14]. A p-value < 0.05 was considered significant.

Data quality

We analysed the quality of the 14 data items required to compute the set of quality indicators (Operation date, Year of birth, Procedure, Operation urgency, Primary location/Diagnosis, cT score, pN stage, pM stage, Examined lymph nodes, Circumferential margin, Colonoscopy, Chemotherapy/Medication, Meeting date and Radiotherapy start date). The first quality dimension we analysed is availability in a structured format, as unstructured data cannot be used directly to automatically compute quality indicators. For data items that are available in a structured format, we focus on the quality dimensions completeness and correctness [15]. Completeness is measured as the percentage of items that should be recorded for each patient (such as the operation urgency, as all included patients had been operated) that are indeed available in the respective dataset. Items that do not necessarily apply to all patients, such as the start date of preoperative radiotherapy, are excluded: a missing value might mean that the patient was indeed not treated with previous radiotherapy, but it might also mean that the start date was simply not recorded. Items explicitly recorded as ‘unknown’ are regarded as absent, diminishing completeness.
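As a sketch of this completeness measure (record layout and item names hypothetical), with values explicitly recorded as ‘unknown’ counted as absent:

```python
def completeness(records, mandatory_items):
    """Per-item completeness (%) over a set of patient records, plus the
    average across items. Missing values and explicit 'unknown' entries
    both count as absent."""
    per_item = {}
    for item in mandatory_items:
        filled = sum(
            1 for r in records
            if r.get(item) not in (None, "", "unknown")
        )
        per_item[item] = 100.0 * filled / len(records)
    average = sum(per_item.values()) / len(per_item)
    return per_item, average
```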

We measure correctness by checking whether data items recorded in the EMR are consistent with the corresponding items in the DSCA dataset with regard to the indicator definitions, i.e. whether they have the same effect on the indicator results. For example, a date for a multidisciplinary meeting is considered correct if both dates are before or both dates are after the operation.
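For the multidisciplinary meeting example, such a consistency check could look like the following sketch; the actual date handling in the datasets may differ:

```python
from datetime import date

def meeting_date_correct(emr_meeting, dsca_meeting, operation):
    """An EMR meeting date is considered correct if it has the same effect
    on the indicator as the DSCA date: both before, or both not before,
    the operation date."""
    return (emr_meeting < operation) == (dsca_meeting < operation)
```

Note that the two dates need not be identical to be "correct" in this sense; only their position relative to the operation matters for the indicator.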

Finally, encountered problems regarding data quality are categorised.


Results

Patient matching

As shown in Figure 1, 75 patients are included for the reporting year 2011 in the DSCA dataset, and 79 in the EMR. Following the matching strategy, it was possible to match all 75 DSCA patients with patients in the EMR. Sixty-three of these patients were also selected by the query to compute the indicators based on the EMR dataset, while 12 patients were not selected. Manual inspection showed that 4 of these 12 patients had no relevant diagnosis recorded in the EMR. A fifth patient was recorded with a colonic carcinoma and a resection of rectum, but the query against the data warehouse selected patients with a colonic carcinoma and colectomy or a rectum carcinoma and resection of rectum. For the remaining 7 patients, the diagnosis date was after the (elective) operation date, so that a relationship between diagnosis and operation could not be assumed.

Figure 1

Matching of patients included in the DSCA dataset, selected from the EMR and included in the EMR.

Sixteen patients from our EMR dataset could not be matched to the DSCA dataset because they had been selected incorrectly: due to incorrect diagnosis codes (e.g. tumours that were classified as non-malignant based on the pathology examination), imprecise diagnosis codes (e.g. recurrent carcinomas), or missing relations between the diagnosis and the procedure in the EMR dataset.

Computation of quality indicators

Table 2 shows the indicator results computed based on the DSCA dataset, as well as the fully computable indicators and denominators based on the EMR data. The chemotherapy indicators 5a and 5b as well as the radiotherapy indicator 7 could not be computed, as the required carcinoma stage was not available in a structured format.

Table 2 Indicator results based on both datasets

Comparison of selected patients

Table 3 shows the comparison of selected patients for all fully computable indicator elements.

Table 3 Patients selected based on the two datasets

Outcome measures

Quality indicators

All 10 indicators were fully computable based on the DSCA dataset. Eight of these indicators should in principle be fully computable based on EMR data, but in practice this was the case for only three indicators. For the two indicators (multidisciplinary meeting and imaging) that are percentages, the difference in proportions computed based on the two datasets was significant.

For 4 indicators, only the denominators were fully computable, because the data items defining the quality of care measured in the numerator, such as the number of examined lymph nodes, were not available in a structured format.

Data quality

The results of the data quality analysis are given in Table 4. Fourteen data items are required to compute the set of quality indicators. All of these items are available in the DSCA register, and 8 in the EMR, with the remaining 6 only being available in free text. The pathology reports contained in the EMR comprise required data such as the number of examined lymph nodes, the circumferential margin and the pathological stage of the carcinoma only in free text. The clinical stage of the carcinoma is equally unavailable, although it might be present in free text sources that we did not have at our disposal, such as conclusions of physical or radiologic examinations or endoscopies, or contained in referral letters. It is contained in a structured format in the Dutch Cancer Registry, but the goal of our study was to focus on the data in our EMR.

Table 4 Data quality

For data items that should be recorded for each patient, the average completeness is 86% for the register’s dataset and 50% for the EMR. The average correctness of data items in the EMR is 87%.

Catalogue of encountered problems

In our case study, quality indicators could not be computed reliably based on the EMR data due to the general problems listed in Table 5.

Table 5 Catalogue of encountered problems


Discussion

Our results show that EMR-based indicator results significantly underestimate the quality of care compared to the same indicators computed based on manually abstracted data for a national quality register. Reasons were unavailable, incomplete and incorrect data items as well as missing relationships between diagnoses and procedures in the EMR. In particular, detailed data that reflects whether a patient’s treatment met the ideal standard of care was often incomplete in the EMR.

Comparison with other studies

The use of EMRs has increased rapidly in recent years, making trustworthy reuse of data [18] an important challenge and research question. Worldwide, EMR-based quality measures [19] are increasingly employed, and new standards [20] such as eMeasures to automatically derive quality measures from EMRs are being introduced.

Many researchers have compared results computed based on different data sources. Both Kerr et al. [21] and Parsons et al. [22] found that EMR-derived measures can underestimate performance in comparison to manual abstraction. Kern et al. [4] found that a “wide measure-by-measure variation in accuracy threatens the validity of electronic reporting”. Likewise, results of quality indicators computed based on administrative data have been compared to results computed based on manually abstracted EMR data. MacLean et al. [23] found that the EMR allows for a greater spectrum of measurable quality indicators, while summary estimates computed based on both data sources did not differ substantially. Tang et al. [24] found a significantly higher percentage of patients that have been identified to be relevant by manual selection.

Ancker et al. observed that “secondary use of data […] requires a generally higher degree of data integrity than required for the original primary use” [25]. It has been suggested that reliable and valid quality indicator results are only achievable based on accessible and high-quality data [26–33]. Likewise, it has been shown that data quality issues are common in data warehouses and electronic patient records [34–36].

Limitations of this study

Our case study included one hospital and one year of data with a relatively small sample size, and it is questionable to what extent the situation in our hospital is generalisable to other hospitals. However, the sample size was sufficient to show that data quality can significantly influence computed quality indicator results, which should be independent from the respective location.

Recommendations/future work

Based on the encountered problems, we compiled a set of recommendations to improve the quality and (re)usability of EMR data.

Availability of structured data

Data to determine the quality of care is particularly valuable, and hospital information systems should be set up in such a way that this data is available, accessible and usable for quality measurement and further use-cases. To obtain structured data, synoptic reports, i.e. predefined computer-based forms to record relevant procedures and findings in a structured, standardised format, have been shown to be advantageous [37–39]. An alternative way to derive structured data from medical free text is the use of Natural Language Processing (NLP) tools. However, as most tools are developed for English, further research is required to handle Dutch [40].

Correctness of data items

Multiple data entry is unnecessary, error-prone, tedious and time-consuming. Data should be recorded only once, at adequate quality. The quality might be raised by making those entering data aware of its possible reuses. Also, local quality improvement strategies from the literature [7, 41] could be applied. To submit data to the DSCA under such improved circumstances, the required items could be preselected automatically from the EMR, checked by the responsible person, and submitted to quality registers or other authorised parties. If the data needs to be edited, changes should be applied locally before the data is shared with external parties.

Longitudinal view of patient history

As patient referrals are common and hospital alliances are likely to proliferate in the future, it must become common practice to exchange data securely and automatically. Patients are likely to become active managers of their health, increasingly enabled to share their data with their caregivers.

Relations between diagnoses and procedures

To reuse clinical data, the relations between diagnoses and procedures must be traceable. To be able to automatically select only examinations that have been carried out in the context of a certain diagnosis, such relations should be recorded.

Level of detail

Patient data should be recorded as detailed as necessary for quality indicator computation and further foreseeable use-cases, such as the recruitment of patients for clinical trials, decision support, the early detection of epidemics or general clinical research. This might seem time-consuming, but will likely reduce the workload in the long term, as each data item has to be recorded only once. To further reduce the workload, the process should be supported by advanced data entry methods and interfaces.


Only data that is represented meaningfully - ideally in standard codes from comprehensive controlled clinical terminologies - can be reused automatically. Terminologies such as SNOMED CT can support the “Collect once - use many times” paradigm [42], which stands for the idea that data is captured only once and can be reused thereafter for a variety of purposes. Controlled terminologies can allow for meaning-based retrieval, for example by aggregation along hierarchical structures, or based on relationships between codes. An advantage of standard terminologies is that they are integrated in the National Library of Medicine’s Unified Medical Language System Metathesaurus, which contains mappings between terms across multiple terminologies.
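As a toy illustration of aggregation along hierarchical structures, the snippet below walks invented is-a relations (not actual SNOMED CT content) to decide whether a specific code falls under a broader concept:

```python
# Invented is-a edges: each code maps to its (single) parent concept.
IS_A = {
    "sigmoid colon carcinoma": "colon carcinoma",
    "colon carcinoma": "colorectal carcinoma",
    "rectum carcinoma": "colorectal carcinoma",
}

def is_a_kind_of(code, ancestor):
    """Walk the is-a hierarchy upwards to test whether `code` subsumes
    under `ancestor` (meaning-based retrieval by aggregation)."""
    while code is not None:
        if code == ancestor:
            return True
        code = IS_A.get(code)
    return False
```

A query for all patients with a “colorectal carcinoma” can then select records coded at any level of detail, without enumerating every specific code. Real terminologies allow multiple parents per concept, so a production implementation would traverse a directed acyclic graph rather than a single-parent chain.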


Conclusions

This study showed that data quality can significantly influence indicator results, and that our routinely recorded EMR data was not suitable for reliably computing quality indicators. To support primary and secondary uses of data, EMRs should be designed so that a core dataset consisting of relevant items is entered directly, in a timely fashion and in a structured, sufficiently detailed and standardised format. Furthermore, awareness of the (re)use of data could be raised to ensure the quality of required data, and local data quality improvement strategies could be applied. Data could then be aggregated for different uses, according to various definitions. This strategy is likely to increase the volume of high-quality data, which can ultimately serve as a basis for physicians not only to monitor but also to deliver the best possible quality of care.


References

  1. Brook RH, McGlynn EA, Cleary PD: Measuring quality of care. N Engl J Med. 1996, 335 (13): 966-970. 10.1056/NEJM199609263351311.
  2. Campbell SM, Braspenning J, Hutchinson A, Marshall M: Research methods used in developing and applying quality indicators in primary care. Qual Saf Health Care. 2002, 11 (4): 358-364. 10.1136/qhc.11.4.358.
  3. Blumenthal D, Tavenner M: The “Meaningful Use” regulation for electronic health records. N Engl J Med. 2010, 363: 501-504. 10.1056/NEJMp1006114.
  4. Kern LM, Malhotra S, Barron Y, Quaresimo J, Dhopeshwarkar R, Pichardo M, Edwards AM, Kaushal R: Accuracy of electronically reported “Meaningful Use” clinical quality measures. Ann Intern Med. 2013, 158: 77-83. 10.7326/0003-4819-158-2-201301150-00001.
  5. Drolet BC, Johnson KB: Categorizing the world of registries. J Biomed Inform. 2008, 41 (6): 1009-1020. 10.1016/j.jbi.2008.01.009.
  6. Dentler K, ten Teije A, de Keizer NF, Cornet R: Barriers to the reuse of routinely recorded clinical data: a field report. Stud Health Technol Inform. 2013.
  7. Arts DGT, de Keizer NF, Scheffer GJ: Defining and improving data quality in medical registries: a literature review, case study, and generic framework. J Am Med Inform Assoc. 2002, 9: 600-611. 10.1197/jamia.M1087.
  8. Zichtbare Zorg: Kwaliteitsindicatoren. 2012.
  9. van Gijn W, van de Velde CJH: Improving quality of cancer care through surgical audit. Eur J Surg Oncol. 2010, 36 Suppl 1.
  10. Dutch Institute for Clinical Auditing: DICA-Rapportages 2011: transparantie, keuzes en verbetering van zorg. 2011, Leiden: DICA.
  11. Dentler K, ten Teije A, Cornet R, de Keizer NF: Towards the automated calculation of clinical quality indicators. Knowledge Representation for Health-Care. 2012, LNCS 6924: 51-64.
  12. Dentler K: Formalised colorectal cancer surgery quality indicators. figshare. 2014.
  13. Romano PS, Mull HJ, Rivard PE, Zhao S, Henderson WG, Loveland S, Tsilimingras D, Christiansen CL, Rosen AK: Validity of selected AHRQ patient safety indicators based on VA national surgical quality improvement program data. Health Serv Res. 2009, 44: 182-204. 10.1111/j.1475-6773.2008.00905.x.
  14. Bland JM, Butland BK: Comparing proportions in overlapping samples. Technical report. 2011.
  15. Weiskopf NG, Weng C: Methods and dimensions of electronic health record data quality assessment: enabling reuse for clinical research. J Am Med Inform Assoc. 2012, 20: 144-151.
  16. Persell SD, Wright JM, Thompson JA, Kmetik KS, Baker DW: Arch Intern Med. 2006, 166 (20): 2272-.
  17. Baker DW, Persell SD, Thompson JA, Soman N, Burgner KM, Liss D, Kmetik KS: Automated review of electronic health records to assess quality of care for outpatients with heart failure. Ann Intern Med. 2007, 146 (4): 270-277.
  18. Geissbuhler A, Safran C, Buchan I, Bellazzi R, Labkoff S, Eilenberg K, Leese A, Richardson C, Mantas J, Murray P, De Moor G: Trustworthy reuse of health data: a transnational perspective. Int J Med Inform. 2013, 82 (1): 1-9. 10.1016/j.ijmedinf.2012.11.003.
  19. Weiner JP, Fowles JB, Chan KS: New paradigms for measuring clinical performance using electronic health records. Int J Qual Health Care. 2012, 24 (3): 200-205. 10.1093/intqhc/mzs011.
  20. Fu PC, Rosenthal D, Pevnick JM, Eisenberg F: The impact of emerging standards adoption on automated quality reporting. J Biomed Inform. 2012, 45 (4): 772-781. 10.1016/j.jbi.2012.06.002.
  21. Kerr EA, Smith DM, Hogan MM, Krein SL, Pogach L, Hofer TP, Hayward RA: Comparing clinical automated, medical record, and hybrid data sources for diabetes quality measures. Jt Comm J Qual Improv. 2002, 28 (10): 555-565.
  22. Parsons A, McCullough C, Wang J, Shih S: Validity of electronic health record-derived quality measurement for performance monitoring. J Am Med Inform Assoc. 2012, 19 (4): 604-609. 10.1136/amiajnl-2011-000557.
  23. MacLean CH, Louie R, Shekelle PG, Roth CP, Saliba D, Higashi T, Adams J, Chang JT, Kamberg CJ, Solomon DH, Young RT, Wenger NS: Comparison of administrative data and medical records to measure the quality of medical care provided to vulnerable older patients. Med Care. 2006, 44 (2): 141-148.
  24. Tang PC, Ralston M, Arrigotti MF, Qureshi L, Graham J: Comparison of methodologies for calculating quality measures based on administrative data versus clinical data from an electronic health record system: implications for performance measures. J Am Med Inform Assoc. 2007, 14: 10-15.
  25. Ancker JS, Shih S, Singh MP, Snyder A: Root causes underlying challenges to secondary use of data. AMIA Annu Symp Proc. 2011, 57-62.
  26. Brook RH, McGlynn EA, Shekelle PG: Defining and measuring quality of care: a perspective from US researchers. Int J Qual Health Care. 2000, 12 (4): 281-295. 10.1093/intqhc/12.4.281.
  27. Powell AE, Davies HTO, Thomson RG: Using routine comparative data to assess the quality of health care: understanding and avoiding common pitfalls. Qual Saf Health Care. 2003, 12: 122-128.
  28. Fowles J, Kind E, Awwad S, Weiner J, Chan K: Performance measures using electronic health records: five case studies. 2008.
  29. Roth CP, Lim YW, Pevnick JM, Asch SM, McGlynn EA: The challenge of measuring quality of care from the electronic health record. Am J Med Qual. 2009, 24 (5): 385-394. 10.1177/1062860609336627.
  30. Abernethy AP, Herndon JE II, Wheeler JL, Rowe K, Marcello J, Meenal P: Poor documentation prevents adequate assessment of quality metrics in colorectal cancer. J Oncol Pract. 2009, 5 (4): 167-174.
  31. Chan KS, Fowles JB, Weiner JP: Electronic health records and the reliability and validity of quality measures: a review of the literature. Med Care Res Rev. 2010, 67 (5): 503-527. 10.1177/1077558709359007.
  32. Burns EM, Bottle A, Aylin P, Darzi A, Nicholls RJ, Faiz O: Variation in reoperation after colorectal surgery in England as an indicator of surgical performance: retrospective analysis of Hospital Episode Statistics. BMJ. 2011, 343.
  33. Anema HA, van der Veer SN, Kievit J, Krol-Warmerdam E, Fischer C, Steyerberg E, Dongelmans DA, Reidinga AC, Klazinga NS, de Keizer NF: Influences of definition ambiguity on hospital performance indicator scores: examples from The Netherlands. Eur J Public Health. 2013, 1-6.
  34. Warsi AA, White S, McCulloch P: Completeness of data entry in three cancer surgery databases. Eur J Surg Oncol. 2002, 28 (8): 850-856. 10.1053/ejso.2002.1283.
  35. Botsis T, Hartvigsen G, Chen F, Weng C: Secondary use of EHR: data quality issues and informatics opportunities. AMIA Summits Transl Sci Proc. 2010, 2010: 1-5.
  36. Hripcsak G, Albers DJ: Next-generation phenotyping of electronic health records. J Am Med Inform Assoc. 2013, 20 (1): 117-121. 10.1136/amiajnl-2012-001145.
  37. Edhemovic I, Temple WJ, de Gara CJ, Stuart GCE: The computer synoptic operative report: a leap forward in the science of surgery. Ann Surg Oncol. 2004, 11 (10): 941-947. 10.1245/ASO.2004.12.045.
  38. Mack LA, Bathe OF, Hebert MA, Tamano E, Buie WD, Fields T, Temple WJ: Opening the black box of cancer surgery quality: WebSMR and the Alberta experience. J Surg Oncol. 2009, 99 (9): 525-530.
  39. Park J, Pillarisetty VG, Brennan MF, Jarnagin WR, D’Angelica MI, Dematteo RP, Coit DG, Janakos M, Allen PJ: Electronic synoptic operative reporting: assessing the reliability and completeness of synoptic reports for pancreatic resection. J Am Coll Surg. 2010, 211 (3): 308-315. 10.1016/j.jamcollsurg.2010.05.008.
  40. Cornet R, Van Eldik A, De Keizer N: Inventory of tools for Dutch clinical language processing. Stud Health Technol Inform. 2012, 180: 245-.
  41. Wyatt J: Acquisition and use of clinical data for audit and research. J Eval Clin Pract. 1995, 1: 15-27. 10.1111/j.1365-2753.1995.tb00004.x.
  42. Cimino JJ: Collect once, use many: enabling the reuse of clinical data through controlled terminologies. J AHIMA. 2007, 78 (2): 24-29.


Author information



Corresponding author

Correspondence to Kathrin Dentler.


Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

KD, RC, AtT and NdK conceived and designed the study. KD conducted the analysis and wrote the first draft of the paper. PT entered and interpreted the data. All authors reviewed the manuscript and approved the final version. KD is guarantor for the study.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Dentler, K., Cornet, R., ten Teije, A. et al. Influence of data quality on computed Dutch hospital quality indicators: a case study in colorectal cancer surgery. BMC Med Inform Decis Mak 14, 32 (2014).