  • Research article
  • Open access

Measuring the operational impact of digitized hospital records: a mixed methods study



Background

Digitized (scanned) medical records have been seen as a means for hospitals to reduce costs and improve access to records. However, poor clinical usability of digitized records can have negative effects on productivity.


Methods

Data were collected during follow-up outpatient consultations in two NHS hospitals by non-clinical observers using a work sampling approach in which pre-defined categories of clinician time usage were specified. Quantitative data were analysed using two-way ANOVA models and the Mann-Whitney U test. A focus group was held with clinicians to qualitatively explore their experiences of using digitized medical records. The quantitative and qualitative results were synthesized.


Results

Four hundred and six consultations were observed. Using paper records, there was a significant difference in consultation times between hospitals (p = 0.016) and a significant difference in consultation times between specialties within hospitals (p = 0.003). Using digitized records there was a significant difference in consultation times between specialties within a hospital (p = 0.001). Excluding outliers, there was no significant difference between consultation times using digitized records compared with consultations using paper records in the same hospital, either at site (p ≥ 0.285) or specialty level (p ≥ 0.122). With digitized records at site A, two out of three specialties showed a significant increase in time spent searching computer records (p ≤ 0.010, Δ = 01:50–07:10) and one specialty had a corresponding reduction in time spent searching paper records (p = 0.015, Δ = −00:28). Site B showed a notable increase in direct patient care (p < 0.001, Δ = 04:20–06:00) and time spent searching computer records (p ≤ 0.043, Δ = 00:10–01:40) and reductions in the other time categories.

The focus group confirmed that the most recent clinical letter was a vital document in the patient record, often containing most of the required information. Concerns were expressed about consistency of scanning practice, causing uncertainty about what could be relied upon to exist in the digitized record. Benefits of digitized records included: access from multiple locations, better prepared ward rounds, improved inpatient handovers and an improved timeline of patient events. Limitations of digitized records included: increased complexity of creating a patient summary, display of specialised content such as hand-drawn diagrams, inability to quickly flick through the pages to find relevant content.


Conclusions

Digitized medical records can be implemented without detrimental operational impact. Inherent differences between specialties can outweigh the differences between paper and digitized records. Clear and consistent operational processes are vital for the reliability and usability of digitized medical records. Divergent views about usability (such as whether patient summary information is better or worse) may reflect familiarity with features of the digitized record.



Background

Many hospitals have seen the use of digitized medical records (scanned paper) as a means to save money on administration and improve access to records [1, 2]. In the United Kingdom (UK), Government policy has repeatedly promoted the move away from paper records in health care [3]. However, published UK experience has shown that clinical usability of the digitized hospital record can be poor and potentially have negative effects on operational processes [4]. Even full electronic patient records (EPRs) have had a detrimental impact on clinical productivity, both in the USA [5] and in recent UK implementations [6, 7].

Hence, we believe that robust data are needed to determine if digitized hospital records can be implemented in a clinically acceptable way without detrimental operational impact within the UK National Health Service (NHS). Despite the volume of health informatics literature [8], there remains insufficient published research about digitized hospital records. To date, most of the published implementation experience of using digitized hospital records has been from projects in Norway [9–13].

For clarity, we first define our understanding of electronic health record acronyms. Electronic patient records (EPRs) [14] are also widely called electronic health records (EHRs) [15] or electronic medical records (EMRs) [16]. An influential NHS information strategy [17] attempted to distinguish the EPR from the EHR, with the former defined as a record maintained by a single healthcare institution and the latter as a longitudinal cradle-to-grave patient record drawn from multiple EPRs. However, in general usage the terms lack such precision and are often interchangeable. The HL7 EHR-System Functional Model defines an EHR as “a comprehensive, structured set of clinical, demographic, environmental, social, and financial data and information in electronic form, documenting the health care given to a single individual” [18]. Alternatively, ISO TR 20514:2005 defined a ‘basic generic’ EHR as simply a “repository of information regarding the health status of a subject of care, in computer processable form” [19].

The data within an EHR typically includes both structured and unstructured content. Structured data is usually directly typed or dictated into the system or received by electronic transfer (such as laboratory results) and is characterised by defined records, fields and coding schemes. In contrast, unstructured data includes items such as free text notes or scanned correspondence [20]. Any data can be coded using code systems and terminologies such as LOINC or SNOMED-CT, though obviously more structured data can be coded at a finer level of detail and hence offers more sophisticated analysis capabilities [21].
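The structured/unstructured distinction described above can be made concrete with a small sketch. This is illustrative only: the field names, the example LOINC code and the file path are assumptions made for the purpose of the example, not taken from either Trust's system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StructuredResult:
    """Structured data, e.g. a laboratory result received by electronic transfer."""
    loinc_code: str   # a defined coding scheme enables fine-grained analysis
    value: float
    units: str

@dataclass
class ScannedDocument:
    """Unstructured data, e.g. digitized correspondence."""
    image_uri: str                 # page images with no machine-readable content
    doc_type: Optional[str] = None  # coarse metadata, if indexed at all
    doc_date: Optional[str] = None

# Hypothetical examples: a coded potassium result vs. a scanned clinic letter
potassium = StructuredResult(loinc_code="2823-3", value=4.1, units="mmol/L")
letter = ScannedDocument(image_uri="scans/0001.tiff", doc_type="clinic letter")
print(potassium.loinc_code, letter.doc_type)
```

The structured record can be queried and analysed field by field, whereas the scanned document can only be navigated via whatever metadata was attached at scanning time.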

A digitized medical record is a paper record scanned into a set of unstructured computerized images, with some level of structured metadata for navigation and analysis purposes. This can exist in a standalone application or as a module of an EHR. There is a spectrum of capabilities and limitations in digitized medical record applications, from simple image display to complex tools for search, navigation or annotation. Software applications for digitized medical records are often seen as a form of electronic document management (EDM) and some publications and EHR vendors use this acronym to describe such functionality. Although document scanning is also widely used in UK primary care, the scope of this paper is digitized hospital records. We hypothesised that using digitized medical records would make a significant difference to outpatient consultation times and to the duration of tasks within a consultation.



Methods

The purpose of this study was to measure the effects of digitized medical records on the duration and time utilization of follow-up outpatient consultations as the measure of operational impact. The study compared timings between consultations using paper records with consultations using digitized records. We were not comparing one EDM system against another, but comparing any digitized record against standard paper-based practice. The study also sought the views of clinicians about the benefits and disadvantages of using digitized records. We selected follow-up rather than initial outpatient visits as the unit of measure, as some patients would have no pre-existing hospital medical record at their first consultation. The defined research questions and their purpose are listed in Table 1.

Table 1 Research questions


Setting

The study was conducted in two English NHS Trusts: Mid Yorkshire Hospitals NHS Trust (MYHT) and Basildon & Thurrock University Hospital NHS Foundation Trust (BTUHT). Both of the Trusts were implementing digitized medical records (using different systems) and agreed to participate in the study to evaluate the operational impact. This study had no external funding, but data collection was resourced from within the Trusts’ implementation project budgets. Data were collected at each site within a few months of initial implementation (we use the terms “site” and “Trust” interchangeably), whilst clinics were still using a mixture of paper and digitized records.

MYHT specialties were gynaecology, paediatrics and vascular surgery, and BTUHT specialties were gynaecology, paediatrics and rheumatology. Clinical specialties for observation were selected for a combination of reasons. Primarily this was to include specialties that rely heavily on detailed patient information including history and prior findings and interventions, and are therefore impacted more substantially by availability of the digitized medical record. We were also constrained by practical logistics (based on where and when the digitized medical records were implemented) and our aim to allow some inter-site comparison of the same disciplines. This partly opportunistic approach produced an unbalanced design, but we took account of this in the analysis.

Study design

The study emulated the approach of previous research into time effects of EHRs [22, 23]. Time sampling data were collected by a non-clinical observer using a work sampling approach in which pre-defined categories of clinician time usage were specified. The work sampling method is explained in the cited EHR time effect studies and cognate reports such as Munyisia, Yu & Hailey and Ammenwerth & Spötl [24, 25]. Data were gathered as they occurred naturalistically without randomization or blinding, thus representing a quasi-experimental approach [26]. The initial time categories were derived from previous work [22, 23], but some revisions were made by the research group to better suit the study context. The defined categories for clinician time usage are shown in Table 2.

Table 2 Outpatient consultation time categories

For purposes of informed consent, an information sheet explaining the research was provided for each patient in each consultation, and the clinician explained that he or she (not the patient) was the subject of the study and that participation was optional. If either a patient or the clinician declined participation for any patient-clinician interaction then the observer would leave the room until the next patient was seen. In all instances of data gathering the observer was entirely passive and had no interaction with clinicians or patients during consultations and no patient data were collected during any observation. As this study was unfunded, a purely manual data collection method was employed rather than, for example, a digital camera to record consultations and timestamp activities. The observers employed a paper tally sheet and a stopwatch. The tally sheet contained instructions about how to categorize the time if a clinician was doing two things at once. So, for example, the instructions said “If also talking to patient, record as writing”. Time recorded as category A was only when the physician was doing nothing else. Due to the concurrent usage of digitized and paper records, and the use of multiple computer systems for other purposes such as diagnostic imaging and laboratory test requesting and reporting, there were both paper-based and computer-based consultations occurring in the same clinics.

At MYHT there was a single quantitative data collection in February 2011 (shown as Stage 1 in Table 3), comprising a mixture of paper-based consultations and those using digitized medical records, and a qualitative focus group held in June 2012 to explore clinicians’ experiences of using digitized medical records. At BTUHT there were two quantitative data collections, the first between December 2010 and January 2011 (shown as Stage 1 in Table 3) and the second in December 2012 (shown as Stage 2 in Table 3). The field notes were qualitatively analysed. The study design received a favourable opinion from an NHS research ethics committee in November 2010.

Table 3 Summary of observational data sets

Statistical analysis

The outcome variables were the duration of each follow-up outpatient consultation and the time spent on each activity category. In order to establish expected time parameters and perform power analyses for needed sample sizes, data from three published studies [2729] were combined with anecdotal data (personal communications, August 2010) to estimate a coefficient of variation (standard deviation ÷ mean) for outpatient consultations. This was in the range 0.21–0.29 (mean 0.24). This gave a range of sample sizes from 33 to 92 per group to detect a 2-min difference (α = 0.05, β = 0.2). The mean coefficient of variation was used to estimate sample sizes on the standard follow-up consultation times recommended by the professions [30, 31]. A hypothetical 15-min consultation would need a sample size of 52 per group. The only relevant data source for the time spent searching/reading medical records was a simulation conducted in one of the Trusts in the study. Data from this produced a sample size of 25 per group. As the same observation would collect both outcome variables, the maximum sample size (52 consultations per specialty) was considered optimal. IBM SPSS® Version 22 was used to perform standard parametric and non-parametric tests.
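As a rough cross-check of the power analysis described above, the standard two-sample formula n = 2·(z₁₋α/₂ + z₁₋β)²·σ²/Δ² can be sketched in Python using only the standard library. This is our reconstruction, not the authors' actual calculation, which was not published in detail.

```python
from statistics import NormalDist
import math

def n_per_group(mean_minutes, cv, delta_minutes, alpha=0.05, power=0.80):
    """Sample size per group to detect a difference of delta_minutes between
    two means, assuming equal variances: n = 2*(z_a + z_b)^2 * sigma^2 / delta^2.
    sigma is derived from the coefficient of variation (sd / mean)."""
    sigma = cv * mean_minutes
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = NormalDist().inv_cdf(power)          # power = 1 - beta
    return math.ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta_minutes ** 2)

# A hypothetical 15-min consultation, mean CV 0.24, 2-min detectable difference
print(n_per_group(15, 0.24, 2))
```

For these inputs the formula gives 51 per group, close to the 52 reported; the small discrepancy presumably reflects rounding or a slightly different formula in the original calculation.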

Raw data were thoroughly reviewed prior to analysis to ensure that the most appropriate statistical techniques were applied. Consultation times were analysed: both the entire time spent for each patient and the breakdown of time spent on each category: direct patient care, information searching on paper or computer, information recording and dictation. Two-way ANOVA models were used for research questions 1–6 and the Mann-Whitney U test was used for research question 7 due to the severe departure from normal distribution and homogeneous variance. As the units of analysis were different for research questions 1–4 and research questions 5–6, we used different ANOVA models. For research questions 1–4, we split the data file by record type and ran a two-way ANOVA with total consultation time as the dependent variable and site and specialty as the independent variables. For research questions 5–6, we split the data file by site and ran a two-way ANOVA with total consultation time as the dependent variable and record type and specialty as the independent variables. In both cases, we used a Type IV model due to the unbalanced design. For research questions 1–4, boxplots revealed three outlying data points. These were filtered out in the ANOVA model so that the variance was homogeneous, as required for this test. For research questions 5–6, more outliers were discovered. It was necessary to filter out cases where the total consultation time was greater than 30 min (n = 31) to achieve a data set with homogeneous variance (n = 375). For research question 7, we split the data file by site and specialty and ran a Mann-Whitney U test with the time categories A–H as the dependent variables and record type as the independent variable. Given the relatively small sample sizes, we selected the exact computation method.
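The analysis itself was run in SPSS; the Type IV two-way ANOVA is an SPSS-specific option with no direct equivalent in most open-source packages. The exact Mann-Whitney U comparison used for research question 7 can, however, be sketched in Python with SciPy. The data below are invented for illustration; only the shape of the comparison (record type as the grouping variable, a task-category time as the outcome, within one site/specialty subgroup) follows the study.

```python
# Illustrative sketch, not the authors' SPSS analysis: exact Mann-Whitney U
# test comparing a task-category time (seconds) between record types.
from scipy.stats import mannwhitneyu

# Hypothetical per-consultation times for category C (information searching -
# computer) within one site/specialty subgroup
paper = [10, 25, 0, 40, 15, 30]
digitized = [110, 95, 430, 60, 220]

u, p = mannwhitneyu(paper, digitized, alternative="two-sided", method="exact")
print(f"U = {u}, exact two-sided p = {p:.4f}")
```

With `method="exact"` SciPy enumerates the permutation distribution of U, which is appropriate for small samples like these rather than the normal approximation.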


Results

Quantitative data summary and characteristics

Altogether, 406 consultations were observed; Table 3 shows the breakdown by site, specialty and record type. No observations were declined by the patient or clinician. The figures in brackets are the numbers of clinicians observed in each subgroup. The entire MYHT data set was gathered by one observer. The BTUHT data were collected by two observers in stage one and a third observer in stage two. The sample sizes achieved were lower than the target levels due to time and resource constraints within the Trust implementation projects.

Total consultation duration

Figures 1 and 2 illustrate the distribution of duration times as SPSS boxplots. Outliers are included for completeness, but some were excluded from analyses as explained above.

Fig. 1

Distribution of total consultation time observations by record type (MYHT)

Fig. 2

Distribution of total consultation time observations by record type (BTUHT)

Table 4 shows the mean total consultation times, rounded to the nearest second, along with their 95 % confidence intervals. The issue of potential measurement error is discussed later.
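For readers wishing to reproduce this style of summary, a mean with a 95 % confidence interval can be computed as below. This sketch uses a normal approximation with invented timing data; SPSS computes t-based intervals, which are slightly wider at sample sizes like those in this study.

```python
from statistics import NormalDist, mean, stdev
import math

def mean_ci95(seconds):
    """Mean with a normal-approximation 95% CI (reasonable for n around 50)."""
    m, s, n = mean(seconds), stdev(seconds), len(seconds)
    half = NormalDist().inv_cdf(0.975) * s / math.sqrt(n)  # z = 1.96 approx.
    return m, (m - half, m + half)

# Invented total consultation times in seconds, purely for illustration
times = [900, 840, 1020, 960, 780, 1080, 870, 930, 990, 810]
m, (lo, hi) = mean_ci95(times)
print(f"mean = {m:.0f}s, 95% CI ({lo:.0f}s, {hi:.0f}s)")
```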

Table 4 Mean total consultation times by Trust and specialtya,b


The ANOVA residuals were normally distributed, supporting the validity of the models and the interpretability of the results. The results of analyses and interpretations are summarized in Table 5. Table 6 gives further detail of the significant differences between specialties found in the ANOVA models.

Table 5 Detailed ANOVA statistical results: Total consultation timesa
Table 6 Inter-specialty differencesa,b,c

Time categories

Tables 7, 8 and Figs. 3, 4 summarize the median timings for each task category by site and specialty. The alphabetic codes refer to the time categories listed in Table 2 above.

Table 7 Median timings (mm:ss) for task categories – MYHT
Table 8 Median timings (mm:ss) for task categories – BTUHT
Fig. 3

Median timings for task categories as % of consultation – MYHT

Fig. 4

Median timings for task categories as % of consultation – BTUHT

Table 9 shows the results at specialty level contrasting the median times for paper and digitized records. Only significant differences are shown, with two-tailed p value and difference between median task category times.

Table 9 Task category median differences (mm:ss) for Tables 7 and 8a,b

At MYHT there was apparently a fairly predictable ‘swap’ between categories B (Information searching – paper) and C (Information searching – computer) in vascular surgery and a simple net increase in category C in paediatrics (though on a sample size of only 5, not too much can be made of this). Gynaecology showed no net effect on times per task category.

The situation at BTUHT was altogether more complex. Analyses were limited to the two specialties for which data were adequate. Both specialties showed significant differences in categories A (direct patient care), B (information searching – paper), C (information searching – computer), D (recording information – paper), F (dictation) and G (third party conversation). We suspect that the changes in F and G are random effects and are in any case clinically trivial as absolute time measures. Overall, notable increases in direct patient care and time spent searching the computer are the key findings, along with corresponding reductions in the other time categories. As shown in Tables 7 and 8, SPSS calculated the medians for category H (‘other’) as zero for each specialty although there were in fact 4/154 non-zero observations (range 00:28 to 01:24). The observer recorded these as interruptions of one kind or another. So the p value of the reported change for category H shown in Table 9 is statistically correct, though not operationally meaningful.

Qualitative results

The focus group was held in June 2012 with nine clinicians from MYHT. The group comprised a cardiologist, two respiratory physicians, a paediatrician, a rheumatologist, two urologists, an anaesthetist and a vascular surgeon. Not all participants were present for the entire discussion.

Participants were asked about their experiences and perceptions regarding any possible impacts that scanned medical records have on clinical and operational activities. Several commented that the main hospital notes were often not yet fully scanned, so the first document they would look for would be the most recent clinical letter as a patient summary. When later asked how often that clinical letter contained most of the information needed, nearly half of the answers ranged between 70 % and 90 %, while two said 50 % or less. One estimated that the clinical letter contained 25 % or less of the information needed, but all her patients were new referrals. Some participants observed that Emergency Department notes were now more accessible and reliably present in the record. Comments also noted that the overall standard of clinical letters had improved due to the increased reliance on them in the digitized record.

Some comments offered areas for improvement. The view was expressed that further guidance was needed to maximise the content value in notes, and that some departments were inconsistent in the structure and content of their letters. Others remarked that the operational scanning process varied between sites in the Trust, which led to uncertainty about what to expect within the digitized record. One participant commented about the legibility of handwritten text in digitized records, but also acknowledged that this was an issue with paper notes as well.

Another question asked the participants what they preferred about digitized records. Common answers related to the availability and accessibility of the record at multiple locations (including home) and the value this gave for off-site decision-making about patient care. Several participants noted how difficult care was before digitization when paper notes went missing: “We forget how it was when records did not turn up”. Two participants noted the utility of a feature of the digitized record application called the “timeline” which showed a summary of patient events in the record. One clinician observed that ward rounds were now quicker and that the nurses were better prepared, but that ward access still suffered from insufficient mobile hardware and some network issues. One clinician particularly noted the value of digitized records to support patient handovers: “Handovers morning and evening use scanned notes and PACS… I was a cynic but now I’m converted, especially for handovers. It is more intuitive than you think.”

The next question asked participants what they missed about paper-based notes. Comments included the ability to flick through notes easily, the comparative simplicity of creating a complete summary for medico-legal purposes or for patient transfers, and the display of medical photographs and hand-drawn diagrams. Divergent views offered about patient summary information may reflect varying familiarity with features of the digitized record. Alternatively, however, a real issue may be that abstracting the necessary data from each digitized document may be more difficult, even though having an additional summary timeline should represent an advantage.

When asked how their clinical time had been affected by digitized records versus paper, several said they perceived that clinics take longer. Others commented on the different approach needed for clinic preparation because there was no digital process analogous to paper-based preparation, where a nurse would highlight relevant documents with sticky notes. Solutions suggested by respondents for improving digitized processes included keeping multiple windows open simultaneously for the same patient (such as PACS, digitized record, and laboratory test requests) and then previewing the various electronic sources for the next six patients so as to get a full overview. Overall, the general view seemed to be summed up by the comment that, “It is better practice but it takes longer”.

The final question asked whether respondents thought that, overall, the benefits of scanned records outweighed the disadvantages. Seven of nine participants said yes, and one said no.


Discussion

Using paper records, there was a significant difference in mean consultation times between Trusts and a significant difference in consultation times between specialties within a Trust. This demonstrates a fundamental difference in standard practice both between sites and between specialties.

Aggregated at site level, there was no significant difference between mean consultation times using digitized records compared with consultations using paper records within the same Trust. This suggests that digitized records can be implemented without detrimental operational impact, when viewed at an overall Trust level. We found that differences between specialties are more pronounced than overall differences between sites or between paper and digitized records. Therefore the first part of our hypothesis, that using digitized records would make a significant difference to outpatient consultation times, was not supported.

Differences in consultation duration between specialties

In our sample, differences between specialties outweighed differences between paper and digitized records. Earlier work by the first author hypothesised that clinical specialty could be seen as a predictor of EHR acceptance [32]. Differences quantified in this study may reflect natural differences between specialties and specialists. At the risk of over-simplification, this study appears to offer further support to the premise that the observed time differences may partly reflect the varying “thinking styles” associated with practitioners and practices: some disciplines, such as paediatrics, are more narrative-driven, while others, such as surgery, are more ‘propositional’.

When analysed by specialty, MYHT showed no significant difference in the duration of consultations using digitized records compared with consultations using paper records (see Table 4) except in paediatrics, where the mean consultation time was 17 min with paper records and 25 min with digitized records. However, it should be re-emphasized that only five consultations using digitized records were observed for paediatric clinics so the data may be unrepresentative. BTUHT showed no significant difference by specialty.

Despite these quantitative findings, the subjective perception expressed in the MYHT focus group was that ‘it takes longer’ with digitized records. This may be due to sampling differences – only two of the seven specialties represented in the focus group were also part of the quantitative study. It may also reflect a form of recall bias in respect of an initially unpopular change. Arguably, this further strengthens the case for independent evaluation of health IT interventions (rather than by implementers), so that a holistic evidence-based case may be made for whether and how they are adopted.

Changes in time utilization

The small quantitative sample size for digitized records at MYHT is insufficient to draw conclusions at this level of detail. The larger data set from BTUHT offers some interesting findings about increased time in direct patient care. The second part of our hypothesis, that using digitized records would make a significant difference to the duration of tasks within a consultation, was supported for paediatrics and vascular surgery at MYHT and for paediatrics and gynaecology at BTUHT.

The BTUHT formal business case objectives included the need to deliver improvements in clinical efficiency and effectiveness – more generally recognised as measures for ‘releasing time to care’. The project took account of the usability lessons from other implementations where case notes have been scanned [4], recognising that the manner in which legacy scanned material is organised and indexed in the digitized record has a direct impact on the ease with which clinicians can find specific material. The three key components of the implementation that were designed to support clinical usability were:

  1. The tab and sub-tab structure adopted for the digital record.

  2. The identification of key document types within the physical record that can be individually identified within the scanned legacy material.

  3. The association of document dates with a sub-set of the identified document types.

Rich metadata can be associated with material that is generated ‘day-forward’ (post implementation and scanning of the legacy notes) and this helps create a structured, searchable digital record (the ‘future state’). However, associating metadata (document-type and document-date) with material in the legacy scanned record had significant cost implications for the project. Not all of the legacy scanned material could be indexed to the desired level of granularity. The implementation team therefore worked closely with a reference group of clinicians to select material in the physical case notes that was most appropriate for indexing. These discussions were augmented by a substantial number of direct observations by the project team of the way physical case notes are used in clinical settings. This helped to identify the material that was most commonly used in various outpatient clinical settings.

The digitized record was therefore effectively ‘tuned’ for clinical use – especially in outpatient clinics where it was known that there was significant pressure on clinical time. In addition, the use of ‘targeted indexing’ of the scanned legacy notes also meant that the ‘timeline’ functionality could be used to display the clinician-defined critical information for historic episodes. Digitising a clinical record removes the tactile and visual navigation pointers that help clinicians rapidly pinpoint information in the physical case note. However, by working closely with the clinical community, the project team was able to introduce digital markers – metadata – in the digitized legacy case note as simple and cost-effective navigation aids.
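A minimal sketch of how such ‘targeted indexing’ metadata can drive timeline-style navigation is shown below. The dictionary keys and sample documents are hypothetical, not the schema of either Trust's system; the point is simply that typed, dated documents can be surfaced chronologically while unindexed bulk scans are passed over.

```python
# Hypothetical legacy-record metadata: some documents were indexed with a
# document type and date ('targeted indexing'), others were bulk-scanned.
legacy_docs = [
    {"doc_type": "discharge summary", "doc_date": "2009-03-02", "pages": [12, 13]},
    {"doc_type": "clinic letter",     "doc_date": "2010-07-19", "pages": [44]},
    {"doc_type": None,                "doc_date": None,         "pages": [45, 46]},
]

def timeline(docs):
    """Return typed, dated documents newest-first; unindexed scans are skipped."""
    dated = [d for d in docs if d["doc_type"] and d["doc_date"]]
    return sorted(dated, key=lambda d: d["doc_date"], reverse=True)  # ISO dates sort correctly

for d in timeline(legacy_docs):
    print(d["doc_date"], d["doc_type"])
```

The same metadata that makes the timeline possible also acts as the ‘digital marker’ navigation aid described above: a clinician can jump straight to the pages of a known document type rather than paging through the whole scan.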

User satisfaction

Laerum and colleagues reported lower clinician satisfaction with digitized images of records than with other components of an EHR [9, 11]. A follow-up study after three years [12] showed that user satisfaction with the digitized records in the EHR had remained roughly the same for medical secretaries, improved substantially for nurses and improved marginally for physicians. The ranking of user satisfaction remained unchanged: secretaries the highest, nurses somewhat below them and physicians the lowest of all.

These findings echo our qualitative data, where clinicians (physicians and surgeons in our sample) were not especially enthusiastic about scanned records but mostly agreed that, on balance, the disadvantages were less than the benefits – especially when viewed as a ‘package’ with other EHR benefits like electronic test ordering. Our clinical participants expressed particular concerns about the presentation of ‘special’ content such as hand-drawn diagrams and medical photographs. Our study did not probe the views of administrators or nurses so we cannot comment on that aspect.

Workflow implications

Lium et al. noted that “old” routines built around paper records tended to persist even after the introduction of EPR and digitized records [13]. There are unavoidable workflow implications of moving either to digitized records or a full EHR [33], but what is implemented out of necessity is not necessarily optimal. Furthermore, some workflow changes are planned and others are emergent.

Our qualitative data showed that the introduction of digitized records had unexpectedly led to improvements in the structure and content of clinic letters, as a contingency in the event of the full record being unavailable due to scanning delays. The majority of clinicians agreed that the latest clinic letter usually gave most of the information needed for the current patient encounter, except for new referrals.

Another workflow effect was the loss of ‘clinic preparation’, where a nurse would signpost particularly important elements in the paper record. In principle, there is no reason why a digitized record module could not support an analogous electronic process. However, this would need to take account of both digitized and natively electronic content so as to avoid the ‘paper-based thinking’ trap. It is easy to visualise some kind of electronic summary sheet for each patient where the nurse could drag files to create hyperlinks to particular documents, images and data to guide the physician into the consultation. We have not yet explored if such functionality is offered by commercial EHRs.


Limitations

A question arises as to whether each patient consultation, held within an overall clinic with a notionally fixed endpoint, is a statistically independent event. We argue that clinical practice within the selected specialties was to treat each patient individually, so that each consultation was as long or as short as necessary. The wide variation seen in our data supports this inference. Other studies of general practice have similarly treated patient consultations as statistically independent [34, 35].

As this trial was not randomized and does not compare exactly like with like (in specialty selection, timing and digitized record software), we cannot exclude confounding variables. For example, case mix, environmental features, secular trends, organizational or service changes, and the difference between the two organisations’ digitized record systems cannot be ruled out as influences. No obvious or large internal or external influences of these kinds were noted over the period of data collection, so we are reasonably confident in the findings, but we acknowledge this limitation: the study cannot formally assert causality. We also accept that there is an unquantifiable effect from the quantitative data being collected within a few months of implementation rather than in a settled operational environment; aspects of this were noted in our qualitative component. As we have not adjusted for multiple statistical testing, our conclusions should strictly be treated as exploratory rather than confirmatory [36].
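To illustrate the kind of adjustment that was not applied here, the Holm step-down procedure [36] can be sketched as follows. This is a minimal, self-contained example; the p-values shown are hypothetical and are not drawn from our results.

```python
def holm_adjust(pvals):
    """Holm step-down adjustment of a family of p-values.

    The i-th smallest p-value (0-indexed) is multiplied by (m - i),
    where m is the family size; adjusted values are made monotone
    non-decreasing in the sorted order and capped at 1.0.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, min(1.0, (m - rank) * pvals[idx]))
        adjusted[idx] = running_max
    return adjusted

# Hypothetical p-values from four tests treated as one family:
raw = [0.010, 0.040, 0.030, 0.005]
adj = holm_adjust(raw)  # approximately [0.030, 0.060, 0.060, 0.020]
```

A result significant at the 5% level on its raw p-value (e.g. 0.040) may no longer be significant after adjustment, which is why unadjusted families of tests should be read as exploratory.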

Time categorization is another limitation worth mentioning. Time usage categorized as “Other (specify)” (category H) was zero, implying that the defined categories sufficiently captured the range of activity types. Category E, “Recording information (computer)”, was also zero. This was apparently because each hospital continued to take paper notes (on a form designed for immediate scanning) rather than using direct computerized data entry in clinics. Perhaps surprisingly, category G, “Third party conversation” (in effect, interruptions), was also zero in every data set except gynaecology at BTUHT. This seems to highlight a fundamental difference between the focussed and relatively undisturbed nature of clinical work in office-based clinics and the more challenging environment of inpatient wards or emergency departments [37, 38]. This difference is crucial for the usability and adoption of digitized records and full EHRs [39].

The time-related data comprised relatively small samples. Each data set was collected by a single observer, with no calibration or measure of intra- or inter-observer reliability; measurement error is therefore the main weakness of this study, largely a consequence of its unfunded nature. Additionally, although the data collection instrument was face validated, piloted and refined for ease of use, no measurement study was conducted on it. The quantitative data collection was resourced from existing NHS Trust project budgets using non-clinical observers recruited by the Trust project teams, so the risk of observer bias is acknowledged.

There has been considerable delay between data collection and publishing the results. As with several other factors, this is largely due to the unfunded nature of the project and hence the difficulty in resourcing the analysis and writing in competition with funded work. However, we believe there is an ethical duty to publish our findings given the paucity of work on this topic.

Further work

We propose to undertake similar studies in other hospitals, with more robust measurement methodology and standardization, including assessment of intra- and inter-observer reliability before and during the study [40]. We also aim to explore the use of the online tool TimeCaT [41].
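As an illustration of the kind of inter-observer reliability assessment proposed, Cohen’s kappa for two observers coding the same consultations into time-usage categories can be computed from an agreement matrix. This is a minimal sketch; the counts are hypothetical and not taken from our data.

```python
def cohens_kappa(matrix):
    """Cohen's kappa for two raters from a square agreement matrix.

    Rows are observer A's category assignments, columns observer B's;
    cell [i][j] counts segments coded i by A and j by B. Kappa is
    (observed agreement - chance agreement) / (1 - chance agreement).
    """
    total = sum(sum(row) for row in matrix)
    k = len(matrix)
    observed = sum(matrix[i][i] for i in range(k)) / total
    expected = sum(
        (sum(matrix[i]) / total) * (sum(row[i] for row in matrix) / total)
        for i in range(k)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical counts for two observers coding 50 time segments
# into two activity categories:
agreement = [[20, 5],
             [10, 15]]
kappa = cohens_kappa(agreement)  # approximately 0.4 (moderate agreement)
```

A kappa well below 1.0, as in this hypothetical case, would indicate that observer calibration is needed before the time-usage data can be trusted.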

An important question for digitized records, for which we currently have only anecdotal data, is finding the right balance between scanning cost and the granularity of document indexing (for ease of searching). Most importantly, we did not compare paper and digitized records with fully structured EHRs. Research incorporating full EHRs may position digitization as a transitional step from paper to a fully structured solution with inherent clinical decision support capabilities, in addition to the accessibility benefits found with digitization. We also propose that a comparison of the underlying or concurrent EHR solutions across different organisations would be a worthwhile future study.


Conclusions

The quantitative data we have reported suggest that digitized medical records can be implemented without substantial detrimental operational impact, and that inherent differences between specialties may outweigh the differences between paper and digitized records. The qualitative data stress the importance of clear and consistent operational processes to support and optimize the reliability and usability of digitized medical records. Further work is needed to compare digitized record performance with a structured and interactive EHR.


References

  1. McIndoe R. Time to tear ourselves away from paper. Health Serv J. 2007. Issue March 8.

  2. Bennington J, Bullas S, editors. Transforming services: health records case study. Healthcare Computing. Harrogate: BCS Health Informatics Forum; 2008.

  3. National Information Board. Personalised health and care 2020: a framework for action. 2014. Accessed 3 Nov 2016.

  4. Scott PJ, Williams PB. Deploying electronic document management to improve access to hospital medical records. J Manag Mark Healthcare. 2009;2(2):151–60.

  5. Gerdeman D. How Electronic Patient Records Can Slow Doctor Productivity. Harvard Business School Working Knowledge. 2014. Issue March 8.

  6. Riaz S. Rotherham NHS Foundation Trust abandons Meditech EPR system. In: Digital by Default News. 2013. Accessed 3 Nov 2016.

  7. Bowers S. London trusts in chaos as NHS IT system 'loses' waiting lists. Guardian. 2009. Issue March 8.

  8. Ammenwerth E, de Keizer N. A web-based inventory of evaluation studies in medical informatics. 2006. Accessed 3 Nov 2016.

  9. Laerum H, Karlsen TH, Faxvaag A. Effects of scanning and eliminating paper-based medical records on hospital physicians' clinical work practice. J Am Med Inform Assoc. 2003;10(6):588–95.

  10. Laerum H, Karlsen TH, Faxvaag A. Use of and attitudes to a hospital information system by medical secretaries, nurses and physicians deprived of the paper-based medical record: a case report. BMC Med Inform Decis Mak. 2004;4:18.

  11. Lium JT, Faxvaag A. Removal of paper-based health records from Norwegian hospitals: Effects on clinical workflow. Stud Health Technol Inform. 2006;124:1031–6.

  12. Lium JT, Laerum H, Schulz T, Faxvaag A. From the front line, report from a near paperless hospital: Mixed reception amongst health care professionals. J Am Med Inform Assoc. 2006;13(6):668–75.

  13. Lium JT, Tjora A, Faxvaag A. No paper, but the same routines: a qualitative exploration of experiences in two Norwegian hospitals deprived of the paper based medical record. BMC Med Inform Decis Mak. 2008;8:2.

  14. Wyatt JC, Liu JLY. Basic concepts in medical informatics. J Epidemiol Community Health. 2002;56(11):808–12. doi:10.1136/jech.56.11.808.

  15. US Centers for Medicare & Medicaid Services. EHR Incentive Programs. 2012. Accessed 3 Nov 2016.

  16. HIMSS Analytics. Electronic Medical Record Adoption Model. Accessed 3 Nov 2016.

  17. Burns F. Information for health. Leeds: Department of Health; 1998. Accessed 3 Nov 2016.

  18. Health Level 7. EHR Functional Profile. 2012. Accessed 3 Nov 2016.

  19. ISO. ISO/TR 20514:2005 Health informatics -- Electronic health record -- Definition, scope and context. 2005. Accessed 3 Nov 2016.

  20. The Commonwealth Fund. Electronic Health Records: An International Perspective on “Meaningful Use”. 2011.

  21. Morrison Z, Fernando B, Kalra D, Cresswell K, Robertson A, Hemmi A, et al. An Evaluation of Different Levels of Structuring Within the Clinical Record. 2012.

  22. Pizziferri L, Kittler AF, Volk LA, Honour MM, Gupta S, Wang S, et al. Primary care physician time utilization before and after implementation of an electronic health record: a time-motion study. J Biomed Inform. 2005;38(3):176–88.

  23. Overhage JM, Perkins S, Tierney WM, McDonald CJ. Controlled trial of direct physician order entry: effects on physicians' time utilization in ambulatory primary care internal medicine practices. J Am Med Inform Assoc. 2001;8(4):361–71.

  24. Munyisia EN, Yu P, Hailey D. Caregivers' time utilization before and after the introduction of an electronic nursing documentation system in a residential aged care facility. Methods Inf Med. 2013;52(5):403–10. doi:10.3414/me12-01-0024.

  25. Ammenwerth E, Spotl HP. The time needed for clinical documentation versus direct patient care. A work-sampling analysis of physicians' activities. Methods Inf Med. 2009;48(1):84–91.

  26. Shadish WR, Cook TD, Campbell DT. Experimental and quasi-experimental designs for generalized causal inference. New York: Houghton Mifflin Company; 2002.

  27. Clague JE, Reed PG, Barlow J, Rada R, Clarke M, Edwards RHT. Improving outpatient clinic efficiency using computer simulation. Int J Health Care Quality Assurance. 1997;10(5):197–201.

  28. Hajioff D, Birchall M. Medical students in ENT outpatient clinics: appointment times, patient satisfaction and student satisfaction. Med Educ. 1999;33(9):669–73.

  29. Partridge JW. Consultation time, workload, and problems for audit in outpatient clinics. Arch Dis Child. 1992;67(2):206–10.

  30. British Cardiovascular Society. Guidance on appropriate workload for consultant cardiologists. 2010.

  31. Royal College of Physicians. Consultant physicians working with patients. The duties, responsibilities and practice of physicians in general medicine and the specialties. London: RCP; 2005.

  32. Scott PJ, Briggs JS. Developing a theoretical model of clinician information usage propensity. Stud Health Technol Inform. 2009;150:605–9.

  33. Waterson P, Glenn Y, Eason K. Preparing the ground for the 'paperless hospital': a case study of medical records management in a UK outpatient services department. Int J Med Inform. 2012;81(2):114–29.

  34. Venning P, Durie A, Roland M, Roberts C, Leese B. Randomised controlled trial comparing cost effectiveness of general practitioners and nurse practitioners in primary care. BMJ. 2000;320(7241):1048–53.

  35. Deveugele M, Derese A, van den Brink-Muinen A, Bensing J, De Maeseneer J. Consultation length in general practice: cross sectional study in six European countries. BMJ. 2002;325(7362):472.

  36. Bender R, Lange S. Adjusting for multiple testing--when and how? J Clin Epidemiol. 2001;54(4):343–9.

  37. Laxmisan A, Hakimzada F, Sayan OR, Green RA, Zhang J, Patel VL. The multitasking clinician: decision-making and cognitive demand during and after team handoffs in emergency care. Int J Med Inform. 2007;76(11-12):801–11.

  38. Ly T, Korb-Wells CS, Sumpton D, Russo RR, Barnsley L. Nature and impact of interruptions on clinical workflow of medical residents in the inpatient setting. J Grad Med Educ. 2013;5(2):232–7.

  39. Friedberg M, Chen P, Van Busum K, Aunon F, Pham C, Caloyeras J, et al. Factors Affecting Physician Professional Satisfaction and Their Implications for Patient Care, Health Systems, and Health Policy. 2013. Accessed 3 Nov 2016.

  40. Lopetegui MA, Bai S, Yen PY, Lai A, Embi P, Payne PR. Inter-observer reliability assessments in time motion studies: the foundation for meaningful clinical workflow analysis. AMIA Annu Symp Proc. 2013;2013:889–96.

  41. Lopetegui M, Yen PY, Lai AM, Embi PJ, Payne PR. Time Capture Tool (TimeCaT): development of a comprehensive application to support data capture for Time Motion Studies. AMIA Annu Symp Proc. 2012;2012:596–605.

Acknowledgements

The authors thank the clinicians and patients whose experiences provided the data for this study. We thank Dr Paul Strike, Salisbury NHS Foundation Trust, for statistical advice on an earlier draft of the paper. We appreciate the helpful comments of the peer reviewers, which we believe have significantly improved the paper.

Authors’ contributions

PS, PC, IL and PW conceived and directed the study. SS guided the statistical analysis and presentation of results. PS drafted the paper and coordinated revisions, to which all co-authors contributed. All authors read and approved the final manuscript.

Competing interests

In October 2015, IL was appointed non-executive director of Immj Systems ltd, a new EHR vendor. The software was not involved in the study and at the time of the design and data collection the company did not exist.

Author information


Corresponding author

Correspondence to Philip J. Scott.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Scott, P.J., Curley, P.J., Williams, P.B. et al. Measuring the operational impact of digitized hospital records: a mixed methods study. BMC Med Inform Decis Mak 16, 143 (2016).
