Research article · Open access

Hospital characteristics associated with highly automated and usable clinical information systems in Texas, United States

Abstract



A hospital's clinical information system may require a specific environment in which to flourish. This environment is not yet well defined. We examined whether specific hospital characteristics are associated with highly automated and usable clinical information systems.


This was a cross-sectional survey of 125 urban hospitals in Texas, United States using the Clinical Information Technology Assessment Tool (CITAT), which measures a hospital's level of automation based on physician interactions with the information system. Physician responses were used to calculate a series of CITAT scores: automation and usability scores, four automation sub-domain scores, and an overall clinical information technology (CIT) score. A multivariable regression analysis was used to examine the relation between hospital characteristics and CITAT scores.


We received a sufficient number of physician responses at 69 hospitals (55% response rate). Teaching hospitals, hospitals with higher IT operating expenses (>$1 million annually) or IT capital expenses (>$75,000 annually), and hospitals with larger IT staff (≥10 full-time staff) had higher automation scores than hospitals that did not meet these criteria (p < 0.05 in all cases). These findings held after adjustment for bed size, total margin, and ownership (p < 0.05 in all cases). There were few significant associations between the hospital characteristics tested in this study and usability scores.


Academic affiliation and larger IT operating, capital, and staff budgets are associated with more highly automated clinical information systems.

Background


An emerging evidence base suggests that clinical information technologies, such as electronic medical records, computerized order entry, and electronic decision support, can improve the quality of care within the hospital environment [1, 2]. U.S. hospitals are rapidly trying to expand their capabilities in these areas, but informaticians have long recognized that effective information systems do not emerge fully formed, Athena-like, from the point of purchase. Examples of failure in design, implementation, and planning abound [3–6], and hospital systems, healthcare policy makers, and software developers are interested in how best to design and support these systems for the healthcare environment.

To flourish, an information system may require a specific blend of hospital or organizational characteristics in which to root [7–9]. The precise mix of this "nutrient environment" is not well defined. Attempts to characterize this environment have been challenged by a lack of standardized instruments that measure the degree to which a hospital information system is automated [10, 11]. A reliable measurement system must be constructed using a socio-technical view of inpatient medicine [12]. This view holds that the delivery and quality of clinical care is influenced by dynamic interactions between the social aspects of an organization, i.e., its policies, norms, and culture, and its technical routines, such as those imposed by an information system [12].

We previously developed a clinical information technology assessment tool (CITAT) that quantitatively measures a hospital's level of automation and usability based on a physician's daily interaction with their information system [13]. The instrument was designed and tested using a socio-technical view of inpatient clinical practice and has demonstrated reliability and validity [13, 14]. In this study, we examine the relation between specific organizational characteristics, i.e., the "nutrient environment," and the degree to which the hospital information system is automated and usable, as measured by scores on the CITAT.

We hypothesized that investment in the human resources that support information technologies, such as the size of a hospital's IT staff, would be associated with more usable clinical information systems. We also hypothesized that a hospital's automation score would be positively associated with bed size, ownership, financial strength, and teaching status. Urban hospitals that take care of underserved or minority populations in the United States, often labeled "safety net hospitals," frequently have fewer financial resources at their disposal. Some authors have suggested that current disparities in health care may be perpetuated if such hospitals are not assisted in the movement to digitization [11]. We hypothesized that urban safety net status would be negatively associated with automation and usability.


Methods

Study Design and Study Population

We conducted a cross-sectional study of urban hospitals in the state of Texas. We chose Texas because it contains one of the largest numbers and varieties of hospital organizations in the United States, spans several distinct metropolitan areas, and has diverse physician and patient populations. We sampled from 125 general, acute care hospitals located within 10 geographically dispersed metropolitan statistical areas (MSAs) in Texas (Abilene, Austin, Dallas, El Paso, Houston, Laredo, Lubbock, McAllen, San Angelo, and San Antonio). We excluded rural, pediatric, specialty, and long-term care facilities, as well as hospitals in the process of closing or merging with another facility. The Johns Hopkins University School of Medicine Institutional Review Board approved our research protocol.

Dependent Variables

The physician-based clinical information technology assessment tool was produced in eight steps according to established methods of survey development. These steps included: development of a conceptual model, literature review, content identification, item construction, pre-testing, item selection, and item re-classification. The CITAT instrument was further tested and validated in four diverse U.S. hospitals, and demonstrated discriminant validity, convergent validity, reliability, and precision [13]. The instrument received subsequent testing in a study of intensive care unit information systems [14].

The CITAT assesses a system's automation and usability. Automation represents the degree to which clinical information processes in the hospital are fully computerized and is divided into four distinct sub-domains: test results, notes & records, order entry, and a set of other sub-processes largely consisting of decision support [13]. To score highly on a given automation sub-domain, the CITAT requires three conditions to hold for a routine information practice: 1) the practice must be available as a fully computerized process; 2) the physician must know how to activate the computerized process; and 3) he or she must routinely choose the computerized process over other alternatives, such as writing a paper order or making a telephone call. Usability represents the degree to which information management is effective and well supported from a physician standpoint, regardless of whether a system is automated or manual. An overall measure, called the CIT score, represents an average of the automation and usability scores (the survey items can be obtained from the corresponding author).
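The three-condition rule can be made concrete with a small sketch. The data structure and percentage scoring below are illustrative assumptions, not the instrument's actual scoring algorithm:

```python
# Hypothetical sketch of the three-condition automation rule described above.
# Field names and the 0-100 percentage scoring are illustrative assumptions,
# not the CITAT's actual algorithm.
from dataclasses import dataclass

@dataclass
class ItemResponse:
    available_computerized: bool  # 1) practice exists as a computerized process
    physician_knows_how: bool     # 2) respondent knows how to activate it
    routinely_chosen: bool        # 3) chosen over manual alternatives

def item_is_automated(r: ItemResponse) -> bool:
    """An item counts as automated only when all three conditions hold."""
    return (r.available_computerized
            and r.physician_knows_how
            and r.routinely_chosen)

def subdomain_score(responses: list[ItemResponse]) -> float:
    """Percentage of a sub-domain's items meeting all three conditions."""
    if not responses:
        return 0.0
    return 100.0 * sum(item_is_automated(r) for r in responses) / len(responses)
```

Note that an item where the computerized process exists but is not routinely chosen (for example, because physicians still phone the laboratory) contributes nothing to the score, which is the sense in which the instrument measures use rather than mere acquisition.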

Using the American Medical Association (AMA) master file, we selected a 50% random sample of Texas physicians from those indicated: 1) to have practice locations in the designated MSAs; and 2) to be practicing internal medicine (including 9 sub-specialties), general surgery (including 10 sub-specialties), or family practice (n = 7,432). We mailed surveys to each of the selected physicians between December 2005 and May 2006. We asked each physician to indicate whether they practiced inpatient medicine and, if so, to select the hospital in which they provided the majority of their inpatient care. To be eligible, physicians had to actively practice in one of the 125 hospitals selected for this study. As guided by prior work, hospitals for which we did not receive five randomly sampled physician responses were eliminated from further analysis because of the possibility of unstable estimates [14].

The CITAT contains additional items that elicit the background characteristics of the respondents. This included information on the number of inpatient hours provided by the physician in a given week and the number of years practiced at the designated hospital. In addition, computer familiarity and attitude toward computers were assessed through three separate items that were used in previous deployments of the CITAT. Age, sex, specialty, and year of medical school graduation were obtained through the AMA master file. This information was used to assess potential relationships between IT scores and respondent characteristics that might be required for statistical adjustment.

Independent Variables

Hospital characteristics were obtained from the 2005 survey of the Texas Hospital Association and the American Hospital Association (AHA) annual survey of Texas hospitals. For each hospital in our sample, we obtained the ownership status (public, private/non-profit, and private/for-profit), bed size, total margin, IT operating expense, IT capital expense, and IT staff size. Hospitals were characterized as teaching if they possess a Council of Teaching Hospitals (COTH) status designation. Safety net hospitals were defined using previously established financial classifications [15].

Statistical Analysis

For each respondent, an automation score, usability score, and four separate sub-domain scores were calculated using methodology previously described [13, 14]. Each hospital was then assigned the median value of the scores derived from respondents affiliated with that hospital. Hospital characteristics were dichotomized based on the median value for hospitals in the target sample.
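The aggregation and dichotomization steps can be sketched as follows; the hospital identifiers and values are invented for illustration:

```python
# Sketch of the hospital-level scoring described above: each hospital receives
# the median of its respondents' scores, and each hospital characteristic is
# dichotomized at the median value for the sample. All values are invented.
from statistics import median

physician_scores = {  # hospital id -> automation scores from its respondents
    "H1": [12.0, 18.3, 25.1, 9.4, 20.0],
    "H2": [40.2, 33.0, 51.7, 38.5, 45.9],
}
hospital_scores = {h: median(s) for h, s in physician_scores.items()}

it_staff = {"H1": 4, "H2": 22}  # a hospital characteristic (full-time IT staff)
cutpoint = median(it_staff.values())
large_it_staff = {h: n >= cutpoint for h, n in it_staff.items()}
```

Using the median rather than the mean at both steps makes the hospital-level values robust to a single outlying respondent or hospital.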

The major objective of our analysis was to examine the relationship between hospital characteristics and CITAT scores, after identifying and adjusting for potential confounders. We first examined whether any respondent characteristics, such as physician specialty, age, computer orientation, computer sophistication, years of practice at the hospital, or number of hours delivering care at the hospital, were independently associated with both the dependent variables (CIT, automation, or usability) and the independent variables (hospital characteristics). Student's t test and analysis of variance revealed no such simultaneous relationships, so respondent characteristics were eliminated as potential confounders. This finding was consistent with the results of previous work [13].

We separately compared the mean CIT, automation, and usability scores for each hospital characteristic using either Student's t test or analysis of variance. This crude analysis of hospital characteristics was a means to identify potential confounders that would require adjustment in a multivariable regression. We then tested the presence, strength, and independence of associations between each of the hospital characteristics and the CIT, automation, and usability scores using linear regression models, adjusting for the percentage of complete responses and accounting for possible within-hospital clustering of physician responses by using robust variance techniques. Three variables (bed size, ownership, and total margin) were highly correlated with other hospital characteristics (control, operating margin, and debt service coverage) and were also associated with either the automation or usability scores; we included these three variables in each of the multivariable regression models examining the relationship between hospital characteristics and CITAT scores. To normalize IT operating expenses, IT capital expenses, and IT staff for hospital size, we performed a sensitivity analysis examining the relationship of each of these independent variables, divided by bed size, with the dependent variables. Results were considered statistically significant if the p value was ≤ 0.05. Stata version 8.2 (StataCorp, College Station, TX) was used for all analyses.
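The size normalization in the sensitivity analysis is simply each spending variable divided by bed size. A minimal sketch with invented figures shows how a large raw difference in IT spending can vanish on a per-bed basis:

```python
# Sketch of the sensitivity analysis's size normalization: each IT spending
# variable is divided by bed size. All figures below are invented for
# illustration, not data from the study.
beds = {"H1": 150, "H2": 600}
it_operating_expense = {"H1": 450_000.0, "H2": 1_800_000.0}  # dollars/year

# Raw spending differs fourfold, but per-bed spending is identical here,
# which is how an association with raw spending can diminish or disappear
# once hospital size is accounted for.
expense_per_bed = {h: it_operating_expense[h] / beds[h] for h in beds}
```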


Results

Response Rate and Characteristics of Study Hospitals

We received five or more physician responses for 69 of the 125 targeted hospitals (55% response rate). Response rates were generally robust across hospital categories (Table 1); we had excellent response rates among teaching hospitals, safety net hospitals, and hospitals with large IT staffs (83%, 87%, and 74%, respectively). Hospitals with smaller bed size, lower total margin, lower operating margin, or smaller IT staff had lower response rates (41–45%).

Table 1 Characteristics of Responding Hospitals

Responding physicians were older (average age, 50 years) than non-responding physicians (average age, 47 years). The percentage of participating physicians who were male (81%) was not significantly different from the percentage of male physicians who did not participate (78%). Of all participating physicians, 43% specialized in internal medicine, 36% in surgery, and 22% in family practice. These proportions were similar to those among non-participating physicians. Responding physicians were asked to indicate how many hours a week they spend delivering inpatient care at their hospital. The percentages of respondents who practiced <10 hours per week, 11–20 hours per week, and 21–40 hours per week were similar (24%, 26%, and 20%, respectively). Slightly fewer respondents reported working 41–60 hours per week (14%) or >60 hours per week (16%). As would be expected, the proportion working >40 hours per week was higher among teaching hospitals (47% vs. 24% in non-teaching hospitals) and hospitals with larger bed size (37% vs. 19%).

Distribution of CITAT Scores

Overall CITAT scores were low in this sample of hospitals (Figure 1). The median automation score was 18.3 (out of a total of 100 points), with a floor at 8.2 points. The usability score was higher than both the CIT and automation scores, with a median of 40.6. The CIT score, an average of the automation and usability scores, was normally distributed and also exhibited low values (median, 29.1). Most hospitals scored poorly on order entry and decision support, both of which had floors at 0 points and median values of 11.7 and 5.3, respectively. Notes & records and test results had the broadest distributions, with higher median values of 28.7 and 53.4, respectively. The median total margin for hospitals in this study was 0.03; both safety net hospitals and hospitals with a total margin above this median followed distributions similar to those of the other hospitals (Figure 1).

Figure 1

Distribution of CITAT scores: a) CIT; b) automation; c) usability; d) order entry; e) notes & records; f) test results; and g) decision support for all hospitals, hospitals whose total margin exceeds the median for all hospitals (≥ 0.03), and safety net hospitals.

Relationship between Hospital Characteristics and CITAT Scores

CITAT scores were related to several hospital characteristics. In the unadjusted analysis of mean scores, automation scores differed significantly across multiple hospital characteristics (Tables 2 and 3). Teaching hospitals and hospitals with larger bed sizes, higher IT operating expenses, and larger IT staff demonstrated higher mean CIT and automation scores than their counterparts (p < 0.05 in all cases). Hospitals with higher IT operating expenses, higher IT capital expenses, and larger IT staff also had greater mean automation scores (p < 0.05 in all cases).

Table 2 Automation, Usability and Clinical Information Technology (CIT) Scores by Hospital Characteristic
Table 3 Sub-Domain Scores by Hospital Characteristics

In the adjusted multivariable model, several of these associations persisted (Tables 2 and 3). Teaching hospitals had higher CIT scores (4.6 points higher, p = 0.002) than non-teaching hospitals. Hospitals with higher IT operating expenses, higher IT capital expenses, and larger IT staff continued to have higher automation scores (p < 0.05 in all cases). In contrast, hospitals with larger bed size or higher total margins did not have higher CIT, automation, or usability scores in the adjusted models. In addition, adjusted scores for urban safety net hospitals were not lower than those for non-safety net hospitals in any category. In the adjusted analysis, type of ownership (church or not-for-profit, government, or for-profit) was not related to CIT, automation, or usability scores.

The adjusted analyses were repeated for each of the automation and usability sub-domains. Automation of test results was statistically significantly higher for teaching hospitals, not-for-profit hospitals, and hospitals with larger bed size, greater total margin, and higher IT capital expenses (p < 0.05 in all cases). Teaching hospitals also scored more highly on the decision support and user support sub-domains (p < 0.05 in both cases). Hospitals with a lower average age of plant (<10 years) and larger IT operating expenses, IT capital expenses, and IT staff had higher order entry scores (p < 0.05 in all cases).

In a separate sensitivity analysis, we divided each of the IT spending variables (IT operating expense, IT capital expense, and IT staff) by bed size to normalize these variables for the organization's size. We found no relationship between the normalized IT expenditure variables and CITAT scores, indicating that positive associations in the original analysis (in particular, higher automation scores associated with higher IT expenditures) diminished after accounting for bed size.

Relationship between Automation and Usability

Every 10-point increase in the automation score was associated with a usability score 3.8 points higher (Figure 2, p < 0.01). The magnitude and significance of this relationship held after adjustment for bed size, total margin, and ownership status (3.5 points, p < 0.01).
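As a hedged illustration of the unadjusted slope (the study's models also adjusted for bed size, total margin, and ownership, and used robust variance), the fit below runs on synthetic hospital-level data constructed to have exactly the reported gradient of 3.8 usability points per 10 automation points:

```python
# Ordinary-least-squares slope on synthetic data illustrating the reported
# automation-usability gradient. The five data points are invented; only the
# slope (0.38, i.e., 3.8 usability points per 10 automation points) mirrors
# the unadjusted result reported in the text.
def ols_slope(x, y):
    """Slope of the least-squares line through the (x, y) pairs."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

automation = [10.0, 20.0, 30.0, 40.0, 50.0]
usability = [24.0, 27.8, 31.6, 35.4, 39.2]  # constructed as 20.2 + 0.38 * x

slope = ols_slope(automation, usability)
per_10_point_gain = 10 * slope  # ~3.8 usability points per 10 automation points
```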

Figure 2

Scatter plot displaying the relationship between automation and usability scores by hospital.


Discussion

Many studies evaluating the adoption of CIT treat implementation as a binary event: a technology such as computerized provider order entry (CPOE) is introduced into a group of hospitals, and these hospitals are then compared to hospitals without CPOE. This approach makes it difficult to generalize results, because technology implementations are ongoing processes with no distinct end point. The same CPOE system is likely to have different performance characteristics at 2 years post-implementation than at 6 months post-implementation, partly as a result of dynamic changes involving both the technologic and the organizational processes. Furthermore, the definition of an information technology is rarely standardized from the perspective of the respondent; what is labeled CPOE at one institution may differ significantly in scope, maturation, capability, and performance at another. It is therefore difficult to apply results based on such simple labels across different hospitals.

The Clinical Information Technology Assessment Tool (CITAT) examines information technology capabilities in the hospital within the context of the socio-technical environment of the organization. This view holds that successful IT implementations jointly optimize both the technology and the social aspects of an organization, and that one aspect cannot be understood without knowledge of the other [12, 16]. The CITAT was designed with these concepts in mind and avoids simple terminological definitions of hospital IT that may not account for the usage, maturation, and capabilities of the information system and the organizational context in which it operates. Instead, the CITAT asks physicians whether a host of specific clinical activities in the hospital are routinely and preferentially conducted using computers. If there is insufficient user training, if the technology itself is unfriendly, or if physician and organizational routines are not aligned with the technology, the CITAT scores for that hospital will be low, regardless of the cost or scope of the technologic acquisition. This approach standardizes the IT variable across study hospitals and awards the highest scores to hospitals in which the technology, organizational routines, and clinical users are mutually reinforcing, a fundamental feature of a highly optimized socio-technical environment [12, 17].

In exploring which hospital characteristics are most associated with highly automated and usable clinical information systems as measured by the CITAT, we found that hospitals with larger information technology staff, operating budgets, and capital expenses had statistically significantly higher automation, test results, and order entry scores. Spending on these factors appears to be more relevant than other structural factors, such as bed size, ownership status, and total margin, and the associations persisted after adjustment for those factors. In a separate sensitivity analysis, however, the associations diminished or disappeared after we normalized each spending factor for hospital size. Although bed size by itself was not related to higher automation scores, these results suggest that larger hospitals may enjoy economies of scale with respect to the high fixed costs of large IT projects. Achieving this level of cost-effectiveness in IT spending may be more challenging for smaller hospitals. Likewise, teaching hospitals, perhaps because of their history of innovation and experimentation, appear to embrace information technologies sooner than other types of hospitals; they scored higher on the CIT score and on multiple automation and usability sub-domains. As with other innovations in medicine, it is possible that academic physicians advocate for newer information technologies, increasing the speed of their adoption in these organizations.

Contrary to our expectations, a number of hospital characteristics do not appear to be related to CITAT scores. Ownership status was not significantly associated with any of the IT variables, with the notable exception of test results; within this sub-domain, not-for-profit hospitals scored 20 points higher than for-profit hospitals. Historically, test results have been among the earliest components of the information system to be automated, and it is possible that not-for-profit hospitals, which constitute the more traditional form of hospital organization, have more experience developing this component of their information systems [18]. Although significant attention has been paid to the promise of computerized order entry systems to reduce medical errors, starting with the IOM reports in the 1990s, fewer hospitals have successfully installed such systems. We found that hospitals with an older average age of plant (i.e., building) scored 8 points lower on the order entry sub-domain. One might suspect that newer hospital facilities are more easily equipped with computerized order entry systems than hospitals with older physical facilities, as these results suggest. Perhaps more important than the age of the building is the newness of its technological infrastructure. The latter may not necessarily correlate with building age, though it could be captured in the age-of-plant variable and may explain the findings we observe.

Historically, urban safety net hospitals in the United States have been least able to meet the challenges associated with acquiring new medical technology [19]. These hospitals balance multiple claims on their resources, perhaps reducing their capacity to invest in the information technologies that support healthcare. Our analysis suggests, however, that urban safety net hospitals in Texas do not significantly trail their peers. Because of their size and scale, these hospitals may achieve IT parity: they can afford the fixed costs necessary for the IT infrastructure and have chosen to pursue this course. In addition, all of the safety net hospitals in this sample are major teaching hospitals, so it is difficult to differentiate between the effects of teaching status and safety net status.

According to recent estimates, adoption of clinical information technologies remains low but follows certain patterns [18, 20]. Our findings are consistent with these trends. Historically, the computerized display of laboratory results has been among the first aspects to be automated [20]. In the last decade, digitization of radiological images has also increased [20]. Both of these components fall under the test results sub-domain, which in our study showed the greatest degree of adoption. Though some hospitals may be experimenting with computerized order entry and decision support, these efforts have not yet translated into systems that physicians widely use, as indicated by the low scores in these areas. Electronic decision support is perhaps the most challenging component to implement, since it typically requires the other components to be in place first. In this study, notes & records scores were higher than scores for order entry and decision support, consistent with this theory and with other studies [18].

Usability items in the CITAT do not presuppose the use of technology. The usability domain is constructed to measure the ease, effectiveness, and support of the information system regardless of the technologies in place [13]. As an example of the types of questions in this domain, one of the survey items asks whether physicians are able to obtain adequate computer support in less than 2 minutes. As might be expected, we found that usability scores were generally higher than automation scores. It is plausible that thoughtfully planned paper-based systems could produce usability scores higher than, or equal to, those of systems that employ poorly designed electronic processes. However, consistent with two previous studies, we found that higher automation scores correlated with higher usability scores, suggesting that digitization may be necessary to produce usable information systems. Alternatively, these results may indicate that physicians' expectations are changing; electronic processes may be perceived as more usable than non-electronic processes, independent of their overall merits, and therefore rated more highly. Usability of the information system, an often elusive goal for hospital systems, was not specifically associated with any of the hospital characteristics we measured, with the exception of teaching status: hospitals with a teaching affiliation had higher user support scores than non-teaching hospitals. Our results suggest that usability may depend more on factors we did not measure as part of our set of hospital characteristics; these may include the quality and direction of leadership at the institution, the focus on quality improvement, and attention to human factors engineering in designing the information system. This will need to be examined further in future studies.

This study has important limitations. Our analysis explores a number of hospital characteristics, raising issues of multiple testing and increasing the probability of some false-positive relationships. As with all cross-sectional studies, positive associations will need to be confirmed in repeated studies. A Bonferroni correction for the number of tests performed would have eliminated many of the significant relationships we report; however, the Bonferroni method is itself controversial and is argued by some to be too severe [21]. The purpose of this study was to find potential relationships to explore further, given that the explanatory power of a cross-sectional study may be weak despite the use of a well-validated instrument. Appropriate assessment of information technology requires multiple methods. Survey-based methods are one important approach, but other methods, such as electronic queries, time-motion studies, and qualitative analyses, are needed to arrive at a complete portrait of an information system. Furthermore, this study attaches importance to higher scores on the CITAT as a measure of the strength of the socio-technical environment at the hospital. However, we do not yet know whether, and to what degree, CITAT scores correlate with important clinical and financial outcomes. These relationships will need to be assessed in the future.


Conclusions

This study explores the relationship between hospital characteristics and information system characteristics among a diverse set of urban hospitals in the United States. Our findings suggest that hospitals with an academic affiliation, and those that spend significantly on IT operations, capital, and staff, achieve higher automation scores. Fewer of the hospital characteristics we measured were meaningfully associated with usability scores. Further studies, using a variety of methods, should examine what organizational factors, such as policies, norms, and culture, could explain these relationships.


References

  1. Bates DW: The quality case for information technology in healthcare. BMC Med Inform Decis Mak. 2002, 2: 7. 10.1186/1472-6947-2-7.

  2. Bates DW, Gawande AA: Improving safety with information technology. N Engl J Med. 2003, 348 (25): 2526-2534. 10.1056/NEJMsa020847.

  3. Berg M: Implementing information systems in health care organizations: myths and challenges. Int J Med Inform. 2001, 64 (2–3): 143-156. 10.1016/S1386-5056(01)00200-3.

  4. Kilbridge P: Computer crash–lessons from a system failure. N Engl J Med. 2003, 348 (10): 881-882. 10.1056/NEJMp030010.

  5. Ash JS, Berg M, Coiera E: Some unintended consequences of information technology in health care: the nature of patient care information system-related errors. J Am Med Inform Assoc. 2004, 11 (2): 104-112. 10.1197/jamia.M1471.

  6. Southon FC, Sauer C, Grant CN: Information technology in complex health services: organizational impediments to successful technology transfer and diffusion. J Am Med Inform Assoc. 1997, 4 (2): 112-124.

  7. Ash J: Organizational factors that influence information technology diffusion in academic health sciences centers. J Am Med Inform Assoc. 1997, 4 (2): 102-111.

  8. Anderson JG: Clearing the way for Physicians' Use of Clinical Information Systems. Communications of the ACM. 1997, 40: 83-90. 10.1145/257874.257895.

  9. van der Meijden MJ, Tange HJ, Troost J, Hasman A: Determinants of success of inpatient clinical information systems: a literature review. J Am Med Inform Assoc. 2003, 10 (3): 235-243. 10.1197/jamia.M1094.

  10. Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, Roth E, Morton SC, Shekelle PG: Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006, 144 (10): 742-752.

  11. Jha AK, Ferris TG, Donelan K, DesRoches C, Shields A, Rosenbaum S, Blumenthal D: How common are electronic health records in the United States? A summary of the evidence. Health Aff (Millwood). 2006, 25 (6): w496-507. 10.1377/hlthaff.25.w496.

  12. Wears RL, Berg M: Computer technology and clinical work: still waiting for Godot. JAMA. 2005, 293 (10): 1261-1263. 10.1001/jama.293.10.1261.

  13. Amarasingham R, Diener-West M, Weiner M, Lehmann H, Herbers JE, Powe NR: Clinical information technology capabilities in four U.S. hospitals: testing a new structural performance measure. Med Care. 2006, 44 (3): 216-224. 10.1097/01.mlr.0000199648.06513.22.

  14. Amarasingham R, Pronovost PJ, Diener-West M, Goeschel C, Dorman T, Thiemann DR, Powe NR: Measuring clinical information technology in the ICU setting: application in a quality improvement collaborative. J Am Med Inform Assoc. 2007, 14 (3): 288-294. 10.1197/jamia.M2262.

  15. Zuckerman S, Bazzoli G, Davidoff A, LoSasso A: How did safety-net hospitals cope in the 1990s?. Health Aff (Millwood). 2001, 20 (4): 159-168. 10.1377/hlthaff.20.4.159.

  16. Berg M, Aarts J, van der Lei J: ICT in health care: sociotechnical approaches. Methods Inf Med. 2003, 42 (4): 297-301.

  17. Aarts J, Doorewaard H, Berg M: Understanding implementation: the case of a computerized physician order entry system in a large Dutch university medical center. J Am Med Inform Assoc. 2004, 11 (3): 207-216. 10.1197/jamia.M1372.

  18. Poon EG, Jha AK, Christino M, Honour MM, Fernandopulle R, Middleton B, Newhouse J, Leape L, Bates DW, Blumenthal D: Assessing the level of healthcare information technology adoption in the United States: a snapshot. BMC Med Inform Decis Mak. 2006, 6: 1-10.1186/1472-6947-6-1.

  19. NAPH: Capital Investments and the Safety Net. NAPH Issue Brief. 2003, Washington, DC

  20. Ash JS, Bates DW: Factors and forces affecting EHR system adoption: report of a 2004 ACMI discussion. J Am Med Inform Assoc. 2005, 12 (1): 8-12. 10.1197/jamia.M1684.

  21. Perneger TV: What's wrong with Bonferroni adjustments. BMJ. 1998, 316 (7139): 1236-1238.

Acknowledgements

The authors wish to thank the Commonwealth Fund, NY, for their generous funding support of this study. The Commonwealth Fund was not involved in the design and conduct of the study; the collection, management, analysis, or interpretation of the data; or the preparation, review, or approval of the manuscript. The authors also wish to thank our partners in this study: the Texas Hospital Association, the Texas Health Institute, the Texas Medical Association, and the Texas Department of State Health Services.

Author information


Corresponding author

Correspondence to Ruben Amarasingham.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

RA and NRP conceived of and designed the study. RA and ACC disseminated the survey and acquired the data. RA, NRP, LP, MDW, and DJG participated in the analysis and interpretation of the data. RA, NRP, and ACC drafted the manuscript; RA, NRP, LP, MDW, and DJG made critical revisions to the manuscript. All authors participated in the statistical analysis and read and gave final approval to the manuscript. RA and NRP both had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. None of the authors have any conflicts of interest, including specific financial interests, relationships, or affiliations relevant to the subject matter or materials discussed in the manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Amarasingham, R., Diener-West, M., Plantinga, L. et al. Hospital characteristics associated with highly automated and usable clinical information systems in Texas, United States. BMC Med Inform Decis Mak 8, 39 (2008).
