Selecting information technology for physicians' practices: a cross-sectional study
© Eden; licensee BioMed Central Ltd. 2002
Received: 27 December 2001
Accepted: 05 April 2002
Published: 05 April 2002
Many physicians are transitioning from paper to electronic formats for billing, scheduling, medical charts, communications, etc. The primary objective of this research was to identify the relationship (if any) between the software selection process and the office staff's perceptions of the software's impact on practice activities.
A telephone survey was conducted with office representatives of 407 physician practices in Oregon who had purchased information technology. The respondents, usually office managers, answered scripted questions about their selection process and their perceptions of the software after implementation.
Multiple logistic regression revealed that software type, selection steps, and certain factors influencing the purchase were related to whether the respondents felt the software improved the scheduling and financial analysis practice activities. Specifically, practices that selected electronic medical record or practice management software, that made software comparisons, or that considered prior user testimony as important were more likely to have perceived improvements in the scheduling process than were other practices. Practices that considered value important, that did not consider compatibility important, that selected managed care software, that spent less than $10,000, or that provided learning time (most dramatic increase in odds ratio, 8.2) during implementation were more likely to perceive that the software had improved the financial analysis process than were other practices.
Perhaps one of the most important predictors of improvement was providing learning time during implementation, particularly when the software involves several practice activities. Despite this importance, less than half of the practices reported performing this step.
Health care providers compete for managed care contracts based on cost-effectiveness and quality of care [1–4]. Information technology (IT) provides a cost-effective way to document productivity, performance measures, cost, and quality of care. Since IT has dropped in cost over time, physician practices are now turning to it to meet these needs. Information technology for this study is defined as computer software used to store, transport, or communicate information [2, 5–7].
The health care organizations that succeed in the 21st century will be those that improve quality and reduce cost. These competing objectives most likely will be reached through improved handling of information [2, 8, 9]. The Committee on Quality of Health Care in America reported that most clinical information remains in paper form. This committee made several recommendations for improving quality, including moving clinical information to an electronic format by the end of the decade.
Information technology selection in health care has often been performed in a rather informal way, resulting in the purchase of "white elephants". The systems may not perform as planned and may cause additional work for medical staff. The systems are often purchased or developed in pieces without consideration of the overall business strategy.
To date, few publications have documented the selection process and the resulting impact of the IT on the health care organization. Most papers give anecdotal descriptions, often by vendors, but lack client perceptions of the information system's value [1, 2, 7, 11–14]. Even at the hospital level, only a few client perceptions of IT adoption have been reported [15–19]. Fewer papers still examine IT selections within physician practices [3, 20]. However, many physicians are transitioning from paper to electronic formats for billing records, medical charts, etc. This study aims to understand the process for selecting IT for physicians' practices and the perceptions of the IT after it is implemented. The primary objective of this research was to identify the relationship (if any) between the IT selection process and the office staff's perceptions of the IT's impact on practice activities.
The telephone survey was conducted with 407 physician practices in Oregon. The survey elements were based on the literature review and on the feedback from the expert panel. The survey addressed the following descriptive research questions:
Q1: Who selects IT for a physician practice (e.g., administrators, clinicians, computer specialists)?
Q2: What selection steps are used?
Q3: What factors influence the purchase?
Q4: Which IT features are selected?
Q5: Who (within the practice) customizes the IT?
Q6: Is time given to learn the IT?
Q7: What are the clinical and office staff members' perceptions of this IT's impact on several office activities (e.g., scheduling, communication, quality reporting)?
The design of the telephone survey was reviewed by the Human Subjects Research Review Committee at Portland State University.
Providence Health System in Portland, Oregon provided a database of practices (n = 933) for this study. These practices all served Providence Health System in some capacity – e.g., as primary care physicians or specialists. Eligible practices had acquired software within the past five years but not within the past six months. Practices with software older than five years were disqualified because it was unlikely that the decision makers (if present) would recall the details of the selection process. Practices with software selected within the last six months were dropped because new software often requires a learning time period. The original sample of 933 contained 70 practices that had no computers and 35 that had purchased software only in the past six months or more than five years ago. In total, 11.1% of the original sample were excluded.
Description of respondents and participating practices
Frequency (n = 399)
Role in practice
Administrator/office manager, finance manager, etc.
Billing or scheduling staff
Physician, physician's assistant or nurse practitioner
Other staff members
Information system managers
Nurses and medical assistants
Type of practice
Primary care and various specialties
More than 10 practitioners
Health system owned
Second interviews were gathered for 189 of the 407 responding practices. Since almost half of the responding offices represented single practitioners, many of these smaller offices had only one eligible participant.
The survey questions were developed based on the literature review and discussions with an expert panel. Since many of the respondents were not familiar with technical IT terms, care was taken to present the survey in a "respondent friendly" format.
Thirteen college student interviewers and two supervisors conducted the interviews using a telephone interviewing software package, Computer Assisted Survey Execution System. A program was written to provide the interviewers with precise dialogue, questions, and precoded responses. As the interview progressed, the interviewer entered the responses into a personal computer.
Since the study objective included capturing the perceived impacts of IT, we attempted to record perceptions from two representatives from each practice: the decision maker and a primary user (see Additional File 1: "Physician Practice Software Telephone Survey, Dialog and Questions"). The initial interview that included questions related to the selection process and perceived impacts of the IT lasted approximately 15–25 minutes. The respondent was asked to describe a recent IT purchase (at least six months old). For each practice, the respondent indicated whether a person in a specific role – e.g., an administrator – was involved or not involved in selection, and involved or not involved in software customization. Customization in this study referred to providing input to the software vendor for writing software specific to the practice.
During the interview we read the respondents a list of selection steps. For each step, the respondent answered "yes" or "no" as to whether it was performed. The respondents were then read several potential factors that might have influenced the purchase. For each one they rated the statement on a 1-to-6 scale of importance (ranging from "no importance" to "very high importance"). Finally, we asked the respondents to react to 12 statements describing potential impacts of the IT on selected practice activities. The statements were intentionally not grouped by any particular theme. The respondents rated each impact statement on a 1-to-5 scale of agreement ("strongly disagree", "slightly disagree", "neither agree nor disagree", "slightly agree", "strongly agree") or selected "not applicable."
The second interview with a primary user of the software included mainly the perceived impact questions, and lasted 7–10 minutes. At the completion of the initial interview, each respondent was offered a summary of the results.
The data from all interviews were first descriptively evaluated, primarily by computing frequencies of responses for each question. Factor analysis (principal components) revealed four latent factors related to the respondent's perceived impacts of the IT on four practice activities: scheduling, financial analysis, communication, and medical documentation. Therefore, four subscales were created. The scheduling, financial analysis, and communication subscales each included two items, and the medical documentation subscale included three items. Responses of "not applicable" were coded as missing. For each subscale the mean of the items was computed.
Diagnostic plots of the four practice activity subscales suggested that an explanatory model might be best approached using logistic regression, which relaxes the assumption of normality. The four subscales were recoded to dichotomous variables corresponding to agree or not agree. If the mean score (of 2–3 impact statements) for a practice activity was greater than 3.0, the respondent was scored as "1" for agree. If the mean score for a practice activity was 3.0 ("neither agree nor disagree") or less, the respondent was scored as "0" for not agree. Each of the four practice activity subscales became the dependent variable in a predictive model. The independent variables entered into the models included the demographic and selection variables.
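As an illustrative sketch of this scoring step (the item names and responses below are invented, not from the survey), the subscale mean and the agree/not-agree dichotomization could be computed as:

```python
import numpy as np
import pandas as pd

# Illustrative responses to two scheduling impact items on the 1-to-5
# agreement scale; np.nan stands in for "not applicable" (coded as missing).
responses = pd.DataFrame({
    "sched_improved": [5, 2, np.nan, 4],
    "referral_improved": [4, 3, np.nan, np.nan],
})

# Subscale score: mean of the available (non-missing) items per respondent.
subscale = responses.mean(axis=1, skipna=True)

# Dichotomize: mean > 3.0 -> 1 ("agree"), mean <= 3.0 -> 0 ("not agree").
# Respondents with no relevant items remain missing and are excluded.
agree = (subscale > 3.0).astype(int).where(subscale.notna())

print(agree.tolist())
```

Respondents who rated every item "not applicable" stay missing rather than being forced into the "not agree" group, matching the study's exclusion of non-relevant responses.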
Multiple logistic regression
We attempted four predictive models, one for each of the newly created dichotomous subscales. Only respondents who found the impact statements relevant were included in the predictive models. Multiple logistic regression revealed relationships between the selection process and the perceptions related to the scheduling, financial analysis, and communication processes. Variables that achieved a significance level of p < .05 were retained in the models. For the perceptions related to medical documentation, no significant selection variables survived the analysis. This was most likely due to the small number of practices with electronic medical records (EMRs) (n = 89) and to the aggregation of all EMRs regardless of type and number of functions. It is also possible that the decision to purchase an EMR is often made outside the practice – e.g., a large health system offers EMRs to the practices. For 11 of the 89 practices that had EMRs, the decision was made by a large health system. Data from these practices were not included in the predictive models, reducing the number of available practices with EMRs to 78.
A summary of the models is presented in this paper. The complete analysis and models are available elsewhere. The predictive models were built using a model building data set (299 randomly selected interviews). The models were then tested with a testing data set (the remaining 100 interviews). One hundred interviews were needed to ensure adequate statistical power. As a check for cross-validation, the accuracy with which the models predicted the perceived impact subscale values using the model building data set was compared to the accuracy achieved with the testing data set. Using the parameters established with the model building data set, agreement (or not agreement) to a perceived impact subscale was predicted for the testing data set.
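As a sketch of this build-and-test check (using synthetic stand-in data and scikit-learn; the study's actual predictors and responses are not reproduced here), the accuracy comparison might look like:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for 399 practices: five binary selection variables
# and a dichotomized perceived-impact subscale (1 = agree, 0 = not agree).
X = rng.integers(0, 2, size=(399, 5)).astype(float)
logit = -0.5 + X @ np.array([1.4, 1.4, 0.5, 0.3, -1.6])
y = (rng.random(399) < 1 / (1 + np.exp(-logit))).astype(int)

# Model-building set of 299 interviews and testing set of 100, as in the study.
X_build, X_test, y_build, y_test = train_test_split(
    X, y, train_size=299, test_size=100, random_state=0)

model = LogisticRegression().fit(X_build, y_build)

# Compare accuracy on the model-building data with the held-out testing data;
# a large drop in accuracy would suggest the model does not generalize.
build_acc = accuracy_score(y_build, model.predict(X_build))
test_acc = accuracy_score(y_test, model.predict(X_test))
print(f"building: {build_acc:.0%}, testing: {test_acc:.0%}")
```

The side-by-side accuracies play the same role as the building-vs-testing columns in the table below: they flag overfitting when the testing accuracy falls well below the building accuracy.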
Model building data accuracy vs. testing data accuracy
Scheduling model: 73% (n = 136) vs. 65% (n = 43)
Financial analysis model: 86% (n = 166) vs. 73% (n = 56)
Communication model: 90% (n = 89) vs. 66% (n = 35)
Once the analysis was completed, the expert panel was reconvened to provide insight into interpreting the results. In the sections that follow, the descriptive results, the comparison of the decision maker vs. user, and each cross-validated model are summarized and discussed.
Results and Discussion
Who selects (Q1)? And who customizes (Q5)? (At least one of the following)
Selection Frequency (n = 399)
Customization Frequency (n = 399)
Administrators: office manager, financial manager, or medical director
Clinical staff members: a physician, physician's assistant, nurse practitioner, nurse or medical technicians
Computer consultant from outside the practice
Office staff members: billing clerk, scheduler, receptionist, or secretary
Representative from: health system, insurance company or patients
Computer specialist within the practice
What selection steps are used (Q2)?
Frequency (n = 399)
Performed cost comparisons
Viewed software demonstration
Issued an RFP (Request for Proposal) or RFI (Request for Information)
Compared software options with the best in the field
Conducted prior user interviews
Performed a needs assessment
Developed selection criteria
Reviewed your long term business plan
Made a site visit
Developed a decision analysis
Formed a selection committee
What factors influence the purchase (Q3)?
Frequency (n = 399) Rated "high or very high importance"
The software appeared easy to use.
Software appeared to improve one or more of the business processes in the practice.
The software provided the most value for cost.
The software would help the practice perform processes needed to reach our long term business strategy.
The vendor had many sites and was responsive to our needs during the selection process.
There were strong testimonies from prior users.
The software was already in use by other sites affiliated with this practice.
Software was compatible with existing systems in the practice.
Eighty percent or more of the practices performed cost comparisons and/or viewed software demonstrations. The frequencies for the steps the practices took in selecting software are depicted in descending order in Table 3.
Seventy percent or more of the practices stated that "ease of use," "improving a business process," and "most value for cost" were important factors influencing the purchase (Table 3). The frequency of factors receiving either "high" or "very high importance" is also presented in descending order in Table 3.
Details of IT
Frequency (n = 399)
Given to the practice ($0)
Less than $10,000
More than $50,000
Commercial package (no customization)
Commercial package + customization
Completely custom package
Number of users
Only 1 user
More than five users
Computer activities: Which IT features are selected (Q4)?
Frequency (n = 399)
Electronic Medical Record
Access and complete patient records using computerized patient records
Track incoming and outgoing referrals
Track patient enrollment
Statistical reporting on utilization and outcomes
Follow clinical guidelines
At least one managed care activity
Email or telemedicine to external colleagues
Email within the practice
Remote link with other information systems
Access to internet
Electronic data interchange (EDI)
Online literature searches
At least one communication activity
Billing and collections
At least one practice management activity
What are the clinical and office staff members' perceptions of this IT's impact on office activities (Q7)?
Relevant proportion (n = 399)
For "relevant" responders only, the "agreed" proportion
Improved billing process
More accurate documents
Improved ability to analyze managed care costs
Improved scheduling process
Improved access to patient information at multiple sites
Reduced malpractice costs
Improved referral process
Reduced time for recording patient information
Improved documented quality
Quicker lab results
Access to more journals
Comparison of decision-maker vs user
The primary respondents agreed with users on their perceptions of the software's impact on scheduling and financial analysis activities (p < .001). For the scheduling model, Phi was .359, with a maximal Phi of .778. For the financial analysis model, Phi was .418 with a maximal Phi of .920. Since the primary respondent was reasonably knowledgeable about the perceived impacts of the software, we did not include the user data in the remainder of the cross-validated models. The user provided only a few demographics and the perceived impact data, while the primary respondent provided the selection data as well as the perceived impact data.
Predicting the impact of the software on scheduling activities
For the scheduling model, five selection variables as a group predicted with 73% accuracy the subscale of whether the respondents on average would agree with the following two impact statements:
"The software has improved the scheduling of patients for routine, preventive and urgent appointments."
"The software has improved the referral process in sending and receiving referrals quickly."
Scheduling Model Predictor – Respondent's Reaction to Scheduling Statements
Software with electronic medical record features – more likely to agree
The practice compared software options with the best in the field – more likely to agree
Software with practice management features – more likely to agree
Importance of prior user testimony – more likely to agree
The respondent personally selected the software – less likely to agree
Looking at the odds ratios in Table 6, the likelihood of agreement with the scheduling subscale was almost four times as great (odds ratio, OR = 3.89) when practices selected EMR packages as when they did not. At first this finding was surprising. Many EMRs, however, have automatic recall features that flag when a patient should be called or sent a reminder for a health check. Similarly, the likelihood of agreement was almost four times as great (OR = 3.88) when the practice compared the software options with the best in the field as when it did not perform this step.
The practices that selected practice management software were 1.70 times more likely to agree that the software had improved the scheduling and referring of patients than practices that selected other types of software. This finding was expected since these packages typically include a scheduling module. Additionally, practices that considered "prior user testimony" important in the selection process were 1.39 times more likely to agree with the scheduling subscale than those practices that did not consider prior user testimony as an important influence.
Finally, a respondent who had personally selected the software was less likely to agree with the impact statements (OR = 0.20). The members of the expert panel felt this was a symptom of "unmet expectations." The members of the selection team knew how the software was supposed to perform and were likely disappointed when it didn't live up to the vendor promises. These respondents had also probably seen the "Cadillac" performers and realized that their software had only achieved "Chevrolet" status. Another explanation is that these practices failed to fully implement the software or to adapt clinic workflows to fully utilize the software.
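An odds ratio multiplies the odds, not the probability, of agreement. As a quick illustrative sketch (the 50% baseline probability is an assumption for the example, not a figure from the study), a reported OR such as 3.89 can be translated into a change in probability:

```python
def apply_odds_ratio(p_baseline: float, odds_ratio: float) -> float:
    """Return the probability after multiplying the baseline odds by an OR."""
    odds = p_baseline / (1 - p_baseline)   # convert probability to odds
    new_odds = odds * odds_ratio           # an OR scales the odds
    return new_odds / (1 + new_odds)       # convert odds back to probability

# Hypothetical reference practice with a 50% chance of agreeing with the
# scheduling subscale; OR = 3.89 (selecting an EMR package) raises the odds
# from 1:1 to 3.89:1.
p = apply_odds_ratio(0.50, 3.89)
print(round(p, 2))
```

The same arithmetic applied to an OR below 1, such as the 0.20 for personal selection, shrinks the odds and hence the probability of agreement.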
In summary, practices that selected EMR or practice management software, that made software comparisons, or that considered prior user testimony as important were more likely to have perceived improvements in the scheduling process than were other practices.
Predicting the impact of the software on financial analysis activities
For the financial analysis model, five selection variables as a group predicted with 86% accuracy the subscale of whether the respondents on average would agree with the following two impact statements:
"The software has created a more accurate and timely billing process."
"The software has improved the practice's ability to track and analyze costs and revenues associated with managed care contracts."
Financial Analysis Model
Financial Analysis Model Predictor – Respondent's Reaction to Financial Analysis Statements
Time to learn (reduced workload to learn the software) – more likely to agree
Software with managed care features – more likely to agree
Importance of "value for cost" purchase influence – more likely to agree
Importance of compatibility purchase influence – less likely to agree
The cost of the software – less likely to agree
The odds of agreement increased by more than a factor of four (OR = 4.59) for each additional managed care activity the software contained. Since most managed care software packages are marketed to assist the practice in documenting costs associated with managed care contracts, this finding was expected.
Practices that considered value an important consideration were twice (OR = 2.0) as likely to agree with the financial analysis subscale. By contrast, practices that considered compatibility an important influence were less likely (OR = 0.66) to agree with financial analysis subscale. At first the compatibility result was surprising. However, 51% of these practices were first-time buyers, and usually buying billing software, so compatibility was not a critical consideration. Ninety-one percent of first-time buyers who rated compatibility as low-to-no importance agreed with the financial analysis subscale. It is also possible that practices with existing good financial analysis processes (and little room to improve) rated compatibility as important but disagreed that the new software had improved the existing good process.
The finding that less expensive packages related to more satisfied buyers was interesting (OR = 0.25). There were many good financial packages available for less than $10,000 in 1996. Practices that spent less than $10,000 bought software packages with few, but very functional, features. Those practices that spent more than $10,000 were purchasing complex systems, perhaps for multiple sites. Financial analysis may just have been a small module of these multi-purpose packages.
In summary, practices that considered value important, that did not consider compatibility important, that selected managed care software, that spent less than $10,000, or that provided learning time during implementation were more likely to perceive that the software had improved the financial analysis process than were other practices.
Observations from both models
In looking over the predictors for the two cross-validated models (scheduling and financial analysis), some predictors naturally belong in one model or the other – e.g., practice management software in the scheduling model and managed care software in the financial analysis model. The themes in the scheduling model center on software features (EMR and practice management software, comparison of software options) and usability (prior user testimony and personal selection by respondent). The themes in the financial analysis model include cost (software cost, value), software features (managed care software and compatibility), and learning time. This might suggest that the respondents for the financial analysis model had roles in the practice different from those of the respondents for the scheduling model. In both of these models, however, 79% of the respondents were administrators.
Since all types of administrators (e.g., office managers, finance managers) were grouped together, it was impossible to identify the primary role of administrator who responded. The differences in the models also suggest that the predictors of success differ by the types of activities the software is intended to perform.
It might appear odd that some predictors (e.g., learning time) did not carry through to both models. It is likely that the type and complexity of software package contributed to the learning demands on the office. Many of the respondents who agreed with the financial analysis subscale chose managed care software that bundled together many activities (tracking incoming and outgoing referrals, patient enrollment, capitation accounting, and/or utilization reporting). For practices learning this type of software, protected learning time was an important predictor of success. For practices implementing practice management software (scheduling, billing, and/or accounting spreadsheets), the learning demand was less. This naturally suggests that the decision to reduce the workload while learning a software package should consider the number and complexity of the tasks to be learned.
Limitations and research opportunities
The respondents for this study primarily represented practices that serve Providence Health System in Oregon. These practices served either as managed care providers or as fee-for-service providers. The only practices excluded were pure HMO providers – e.g., Kaiser Permanente. The pure HMO practices were excluded because it was unclear whom to interview regarding software selections; often these practices are given software directly by the parent organization. Eighty-seven percent of the practices in this study had 10 practitioners or fewer. Only 17% of these practices had in-house computer specialists assisting with software selection. The results of this study may not generalize to large practices, which often have in-house computer specialists assisting with selection. A future study could include a nationwide survey of all types of physician practices, regardless of managed care status, ownership, specialty, or size.
This study is retrospective in nature, requiring the respondents to recall a software purchase that occurred several months, perhaps more than a year, earlier. In an "ideal study design," a questionnaire should be distributed to practices that have recently made selections. Another questionnaire addressing the impact on the practice could be sent at a pre-defined follow-up period – e.g., six months after implementation. This "ideal study design" would be difficult to conduct without a sufficient list of practices that have recently purchased software. Perhaps software manufacturers and vendors could provide lists of recent clients (with permission) to interested researchers.
The cross-sectional survey design of this study captured the technical aspects of the selection process (e.g., who was involved, what steps were taken). Although the respondents were given a few "open-ended" questions, most provided little additional information. There could have been additional selection steps, influences, and impacts. It is also possible that the observed changes in impact were related to variables we did not attempt to measure – e.g., the ability and desire of management to implement new technologies and to change existing practice activities. Focus groups might be more effective at capturing underlying management expertise. Another, far more time-intensive approach would be to conduct a series of case studies, documenting the decision-making process over time. This research would need support from practices for observers to remain on-site during the selection process. This format would also promote a more well-rounded evaluation from multiple perspectives. The current study relies on perception-based responses (primarily from office managers) to measure many variables, including impact variables. Their perceptions were related to business-related practice activities. Only 5.3% of the respondents were clinicians. It is likely that expanding this study to include more clinician responses would reveal perceptions related to other processes – e.g., medical documentation or treatment processes.
The subscales (related to practice activities) were formed from responses to only two to three original impact questions. A stronger design would include several questions related to each practice activity impact. Given the exploratory nature of this current research, this limitation could not have been foreseen. However, the results of this study open doors for more confirmatory studies to design survey instruments that measure software impact with underlying practice activity constructs. This study does not attempt to demonstrate cause and effect. It would be important to have respondents rate existing practice activities (before purchasing software) to control for a "ceiling effect" – practices with existing good processes have little room to improve. If such a trial were designed, it would also need to control for the type of IT and the needs of the buyer.
To move toward a more direct measure of impact would require the practices to closely measure performance and behavior. For example, in this study, the respondent is asked if the practitioners have an improved ability to consult professional literature online. A direct measurement method would determine the number of online literature consultations before and after the software installation.
The results of this research describe the software selection process as it occurs in physician practices. Using a telephone interview survey gave the researcher (and other interviewers) direct contact with the decision makers in each practice. The results of this study also describe how software is perceived to affect several practice activities.
The objective of this study was to identify relationships (if any) between the IT selection process and the office staff's perceptions of the IT's impact on practice activities. The results of the multiple logistic regression models confirmed relationships between the selection process and the perceived impacts related to the scheduling and financial analysis activities. The results of this study demonstrated a relationship (not cause and effect) between the selection process and the user perception of software usefulness.
Although many of the relationships were expected (e.g., performing software comparisons, interviewing prior users, and selecting certain software features improved perceptions about practice activities), perhaps one of the most important predictors of improvement was reducing the workload during implementation. Despite the importance of this predictor, only 36% of the practices in this study performed this step. If more practices had performed this step, it might have carried even more weight in the analysis. From a practical standpoint, many of the offices selected and implemented IT but expected the staff to learn the software while caring for a full load of patients. Investigators from a previous study by Ambroso et al. cite this expectation as a common reason for IT failure.
One of the secondary findings of this research is that the purchasers of the software (often office managers) had perceptions about the software's use similar to those of users (who were not involved in the selection process). This finding supports the use of a single-survey-response study design for understanding perceived impacts related to software's impacts on business-related practice activities.
Author comments on prior presentation of results
The results of this study were presented at the Portland International Conference on Management of Engineering and Technology, Portland, Oregon, 1997 and 1999. The results were also presented at the Institute for Operations Research and the Management Sciences conference, Philadelphia, Pennsylvania, 1999. The references for the conference proceedings are listed below.
Eden K, Kocaoglu D. Information Technology Selection Process and Perceived Impacts in Physician Practices. In Technology and Innovation Management. Portland State University, PICMET conference proceedings, 1999, pp. 562–568. Executive summary presented in proceedings, Portland International Conference on Management of Engineering and Technology, Portland, Oregon, 1999, pp. 392–394.
Eden K, Kocaoglu D. Selection of Information Technology in the Health Care Industry. Presented at Institute for Operations Research and the Management Sciences conference, Philadelphia, Pennsylvania, November 1999.
Eden K, Kocaoglu D. Selection and Implementation of Information Technology in the Health Care Industry. Preliminary results presented at the Portland International Conference on Management of Engineering and Technology, published in proceedings, Portland, Oregon, 1997, pp. 199–202.
List of Abbreviations
EMR: Electronic Medical Record
I would like to thank Dundar Kocaoglu, PhD, Nancy Perrin, PhD, Mara Tableman, PhD, Wayne Wakeland, PhD, Laurie Skokan, PhD, Robert Eder, PhD, and Bruce Bayley, PhD, for providing direction on this study. I would also like to thank Nancy Perrin, PhD, Mark Helfand, MD, MPH, William Hersh, MD, FACP, Joan Ash, PhD, John Beekman, PhD, Jane Beekman, and the BioMed Central reviewers for critically reviewing earlier versions of this manuscript and providing thoughtful feedback. I would like to acknowledge Gary Miranda for his careful editing and suggestions to make this manuscript "reader friendly." This project would not have been possible without the support and funding from Providence Health System and the use of the Portland State University Regional Research Institute. Finally, I would like to thank my husband, Kevin, and my children, Matthew and Erika, who were very supportive during this project.
- Bolley HB: Physicians in health care management: 6. Physician *bytes* computer. Canadian Medical Association Journal. 1994, 150: 1977-1982.
- Eden KB: Selection of Information Technology in the Health Care Industry. Dissertation. Portland: Portland State University; 1997.
- Renner K: Cost-justifying electronic medical records. Healthcare Financial Management. 1996, 63-70.
- Simpson RL: The role of technology in a managed care environment. The first in a series of three related articles. Nursing Management. 1994, 25: 26-28.
- Aronow DB, Coltin KL: Information technology applications in quality assurance and quality improvement, Part I. [Review]. Joint Commission Journal on Quality Improvement. 1993, 19: 403-415.
- Bakopoulos JY: Toward a more precise concept of information technology. In: International Conference on Information Systems. 1985, 17-24.
- King WR, Grover V: The strategic use of information resources: an exploratory study. IEEE Transactions on Engineering Management. 1991, 38: 293-305. 10.1109/17.97436.
- Martin JB: The environment and future of health information systems. Journal of Health Administration Education. 1990, 8: 11-24.
- Institute of Medicine: Crossing the Quality Chasm. Washington, D.C.: National Academy Press; 2001.
- Wall R: Computer Rx: more harm than good?. Journal of Medical Systems. 1991, 15: 321-334.
- Elevitch F, Treling C, Spackman K, Weilert M, Aller R, Skinner M, Pasia O: A clinical laboratory information systems survey. A challenge for the decade. Archives of Pathology & Laboratory Medicine. 1993, 117: 12-21.
- Simpson RL, Somers JB: The role of the clinical nurse specialist in information systems selection. Clinical Nurse Specialist. 1991, 5: 159-163.
- Weaver RR: Assessment and diffusion of computerized decision support systems. International Journal of Technology Assessment in Health Care. 1991, 7: 42-50.
- Eden KB, Kocaoglu DF: Selection and Implementation of Information Technology in the Health Care Industry. In: PICMET 97; Portland, Oregon. 1997, 199-202.
- Holland GJ: Hospital characteristics associated with adoption of clinical information systems. PhD dissertation. University of Alabama; 1989.
- Zinn TK: Healthcare I/S executives look toward the next decade. Computers in Healthcare. 1992, 32-35.
- Romano CA: Predictors of nurse adoption of a computerized information system as an innovation. In: Annual Symposium on Computer Applications in Medical Care. 1994, 961.
- Chocholik JK, Bouchard SE, Tan JKH, Ostrow DN: The determination of relevant goals and criteria used to select an automated patient care information system: a Delphi approach. JAMIA. 1999, 6: 219-233.
- Weiner M, Gress T, Thiemann DR, Jenckes M, Reel SL, Mandell SF, Bass EB: Contrasting views of physicians and nurses about an inpatient computer-based provider order-entry system. JAMIA. 1999, 6: 234-244.
- Garrett LE, Hammond WE, Stead WW: The effects of computerized medical records on provider efficiency and quality of care. Methods of Information in Medicine. 1986, 25: 151-157.
- Ambroso C, Bowes C, Chambrin MC, Gilhooly K, Green C, Kari A, Logie R, Marraro G, Mereu M, Rembold P, Reynolds M: INFORM: European survey of computers in intensive care units. International Journal of Clinical Monitoring & Computing. 1992, 9: 53-61.
- Carr: What Is The Relationship, If Any, Between Nurse Involvement In The Development, Design and Selection Of A Hospital Information System (HIS) And Subsequent Utilization Of That System?. DBA dissertation. Nova University; 1993.
- Dunbar C: Nurses want I/S selection power, but do they have it?. Computers in Healthcare. 1992, 13.
- Gibson RP, Berger S, Ciotti G: Selecting an information system without an RFP. Healthcare Financial Management. 1995.
- Neal T: Evaluating and selecting an information system, part 1. American Journal of Hospital Pharmacy. 1993, 50: 117-120.
- Remmlinger E, Grossman M: Physician utilization of information systems: bridging the gap between expectations and reality. American Hospital Association. 1991.
- Simpson RL: Clinical information systems vs. practicing physicians. Nursing Management. 1992, 23: 14-16.
- Ash JS: Factors for information technology innovation diffusion and infusion in health sciences organizations: a systems approach. Portland, Oregon: Portland State University; 1997.
- Broderick R, Boudreau JW: Human resource management, information technology, and the competitive edge. Academy of Management Executive. 1992, 6: 7-17.
- Henderson J, Venkatraman N: Strategic alignment: leveraging information technology for transforming organizations. IBM Systems Journal. 1993, 32: 4-16.
- Rawitz JG, Cowan WY, Paige BM: Justifying costs of software purchases. Healthcare Financial Management. 1995.
- Roth: World class health care. Quality Management in Health Care. 1993, 1: 1-9.
- Simpson RL: Benchmarking MIS performance. Nursing Management. 1994, 25: 20-21.
- Kimberly JR, Evanisko MJ: Organizational innovation: the influence of individual, organizational and contextual factors in hospital adoption of technological and administrative innovations. Academy of Management Journal. 1981, 24: 689-713.
- Rogers EM: Diffusion of Innovations. New York: The Free Press; 1995.
- Becker MH: American Journal of Public Health. 1970, 294-303.
- Singh AK, Moidu K, Trell E, Wigertz O: Impact on the management and delivery of primary health care by a computer-based information system. Computer Methods and Programs in Biomedicine. 1992, 37: 55-64. 10.1016/0169-2607(92)90029-7.
- Lind MR, Zmud RW: The influence of a convergence in understanding between technology providers and users on information technology innovativeness. Organization Science. 1991, 2: 195-217.
- Juang M: The Medas Network: Overall Design and Applications at Cook County Hospital (Patient Records, Local Area Network). PhD dissertation. Illinois Institute of Technology; 1992.
- Roderer NK, Clayton PD: Bulletin of the Medical Library Association. 1992, 80: 253-262.
- Sandiford P: What can information systems do for primary health care? An international perspective. [Review]. Social Science & Medicine. 1992, 34: 1077-1087. 10.1016/0277-9536(92)90281-T.
- Anderson TR: Physicians' resistance to claims automation. Journal of Health Care Benefits. 1993, 54-56.
- The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6947/2/4/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article: verbatim copying and redistribution of this article are permitted in all media for any purpose, provided this notice is preserved along with the article's original URL.