
Selecting information technology for physicians' practices: a cross-sectional study

Abstract

Background

Many physicians are transitioning from paper to electronic formats for billing, scheduling, medical charts, communications, etc. The primary objective of this research was to identify the relationship (if any) between the software selection process and the office staff's perceptions of the software's impact on practice activities.

Methods

A telephone survey was conducted with office representatives of 407 physician practices in Oregon who had purchased information technology. The respondents, usually office managers, answered scripted questions about their selection process and their perceptions of the software after implementation.

Results

Multiple logistic regression revealed that software type, selection steps, and certain factors influencing the purchase were related to whether the respondents felt the software improved the scheduling and financial analysis practice activities. Specifically, practices that selected electronic medical record or practice management software, that made software comparisons, or that considered prior user testimony as important were more likely to have perceived improvements in the scheduling process than were other practices. Practices that considered value important, that did not consider compatibility important, that selected managed care software, that spent less than $10,000, or that provided learning time (most dramatic increase in odds ratio, 8.2) during implementation were more likely to perceive that the software had improved the financial analysis process than were other practices.

Conclusion

Perhaps one of the most important predictors of improvement was providing learning time during implementation, particularly when the software involves several practice activities. Despite this importance, less than half of the practices reported performing this step.


Background

Health care providers compete for managed care contracts based on cost-effectiveness and quality of care [1–4]. Information technology (IT) provides a cost-effective way to document productivity, performance measures, cost, and quality of care. Since IT has dropped in cost over time, physician practices are now turning to it to meet these needs. Information technology for this study is defined as computer software used to store, transport, or communicate information [2, 5–7].

The health care organizations that succeed in the 21st century will be those that improve quality and reduce cost. These seemingly competing objectives will most likely be reached through improved handling of information [2, 8, 9]. The Committee on Quality of Health Care in America reported that most clinical information remains in paper form [9]. This committee made several recommendations for improving quality, including moving clinical information to an electronic format by the end of the decade.

Information technology selection in health care has often been performed in a rather informal way, resulting in the purchase of "white elephants" [10]. The systems may not perform as planned and may cause additional work for medical staff. The systems are often purchased or developed in pieces without consideration to the overall business strategy [1].

To date, few publications have documented the selection process and the resulting impact of the IT on the health care organization. Most papers give anecdotal descriptions, often by vendors, but lack client perceptions of the information system's value [1, 2, 7, 11–14]. Even at the hospital level, only a few client perceptions of IT adoption have been reported [15–19]. Fewer papers still examine IT selection within physician practices [3, 20]. However, many physicians are transitioning from paper to electronic formats for billing records, medical charts, etc. This study aims to understand the process for selecting IT for physicians' practices and the perceptions of the IT after it is implemented. The primary objective of this research was to identify the relationship (if any) between the IT selection process and the office staff's perceptions of the IT's impact on practice activities.

Methods

To address the research objective, a literature review was completed; an expert panel was formed and consulted; a conceptual model was developed; a telephone interview survey was designed; an exploratory factor analysis was performed; and finally, a logistic regression analysis was performed. The conceptual model for this study was not based on one single overriding pre-established theory (Figure 1). Rather, it was drawn from a body of literature as well as from the observations of an expert panel regarding technology selection and how it facilitates or impedes practice activities [1–3, 11, 12, 16, 21–42]. The expert panel included physicians, health services researchers, informatics researchers, and health care industry consultants.

Figure 1. Conceptual Model.

The telephone survey was conducted with 407 physician practices in Oregon [2]. The survey elements were based on the literature review and on the feedback from the expert panel. The survey addressed the following descriptive research questions:

Q1: Who selects IT for a physician practice (e.g., administrators, clinicians, computer specialists)?

Q2: What selection steps are used?

Q3: What factors influence the purchase?

Q4: Which IT features are selected?

Q5: Who (within the practice) customizes the IT?

Q6: Is time given to learn the IT?

Q7: What are the clinical and office staff members' perceptions of this IT's impact on several office activities (e.g., scheduling, communication, quality reporting)?

The design of the telephone survey was reviewed by the Human Subjects Research Review Committee at Portland State University.

Sample

Providence Health System in Portland, Oregon provided a database of practices (n = 933) for this study. These practices all served Providence Health System in some capacity – e.g., as primary care physicians or specialists. Eligible practices had acquired software within the past five years but not within the past six months. Practices with software older than five years were disqualified because it was unlikely that the decision makers (if present) would recall the details of the selection process. Practices with software selected within the last six months were dropped because new software often requires a learning time period. The original sample of 933 contained 70 practices that had no computers and 35 whose software had been purchased only in the past six months or more than five years ago. In total, 105 practices (11.3% of the original sample) were excluded.

Of the remaining eligible practices (n = 828), 407 completed the telephone survey, representing a response rate of 49.2%. If a qualified respondent at a practice was not reached after at least three attempts (n = 269) or the respondent declined the interview (n = 152), the practice was counted as a nonrespondent. Qualified respondents were involved with software selection or software customization for the practice. Seven practices gave partial interviews and were also counted as nonrespondents. These respondents had to leave in the middle of the interview to address urgent clinic needs. Although these respondents were rescheduled, they were not reached to complete the interviews. Additionally, one respondent gave many "don't know" responses. The interviewer wrote in the comment section for this office that the respondent was not qualified for the study and should be dropped. Thus, in total, seven partial interviews and one unqualified interview were dropped from the sample, reducing the total number of offices in the study to 399. The respondents and participating practices are summarized in Table 1.

Table 1 Description of respondents and participating practices

Second interviews were gathered for 189 of the 407 responding practices. Since almost half of the responding offices represented single practitioners, many of these smaller offices had only one eligible participant.

Telephone survey

The survey questions were developed based on the literature review and discussions with an expert panel. Since many of the respondents were not familiar with technical IT terms, care was taken to present the survey in a "respondent friendly" format.

Thirteen college student interviewers and two supervisors conducted the interviews using a telephone interviewing software package, Computer Assisted Survey Execution System. A program was written to provide the interviewers with precise dialogue, questions, and precoded responses. As the interview progressed, the interviewer entered the responses into a personal computer.

Since the study objective included capturing the perceived impacts of IT, we attempted to record perceptions from two representatives from each practice: the decision maker and a primary user (see Additional File 1: "Physician Practice Software Telephone Survey, Dialog and Questions"). The initial interview, which included questions about the selection process and the perceived impacts of the IT, lasted approximately 15–25 minutes. The respondent was asked to describe a recent IT purchase (at least six months old). For each practice, the respondent indicated whether a person in a specific role – e.g., an administrator – was involved or not involved in selection, and involved or not involved in software customization. Customization in this study referred to providing input to the software vendor for writing software specific to the practice.

During the interview we read the respondents a list of selection steps. For each step, the respondent answered "yes" or "no" as to whether it was performed. The respondents were also read several potential factors that might have influenced the purchase. For each one they rated the statement on a 1-to-6 scale of importance (ranging from "no importance" to "very high importance"). Finally, we asked the respondents to react to 12 statements describing potential impacts of the IT on selected practice activities. The statements were intentionally not grouped by any particular theme. The respondents rated each impact statement on a 1-to-5 scale of agreement ("strongly disagree", "slightly disagree", "neither agree nor disagree", "slightly agree", "strongly agree") or selected "not applicable."

The second interview with a primary user of the software included mainly the perceived impact questions, and lasted 7–10 minutes. At the completion of the initial interview, each respondent was offered a summary of the results.

Statistical evaluation

The data from all interviews were first descriptively evaluated, primarily by computing frequencies of responses for each question. Factor analysis (principal components) revealed four latent factors related to the respondent's perceived impacts of the IT on four practice activities: scheduling, financial analysis, communication, and medical documentation [2]. Therefore, four subscales were created. The scheduling, financial analysis, and communication subscales each included two items, and the medical documentation subscale included three items. Responses of "not applicable" were coded as missing. For each subscale the mean of the items was computed.
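As a rough illustration of this extraction step (not the study's actual code or data), the sketch below runs a principal-components extraction over invented 1-to-5 ratings on the 12 impact items; all variable names and values are hypothetical.

```python
# A minimal sketch, on hypothetical data, of the principal-components step:
# extract four components from the 12 impact items and inspect the loadings
# to see which items group together into subscales.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(399, 12)).astype(float)  # 12 impact ratings, 1-5

pca = PCA(n_components=4)
pca.fit(items)
print(pca.explained_variance_ratio_.round(2))  # variance captured by each component
print(pca.components_.round(2))                # loadings: item groupings per component
```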

Diagnostic plots of the four practice activity subscales suggested that an explanatory model might be best approached using logistic regression, which relaxes the assumption of normality. The four subscales were recoded to dichotomous variables corresponding to agree or not agree. If the mean score (of 2–3 impact statements) for a practice activity was greater than 3.0, the respondent was scored as "1" for agree. If the mean score for a practice activity was 3.0 ("neither agree nor disagree") or less, the respondent was scored as "0" for not agree. Each of the four practice activity subscales became the dependent variable in a predictive model. The independent variables entered into the models included the demographic and selection variables.
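A minimal sketch of this recoding, assuming hypothetical pandas column names for the two scheduling items (the study's actual variable names appear in Additional File 1):

```python
# A sketch (hypothetical column names, invented data) of the subscale and
# dichotomization rules described above.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "sched_appointments": [5, 4, np.nan, 2, 3],   # 1-5 agreement; NaN = "not applicable"
    "sched_referrals":    [4, np.nan, np.nan, 1, 3],
})

# Subscale = mean of the applicable items ("not applicable" coded as missing).
df["scheduling_subscale"] = df[["sched_appointments", "sched_referrals"]].mean(axis=1)

# Dichotomize: mean > 3.0 -> 1 (agree); mean <= 3.0 -> 0 (not agree);
# respondents with no applicable items stay missing and drop out of the model.
df["scheduling_agree"] = np.where(
    df["scheduling_subscale"].isna(),
    np.nan,
    (df["scheduling_subscale"] > 3.0).astype(float),
)
print(df)
```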

Multiple logistic regression

We attempted four predictive models, one for each of the newly created dichotomous subscales. Only respondents who found the impact statements relevant were included in the predictive models. Multiple logistic regression revealed relationships between the selection process and the perceptions related to the scheduling, financial analysis, and communication processes. Variables that achieved a significance level of p < .05 were retained in the models. For the perceptions related to medical documentation, no significant selection variables survived the analysis. This was most likely due to the small number of practices with electronic medical records (EMRs; n = 89) and to the aggregation of all EMRs regardless of type and number of functions. It is also possible that the decision to purchase an EMR is often made outside the practice – e.g., a large health system offers EMRs to the practices. For 11 of the 89 practices that had EMRs, the decision was made by a large health system. Data from these practices were not included in the predictive models, thus reducing the number of available practices with EMRs to 78.

A summary of the models is presented in this paper. The complete analysis and models are available elsewhere [2]. The predictive models were built using a model building data set (299 randomly selected interviews). The models were then tested with a testing data set (the remaining 100 interviews). One hundred interviews were needed to ensure adequate statistical power. As a check for cross-validation, the accuracy with which the models predicted the perceived impact subscale values using the model building data set was compared to the accuracy achieved with the testing data set. Using the parameters established with the model building data set, agreement (or non-agreement) to a perceived impact subscale was predicted for the testing data set.
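A hedged sketch of this build-and-test procedure, using scikit-learn on an invented stand-in dataset (the study's actual predictors and statistical software are not reproduced here):

```python
# A sketch of the build/test procedure on invented data: fit a logistic
# regression on 299 interviews, then score it on the remaining 100.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(399, 5)).astype(float)  # e.g., binary selection-step indicators
y = rng.integers(0, 2, size=399)                     # 1 = agree with a perceived impact subscale

X_build, X_test, y_build, y_test = train_test_split(X, y, train_size=299, random_state=0)

model = LogisticRegression().fit(X_build, y_build)
print(f"building accuracy: {model.score(X_build, y_build):.2f}")
print(f"testing accuracy:  {model.score(X_test, y_test):.2f}")
```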

For cross-validation, the accuracy levels were compared using a z-test for proportions. As seen in Table 2, the scheduling and financial analysis models had non-significant (p > .05) drops in accuracy. This suggests that the models may be generalized to other physician offices with similar demographics. Since the accuracy level dropped dramatically for the communication model, this model did not "cross-validate." The observations made in this study accurately describe the idiosyncrasies of the sample used to build the communication model, but may not accurately describe other samples of physician offices.
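The paper does not print the test formula, so the sketch below assumes the standard pooled two-proportion z-test; only the 73% building accuracy for the scheduling model comes from the text, and the testing value shown is invented for illustration.

```python
# A sketch of a pooled two-proportion z-test for comparing building-set and
# testing-set accuracies; the exact variant the authors used is an assumption.
from math import sqrt
from scipy.stats import norm

def z_test_proportions(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * (1 - norm.cdf(abs(z)))

# e.g., 73% accuracy on 299 building interviews vs. a hypothetical 68% on 100 testing interviews
z, p = z_test_proportions(0.73, 299, 0.68, 100)
print(f"z = {z:.2f}, p = {p:.3f}")
```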

Table 2 Cross-Validation Summary

Once the analyses were complete, the expert panel was reconvened to provide insight in interpreting the results. In the sections that follow, the descriptive results, the comparison of decision maker vs. user, and each cross-validated model are summarized and discussed.

Results and Discussion

Most administrators were involved in the selection (68%) and customization (63%) processes (Table 3). Clinical staff members were also very involved in selection (62%) but not as involved in customization (33%).

Table 3 Selection process

Eighty percent or more of the practices performed cost comparisons and/or viewed software demonstrations. The frequencies for the steps the practices took in selecting software are depicted in descending order in Table 3.

Seventy percent or more of the practices stated that "ease of use," "improving a business process," and "most value for cost" were important factors influencing the purchase (Table 3). The frequency of factors receiving either "high" or "very high importance" is also presented in descending order in Table 3.

The practices typically chose commercial packages that cost less than $50,000. (Note: these data were collected in the fall of 1996.) Information related to IT cost, customization level, and the number of users is presented in Table 4. There were four basic software packages considered in this study. The type of package, associated computer activities, and frequencies are presented in Table 4. The results indicate that more than 85% of the practices used the software for managed care or practice management activities. Fewer than half of the practices used the software for communication activities. Only 23% of the practices accessed a complete patient record with the software.

Table 4 Selected IT

Ninety percent of the respondents felt the software had impacted their billing process (Table 5). The first column in Table 5 lists the theme of the impact statement. The middle column is the proportion of respondents who rated the software – meaning the impact statement was relevant to their software. For those who found the impact statement relevant, the last column depicts the proportion who slightly or strongly agreed with the impact statement. For example, in Table 5, 74% of the respondents felt the software affected the accuracy of their practice documents. Of those, 85% of the respondents agreed that practice documents were more accurate since the software was implemented.

Table 5 What are the clinical and office staff members' perceptions of this IT's impact on office activities (Q7)?

Comparison of decision-maker vs user

The primary respondents agreed with users on their perceptions of the software's impact on scheduling and financial analysis activities (p < .001). For the scheduling model, Phi was .359, with a maximal Phi of .778. For the financial analysis model, Phi was .418 with a maximal Phi of .920. Since the primary respondent was reasonably knowledgeable about the perceived impacts of the software, we did not include the user data in the remainder of the cross-validated models. The user provided only a few demographics and the perceived impact data, while the primary respondent provided the selection data as well as the perceived impact data.
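For reference, phi can be computed from the 2x2 table of dichotomized decision-maker and user responses; the sketch below uses invented cell counts, chosen only so the result lands near the reported scheduling value of .359.

```python
# A sketch of the phi coefficient for a 2x2 decision-maker vs. user agreement
# table; the cell counts are hypothetical, not the study's data.
from math import sqrt

# a = both agree, b = decision maker only, c = user only, d = neither agrees
a, b, c, d = 120, 25, 20, 24

phi = (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))
print(f"phi = {phi:.3f}")  # ~0.36, near the reported scheduling phi of .359

# Maximal phi depends on the marginal totals of the table, which is why the
# paper reports each observed phi against its ceiling (e.g., .359 of .778).
```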

Predicting the impact of the software on scheduling activities

For the scheduling model, five selection variables as a group predicted with 73% accuracy the subscale of whether the respondents on average would agree with the following two impact statements:

"The software has improved the scheduling of patients for routine, preventive and urgent appointments."

"The software has improved the referral process in sending and receiving referrals quickly."

The statistically significant (p < .05) predictors are presented in Table 6 along with the expected response by the respondent and the results of the multiple logistic regression analysis. The second column of the table contains the coefficient (or weighting value) B. The Wald statistic (B divided by its standard error) gives a measure of the significance of B for each predictor variable.
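As a worked illustration of how these quantities relate, the odds ratio is the exponential of the coefficient; only the 3.89 odds ratio below comes from Table 6, and the standard error is invented.

```python
# A worked example (not the authors' output) of the Table 6 quantities:
# OR = exp(B), and the Wald measure as described in the text is B divided by
# its standard error (some statistical packages report the square of this ratio).
from math import exp, log

B = log(3.89)   # coefficient implied by the EMR odds ratio of 3.89
se = 0.45       # hypothetical standard error, for illustration only

print(f"B = {B:.3f}")          # ~1.358
print(f"OR = {exp(B):.2f}")    # 3.89: agreement ~3.9x as likely with an EMR package
print(f"Wald = {B / se:.2f}")  # significance measure for the predictor
```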

Table 6 Scheduling Model

Looking at the odds ratios in Table 6, the likelihood of agreement with the scheduling subscale was almost four times (odds ratio, OR = 3.89) as great when practices selected EMR packages as when they did not. At first this finding was surprising. Many EMRs, however, have automatic recall features that flag when a patient should be called or sent a reminder for a health check. Similarly, the likelihood of agreement was almost four times (OR = 3.88) as great when the practice compared the software options with the best in the field as when it did not perform this step.

The practices that selected practice management software were 1.70 times more likely to agree that the software had improved the scheduling and referring of patients than practices that selected other types of software. This finding was expected since these packages typically include a scheduling module. Additionally, practices that considered "prior user testimony" important in the selection process were 1.39 times more likely to agree with the scheduling subscale than those practices that did not consider prior user testimony as an important influence.

Finally, a respondent who had personally selected the software was less likely to agree with the impact statements (OR = 0.20). The members of the expert panel felt this was a symptom of "unmet expectations." The members of the selection team knew how the software was supposed to perform and were likely disappointed when it did not live up to the vendor's promises. These respondents had also probably seen the "Cadillac" performers and realized that their software had only achieved "Chevrolet" status. Another explanation is that these practices failed to fully implement the software or to adapt clinic workflows to fully utilize the software.

In summary, practices that selected EMR or practice management software, that made software comparisons, or that considered prior user testimony as important were more likely to have perceived improvements in the scheduling process than were other practices.

Predicting the impact of the software on financial analysis activities

For the financial analysis model, five selection variables as a group predicted with 86% accuracy the subscale of whether the respondents on average would agree with the following two impact statements:

"The software has created a more accurate and timely billing process."

"The software has improved the practice's ability to track and analyze costs and revenues associated with managed care contracts."

The most dramatic increase in odds of agreement (OR = 8.2) occurred when the practice reduced the workload to allow time to learn the software (Table 7). However, only 36% of the 399 practices reported that reduced workloads were provided during the implementation phase. According to the survey conducted by Ambroso et al. [21], expecting medical staff to learn new software while caring for a full load of patients is a common reason for failure.

Table 7 Financial Analysis Model

The odds of agreement increased by more than a factor of four (OR = 4.59) for each additional managed care activity the software contained. Since most managed care software packages are marketed to assist the practice in documenting costs associated with managed care contracts, this finding was expected.

Practices that considered value an important consideration were twice (OR = 2.0) as likely to agree with the financial analysis subscale. By contrast, practices that considered compatibility an important influence were less likely (OR = 0.66) to agree with the financial analysis subscale. At first the compatibility result was surprising. However, 51% of these practices were first-time buyers, usually buying billing software, so compatibility was not a critical consideration. Ninety-one percent of first-time buyers who rated compatibility as low-to-no importance agreed with the financial analysis subscale. It is also possible that practices with existing good financial analysis processes (and little room to improve) rated compatibility as important but disagreed that the new software had improved an already good process.

The finding that less expensive packages were associated with more satisfied buyers was interesting (OR = 0.25). There were many good financial packages available for less than $10,000 in 1996. Practices that spent less than $10,000 bought software packages with few, but very functional, features. Those practices that spent more than $10,000 were purchasing complex systems, perhaps for multiple sites. Financial analysis may have been only a small module of these multi-purpose packages.

In summary, practices that considered value important, that did not consider compatibility important, that selected managed care software, that spent less than $10,000, or that provided learning time during implementation were more likely to perceive that the software had improved the financial analysis process than were other practices.

Observations from both models

In looking over the predictors for the two cross-validated models (scheduling and financial analysis), some predictors naturally belong in one model or the other – e.g., practice management software in the scheduling model and managed care software in the financial analysis model. The themes in the scheduling model center on software features (EMR and practice management software, comparison of software options) and usability (prior user testimony and personal selection by the respondent). The themes in the financial analysis model include cost (software cost, value), software features (managed care software and compatibility), and learning time. This might suggest that the respondents for the financial analysis model had differing roles in the practice than the respondents for the scheduling model. In both of these models, 79% of the respondents were administrators.

Since all types of administrators (e.g., office managers, finance managers) were grouped together, it was impossible to identify the primary role of the administrator who responded. The differences in the models also suggest that the predictors of success differ by the types of activities the software is intended to perform.

It might appear odd that some predictors (e.g., learning time) did not carry through to both models. It is likely that the type and complexity of software package contributed to the learning demands on the office. Many of the respondents who agreed with the financial analysis subscale chose managed care software that bundled together many activities (tracking incoming and outgoing referrals, patient enrollment, capitation accounting, and/or utilization reporting). For practices learning this type of software, protected learning time was an important predictor of success. For practices implementing practice management software (scheduling, billing, and/or accounting spreadsheets), the learning demand was less. This naturally suggests that the decision to reduce the workload while learning a software package should consider the number and complexity of the tasks to be learned.

Limitations and research opportunities

The respondents for this study primarily represented practices that serve Providence Health System in Oregon. These practices served either as managed care providers or as fee-for-service providers. The only practices excluded were pure HMO providers – e.g., Kaiser Permanente. The pure HMO practices were excluded because it was unclear whom to interview regarding software selections; often these practices are given software directly by the parent organization. Eighty-seven percent of the practices in this study had 10 or fewer practitioners. Only 17% of these practices had in-house computer specialists assisting with software selection. The results of this study may not generalize to large practices, which often have in-house computer specialists assisting with selection. A future study could include a nationwide survey of all types of physician practices, regardless of managed care status, ownership, specialty, or size.

This study is retrospective in nature, requiring the respondents to recall a software purchase that occurred several months, perhaps more than a year, earlier. In an "ideal study design," a questionnaire should be distributed to practices that have recently made selections. Another questionnaire addressing the impact on the practice could be sent at a pre-defined follow-up period – e.g., six months after implementation. This "ideal study design" would be difficult to conduct without a sufficient list of practices that have recently purchased software. Perhaps software manufacturers and vendors could provide lists of recent clients (with permission) to interested researchers.

The cross-sectional survey design of this study captured the technical aspects of the selection process (e.g., who was involved, what steps were taken). Although the respondents were given a few "open-ended" questions, most provided little additional information. There could have been additional selection steps, influences, and impacts. It is also possible that the observed changes in impact were related to variables we did not attempt to measure – e.g., the ability and desire of management to implement new technologies and to change existing practice activities. Focus groups might be more effective at capturing underlying management expertise. Another, very time-intensive approach would be to conduct a series of case studies, documenting the decision-making process over time. This research would need support from practices for observers to remain on-site during the selection process. This format would also promote a more well-rounded, multiple-perspective evaluation. The current study relies on the perceptions of respondents (primarily office managers) to measure many variables, including impact variables. Their perceptions were related to business-related practice activities. Only 5.3% of the respondents were clinicians. It is likely that expanding this study to include more clinician responses would reveal perceptions related to other processes – e.g., medical documentation or treatment processes.

The subscales (related to practice activities) were formed from responses to only two to three original impact questions. A stronger design would include several questions related to each practice activity. Given the exploratory nature of this current research, this limitation could not have been foreseen. However, the results of this study open doors for more confirmatory studies to design survey instruments that measure software impact with underlying practice activity constructs. This study does not attempt to demonstrate cause and effect. It would be important to have respondents rate existing practice activities (before purchasing software) to control for a "ceiling effect" – practices with existing good processes have little room to improve. If such a trial were designed, it would also need to control for the type of IT and the needs of the buyer.

To move toward a more direct measure of impact would require the practices to closely measure performance and behavior. For example, in this study, the respondent is asked if the practitioners have an improved ability to consult professional literature online. A direct measurement method would determine the number of online literature consultations before and after the software installation.

Conclusions

The results of this research describe the software selection process as it occurs in physician practices. Using a telephone interview survey gave the researcher (and other interviewers) direct contact with the decision makers in each practice. The results of this study also describe how software is perceived to affect several practice activities.

The objective of this study was to identify relationships (if any) between the IT selection process and the office staff's perceptions of the IT's impact on practice activities. The results of the multiple logistic regression models confirmed relationships between the selection process and the perceived impacts related to the scheduling and financial analysis activities. The results of this study demonstrated a relationship (not cause and effect) between the selection process and the user perception of software usefulness.

Although many of the relationships were expected (e.g., performing software comparisons, interviewing prior users, and selecting certain software features improved perceptions about practice activities), perhaps one of the most important predictors of improvement was reducing the workload during implementation. Despite the importance of this predictor, only 36% of the practices performed this step in this study. If more practices had performed this step, it might have carried even more weight in the analysis. From a practical standpoint, many of the offices selected and implemented IT but expected the staff to learn the software while caring for a full load of patients. Investigators from a previous study by Ambroso et al. [21] cite this expectation as a common reason for IT failure.

One of the secondary findings of this research is that the purchasers of the software (often office managers) had perceptions about the software's use similar to those of users (who were not involved in the selection process). This finding supports the use of a single-survey-response study design for understanding software's perceived impacts on business-related practice activities.

Author comments on prior presentation of results

The results of this study were presented at the Portland International Conference on Management of Engineering and Technology, Portland Oregon, 1997 and 1999. The results were also presented at the Institute for Operations Research and Management Science, Philadelphia, Pennsylvania, 1999. The references for the conference proceedings are listed below.

Eden K, Kocaoglu D. Information Technology Selection Process and Perceived Impacts in Physician Practices. In Technology and Innovation Management. Portland State University, PICMET conference proceedings, 1999, pp. 562–568. Executive summary presented in proceedings, Portland International Conference on Management of Engineering and Technology, Portland, Oregon, 1999, pp. 392–394.

Eden K, Kocaoglu D. Selection of Information Technology in the Health Care Industry. Presented at the Institute for Operations Research and the Management Sciences conference, Philadelphia, Pennsylvania, November 1999.

Eden K, Kocaoglu D. Selection and Implementation of Information Technology in the Health Care Industry. Preliminary results presented at the Portland International Conference on Management of Engineering and Technology, published in proceedings, Portland, Oregon, 1997, pp. 199–202.

Abbreviations

EMR: Electronic Medical Record

IT: Information Technology

OR: Odds Ratio

References

  1. Bolley HB: Physicians in health care management: 6. Physician *bytes* computer. Canadian Medical Association Journal. 1994, 150: 1977-1982.
  2. Eden KB: Selection of Information Technology in the Health Care Industry. Dissertation. Portland: Portland State University; 1997.
  3. Renner K: Cost-justifying electronic medical records. Healthcare Financial Management. 1996, 63-70.
  4. Simpson RL: The role of technology in a managed care environment. The first in a series of three related articles. Nursing Management. 1994, 25: 26-28.
  5. Aronow DB, Coltin KL: Information technology applications in quality assurance and quality improvement, Part I [Review]. Joint Commission Journal on Quality Improvement. 1993, 19: 403-415.
  6. Bakopoulos JY: Toward a more precise concept of information technology. In: International Conference on Information Systems. 1985, 17-24.
  7. King WR, Grover V: The strategic use of information resources: an exploratory study. IEEE Transactions on Engineering Management. 1991, 38: 293-305.
  8. Martin JB: The environment and future of health information systems. Journal of Health Administration Education. 1990, 8: 11-24.
  9. Institute of Medicine: Crossing the Quality Chasm. Washington, D.C.: National Academy Press; 2001.
  10. Wall R: Computer Rx: more harm than good? Journal of Medical Systems. 1991, 15: 321-334.
  11. Elevitch F, Treling C, Spackman K, Weilert M, Aller R, Skinner M, Pasia O: A clinical laboratory information systems survey. A challenge for the decade. Archives of Pathology & Laboratory Medicine. 1993, 117: 12-21.
  12. Simpson RL, Somers JB: The role of the clinical nurse specialist in information systems selection. Clinical Nurse Specialist. 1991, 5: 159-163.
  13. Weaver RR: Assessment and diffusion of computerized decision support systems. International Journal of Technology Assessment in Health Care. 1991, 7: 42-50.
  14. Eden KB, Kocaoglu DF: Selection and Implementation of Information Technology in the Health Care Industry. In: PICMET '97; Portland, Oregon. 1997, 199-202.
  15. Holland GJ: Hospital characteristics associated with adoption of clinical information systems. Ph.D. dissertation. University of Alabama; 1989.
  16. Zinn TK: Healthcare I/S executives look toward the next decade. Computers in Healthcare. 1992, 32-35.
  17. Romano CA: Predictors of nurse adoption of a computerized information system as an innovation. In: Annual Symposium on Computer Applications in Medical Care. 1994, 961.
  18. Chocholik JK, Bouchard SE, Tan JKH, Ostrow DN: The determination of relevant goals and criteria used to select an automated patient care information system: a Delphi approach. JAMIA. 1999, 6: 219-233.
  19. Weiner M, Gress T, Thiemann DR, Jenckes M, Reel SL, Mandell SF, Bass EB: Contrasting views of physicians and nurses about an inpatient computer-based provider order-entry system. JAMIA. 1999, 6: 234-244.
  20. Garrett LE, Hammond WE, Stead WW: The effects of computerized medical records on provider efficiency and quality of care. Methods of Information in Medicine. 1986, 25: 151-157.
  21. Ambroso C, Bowes C, Chambrin MC, Gilhooly K, Green C, Kari A, Logie R, Marraro G, Mereu M, Rembold P, Reynolds M: INFORM: European survey of computers in intensive care units. International Journal of Clinical Monitoring & Computing. 1992, 9: 53-61.
  22. Carr: What Is the Relationship, If Any, Between Nurse Involvement in the Development, Design and Selection of a Hospital Information System (HIS) and Subsequent Utilization of That System? D.B.A. dissertation. Nova University; 1993.
  23. Dunbar C: Nurses want I/S selection power, but do they have it? Computers in Healthcare. 1992, 13.
  24. Gibson RP, Berger S, Ciotti G: Selecting an information system without an RFP. Healthcare Financial Management. 1995.
  25. Neal T: Evaluating and selecting an information system, part 1. American Journal of Hospital Pharmacy. 1993, 50: 117-120.
  26. Remmlinger E, Grossman M: Physician utilization of information systems: bridging the gap between expectations and reality. American Hospital Association. 1991.
  27. Simpson RL: Clinical information systems vs. practicing physicians. Nursing Management. 1992, 23: 14-16.
  28. Ash JS: Factors for information technology innovation diffusion and infusion in health sciences organizations: a systems approach. Portland, Oregon: Portland State University; 1997.
  29. Broderick R, Boudreau JW: Human resource management, information technology, and the competitive edge. Academy of Management Executive. 1992, 6: 7-17.
  30. Henderson J, Venkatraman N: Strategic alignment: leveraging information technology for transforming organizations. IBM Systems Journal. 1993, 32: 4-16.
  31. Rawitz JG, Cowan WY, Paige BM: Justifying costs of software purchases. Healthcare Financial Management. 1995.
  32. Roth: World class health care. Quality Management in Health Care. 1993, 1: 1-9.
  33. Simpson RL: Benchmarking MIS performance. Nursing Management. 1994, 25: 20-21.
  34. Kimberly JR, Evanisko MJ: Organizational innovation: the influence of individual, organizational and contextual factors in hospital adoption of technological and administrative innovations. Academy of Management Journal. 1981, 24: 689-713.
  35. Rogers EM: Diffusion of Innovations. New York: The Free Press; 1995.
  36. Becker MH: American Journal of Public Health. 1970, 294-303.
  37. Singh AK, Moidu K, Trell E, Wigertz O: Impact on the management and delivery of primary health care by a computer-based information system. Computer Methods and Programs in Biomedicine. 1992, 37: 55-64.
  38. Lind MR, Zmud RW: The influence of a convergence in understanding between technology providers and users on information technology innovativeness. Organization Science. 1991, 2: 195-217.
  39. Juang M: The MEDAS Network: Overall Design and Applications at Cook County Hospital (Patient Records, Local Area Network). Ph.D. dissertation. Illinois Institute of Technology; 1992.
  40. Roderer NK, Clayton PD: Bulletin of the Medical Library Association. 1992, 80: 253-262.
  41. Sandiford P, Annett H, Cibulskis R: What can information systems do for primary health care? An international perspective [Review]. Social Science & Medicine. 1992, 34: 1077-1087.
  42. Anderson TR: Physicians' resistance to claims automation. Journal of Health Care Benefits. 1993, 54-56.


Acknowledgments

I would like to thank Dundar Kocaoglu, PhD, Nancy Perrin, PhD, Mara Tableman, PhD, Wayne Wakeland, PhD, Laurie Skokan, PhD, Robert Eder, PhD, and Bruce Bayley, PhD, for providing direction on this study. I would also like to thank Nancy Perrin, PhD, Mark Helfand, MD, MPH, William Hersh, MD, FACP, Joan Ash, PhD, John Beekman, PhD, Jane Beekman, and the BioMed Central reviewers for critically reviewing earlier versions of this manuscript and providing thoughtful feedback. I would like to acknowledge Gary Miranda for his careful editing and suggestions to make this manuscript "reader friendly." This project would not have been possible without the support and funding from Providence Health System and the use of the Portland State University Regional Research Institute. Finally, I would like to thank my husband, Kevin, and my children, Matthew and Erika, who were very supportive during this project.

Author information

Correspondence to Karen Beekman Eden.

Additional information

Competing Interests

None Declared.

Electronic supplementary material


Additional file 1: Scripted telephone survey, "Physician Practice Software Telephone Survey, Dialog and Questions", by K.B. Eden. The file contains the script, questions and pre-coded responses, variable names (in left margins, appearing as >xxxx<), and several logical statements (e.g., goto, if) to lead the interviewer through the interview. (DOC 127 KB)


About this article

Cite this article

Eden, K.B. Selecting information technology for physicians' practices: a cross-sectional study. BMC Med Inform Decis Mak 2, 4 (2002). https://doi.org/10.1186/1472-6947-2-4
