The relationship between user interface problems of an admission, discharge and transfer module and usability features: a usability testing method


Abstract
The admission, discharge and transfer (ADT) module is used in the hospital information system (HIS) for the purposes of managing appointments, patient admission, daily control of hospital beds, planning surgery procedures, keeping up-to-date on patient discharges, and registering patient transfers within or outside the hospital. The present study aimed to evaluate the usability of ADT module of a HIS through usability testing and assess the relationship between the number of user interface problems and usability features (i.e. effectiveness, efficiency, and satisfaction).


This descriptive analytical study was conducted in Shahid Beheshti hospital in Kashan, Iran, in 2017. The participating users were eight students in their last semester of a Bachelor of Health Information Technology Sciences degree. First, the users were introduced to the module functions in a two-hour session; ten days later, the users were asked to perform scenarios designed based on seven tasks and take notes of the problems encountered in performing each task after it was over. Effectiveness was measured based on the rate of completing the tasks, efficiency based on the time taken to perform each task, and satisfaction based on the users’ answers to a satisfaction questionnaire. The relationship between these three usability features and the number of problems noted was assessed using Spearman’s test in SPSS version 16.


Thirteen unique usability problems were identified from the perspective of the users. Effectiveness was rated as 58.9%, efficiency as 53.3%, and mean user satisfaction as 53.4 ± 10.6. The number of problems in each task had significant relationships to the effectiveness (P = 0.009) and efficiency (P = 0.016) scores. User satisfaction also had a significant relationship with the effectiveness (P = 0.043) but not with the efficiency (P = 0.230) scores.


In the view of the potential users, a HIS, used in more than 200 hospitals in a developing country, has several usability problems in its ADT module and its effectiveness, efficiency, and user satisfaction were not acceptable. The number of usability problems in the HIS user interface affected the effectiveness, efficiency and user satisfaction of the system.


Background
Hospital information systems (HISs) are among the most popular health information systems that can increase the efficiency of healthcare providers, improve the quality of care, and increase patient safety by supporting care activities, increasing the speed and accuracy of performing tasks, and reducing errors [1,2,3].

Despite the many benefits of HISs, there are also barriers to their use [4]. One of these barriers is poor usability, which leads to reduced acceptance of the system, increased errors, and reduced user efficiency, and can even adversely affect patient safety [5,6,7]. Poor usability and system failure have been observed in many HISs implemented so far [8,9,10]. The International Organization for Standardization (ISO) defines the usability of systems in terms of three features: effectiveness, efficiency, and satisfaction [11], and user interface problems may be associated with these features. Studies [12, 13] have shown that usability problems such as overcrowded pages and tasks requiring many steps reduce user productivity.

Resolving usability problems can increase user effectiveness and efficiency: in one study [14], effectiveness and efficiency improved significantly after identified usability problems were corrected. Saleem et al. [15] showed that redesigning a clinical reminder system based on the problems identified in its usability evaluation significantly reduced the time taken to complete processes (i.e. improved efficiency). Karahoca et al. [16] found that tasks in an emergency information system were performed significantly faster with a user interface that had a high usability score (i.e. higher efficiency).

The admission, discharge and transfer (ADT) module is used in the HIS for managing appointments, patient admission, daily control of hospital beds, planning surgery procedures, keeping up to date on patient discharges, and registering patient transfers within or outside the hospital [17]. Several studies have identified the usability problems of this module [18,19,20]. A better insight into its usability requires not only identifying its usability problems but also determining the relationship between these problems and the usability features, i.e. effectiveness, efficiency and satisfaction [21, 22]. We found no studies that both identified the usability problems of this module and investigated the relationship between the number of problems and the usability features. The present study was conducted to identify the usability problems of the ADT module in a HIS through usability testing from the perspective of health information technology students as potential system users, and then to assess the relationship between the number of identified problems and the usability features.

Methods
The usability of the ADT module was assessed through usability testing [23], and the usability problems were recorded by potential users. With five to eight users, this method typically identifies 80–85% of a system's usability problems [24,25,26].
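The 80–85% figure follows the problem-discovery model of Nielsen and Landauer [24], in which each evaluator is assumed to detect any given problem independently with some probability. A minimal Python sketch, assuming the commonly cited average single-user detection rate of about 0.31 (the rate varies by system, so this is illustrative only):

```python
def proportion_found(n_users, p=0.31):
    """Expected share of usability problems found by n_users evaluators,
    assuming each user independently detects any given problem with
    probability p (0.31 is an average rate reported by Nielsen)."""
    return 1 - (1 - p) ** n_users

for n in (5, 8):
    print(f"{n} users: {proportion_found(n):.1%}")
```

At this detection rate, five users already find roughly 84% of the problems, which is why small samples such as the eight users in this study are considered sufficient for problem discovery.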

Study population and setting

The participants were eight senior bachelor students of Health Information Technology Sciences who volunteered to take part. The study inclusion criteria were: (1) familiarity with health information management department tasks; (2) computer skills; and (3) no working experience with the ADT module.

This study was conducted in the health information management department of Shahid Beheshti hospital, affiliated to Kashan University of Medical Sciences, in Kashan, Iran, in summer 2017. This hospital is the largest one affiliated with this university and has 615 registered beds.

Description of the evaluated hospital information system

Shafa HIS is developed and supported by Tirajeh Rayaneh Co. and is an information system that organizes key hospital management and clinical activities. This system covers all hospital activities, from patient admission to discharge, based on a patient-oriented approach, and stores all the data in the patient’s electronic health record. At the time of performing this study, more than 200 hospitals in Iran were covered by the services of this company [27].

Assessed tasks

In a session with the manager of the health information management department, and after assessing the department's tasks, seven main and common tasks performed through the ADT module were selected (Table 1).

Table 1 Main and common tasks performed through the ADT module

Scenarios were designed based on these tasks using real data, while preserving patient data confidentiality. These scenarios were modified and approved by the manager of the health information management department.

Evaluation method

This study used the usability testing method to identify the system problems, and three features (i.e. effectiveness, efficiency and user satisfaction) were measured. Usability testing is a user-based usability evaluation method [23]. In this study, the users performed their tasks through the information system and, at the same time, the evaluator took notes of the problems they encountered during interaction with the system.

ISO-based usability features [11]

Effectiveness

The extent to which the user can fully and accurately achieve their goals in performing a task [28,29,30,31]. Effectiveness was measured using the following equation [32]:

$$ \mathrm{Effectiveness}=\left(\left(\mathrm{number}\ \mathrm{of}\ \mathrm{successfully}\ \mathrm{completed}\ \mathrm{tasks}\right)/\left(\mathrm{total}\ \mathrm{number}\ \mathrm{of}\ \mathrm{tasks}\ \mathrm{performed}\right)\right)\ast 100 $$

The range of effectiveness was taken as ‘awful’ (0–50%), ‘bad’ (50–75%), ‘normal’ (75–90%), and ‘good’ (90–100%) [33].
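As an illustration, the effectiveness equation and the rating bands above can be computed as follows. The completion counts (33 of 56 tasks) are taken from the Results section; the placement of the shared band boundaries (e.g. whether exactly 50% is 'awful' or 'bad') is an assumption, since the cited ranges overlap at their endpoints:

```python
def effectiveness(completed_tasks, total_tasks):
    """Completion-rate effectiveness, in percent, per the equation above."""
    return completed_tasks / total_tasks * 100

def effectiveness_band(score):
    # Band labels from the text; half-open boundaries are an assumption.
    if score < 50:
        return "awful"
    if score < 75:
        return "bad"
    if score < 90:
        return "normal"
    return "good"

score = effectiveness(33, 56)   # completion counts reported in the Results
print(f"{score:.1f}% -> {effectiveness_band(score)}")  # 58.9% -> bad
```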

Efficiency

The time taken by the users to perform each task [28, 34, 35], combined with task completion in the following equation [32]:

$$ \mathrm{Efficiency}=\frac{\sum_{j=1}^{R}\sum_{i=1}^{N} n_{ij}/t_{ij}}{N\times R}\times 100 $$

where n_ij is 1 if user j fully completed task i (and 0 otherwise), t_ij is the time user j spent on task i, N is the total number of tasks, and R is the number of users.

In the above formula, a value of one means the user fully completed the task and a value of zero means they did not. Each of these values is divided by the time spent on the corresponding task, and the resulting ratios are summed over all tasks performed by all users. Finally, the sum is divided by the product of the total number of tasks and the number of users to express the system's efficiency as a percentage.
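This calculation can be sketched directly from the description above. The user-task data below are hypothetical, since the study's raw timings are not reproduced in the text:

```python
def time_based_efficiency(results):
    """results maps (user, task) -> (completed, seconds), where completed is
    1 for full completion and 0 otherwise. Implements the equation above:
    each ratio completed/seconds is summed over all user-task pairs, then
    divided by the number of pairs and scaled by 100. len(results) equals
    tasks x users when every user attempts every task, as in this study."""
    total = sum(completed / seconds for completed, seconds in results.values())
    return total / len(results) * 100

# Hypothetical example: one user, two tasks.
results = {
    ("u1", "task1"): (1, 2.0),  # fully completed after 2 s
    ("u1", "task2"): (0, 4.0),  # abandoned after 4 s
}
print(time_based_efficiency(results))  # 25.0
```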

Users' satisfaction

Participants' satisfaction was assessed using the System Usability Scale (SUS) [36]. The validity and reliability of the Persian version of this scale were confirmed in a study by Taheri et al. [37]. The scale measures the perceived ease of use of a system. It contains ten items rated on a five-point Likert scale, and its score is calculated according to Brooke's scoring guideline [36]. Following previous studies [28, 38, 39], a mean score ≤ 50 was taken as poor satisfaction (i.e. 'not acceptable'), a score between 50 and 70 as 'passable' (a system requiring modification), a score in the high 70s to 80s as 'good', and a score of 90 or above as 'superior'.
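Brooke's scoring works as follows: odd-numbered (positively worded) items contribute (response − 1), even-numbered (negatively worded) items contribute (5 − response), and the sum is scaled by 2.5 onto a 0–100 range. A sketch:

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten Likert responses
    (each 1-5), per Brooke's scoring guideline."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses in the range 1-5")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

print(sus_score([3] * 10))    # all-neutral answers score 50.0
print(sus_score([5, 1] * 5))  # the most favourable pattern scores 100.0
```

Note that an all-neutral answer sheet lands exactly on 50, the threshold the study uses for 'not acceptable'.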

Data collection

The participants were first introduced to the ADT module functions in a two-hour session. To prevent any learning effect, they performed the tasks based on the designed scenarios after a washout period of ten days. They were asked to note the usability problems encountered in each task and the reasons for completing or not completing it. One researcher supervised the evaluation session, but the users received no instruction during the task performance stage. Afterwards, the users completed the satisfaction questionnaire. Effectiveness was measured based on the 'completion' or 'non-completion' of the tasks. The users were asked to announce when they had completed each task or when they were no longer able to complete it; efficiency was measured by recording the time taken to perform each task.

Data analysis

Data were analyzed in Excel 2013 using descriptive statistics, and the relationship between the three usability features and the number of usability problems was assessed in SPSS 16.0 (SPSS Inc., Chicago, IL, USA) using Spearman’s test at the significance level of 0.05.
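Spearman's test ranks both variables and then correlates the ranks, which makes it robust to the non-normal, small-sample data here. A self-contained sketch of the coefficient (the per-task counts below are hypothetical, for illustration; in practice `scipy.stats.spearmanr` gives the same coefficient plus a p-value):

```python
def average_ranks(values):
    """Rank each value (1 = smallest), averaging ranks over ties."""
    ordered = sorted(values)
    return [sum(i + 1 for i, v in enumerate(ordered) if v == x) / ordered.count(x)
            for x in values]

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical per-task data: problems found vs. users completing the task.
problems = [1, 2, 2, 5, 3, 1, 1]
completions = [8, 8, 6, 0, 4, 7, 7]
print(round(spearman_rho(problems, completions), 3))  # negative: more problems, fewer completions
```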

Results
Of the 56 tasks performed by the eight users (seven tasks per user), 33 (59%) were completed and 23 (41%) were left incomplete. In total, 36 usability problems were identified in this evaluation, of which 13 remained after the elimination of duplicates. Table 2 shows the problems expressed by the users and their related tasks.

Table 2 The problems identified by the users in usability testing

Effectiveness
Figure 1 presents the results on the tasks' effectiveness. Task 2 (i.e. outpatient admission) was performed by all the users, who cited its simplicity, the lack of any need to know the task's next steps, and the presence of data recording prompts as the reasons for completing it in full. Task 4 (i.e. reporting on the performance of the hospital departments) was not performed completely by any of the users. The reasons they gave for not completing this task were having to perform it in two different parts of the system, the unclear function of the items on the page, the task's many steps, and the lack of help while performing it.

Fig. 1
figure 1

The tasks' success rates

According to the results, the effectiveness of the ADT module was 58.9%. Spearman’s correlation test showed a significant, inverse, linear relationship between effectiveness and the number of problems in each task (P = 0.009, r = − 0.881). Spearman’s test also showed a significant relationship between the effectiveness and efficiency scores of each user (P = 0.039, r = 0.731).

Efficiency
Table 3 presents the mean time taken to perform the tasks. Task 7 (i.e. editing the patient data) took the shortest time. Simplicity, single-step nature of the task, and no need to complete some data elements were the reasons given by the users for the short time taken to perform this task. Task 1 (i.e. inpatient admission) took the longest time. The large number of data elements in the admission form and the fact that the elements were on two separate pages were the reasons given by the users for the long length of time taken to complete this task.

Table 3 Mean time to perform the tasks

According to the above equation, the system's relative overall efficiency was 53.3%. Spearman's correlation test showed a significant, inverse, linear relationship between efficiency and the number of usability problems in each task (P = 0.016, r = −0.847).

Users’ satisfaction

The mean user satisfaction score was 53.4 ± 10.6; according to Fig. 2, satisfaction with the system was at the borderline of acceptability, corresponding to a grade of F. Spearman's test showed a significant relationship between user satisfaction and the task effectiveness score (P = 0.043, r = 0.722), but no significant relationship between user satisfaction and the efficiency score (P = 0.230, r = 0.479).

Fig. 2
figure 2

Overview of modified SUS rating table [38]

Discussion
In this study, 13 unique usability problems in the ADT module were identified by the potential users. Effectiveness was below 60%, efficiency was about 50%, and mean user satisfaction was about 50. Significant inverse relationships were found between the number of problems in each task and both the effectiveness and the efficiency scores. Significant relationships were also found between the effectiveness score and users' satisfaction, and between the users' effectiveness and efficiency scores.

Of the 13 unique usability problems, three were common to all the tasks: (1) an automatically hiding menu bar with an unclear retrieval icon; (2) keys whose function was unclear from their icons; and (3) the absence of help. The users argued that the first problem made accessing the other parts of the system troublesome. Similarly, Li et al. [39] argued that the lack of navigation control was an important problem that made the users' access to the other parts of the system difficult. In the present study, the users reported that the second and third problems were confusing and wasted their time. The results of other usability studies [40, 41] using expert-based evaluation methods confirm the presence of these two problems in other HISs. HIS designers and developers should therefore give high priority to fixing poor navigation control, a problem observed from the joint perspective of experts and users.

In the present study, less than 60% of the tasks were completed, which shows the poor effectiveness of the system. The results also showed that the number of usability problems had an inverse relationship with effectiveness. Similarly, Thyvalikakath et al. [42] argued that there is a strong relationship between the frequency of usability problems in a computer-based patient record system and the users' failure to perform tasks. None of the users in our study was able to fully perform one task (i.e. "reporting on the performance of the hospital departments"), which had the largest number of problems of all the tasks. The reasons given for not performing this task included having to perform it in two different parts of the system, the unclear function of the items on the page, the task's multiple steps, and the absence of help while performing it. In contrast, another task ("outpatient admission") was performed in full by all the users and had the smallest number of problems; the users attributed this to its simplicity, the lack of any need to know its next steps, and its data recording prompts.

The efficiency of the evaluated system was about 50%, which is considered poor. The "inpatient admission" task took the longest time; according to the users, the large number of data elements in the admission form and their placement on two separate pages prolonged this task. The "editing the patient data" task took the shortest time, which the users attributed to its simplicity, its single-step nature, and there being no need to complete some of the data elements. The present findings also showed that a larger number of usability problems decreases efficiency. In another study [43], user interface problems affected nurses' efficiency in using an electronic medication administration record application. With more usability problems in the user interface, the user has to spend more time performing a task, and their efficiency therefore falls. The results also showed a significant relationship between each user's efficiency and effectiveness scores. Similarly, Georgsson et al. [28] showed that users who successfully completed their tasks (i.e. greater effectiveness) also performed them more quickly (i.e. higher efficiency).

The present findings showed that overall user satisfaction with the system was low, and there was a significant relationship between the satisfaction and effectiveness scores. For example, the "edit" icon for editing the patient record number was poorly located on the page, and finding it took a long time or led to unsuccessful task completion; the researcher observed directly that this problem caused user dissatisfaction. Another study [31] showed that user satisfaction increases when the system is efficient and effective and the users can perform their tasks better. Moreover, other studies [44, 45] showed that user satisfaction with a CPOE system is strongly tied to the system's ease of use, efficiency, and response time.

The present study was conducted with the participation of potential users in a real setting and using realistic scenarios. To the researchers' knowledge, students had not previously been used as potential users to identify the usability problems of HISs. The study first identified the usability problems and measured three usability features (i.e. effectiveness, efficiency and satisfaction), and then assessed the relationship between the number of problems and these features. The study also had some limitations. Owing to the homogeneity of the study population, the results could not be analyzed by the users' demographic features. The small number of users could have affected the statistical generalizability of the results. In addition, because not all the tasks performed through the ADT module were included, the users could not carry out a systematic search of the system's problems; the ADT module may therefore have other problems that remain unidentified.

The results revealed the participants' dissatisfaction with the ADT module and its inadequate effectiveness and efficiency. The researchers recommend redesigning the parts of the system that showed a large number of problems. A larger-scale study with more, randomly selected users is recommended to enable comparison of the users' demographic features and of functional parameters such as the task completion rate and the time taken to perform the tasks.

Conclusions
In the view of the potential users, the examined HIS, which is used in more than 200 hospitals in a developing country, has many usability problems in the user interface of its ADT module. Moreover, the system's effectiveness and efficiency and the users' satisfaction were not at an acceptable level. The number of problems identified in the HIS user interface affected the effectiveness and efficiency of the system and the users' satisfaction. These usability problems should be resolved to improve these features before and during system use.

Availability of data and materials

The data generated and analyzed during this study are available from the corresponding author on reasonable request.

Abbreviations

ADT: Admission, discharge and transfer


CPOE: Computerized physician order entry


HIS: Hospital information system


ISO: International Organization for Standardization


SUS: System usability scale

References

1. Giuse DA, Kuhn KA. Health information systems challenges: the Heidelberg conference and the future. Int J Med Inform. 2003;69(2–3):105–14.
2. Abdelhak M, Grostick S, Hanken MA. Health information: management of a strategic resource. Elsevier Health Sciences; 2014.
3. Farzandipur M, Jeddi FR, Azimi E. Factors affecting successful implementation of hospital information systems. Acta Inform Med. 2016;24(1):51–5.
4. Sagiroglu O, Ozturan M. Implementation difficulties of hospital information systems. Inf Technol J. 2006;5(5):892–9.
5. Horsky J, Zhang J, Patel VL. To err is not entirely human: complex technology and user cognition. J Biomed Inform. 2005;38(4):264–6.
6. Lowry SZ, Brick D, Gibbons MC, Latkany P, Lowry SZ, Patterson ES, et al. Technical evaluation, testing, and validation of the usability of electronic health records: empirically based use cases for validating safety-enhanced usability and guidelines for standardization. National Institute of Standards and Technology; 2015.
7. Middleton B, Bloomrosen M, Dente MA, Hashmat B, Koppel R, Overhage JM, Payne TH, Rosenbloom ST, Weaver C, Zhang J. Enhancing patient safety and quality of care by improving the usability of electronic health record systems: recommendations from AMIA. J Am Med Inform Assoc. 2013;20(e1):e2–8.
8. Johnson KB. Barriers that impede the adoption of pediatric information technology. Arch Pediatr Adolesc Med. 2001;155(12):1374–9.
9. Lærum H, Ellingsen G, Faxvaag A. Doctors' use of electronic medical records systems in hospitals: cross sectional survey. BMJ. 2001;323(7325):1344–8.
10. Smith A. Human computer factors: a study of users and information systems. London: McGraw-Hill; 1997.
11. Bevan N. International standards for usability should be more widely used. J Usability Stud. 2009;4(3):106–13.
12. Kuqi K, Eveleigh T, Holzer T, Sarkani S, Levin JE, Crowley RS. Design of electronic medical record user interfaces: a matrix-based method for improving usability. J Healthc Eng. 2013;4(3):427–51.
13. Sengstack P. CPOE configuration to reduce medication errors. J Healthc Inf Manag. 2010;24(4):26–34.
14. Peute LW, de Keizer N, van der Zwan EP, Jaspers MW. Reducing clinicians' cognitive workload by system redesign; a pre-post think aloud usability study. MIE. 2011;2011:925–9.
15. Saleem JJ, Patterson ES, Militello L, Anders S, Falciglia M, Wissman JA, Roth EM, Asch SM. Impact of clinical reminder redesign on learnability, efficiency, usability, and workload for ambulatory clinic nurses. J Am Med Inform Assoc. 2007;14(5):632–40.
16. Karahoca A, Bayraktar E, Tatoglu E, Karahoca D. Information system design for a hospital emergency department: a usability analysis of software prototypes. J Biomed Inform. 2010;43(2):224–32.
17. Liljegren E. Usability in a medical technology context assessment of methods for usability evaluation of medical equipment. Int J Ind Ergon. 2006;36(4):345–52.
18. Ebnehoseini Z, Tara M, Meraji M, Deldar K, Khoshronezhad F, Khoshronezhad S. Usability evaluation of an admission, discharge, and transfer information system: a heuristic evaluation. Open Access Maced J Med Sci. 2018;6(11):1941–5.
19. Ehteshami A, Sadoughi F, Saeedbakhsh S, Isfahani MK. Assessment of medical records module of health information system according to ISO 9241-10. Acta Inform Med. 2013;21(1):36.
20. Farzandipour M, Nabovati E, Zaeimi GH, Khajouei R. Usability evaluation of three admission and medical records subsystems integrated into nationwide hospital information systems: heuristic evaluation. Acta Inform Med. 2018;26(2):133.
21. Frøkjær E, Hertzum M, Hornbæk K. Measuring usability: are effectiveness, efficiency, and satisfaction really correlated? In: Proceedings of the SIGCHI conference on Human Factors in Computing Systems; April 1–6, 2000; The Hague, The Netherlands. New York: ACM Press; 2000. p. 345–52.
22. Lyles CR, Sarkar U, Osborn CY. Getting a technology-based diabetes intervention ready for prime time: a review of usability testing studies. Curr Diabetes Rep. 2014;14(10):534.
23. Rubin J, Chisnell D. Handbook of usability testing: how to plan, design and conduct effective tests. 2nd ed. Indianapolis: Wiley; 2008.
24. Nielsen J, Landauer TK. A mathematical model of the finding of usability problems. In: Proceedings of the INTERACT '93 and CHI '93 conference on Human Factors in Computing Systems; April 24–29, 1993; Amsterdam, The Netherlands. New York: ACM; 1993.
25. Virzi RA. Refining the test phase of usability evaluation: how many subjects is enough? Hum Factors. 1992;34(4):457–68.
26. Jaspers MW. A comparison of usability methods for testing interactive health technologies: methodological aspects and empirical evidence. Int J Med Inform. 2009;78(5):340–53.
27. Tirajeh Rayaneh Tehran Software Engineering Company. Available at: Accessed 20 Aug 2019.
28. Georgsson M, Staggers N. Quantifying usability: an evaluation of a diabetes mHealth system on effectiveness, efficiency, and satisfaction metrics with associated user characteristics. J Am Med Inform Assoc. 2015;23(1):5–11.
29. Kushniruk AW, Borycki EM, Kuwata S, Kannry J. Emerging approaches to usability evaluation of health information systems: towards in-situ analysis of complex healthcare systems and environments. Stud Health Technol Inform. 2011;169:915–9.
30. Peute LWP, Jaspers MWM. The significance of a usability evaluation of an emerging laboratory order entry system. Int J Med Inform. 2007;76(2):157–68.
31. Yen P-Y, Bakken S. A comparison of usability evaluation methods: heuristic evaluation versus end-user think-aloud protocol – an example from a web-based communication tool for nurse scheduling. AMIA Annu Symp Proc. 2009;2009:714–8.
32. Justin M. Usability metrics – a guide to quantify the usability of any system. Available at: Accessed 20 Aug 2019.
33. Sergeev A. User interfaces design and UX/usability evaluation. Available at: Accessed 20 Aug 2019.
34. Khajouei R, Zahiri Esfahani M, Jahani Y. Comparison of heuristic and cognitive walkthrough usability evaluation methods for evaluating health information systems. J Am Med Inform Assoc. 2017;24(e1):e55–60.
35. Svanaes D, Das A, Alsos OA. The contextual nature of usability and its relevance to medical informatics. Stud Health Technol Inform. 2008;136:541–6.
36. Brooke J. SUS: a quick and dirty usability scale. Usability Eval Ind. 1996;189(194):4–7.
37. Taheri F, Kavusi A, Faghihnia Torshozi Y, Farshad AA, Saremi M. Assessment of validity and reliability of Persian version of system usability scale (SUS) for traffic signs. Iran Occup Health J. 2017;14(1):12–22.
38. Bangor A, Kortum P, Miller J. Determining what individual SUS scores mean: adding an adjective rating scale. J Usability Stud. 2009;4(3):114–23.
39. Li LC, Adam PM, Townsend AF, Lacaille D, Yousefi C, Stacey D, Gromala D, Shaw CD, Tugwell P, Backman CL. Usability testing of ANSWER: a web-based methotrexate decision aid for patients with rheumatoid arthritis. BMC Med Inform Decis Mak. 2013;13(1):131.
40. Atashi A, Khajouei R, Azizi A, Dadashi A. User interface problems of a nationwide inpatient information system: a heuristic evaluation. Appl Clin Inform. 2016;7(1):89–100.
41. Khajouei R, Azizi A, Atashi A. Usability evaluation of an emergency information system: a heuristic evaluation. J Health Admin. 2013;16(52):61–72.
42. Thyvalikakath TP, Monaco V, Thambuganipalle HB, Schleyer T. A usability evaluation of four commercial dental computer-based patient record systems. J Am Dent Assoc. 2008;139(12):1632–42.
43. Guo J, Iribarren S, Kapsandoy S, Perri S, Staggers N. Usability evaluation of an electronic medication administration record (eMAR) application. Appl Clin Inform. 2011;2(2):202–24.
44. Lee F, Teich JM, Spurr CD, Bates DW. Implementation of physician order entry: user satisfaction and self-reported usage patterns. J Am Med Inform Assoc. 1996;3(1):42–55.
45. Murff HJ, Kannry J. Physician satisfaction with two order entry systems. J Am Med Inform Assoc. 2001;8(5):499–509.


Acknowledgements
We are grateful to the manager and the employees of the health information management department of the Shahid Beheshti hospital and Ali Razzaghi for their contributions to this study.

Funding
No funding was obtained for this study.

Author information

Authors and Affiliations



EN, FRJ, and RK designed the study. EN supervised the project. RF and MSJ performed the experiments. EN and RF analyzed the data. RF wrote the final manuscript. All authors discussed the results and reviewed and approved the final manuscript.

Corresponding author

Correspondence to Ehsan Nabovati.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the Research Ethics Committee of the Kashan University of Medical Sciences Research Council (number 96049) on 30 June 2017 and conducted following the guidelines of the Declaration of Helsinki. In accordance with the opinion of this Ethics Committee, and given that no identifying information about the participants is provided in this paper, the participants gave verbal consent to take part in the research.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Farrahi, R., Rangraz Jeddi, F., Nabovati, E. et al. The relationship between user interface problems of an admission, discharge and transfer module and usability features: a usability testing method. BMC Med Inform Decis Mak 19, 172 (2019).


Keywords
  • Hospital information system
  • Usability evaluation
  • User Interface
  • Effectiveness
  • Efficiency
  • Satisfaction