
An intelligent decision support system for acute postoperative endophthalmitis: design, development and evaluation of a smartphone application

Abstract

Background

Today, clinical decision support systems based on artificial intelligence can significantly help physicians in the correct diagnosis and rapid treatment of endophthalmitis, one of the most important causes of blindness among emergency eye diseases. This study aimed to design, develop, and evaluate an intelligent decision support system for acute postoperative endophthalmitis.

Methods

This study was conducted in 2020–2021 in three phases: analysis, design and development, and evaluation. The user needs and the features of the system were identified through interviews with end users. Data were analyzed using thematic analysis. The list of clinical signs of acute postoperative endophthalmitis was provided to ophthalmologists for prioritization. Four algorithms (support vector machine, decision tree classifier, k-nearest neighbors, and random forest) were used in the design of the computing core of the system for disease diagnosis. The acute postoperative endophthalmitis diagnosis application was developed for use by physicians and patients. Based on the data of 60 acute postoperative endophthalmitis patients, 143 acute postoperative endophthalmitis records and 12 non-acute postoperative endophthalmitis records were identified. The learning process of the algorithms was performed on 70% of the data, and the remaining 30% was used for evaluation.

Results

The most important features of the application for physicians were selecting clinical signs and symptoms, predicting diagnosis based on artificial intelligence, physician–patient communication, selecting the appropriate treatment, and easy access to scientific resources. The results of the usability evaluation showed that the usability of the application was good, with a mean (± SD) score of 7.73 ± 0.53 out of 10.

Conclusion

A decision support system achieving 100% accuracy, precision, sensitivity, specificity, negative predictive value, F-measure, and area under the precision-recall curve was created thanks to widespread participation, the use of clinical specialists' experiences and their awareness of patients' needs, and the availability of a comprehensive acute postoperative endophthalmitis clinical dataset.


Background

Endophthalmitis is one of the most important causes of blindness in emergency eye diseases [1, 2]. Endophthalmitis affects the inner layers of the eye and is associated with substantial and progressive inflammation of the vitreous [3]. Acute postoperative endophthalmitis (APE) is a type of exogenous endophthalmitis that usually occurs after any invasive eye surgery, and is the most devastating complication after intraocular procedures [4]. With an incidence rate of 0.04–4% worldwide, this type of endophthalmitis is the most common type of the disease, and in 90% of cases, the disease occurs after cataract surgery [4]. A study conducted in an Eye Institute showed that the incidence of endophthalmitis after cataract surgery was 0.09% in a 10-year period [5]. The results of a review article conducted in Iran from January 2015 to February 2016 show that the incidence of endophthalmitis after cataract surgery is 0.02–0.1% [6]. In case of early diagnosis of endophthalmitis, the destruction of the eye structure can be somewhat prevented to preserve vision. Therefore, early diagnosis and proper treatment are critically important [7, 8].

Mobile health has created new opportunities to provide healthcare services. The need for quick access to information, physicians’ correct diagnosis and early action based on clinical guidelines, and patients’ early presentation has led to the use of this technology by physicians and other healthcare staff. This technology has been increasingly used by medical specialists, including ophthalmologists, in clinical and educational settings [9, 10].

In the last decade, ophthalmologists have used electronic health record systems, clinical decision support systems (CDSSs), office management, and video consultations through smartphones and tablets to diagnose and treat eye diseases [11, 12]. The smartphone-based intelligent decision support system is a type of mobile health platform that assists ophthalmologists in diagnosis and treatment, and as a result, increases the accuracy of diagnosis and reduces treatment costs. Romero et al., (2019) designed a decision support system based on fuzzy rules to help diabetic retinopathy (DR) screening programs [13]. The OphthalDSS decision support application was designed for the diagnosis of the red eye at the University of Valladolid, Spain. This application is capable of diagnosing over 30 eye diseases and can be used as an educational tool [14].

Few clinical computer systems are currently used in ophthalmology, mostly as a guideline for eye diseases or for teaching ophthalmic surgical skills [15]. The biggest challenge and error in the ophthalmology emergency department, from an ophthalmologist’s perspective, is related to the ability to distinguish endophthalmitis from common postoperative inflammations in a timely manner [16, 17]. To the best of our knowledge, this study is the first of its kind to design a smartphone-based CDSS to assist physicians with early diagnosis of endophthalmitis and also help patients present to medical centers early so as to improve the quality of care and prevent complications of the disease.

Methods

Setting

The study was conducted with the participation of a multi-specialty team from Khatam-Al-Anbia Eye Hospital, Mashhad and Kashan University of Medical Sciences in 2020–2021. Khatam-Al-Anbia Eye Hospital is the only educational, research, and healthcare center affiliated with Mashhad University of Medical Sciences and the ophthalmology hub of Northeast Iran. This hospital has 64 licensed beds, 11 specialized clinics, paraclinical units (angiography, optometry, imaging and laboratory), 19 operating rooms, 32 recovery beds and a pediatric anesthesia room. The hospital provides ophthalmology specialty and subspecialty services to 250,000 patients annually.

Study design

The study was conducted in three phases of analysis, design and development, and evaluation.

Research and development team

The research team included ophthalmologists (retina fellows), health information technology and medical informatics specialists, fellow assistants, and second and third year ophthalmology residents. In the first phase, six ophthalmologists (retina fellows), five health information technology and medical informatics specialists, and five fellow assistants and ophthalmology residents participated. In the second phase, two application designers were added to the team. In the third phase, all the second and third year ophthalmology residents (n = 18) and an ophthalmologist working in the emergency room were enrolled.

Study phases

Needs assessment and analysis

Literature review

In order to identify the clinical signs of APE and also valid clinical guidelines for the disease, all related sources including books (the 9th and 12th Editions of the American Academy of Ophthalmology [18, 19]), articles, reputable websites such as the website of the Ministry of Health of Iran [20], and the website of the American Academy of Ophthalmology [21], Endophthalmitis Vitrectomy Study [22], and the clinical guidelines of the European Society of Cataract and Refractive Surgeons [23] were reviewed for the prevention and treatment of endophthalmitis after cataract surgery. To determine the features of the decision support application, articles published between 2010 and 2020 and indexed in PubMed, Scopus and Web of Science as well as websites and similar CDSSs were examined to prepare an initial list of their functional features.

Interviews

To determine the needs and expectations of users and the content of the application, several semi-structured interviews based on an interview guide were conducted with ophthalmologists from April 2021 to June 2021 (Supplementary file 1: Table S1). Arrangements were made to invite ophthalmologists (retina fellows) and residents for in-person interviews. The interviews were conducted in a comfortable place, and each lasted between 40 and 120 min. The participants gave their informed consent to participate; the interviews were audio-recorded with their consent, and the interviewer took key notes. The participants were also assured of the confidentiality of the information and the anonymous publication of the study results. After the completion of each interview, the recording was transcribed and the concepts were coded. Due to the small number of participants, the analysis of conceptual codes and themes was done manually. Data saturation was achieved after 11 interviews.

Analysis

Braun and Clarke's thematic analysis was used to investigate the content of the interviews. This analysis allows continuous back-and-forth among datasets and code sets, and analysis of the generated data (Fig. 1) [24].

Fig. 1

Braun and Clarke’s six-step thematic analysis [24]

Based on this method, the audio-recorded interviews were first transcribed by the researcher. The transcripts were reread to become more familiar with the data. After extracting the basic concepts from the transcripts, two evaluators performed open coding independently (coded by evaluator two and reviewed by evaluator one) and subthemes were drawn. For example, several concepts (codes) related to drug dosage and therapeutic procedures for endophthalmitis were merged into a subtheme called selecting the appropriate treatment. Two or more subthemes were merged to form a main theme, and the themes were reviewed again to present the features that ophthalmologists needed. The transcripts of all interviews were examined and coded using Microsoft Word 2016. The desired technical features of the application were also identified in a focus group meeting attended by health information technology and medical informatics specialists. The final list of features was then prepared by merging the features drawn from the interviews and the focus group meeting.

Design and development

Constructing model

Data collection

The list of clinical signs of APE was prepared based on scientific sources and clinical guidelines and provided to retina fellows for prioritization and weighting. The list of clinical signs was completed based on the clinical data of 60 patients with endophthalmitis admitted to the study hospital from December 2018 to June 2020.

Data preparation

During the data preparation and cleaning process, missing data were replaced by multiplying the median of each feature by the weight assigned to that feature by each specialist. Because the clinical signs were multivariate, 155 records were obtained after pre-processing and conversion to univariate form. Of these 155 records, 143 APE records and 12 non-APE records remained for further analysis.
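As a concrete illustration, the median-times-weight imputation described above can be sketched as follows; the feature values and expert weights here are invented for the example, not taken from the study's dataset.

```python
import numpy as np

# Toy matrix: rows = patients, columns = clinical signs.
# Values and the expert weights below are illustrative only.
X = np.array([
    [7.0, 1.0],
    [np.nan, 1.0],
    [5.0, np.nan],
    [6.0, 0.0],
])
weights = np.array([0.9, 1.0])  # hypothetical expert-assigned weights

# Replace each missing value with the feature's median
# multiplied by that feature's weight.
for j in range(X.shape[1]):
    fill = np.nanmedian(X[:, j]) * weights[j]
    X[np.isnan(X[:, j]), j] = fill
```

For the first column the median of the observed values is 6.0, so the missing entry becomes 6.0 × 0.9 = 5.4.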

Data partition

Because the dataset was small, machine learning was needed for the application to detect the presence or absence of the disease. Therefore, 4 algorithms were compared to determine the most appropriate one with the best diagnostic performance.

We used Support Vector Machine (SVM), Decision Tree (DT), K-Nearest Neighbors (KNN) and Random Forest (RF) algorithms to build a classifier using training data.

The following machine learning algorithms are commonly used for classification and regression problems [25]. Each algorithm was implemented using the corresponding class of the scikit-learn library in the Python programming language.

SVM: SVM is an extension of the support vector classifier, obtained by enlarging the feature space in a specific way using a kernel [26]. Here, the radial (RBF) kernel with default parameters was used to classify patients into two categories, "disease-affected" and "disease-free".
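A minimal sketch of this classifier with scikit-learn, using synthetic data in place of the study's (non-public) records; the number of features is an arbitrary choice for the example:

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic stand-in for the APE data; 8 features is arbitrary.
X, y = make_classification(n_samples=155, n_features=8, random_state=0)

# "rbf" (the radial kernel) is scikit-learn's default; C and gamma
# are likewise left at their defaults, as described in the text.
clf = SVC(kernel="rbf")
clf.fit(X, y)
labels = clf.predict(X)  # 0 = "disease-free", 1 = "disease-affected"
```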

DT: A decision tree is a set of conditions organized in a hierarchical structure. It is a predictive model in which an instance is classified by following the path whose conditions it satisfies from the root of the tree down to a leaf, which corresponds to a class label; a conditional test on the data's features is applied at each branch to predict the output [25]. In the study's implementation, a decision_tree(X_Train, Y_Train) function built a decision tree model using the CART method on the training data, drew its graph, and returned the model as output.
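A hedged sketch of such a decision_tree-style helper: scikit-learn's DecisionTreeClassifier implements an optimized CART, and a text rendering of the rules stands in here for the graph the study drew.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

def decision_tree(X_train, y_train):
    """Fit a CART-style tree on the training data, print its rules,
    and return the fitted model (mirrors the function described above)."""
    model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
    print(export_text(model, max_depth=2))  # stands in for drawing a graph
    return model

# Synthetic stand-in for the study's training data.
X, y = make_classification(n_samples=155, n_features=8, random_state=0)
tree = decision_tree(X, y)
```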

KNN: In pattern recognition, the KNN algorithm is an instance-based learning method used to classify objects based on their closest training examples in the feature space [25]. In this method, the three nearest neighbors (K = 3) of each subject were found in the entire training dataset (APE and non-APE). The labels of these neighbors indicate how many of the three are diseased and how many are healthy, and the most frequent label is assigned to the subject under examination.
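A small deterministic sketch of the K = 3 majority vote; the one-dimensional values and labels are made up for illustration:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Tiny illustrative training set: label 1 = APE, label 0 = non-APE.
X_train = np.array([[1.0], [1.2], [1.1], [5.0], [5.2]])
y_train = np.array([1, 1, 1, 0, 0])

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

# The three nearest neighbours of 1.05 (at 1.0, 1.1 and 1.2) all carry
# label 1, so the majority vote assigns label 1 (APE).
print(knn.predict([[1.05]]))  # -> [1]
```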

RF: An RF classifier consists of a number of trees, each of which is grown using some form of randomness. In a standard tree, each node is split using the best split among all the variables, whereas in an RF each node is split using the best split among a randomly chosen subset of the predictors [25].

RF is essentially an ensemble of DTs in which each tree votes for a class for a given sample, and the most frequent answer wins the vote [27].

To this end, the dataset of this study, consisting of 143 APE records and 12 non-APE records, was divided into two groups, the training dataset, and the testing dataset. The learning process of the algorithm was performed on 70% of the data (n = 108) in the training dataset, and performance evaluation of the algorithm was performed using 30% of the data (n = 47) in the testing dataset.
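The split and the four classifiers can be sketched together as follows; synthetic data with the study's 143/12 class imbalance stands in for the real records, and the 70/30 split reproduces the 108/47 partition:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: 155 records, roughly 12 non-APE (0) vs 143 APE (1).
X, y = make_classification(n_samples=155, n_features=8,
                           weights=[12 / 155, 143 / 155], random_state=0)

# 70% training / 30% testing; stratified so the small non-APE class
# appears in both partitions.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)

models = {
    "SVM": SVC(kernel="rbf"),
    "DT": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=3),
    "RF": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, round(model.score(X_test, y_test), 2))
```

With 155 records, a 0.30 test fraction yields exactly 47 test and 108 training records, matching the counts reported above.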

Evaluation of the model performance

The effectiveness of the application was investigated using the confusion matrix. Classifying individuals as APE or non-APE with the confusion matrix produced four cases: true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). From the confusion matrix, the four classification indices accuracy, precision, sensitivity, and specificity were calculated (Table 1).
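A brief sketch of how these indices come out of the confusion matrix; the label vectors below are invented for the example, not the study's results:

```python
from sklearn.metrics import confusion_matrix

# Illustrative predictions for six test cases (1 = APE, 0 = non-APE).
y_true = [1, 1, 1, 0, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
sensitivity = tp / (tp + fn)   # also called recall / true positive rate
specificity = tn / (tn + fp)   # true negative rate
```

Here TP = 3, TN = 1, FP = 1, FN = 1, giving a precision and sensitivity of 0.75 and a specificity of 0.5.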

Table 1 Common metrics in model evaluation [28]

CDSS design

The user interface of the decision support application was designed in Microsoft Visio 2016 based on the determined functional features. The user interface was presented to the interviewees in face-to-face meetings for re-examination, and final modifications were made based on their comments. The user interface designed in Microsoft Visio was then provided to the programmer for coding.

Usability evaluation

After registering as a hypothetical physician and patient, the procedure for working with the different parts of APE Dx was explained to the residents in person. The application was then reviewed by 19 ophthalmologists over two weeks. After the reviews, the Questionnaire for User Interface Satisfaction (QUIS) [29], whose validity [30, 31] and reliability [32] had previously been confirmed, was filled out for usability evaluation. QUIS consists of five main domains with 27 items rated on a 10-point (0–9) Likert scale (Supplementary file 2: Table S2). To analyze the usability data, the mean (± standard deviation (SD)) score on each domain was first calculated, and the scores were then classified into three levels: poor [0–3], average (3–6] and good (6–9].

Results

Needs assessment and analysis

In the focus group meeting attended by six retina fellows, consensus was achieved to refer to the 12th Edition of the American Academy of Ophthalmology [19] and the European Society of Cataract and Refractive Surgeons [23] for the clinical guideline of endophthalmitis. The clinical signs of the disease drawn from the literature review were prioritized and weighted (from 1 to 10) by the experts (Table 2).

Table 2 Average weight and priority of clinical signs of endophthalmitis

The interviewees who participated in this phase were five residents and six faculty members (nine men and two women). The thematic analysis of the interview transcripts yielded 18 primary themes and 9 subthemes, from which the evaluators finally generated two main themes (Supplementary file 3: Table S3).

The results from the literature review and interviews yielded seven functional features for physicians and five functional features for patients in the application (Table 3).

Table 3 Application’s functional features identified for physicians and patients

The application design and development phase

The APE clinical decision support application (APE Dx) was designed based on the functional features determined in the back-end and front-end modules. The artificial intelligence algorithm was coded in the Python programming language. The front-end module of the physician–patient user interface was written using PWA and jQuery, and the back-end module was written using Django. The application is web-based and available anywhere, at any time, through all browsers and operating systems. After designing the initial version, the application link was provided to ophthalmologists for review and approval. The application bugs identified by the experts were fixed and the final version was developed.

The main components of the physician–patient user interface are thematically illustrated in Fig. 2 and screenshots of the decision support application in Figs. 3, 4, 5 and 6. (Supplementary file 4: Fig. S1).

Fig. 2

The main components of physician–patient user interface

Fig. 3

Signs & symptoms from physician interface

Fig. 4

Diagnosis from physician interface

Fig. 5

Patient questionnaire from patient interface

Fig. 6

Information sharing from patient interface

Evaluation of the application

The criteria calculated in this research include accuracy, precision, sensitivity, specificity, NPV and F-measure.

These criteria were calculated from the confusion matrix of the application (Table 4).

Table 4 Confusion matrix
$$\mathrm{Accuracy }=\frac{\mathrm{TP}+\mathrm{TN}}{\mathrm{TP}+\mathrm{TN}+\mathrm{FP}+\mathrm{FN}}=\frac{42+5}{47}= 1$$
$$\mathrm{Precision }=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}}= \frac{42}{42+0}= 1$$
$$\mathrm{Sensitivity }=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}= \frac{42}{42+0}= 1$$
$$\mathrm{Specificity }=\frac{\mathrm{TN}}{\mathrm{TN}+\mathrm{FP}}=\frac{5}{5+0}= 1$$
$$\mathrm{NPV}=\frac{\mathrm{TN}}{\mathrm{TN}+\mathrm{FN}}=\frac{5}{5+0}= 1$$
$$\mathrm{F}-\mathrm{measure}=\frac{2 * \mathrm{T}\mathrm{P}}{(2 * \mathrm{T}\mathrm{P} + \mathrm{F}\mathrm{P}+\mathrm{F}\mathrm{N}) }=\frac{2 *42}{(2 * 42 + 0+0) }= 1$$

In all 4 algorithms reviewed, the decision support application's sensitivity was 100%, meaning that it correctly diagnosed 100% of the participants with APE. The application's specificity was also 100%, meaning that it correctly classified 100% of the individuals without the disease as non-APE. With a precision of 100%, the application successfully distinguished true disease cases from false positives. Overall, 100% of instances with and without APE were correctly diagnosed by the application. Combining precision and sensitivity, the F-measure was 1, which indicates the high accuracy of this decision support system in diagnosing the disease.

In addition, the Receiver Operating Characteristic (ROC) curve and the precision-recall (sensitivity) curve were drawn, which show how well the machine learning model distinguishes the two categories, diseased and healthy.

The same precision-recall curve was obtained for all 4 algorithms (Fig. 7).

Fig. 7

Precision-recall curve

The ROC curve is widely used to show the performance of a model or algorithm. It provides performance information across a range of thresholds and can be summarized by a single number, the area under the curve (AUC) [33]. Typically, an ROC analysis shows how sensitivity (true positive rate) trades off against specificity (true negative rate, equal to 1 – false positive rate) across different thresholds.

The curve was the same for all 4 algorithms and the value of AUC was obtained 1 (Fig. 8).
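The AUC of 1 reported above corresponds to a model whose scores rank every diseased case above every healthy case, which can be checked with scikit-learn; the score vector here is invented for illustration:

```python
from sklearn.metrics import roc_auc_score

# Illustrative labels (1 = APE) and model scores: every diseased case
# receives a higher score than every healthy one, i.e. perfect ranking.
y_true = [0, 0, 1, 1, 1]
scores = [0.10, 0.20, 0.80, 0.90, 0.95]

print(roc_auc_score(y_true, scores))  # -> 1.0
```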

Fig. 8

ROC curve

The dataset was divided into two categories: healthy, labelled 0, and patient, labelled 1. When the dataset was evaluated with the four algorithms, the performance criteria of the models were identical. All criteria reached 100% because the feature values of the healthy records were zero while those of the patient records were greater than zero, so every algorithm could easily separate the healthy from the sick.

According to the obtained results, all the algorithms in this study worked successfully, and all the evaluation criteria show very high performance. Therefore, to choose the best algorithm for this decision support system, the range and type of data, data volume, speed and efficiency, and specific needs should be considered. According to expert opinion, the decision tree is one of the best classifiers due to its simple interpretation and expressive quality [34]. Therefore, the decision tree algorithm was used in the decision support system (Fig. 9).

Fig. 9

The trained decision tree

The demographic characteristics of the residents participating in usability evaluation are shown in Table 5.

Table 5 Demographic characteristics of users participating in usability evaluation

The results of the usability evaluation showed that the usability of the application was good, with a mean (± SD) score of 7.73 ± 0.53, and that it obtained the highest scores on the learning and screen domains (Table 6).

Table 6 The results of usability evaluation of the APE Dx application

Discussion

In this study, an intelligent clinical decision support application (APE Dx) was designed and developed for APE diagnosis. The most important features of this application for physicians were, respectively, selecting clinical signs and symptoms, predicting diagnosis based on artificial intelligence, physician–patient communication, selecting the appropriate treatment, easy access to scientific resources, summary and reporting, and assigning the English language. The most important features for patients were, respectively, playing audio files, uploading images, sharing data with physicians, patient–physician communication, education (access to clinical information), and assigning the Persian language. The accuracy, precision, sensitivity, specificity, NPV, F-measure and AUC of the application were all 100%. The usability of the application was rated as good by the residents as well.

An artificial intelligence algorithm was used in the APE Dx decision support application to diagnose the disease more accurately based on clinical signs. The use of artificial intelligence for the diagnosis of eye diseases is consistent with other studies, which designed a CDSS based on a fuzzy random forest and a set of fuzzy decision trees for the diagnosis and treatment of diabetic retinopathy [13], Deep Neural Networks (DNNs) and Boltzmann Machine models for the early diagnosis of hypertensive retinopathy [35], an artificial neural network algorithm for retinal image analysis [36], and complex image analysis and machine learning techniques for the early screening of diabetic retinopathy [37]. Artificial intelligence techniques can support medical diagnosis processes, increase the accuracy of diagnosis, and be used in designing appropriate models to identify affected people and predict diseases [38]. In this study, we used 4 artificial intelligence algorithms, namely SVM, DT, KNN and RF, to predict the diagnosis. Since, to the best of our knowledge, artificial intelligence-based CDSSs in ophthalmology have so far focused on diabetic retinopathy, we recommend that such CDSSs also be used for the accurate diagnosis and treatment of other eye diseases, taking into account setting conditions, expert opinions and available data, as in the current study.

In the current study, a team of clinical and technical experts was employed to identify the needs and functional features of the decision support application. In the study of Karthikeyan et al. (2019), a total of 475 eye-care applications were identified on the Android platform, yet ophthalmologists participated in the design and development of only 107 (22.53%) of them [39]. Designing clinical systems according to the opinions of users with different backgrounds helps the systems be interpreted and used more easily and consequently enhances them [40, 41]; it is therefore recommended to draw on the opinions of both technical and clinical experts so that these systems are designed more fully from both perspectives.

In this study, retina fellows were interviewed to identify user needs, similar to studies on the development of decision support applications for medicinal plants (MELANI) [42] and Parkinson's disease [43], and of decision support systems for sleep staging tasks [44] and rare diseases [45], in which end users were interviewed to identify needs and features. Interviews with key users reveal experiences, attitudes and personal beliefs related to the subject of interest [46]. Identifying the needs and features of a system in this way helps design it as closely as possible to the needs of end users, which can affect its adoption by end users and the success of the system.

The multipurpose application in the present study, as with the clinical decision support and monitoring systems in the study of Kart et al. (2017) [47] and the electronic tool designed in the study of Melnick et al. (2017) [48], was designed for use by both physicians and patients, which allows the sharing of clinical information among physicians and between physicians and patients. Meta-analyses [49, 50] have shown that CDSSs capable of providing recommendations to both patients and specialists are more effective than CDSSs that provide recommendations only to healthcare professionals. Besides this, multipurpose clinical decision support applications can facilitate joint utilization by all authorized people in delivering patient care, which in turn can increase awareness, facilitate access to data, improve physician efficiency, and lead to more constructive communication between patients and healthcare teams [51, 52].

In the usability evaluation, ophthalmology residents rated this application as good, with the highest score on the screen domain; this is consistent with the evaluation of personal electronic health records for patients with thalassemia major [53], which scored highest in the learning domain, and with the user interface of a smartphone-based application for increasing the self-care of patients with asthma [30]. Users rated this application as good, similar to the results of other studies [31, 54–57]. Given the importance of usability evaluation in the design and implementation of applications and in identifying strengths and weaknesses [58], it is recommended that future studies have patients evaluate the application to determine problems related to the patient user interface.

Study strengths and limitations

This study was the first to design, develop and evaluate an intelligent clinical decision support application for APE to assist physicians in the early diagnosis of the disease and help patients present to healthcare centers early. The strengths of the study included the active engagement of ophthalmologists and other specialists in the needs assessment and application design, and adherence to evidence-based principles. The use of artificial intelligence algorithms for more accurate prediction of the diagnosis and the utilization of multimedia functions in the application are two further strengths. One limitation was the lack of sufficient time to interview the physicians and hold focus group meetings, which was partly resolved by making the necessary coordination with them and conducting interviews when they were not in the clinic. Although the model evaluation in our study yielded a precision of 1, indicating good detection performance, these results cannot be generalized to new data and do not represent the true performance of the model, as no system is 100% accurate and the model should be checked with new data. The design phase was based on the opinions of the ophthalmologists of a single teaching medical center, and the lack of patient participation in the design of the user interface was another limitation; however, we did our best to design a user-friendly and practical application by interviewing all retina fellows and ophthalmology residents in this center, and to design the patient user interface based on the specialists' knowledge of patient needs and expectations.

Conclusion

All-round participation, the use of clinical specialists' experiences and their awareness of patient needs, and the availability of a comprehensive APE clinical dataset in the hospital led to the design of a system with 100% accuracy, precision, sensitivity, specificity, NPV, F-measure and AUC. Since we reached the same results for all 4 algorithms, and considering the volume and type of the dataset, the decision tree algorithm was chosen for its simple interpretation and expressive quality for disease diagnosis in the decision support system. For a realistic evaluation of the model's performance, new test data should be used and the results determined with the evaluation criteria. Future studies should also evaluate the extent of physicians' use of the application and how it affects the diagnosis of endophthalmitis in clinical settings.

Availability of data and materials

The datasets used and analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

APE:

Acute postoperative endophthalmitis

CDSSs:

Clinical decision support systems

DR:

Diabetic retinopathy

SVM:

Support vector machine

DT:

Decision tree

KNN:

K-nearest neighbors

RF:

Random forest

QUIS:

Questionnaire for user interface satisfaction

ROC:

Receiver operating characteristic

AUC:

Area under precision-recall curve

SD:

Standard deviation

DNNs:

Deep neural networks

References

  1. Loh K, Agarwal P. Contact lens related corneal ulcer. Malays Fam Physician. 2010;5(1):6–8. PMID: 25606178; PMCID: PMC4170392.


  2. Durand ML. Endophthalmitis. Clin Microbiol Infect. 2013;19(3):227–34. https://doi.org/10.1111/1469-0691.12118.


  3. Sadiq MA, Hassan M, Agarwal A, Sarwar S, Toufeeq S, Soliman MK, et al. Endogenous endophthalmitis: diagnosis, management, and prognosis. J Ophthalmic Inflamm Infect. 2015;5(1):32. https://doi.org/10.1186/s12348-015-0063-y.

    Article  PubMed  PubMed Central  Google Scholar 

  4. Verma L, Chakravarti A. Prevention and management of postoperative endophthalmitis: a case-based approach. Indian J Ophthalmol. 2017;65(12):1396–402. https://doi.org/10.4103/ijo.IJO_1058_17.

    Article  PubMed  PubMed Central  Google Scholar 

  5. Prophylaxis of postoperative endophthalmitis following cataract surgery. results of the ESCRS multicenter study and identification of risk factors. J Cataract Refract Surg. 2007;33(6):978–88. https://doi.org/10.1016/j.jcrs.2007.02.032.

    Article  Google Scholar 

  6. Hashemian H, Mirshahi R, Khodaparast M, Jabbarvand M. Post-cataract surgery endophthalmitis: brief literature review. J Curr Ophthalmol. 2016;28(3):101–5. https://doi.org/10.1016/j.joco.2016.05.002.

    Article  PubMed  PubMed Central  Google Scholar 

  7. Naderi A. Endophthalmitis or intraocular infection. 2021. http://www.dr-naderi.com. Accessed 6 Mar 2021.

  8. Barry P, Seal DV, Gettinby G, Lees F, Peterson M, Revie CW. ESCRS study of prophylaxis of postoperative endophthalmitis after cataract surgery: preliminary report of principal results from a European multicenter study. J Cataract Refract Surg. 2006;32(3):407–10. https://doi.org/10.1016/j.jcrs.2006.02.021.

    Article  PubMed  Google Scholar 

  9. Vinay C, Kumar VV, Kumar V. Smartphone applications for medical students and professionals 1 2. Nitte Univ J Health Sci. 2013;3:59–66. https://doi.org/10.1055/s-0040-1703635.

    Article  Google Scholar 

  10. Bakken S, Jia H, Chen ES, Choi J, John RM, Lee N-J, et al. The Effect of a mobile health decision support system on diagnosis and management of obesity, tobacco use, and depression in adults and children. J Nurse Pract. 2014;10(10):774–80. https://doi.org/10.1016/j.nurpra.2014.07.017.

    Article  PubMed  PubMed Central  Google Scholar 

  11. Wicklund E. mHealth’s Benefits Are Coming into Focus for Eye Doctors: Xtelligent Healthcare Media, LLC. https://mhealthintelligence.com/news/mhealths-benefits-are-coming-into-focus-for-eye-doctors (2012–2020). Accessed 6 Mar 2021.

  12. Charlesworth JM, Davidson MA. Undermining a common language: smartphone applications for eye emergencies. Med Devices (Auckl). 2019;12:21–40. https://doi.org/10.2147/MDER.S186529.

    Article  PubMed  Google Scholar 

  13. Romero-Aroca P, Valls A, Moreno A, Sagarra-Alamo R, Basora-Gallisa J, Saleh E, et al. A clinical decision support system for diabetic retinopathy screening: creating a clinical support application. Telemed J E Health. 2019;25(1):31–40. https://doi.org/10.1089/tmj.2017.0282.

    Article  PubMed  PubMed Central  Google Scholar 

  14. De la Torre-Díez I, Martínez-Pérez B, López-Coronado M, Díaz JR, López MM. Decision support systems and applications in ophthalmology: literature and commercial review focused on mobile apps. J Med Syst. 2015;39(1):174. https://doi.org/10.1007/s10916-014-0174-2.

    Article  PubMed  Google Scholar 

  15. López MM, López MM, de la Torre DI, Jimeno JC, López-Coronado M. A mobile decision support system for red eye diseases diagnosis: experience with medical students. J Med Syst. 2016;40(6):151. https://doi.org/10.1007/s10916-016-0508-3.

    Article  PubMed  Google Scholar 

  16. Vision Loss Resulting from Eye Infection: $1.5M Settlement-Failure of ophthalmologist to diagnose and treat endophthalmitis in type 2 diabetic results in vision loss Boston’s Innovative The Leader in Medical Malpractice and Personal Injury Law. 2015. http://www.lubinandmeyer.com/cases/endophthalmitis-lawsuit.html. Accessed 6 Mar 2021.

  17. Delayed Diagnosis of Endophthalmitis Following Cataract Surgery United States Ophthalmic Mutual Insurance Company (OMIC). 2021. https://www.omic.com/delayed-diagnosis-of-endophthalmitis-following-cataract-surgery/. Accessed 6 Mar 2021.

  18. Nida H, Thomas A, Bryn M, Sam S, Emilio M, Thellea K, et al. Uveitis and Ocular Inflammation. America: American Academy of Ophthalmology. 2019–2020. p. 291–5.

  19. Colin A, Audina M, Graham E, Stephen J, Brian C, Richard B, et al. Retina and Vitreous. America: American Academy of Ophthalmology; 2019–2020. p. 362,88–91.

  20. Ministry of Health and Medical Education Iran. https://behdasht.gov.ir/(2007-2020). Accessed 6 Mar 2021.

  21. American Academy of Ophthalmology America. 2021. https://www.aao.org/. Accessed 6 Mar 2021.

  22. Forster RK. The endophthalmitis vitrectomy study. Arch Ophthalmol. 1995;113(12):1555–7. https://doi.org/10.1001/archopht.1995.01100120085015.

    Article  CAS  PubMed  Google Scholar 

  23. Barry PCL, Gardner S ESCRS Guidelines for prevention and treatment of endophthalmitis following cataract surgery: the European Society of Cataract and Refractive Surgeons, Temple House, Temple Road, Blackrock, Co Dublin, Ireland. 2013. www.escrs.org; https://www.escrs.org/downloads/Endophthalmitis-Guidelines.pdf. Accessed 6 Mar 2021.

  24. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101. https://doi.org/10.1191/1478088706qp063oa.

    Article  Google Scholar 

  25. Boateng EY, Otoo J, Abaye D. Basic tenets of classification algorithms k-nearest-neighbor, support vector machine, random forest and neural network: a review. J Data Analys Inform Process. 2020;08:341–57. https://doi.org/10.4236/jdaip.2020.84020.

    Article  Google Scholar 

  26. Dietrich R, Opper M, Sompolinsky H. Statistical mechanics of support vector networks. Phys Rev Lett. 1999;82(14):2975–8. https://doi.org/10.1103/PhysRevLett.82.2975.

    Article  CAS  Google Scholar 

  27. Sun L, Schulz K. The improvement of land cover classification by thermal remote sensing. Remote Sensing. 2015;7(7):8368–90. https://doi.org/10.3390/rs70708368.

    Article  Google Scholar 

  28. Tong Y, Lu W, Yu Y, Shen Y. Application of machine learning in ophthalmic imaging modalities. Eye Vis. 2020;7(1):1–15. https://doi.org/10.1186/s40662-020-00183-6.

    Article  Google Scholar 

  29. Chin JP, Diehl VA, Norman KL. Development of an instrument measuring user satisfaction of the human-computer interface. ACM Digit Lib. 1988. https://doi.org/10.1145/57167.57203.

    Article  Google Scholar 

  30. Farzandipour M, Nabovati E, Heidarzadeh Arani M, Akbari H, Sharif R, Anvari S. Enhancing asthma patients’ self-management through smartphone-based application: design, usability evaluation, and educational intervention. Appl Clin Inform. 2019;10(5):870–8. https://doi.org/10.1055/s-0039-1700866.

    Article  PubMed  PubMed Central  Google Scholar 

  31. Ayatollahi H, Hasannezhad M, Fard HS, Haghighi MK. Type 1 diabetes self-management: developing a web-based telemedicine application. Health Inform Manag J. 2016;45(1):16–26. https://doi.org/10.1177/1833358316639456.

    Article  Google Scholar 

  32. Alexandru C-A. Usability testing and improvement of telemedicine websites. M Sc diss University of Edinburgh Edinburgh. (2010–2021). https://www.yumpu.com/. Accessed 6 Mar 2021.

  33. Muschelli J. ROC and AUC with a binary predictor: a potentially misleading metric. J Classif. 2020;37(3):696–708. https://doi.org/10.1007/s00357-019-09345-1.

    Article  PubMed  Google Scholar 

  34. Panhalkar AR, Doye DD. A novel approach to build accurate and diverse decision tree forest. Evol Intel. 2022;15(1):439–53. https://doi.org/10.1007/s12065-020-00519-0. Epub 2021/01/12.

    Article  Google Scholar 

  35. Triwijoyo BK, Pradipto YD. Detection of hypertension retinopathy using deep learning and boltzmann machines. J Phys Conf Ser. 2017. https://doi.org/10.1088/1742-6596/801/1/012039.

    Article  Google Scholar 

  36. Bourouis A, Feham M, Hossain M, Zhang L. An intelligent mobile based decision support system for retinal disease diagnosis. Decis Support Syst. 2014;59:341–50. https://doi.org/10.1016/j.dss.2014.01.005.

    Article  Google Scholar 

  37. Prasanna P, Jain S, Bhagat N, Madabhushi A, editors. Decision support system for detection of diabetic retinopathy using smartphones. 2013 7th International Conference on Pervasive Computing Technologies for Healthcare and Workshops; 2013. pp. 176–179. https://doi.org/10.4108/icst.pervasivehealth.2013.252093.

  38. Abbasi Hasanabadi N, Firouzi Jahantigh F, Tabarsi P. Diagnosis of pulmonary tuberculosis using artificial intelligence (Naive Bayes Algorithm). Payavard Salamat. 2020;13(6):419–28. https://doi.org/10.23919/CISTI52073.2021.9476329.

    Article  Google Scholar 

  39. Karthikeyan S, Thangarajan R, Theruvedhi N, Srinivasan K. Android mobile applications in eye care. Oman J Ophthalmol. 2019;12(2):73–7.

    Article  PubMed  PubMed Central  Google Scholar 

  40. Reis M, Almeida AM. Designing an application to support game-based learning: gathering functional requirements from a qualitative approach. 2021. p. 1–6. https://doi.org/10.23919/CISTI52073.2021.9476329.

  41. Moura J, Almeida AM, Roque F, Figueiras A, Herdeiro MT. A Mobile app to support clinical diagnosis of upper respiratory problems (eHealthResp): co-design approach. J Med Internet Res. 2021;23(1):e19194. https://doi.org/10.2196/19194.

    Article  PubMed  PubMed Central  Google Scholar 

  42. Dipo Anugrah Salam CR, Ellina Rienovita. Mobile Intelligent Decision Support System MELANI (Medicinal Plant Identifier) Development indonesia: jounal of education & human resources. 2020. https://ejournal.upi.edu/index.php/JEHR/article/view/24450. Accessed 6 Mar 2021.

  43. Timotijevic L, Hodgkins CE, Banks A, Rusconi P, Egan B, Peacock M, et al. Designing a mHealth clinical decision support system for Parkinson’s disease: a theoretically grounded user needs approach. BMC Med Inform Decis Mak. 2020;20(1):34. https://doi.org/10.1186/s12911-020-1027-1. PubMed PMID: 32075633. Pubmed Central PMCID: PMC7031960. Epub 2020/02/23.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  44. Hwang J, Lee T, Lee H, Byun S. A clinical decision support system for sleep staging tasks with explanations from artificial intelligence: user-centered design and evaluation study. J Med Internet Res. 2022;24(1):e28659–e. https://doi.org/10.1186/s12911-020-1027-1.

    Article  Google Scholar 

  45. Schaaf J, Prokosch H-U, Boeker M, Schaefer J, Vasseur J, Storf H, et al. Interviews with experts in rare diseases for the development of clinical decision support system software - a qualitative study. BMC Med Inform Decis Mak. 2020;20(1):230. https://doi.org/10.1186/s12911-020-01254-3.

    Article  PubMed  PubMed Central  Google Scholar 

  46. DeJonckheere M, Vaughn LM. Semistructured interviewing in primary care research: a balance of relationship and rigour. Fam Med Community Health. 2019;7(2):e000057. https://doi.org/10.1136/fmch-2018-000057.

    Article  PubMed  PubMed Central  Google Scholar 

  47. Kart Ö, Mevsim V, Kut A, Yürek İ, Altın AÖ, Yılmaz O. A mobile and web-based clinical decision support and monitoring system for diabetes mellitus patients in primary care: a study protocol for a randomized controlled trial. BMC Med Inform Decis Mak. 2017;17(1):154. https://doi.org/10.1186/s12911-017-0558-6.

    Article  PubMed  PubMed Central  Google Scholar 

  48. Melnick ER, Hess EP, Guo G, Breslin M, Lopez K, Pavlo AJ, et al. Patient-centered decision support: formative usability evaluation of integrated clinical decision support with a patient decision aid for minor head injury in the emergency department. J Med Internet Res. 2017;19(5):e174. https://doi.org/10.2196/jmir.7846.

    Article  PubMed  PubMed Central  Google Scholar 

  49. Roshanov PS, Fernandes N, Wilczynski JM, Hemens BJ, You JJ, Handler SM, et al. Features of effective computerised clinical decision support systems: meta-regression of 162 randomised trials. BMJ. 2013;346:f657. https://doi.org/10.1136/bmj.f657.

    Article  PubMed  Google Scholar 

  50. Collins S, Drew P, Watt I, Entwistle V. “Unilateral” and “bilateral” practitioner approaches in decision-making about treatment. Soc Sci Med (1982). 2005;61(12):2611–27. https://doi.org/10.1016/j.socscimed.2005.04.047.

    Article  Google Scholar 

  51. Abbasgholizadeh Rahimi S, Menear M, Robitaille H, Légaré F. Are mobile health applications useful for supporting shared decision making in diagnostic and treatment decisions? Glob Health Action. 2017;10(sup3):1332259. https://doi.org/10.1080/16549716.2017.1332259.

    Article  PubMed  PubMed Central  Google Scholar 

  52. Gee PM, Greenwood DA, Paterniti DA, Ward D, Miller LM. The eHealth enhanced chronic care model: a theory derivation approach. J Med Internet Res. 2015;17(4):e86. https://doi.org/10.2196/jmir.4067.

    Article  PubMed  PubMed Central  Google Scholar 

  53. MoeilTabaghdehi K, Ghazisaeedi M, Shahmoradi L, Karami H. Designing and creating personal electronic health records for thalassemia major patients. Payavard-Salamat. 2018;11(5#M00225):567–77.

    Google Scholar 

  54. Moulaei K, Sheikhtaheri A, Ghafaripour Z, Bahaadinbeigy K. The Development and Usability Assessment of an mHealth Application to Encourage Self-Care in Pregnant Women against COVID-19. J Healthc Eng. 2021;2021:9968451. https://doi.org/10.31661/jbpe.v0i0.2103-1294.

  55. Ghazisaeedi M, Shahmoradi L, Ranjbar A, Sahraei Z, Tahmasebi F. Designing a mobile-based self-care application for patients with heart failure. J Health Biomed Inform. 2016;3(3):195–204.

    Google Scholar 

  56. Langarizadeh M, Behzadian H, Samimi M. Development of personal health record application for gestational diabetes, based on smart phone. J Urmia Nurs Midwife Fac. 2016;14(8):714–27.

    Google Scholar 

  57. Hamborg V B, Bludau H B, Questionnaire based usability evaluation of hospital information systems. (Academic Conferences International Limited Curtis Farm, Kidmore End, Reading RG4 9AY, United Kingdom, 2004. https://academic-publishing.org/index.php/ejise/article/view/355. Accessed 6 Mar 2021.

  58. Joshi A, Wilhelm S, Aguirre T, Trout K, Amadi C. An interactive, bilingual touch screen program to promote breastfeeding among Hispanic rural women: usability study. JMIR Res Protoc. 2013;2(2):e47. https://doi.org/10.2196/resprot.2872.

    Article  PubMed  PubMed Central  Google Scholar 

Download references

Acknowledgements

We are grateful to the ophthalmologists and eye residents at the Khatam-Al-Anbia Eye Hospital, Mashhad University of Medical Sciences, who gave of their time and expertise. Additionally, we gratefully acknowledge funding from the Vice Chancellor for Research of Kashan University of Medical Sciences, which helped to carry out the present study.

Funding

The present paper is the result of a Master's thesis in Health Information Technology, which was approved by the Ethics Committee of Kashan University of Medical Sciences (IR.KAUMS.MEDNT.REC.1400.054) and funded by the deputy of research at Kashan University of Medical Sciences under grant number 400029. The funding body played no role in the design of the study; the collection, analysis, and interpretation of data; or the writing of the manuscript.

Author information

Authors and Affiliations

Authors

Contributions

F.R., M.Sh., R.F., N.Sh., M.H., and A.S.: conceptualization; R.F., E.N., and A.S.: methodology; F.R. and M.Sh.: supervision. A.S. drafted the first version of the manuscript. All authors contributed to the design of the manuscript, critically revised it, and gave final approval for publication.

Corresponding author

Correspondence to Azam Salehzadeh.

Ethics declarations

Ethics approval and consent to participate

All methods were carried out in accordance with relevant guidelines and regulations. This manuscript was presented to the research ethics committee of Kashan University of Medical Sciences and approved under ID IR.KAUMS.MEDNT.REC.1400.054. Participants gave informed consent to participate in the study.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1:

 Table S1. Interview guide.

Additional file 2:

 Table S2. Questionnaire to evaluate the satisfaction of users of the clinical decision support application for the diagnosis of endophthalmitis.

Additional file 3:

 Table S3. The results from interviews based on thematic analysis.

Additional file 4:

 Figure S1. Screenshots of the decision support application.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Shaeri, M., Shoeibi, N., Hosseini, S.M. et al. An intelligent decision support system for acute postoperative endophthalmitis: design, development and evaluation of a smartphone application. BMC Med Inform Decis Mak 23, 130 (2023). https://doi.org/10.1186/s12911-023-02214-3

