Patient facing decision support system for interpretation of laboratory test results

Abstract

Background

In some healthcare systems, it is common for patients to visit laboratory test centers directly, without a physician’s referral. This practice is widespread in Russia, where about 28% of patients visit laboratory test centers for diagnostics. As a result, these patients get no help from a physician in understanding the results.

Computer decision support systems have proved to solve the resource-consuming task of interpreting test results efficiently. A decision support system can therefore be implemented to raise motivation and empower patients who visit a laboratory service without a doctor’s referral.

Methods

We have developed a clinical decision support system for patients that solves a classification task and finds a set of diagnoses for the provided laboratory test results.

Wilson and Lankton’s model was applied to measure patients’ acceptance of the solution.

Results

A decision support system based on first-order predicates has been implemented to analyze laboratory test results and deliver reports to patients in natural language. The evaluation showed high acceptance of the decision support system and of the reports that it generates.

Conclusions

Detailed notification of laboratory service patients, with elements of decision support, is significant for laboratory data management and for patients’ empowerment and safety.

Background

In some healthcare systems, it is common for patients to visit laboratory test centers directly without a physician’s referral [1]. This practice is widespread in Russia, where about 28% of patients visit laboratory test centers for diagnostics [2]. This causes an issue: patients get no help from a physician in understanding the results. Patients then face the problem of deciding how to continue the diagnostics and treatment process. A possible solution could be for a laboratory test center to deliver not only the test results but also their explanation to the patients. This, however, should be done automatically, or at least semi-automatically, to avoid a critical load on the test centers. Clinical decision support systems can become a good technology for automatic interpretation of test results [3, 4]. The experience of implementing decision support systems for health care professionals shows their efficiency for medical diagnostics. However, patients require a different approach to data presentation and interpretation [5,6,7,8,9,10,11].

Studies [12,13,14,15] have demonstrated that many providers do not have systems that ensure test results are reliably communicated to patients. As shown in [16, 17], normal and abnormal test results are commonly missed even when a health care system makes wide use of electronic health records (EHRs), and providers miss 1–10% of abnormal test results. It would not be an exaggeration to say that we do not have sufficient mechanisms to ensure that test results are consistently delivered to patients and understood by them.

As the importance of the problem is recognized, a number of potential solutions have been studied [18,19,20,21,22,23]. The first approach originates from the development of computerized decision support systems that support test centers in reviewing results and notifying patients in case of abnormal results [18, 19, 21, 23]. Another approach has involved implementing testing processes in which test centers consistently deliver test results directly to patients. Such systems vary from sending individual test results by mail to complex patient web portals, where patients can access their test result history [20, 24].

Interpretation of test results is a resource-consuming task that delays the results and increases the cost of each test [25, 26]. However, computer decision support systems have proved to solve such tasks efficiently. To increase motivation and support patients who come to a test center without a doctor’s referral in making better informed decisions, a computer decision support system can be designed and implemented.

The goal of this study is to develop and evaluate a decision support system for patients, which:

  a) provides a personalized tool to inform patients about the results of their laboratory tests;

  b) empowers patients to form opinions on whether or not to continue with a treatment;

  c) prepares patients to have an informed discussion with their doctor.

To support patients, we have implemented and evaluated a decision support system that automatically generates interpretations of laboratory test results.

This paper focuses on the evaluation of correctness and user acceptance of a decision support system for patients of a test center in Saint-Petersburg, Russia.

Methods

Implementation

We have developed a clinical decision support system for patients that solves a classification problem by connecting test results to a list of diagnoses. The decision support is based on a classification algorithm, which produces one of the following conclusions:

  • A list of diagnoses that can be related to the test results, or

  • No fitting diagnoses found.

To enable the definition of inference rules, we have developed a knowledge representation language based on the predicate calculus [27] and a user interface that allows medical professionals to define the system rules. For the pilot project, we have chosen a limited set of laboratory tests that could be automatically interpreted by the system. We interviewed 3 laboratory physicians and 3 specialist physicians (a gynecologist, a urologist, and a general practitioner) to define the inference rules for the system.
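
As an illustration of this rule-based classification, the following sketch shows how a diagnosis list could be derived from a set of test results. It is written in Python purely for readability (the production system is built on .NET Core), and the rule, thresholds, and function names are hypothetical assumptions rather than the actual knowledge base content.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

TestResults = Dict[str, float]  # component name -> measured value

@dataclass
class Rule:
    diagnosis: str  # e.g. an ICD-10 group such as "D50 Iron deficiency anemia"
    predicate: Callable[[TestResults], bool]  # condition over the stored facts

def infer(rules: List[Rule], results: TestResults) -> List[str]:
    """Return every diagnosis whose predicate holds for the given test results."""
    matched = [r.diagnosis for r in rules if r.predicate(results)]
    return matched or ["No fitting diagnoses found"]

# Hypothetical rule: low haemoglobin together with low ferritin suggests D50.
rules = [Rule("D50 Iron deficiency anemia",
              lambda t: t.get("Hemoglobin", 999) < 120 and t.get("Ferritin", 999) < 15)]

print(infer(rules, {"Hemoglobin": 105.0, "Ferritin": 8.0}))
# ['D50 Iron deficiency anemia']
```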

The decision support system has been implemented and is operating in the Helix laboratory center in Saint-Petersburg, Russia.

The system has been implemented using the following technologies:

  1. User interfaces and the back-end are based on .NET Core 2.0

  2. Data storage is based on PostgreSQL

Evaluation

Accuracy of the decision support

To evaluate the accuracy of the results produced by the system, we performed a validation of 1000 randomly generated reports. The reports were generated in a way that allowed validating all 89 decision support algorithms. The reports were given to two pathology experts to be reviewed independently. The results of the expert review were used to calculate the following criteria [28]:

  1. Error rate, as an average classification error

  2. Accuracy, as an average effectiveness of the classifier

  3. Precision ((all terms – mistakes) / all terms)

  4. Recall (ratio of true positives to (true positives + false negatives)), and

  5. F-measure (\( F = 2\cdot \frac{precision\cdot recall}{precision + recall} \)).

The reviewers’ disagreements were settled by consensus. Cohen’s kappa was calculated to rate the inter-rater agreement between the reviewers [29].
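
For clarity, the criteria above can be computed as in the following sketch. The counts used are invented for illustration and are not the study data; the precision formula follows the definition given in the list above.

```python
def precision(all_terms: int, mistakes: int) -> float:
    # Precision as defined above: (all terms - mistakes) / all terms
    return (all_terms - mistakes) / all_terms

def recall(tp: int, fn: int) -> float:
    # Ratio of true positives to (true positives + false negatives)
    return tp / (tp + fn)

def f_measure(p: float, r: float) -> float:
    # F = 2 * (precision * recall) / (precision + recall)
    return 2 * p * r / (p + r)

# Hypothetical counts, not the evaluation results reported below
p, r = precision(all_terms=200, mistakes=3), recall(tp=95, fn=5)
print(round(p, 3), round(r, 3), round(f_measure(p, r), 3))  # 0.985 0.95 0.967
```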

User acceptance

To assess the user acceptance of the system, Wilson and Lankton’s model of patients’ acceptance of electronic health solutions was applied [30]. The model allowed measuring the following criteria: behavioral intention (BI) to use, intrinsic motivation (IM), perceived ease-of-use (PEOU), and perceived usefulness (PU) of the decision support system.

BI represents the intention to utilize the system and to rely on the decision support that it provides; IM represents the willingness to use the system provided that no direct compensation is available; PEOU represents the extent to which the provided reports are clearly presented and comprehended by users; and PU denotes the degree to which the patients believe that the utilization of the decision support system will improve their experience with laboratory tests.

We applied Wilson and Lankton’s [30] revision of Davis et al.’s [31] method to measure BI, PEOU, and PU. Intrinsic motivation was measured using the method of Davis et al. [31].

Questionnaire

We started by identifying possible items for the questionnaire, collecting a large list of acceptance test questions. The questions were collected from preceding internal studies, from the literature, and from brainstorming. The list was then reviewed by the study team to eliminate items that did not help to reach the goals of the study and to remove duplicate questions. The remaining items were simplified and worded as clearly as possible for the potential participants.

The BI measure consisted of 2 items, whereas IM, PEOU, and PU consisted of 3 items each. A Russian translation of the questionnaire, made by the research team, was used during the study. To rate each item, a Likert scale from 1 (not at all) to 7 (very much) was applied [32].
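
A score for each criterion can then be obtained as the mean of its items, as in this small sketch; the answers shown are made up and do not come from the study data.

```python
from statistics import mean

# Item ratings (1-7 Likert scale) of one hypothetical respondent
answers = {
    "BI":   [6, 7],        # 2 items
    "IM":   [6, 6, 7],     # 3 items
    "PEOU": [5, 6, 6],     # 3 items
    "PU":   [6, 5, 7],     # 3 items
}
scores = {criterion: mean(items) for criterion, items in answers.items()}
print(scores)  # e.g. {'BI': 6.5, 'IM': 6.33..., 'PEOU': 5.66..., 'PU': 6}
```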

Recruitment

The recruitment of the study participants was done in Saint-Petersburg, Russia. Patients were eligible to be invited if they had experience using the system with a minimum of 5 reports on test outcomes. The recruitment was done by sending invitations to 500 eligible patients. We then formed a group of 120 patients on a first responded, first included basis, giving a recruitment rate of 24%.

Demographic characteristics of the study participants are shown in Table 1. We assessed the information technology (IT) literacy of the patients based on how frequently they use a smartphone or a personal computer, on a scale from beginner (patients who started using a computer or smartphone no more than 6 months before the start of the study), to intermediate (computer or smartphone users who use one at least twice a week), to advanced (daily users of a computer or smartphone).

Table 1 Demographic characteristics of the study participants

Data collection and analysis

All the study participants were given individual access to the online questionnaire, which they were asked to fill in (see Additional file 1 for the questionnaire details). All the patients received detailed written instructions on how to complete the questionnaire and on the meaning of the rating scale.

GNU Octave [33] version 4.0.2 was applied to calculate the statistics of the participants’ general characteristics and user acceptance measurements.

Ethics commission approval

The study was approved by the ethics commission of the committee of healthcare of Saint-Petersburg, Russia. All the study participants were informed in written form about the goals of the study and the meaning of the questionnaires. Every participant was assured in written form of their rights to anonymity and confidentiality. Written consent was obtained from every participant. Every participant was informed in written form about the right to withdraw their personal data from the study records for up to 3 months after giving consent.

Results

Implementation

The clinical decision support system consists of the following modules, which provide the main features of the system (Fig. 1; a simplified sketch of the data flow follows the figure):

  • A Data extraction system receives data from external sources, such as hospital or laboratory information systems, checks the syntactic validity of the data, and sends it to the Database.

  • A Database receives and saves facts from an external laboratory information system.

  • A Knowledge base editor (see the “Expert’s interface” section) provides an interface for experts to define inference rules, which are sent to the Knowledge base.

  • A Knowledge base stores the inference rules.

  • An Inference engine applies rules from the Knowledge base to the facts from the Database to derive conclusions and sends them to the Explanation system and the Report generator.

  • An Explanation system scrutinizes the sequence of applied inference rules to demonstrate how a result has been reached.

  • A Report generator creates a readable report from the inference results and sends it to the report storage.

Fig. 1 Structural scheme of the decision support system
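
The following compressed sketch mirrors the data flow in Fig. 1 with simplified in-memory stand-ins for the modules. It is illustrative only: the actual modules are separate services built on .NET Core with PostgreSQL storage, and the function names and the example rule are assumptions.

```python
def data_extraction(raw_message: dict) -> dict:
    """Check the syntactic validity of an incoming message and pass the facts on."""
    assert "patient" in raw_message and "observations" in raw_message
    return raw_message

def inference_engine(facts: dict, knowledge_base: list) -> list:
    """Apply knowledge base rules to the stored facts and collect conclusions."""
    return [rule["artefact"] for rule in knowledge_base
            if rule["applies"](facts["observations"])]

def report_generator(conclusions: list) -> str:
    """Render the inference results as a human-readable report."""
    return "\n".join(conclusions) or "No fitting diagnoses were found."

# A single hypothetical rule and one incoming result
knowledge_base = [{"applies": lambda obs: obs.get("Glucose", 0) > 6.9,
                   "artefact": "Fasting plasma glucose is above the reference range."}]
facts = data_extraction({"patient": "anonymous", "observations": {"Glucose": 7.4}})
print(report_generator(inference_engine(facts, knowledge_base)))
```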

An example of how the system operates is presented in the section “Inference example”.

The knowledge representation language of the system is built upon first-order predicate logic. The scheme of the knowledge base is shown in Fig. 2; an illustrative sketch of the object model follows the figure.

  • The main object that the system processes is a laboratory test configuration, which consists of a laboratory test object and a list of direct inference rules that can be related to this object.

  • A laboratory test object is a model that comprises a list of atomic components of the test, e.g., a complete blood count test includes 22 atomic components.

  • For each component of a test, we define a list of direct inference rules that contain conditions for including this component in the inference. The conditions are represented as comparison operators: =, <>, includes (>= or <=), excludes (>= and <=). Within a rule, the conditions are connected by the logical operators “and”, “or” and “not”.

  • For each direct rule, an expert can model a list of exclusion rules that exclude the direct rule from the inference process if the exclusion conditions are met.

  • An order object groups laboratory tests and reflects the commercial orders that patients actually make.

Fig. 2 Object model of the system
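
A sketch of this object model, written as Python dataclasses, is given below. The field names are our reading of the description above and of Fig. 2, not the actual database schema of the system.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Condition:
    component: str             # atomic component of a test, e.g. "Hemoglobin"
    operator: str              # "=", "<>", "includes" (>= or <=), "excludes" (>= and <=)
    bounds: Tuple[float, ...]  # numeric bound(s) the operator is applied to

@dataclass
class DirectRule:
    conditions: List[Condition]   # combined with "and", "or" and "not"
    logic: str                    # the logical expression over the conditions
    artefact: str                 # recommendation text added on success
    exclusion_rules: List["DirectRule"] = field(default_factory=list)

@dataclass
class LaboratoryTest:
    name: str               # e.g. "Complete blood count"
    components: List[str]   # e.g. its 22 atomic components

@dataclass
class TestConfiguration:
    test: LaboratoryTest
    direct_rules: List[DirectRule]

@dataclass
class Order:
    tests: List[TestConfiguration]  # groups tests as patients actually order them
```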

The general inference process is divided into the following steps (an illustrative sketch follows Fig. 3):

  1. When the system receives an order bundle from a laboratory information system, the tests in the order are analyzed to generate a list of tests for which configurations are available in the knowledge base.

  2. The actual test results are loaded into the Database of the system and become available for the inference process.

  3. The inference engine receives the list of tests and selects the appropriate direct inference rules and exclusion rules, in the proper sequence, that can be applied to the received facts.

  4. If a direct rule has been successfully applied and no exclusion rule is effective, the inference engine adds a text artefact to the resulting JSON file (Fig. 3).

  5. After the inference has been completed, the resulting JSON file is sent to the reporting service to generate a PDF report.

Fig. 3 Results of the inference in JSON format
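
A minimal sketch of steps 1–5 is given below. The shape of the resulting JSON is modelled loosely on Fig. 3, but the field names, the rule, and its identifier are invented for illustration.

```python
import json

def run_inference(order, configurations, facts):
    # Step 1: keep only the ordered tests that have a configuration in the knowledge base
    known_tests = [t for t in order if t in configurations]
    artefacts = []
    for test in known_tests:
        for rule in configurations[test]["direct_rules"]:
            # Step 4: apply a direct rule unless one of its exclusion rules fires
            if rule["applies"](facts) and not any(ex(facts) for ex in rule["exclusions"]):
                artefacts.append({"ruleId": rule["id"], "text": rule["artefact"]})
    # Step 5: the JSON result is handed over to the reporting service
    return json.dumps({"conclusions": artefacts}, ensure_ascii=False, indent=2)

# Step 2 is represented here by the `facts` dictionary already loaded from the LIS
configurations = {"HbA1c": {"direct_rules": [{
    "id": 1,
    "applies": lambda f: f["HbA1c_percent"] >= 6.5,
    "exclusions": [],
    "artefact": "Hypothetical note: HbA1c is above the reference range."}]}}
print(run_inference(["HbA1c"], configurations, {"HbA1c_percent": 7.1}))
```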

The decision support system has been implemented in the Helix laboratory service in Saint-Petersburg, Russia. The system is now in commercial production, generating about 20,000 reports a day.

The system has inference algorithms for the following International Statistical Classification of Diseases and Related Health Problems (ICD) 10 groups:

  1. Kidney:

     1.1. N30 Cystitis

     1.2. N04 Nephrotic syndrome

     1.3. N39 Other disorders of urinary system

     1.4. N10 Acute pyelonephritis

  2. Liver:

     2.1. K75 Other inflammatory liver diseases

     2.2. K72 Hepatic failure, not elsewhere classified

     2.3. K71 Toxic liver disease

     2.4. K81 Cholecystitis

  3. Pancreas:

     3.1. K85 Acute pancreatitis

  4. Thyroid gland:

     4.1. E05 Thyrotoxicosis

     4.2. E03 Other hypothyroidism

  5. Red blood cells:

     5.1. D50 Iron deficiency anemia

  6. White blood cells:

     6.1. D72 Other disorders of white blood cells

  7. Prostate:

     7.1. N41 Inflammatory diseases of prostate

Inference example

The full JSON code of the rules and artefacts for the blood sugar test is presented in Additional file 2. The input of the inference is a bundle of resources that have been extracted from a laboratory test database and added to the decision support system database (Fig. 1):

  1. Patient

  2. Observation: Concentration of HbA1C, mmol/mol

  3. Observation: Concentration of HbA1C, %

  4. Observation: Concentration of Hb, mmol/mol

  5. Observation: Concentration of Glucose in Plasma, mmol/L

  6. Observation: Concentration of C-Peptide, pmol/L

After the system receives the actual values, the inference engine starts building an inference sequence (see Additional file 2 for the rule details and Fig. 4 for a graphical representation of the sequence) based on the available rules from the knowledge base (Fig. 1).

Fig. 4 Inference rules sequence

The inference ends with conclusion id = 4785 and artefact id = 4786. The found artefact is added to the generated report, which is then sent to the report generator to create a human-readable PDF file. The resulting rule sequence is visualized by the explanation system (Fig. 1).
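
The hand-off after this inference can be pictured as in the following sketch; the conclusion and artefact ids are taken from the example above, while the recommendation text, the rule sequence, and the field names are invented placeholders.

```python
inference_result = {
    "conclusionId": 4785,
    "artefactId": 4786,
    # The actual recommendation text is stored in the knowledge base; this one is invented.
    "artefactText": "Hypothetical recommendation text shown to the patient.",
    "appliedRuleIds": [101, 102, 4785],  # sequence visualized by the explanation system
}

def build_report(result: dict) -> str:
    """Assemble the human-readable body that the report generator renders to PDF."""
    return (f"Conclusion {result['conclusionId']}: {result['artefactText']}\n"
            f"Derived via rules: {result['appliedRuleIds']}")

print(build_report(inference_result))
```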

User interaction

The decision support system has 2 main interfaces: one for experts to model knowledge and inference rules, and one for patients to access the test results and their interpretation.

Expert’s interface

We have developed a web knowledge management application that provides the following features:

  • Create and edit inference rules

  • Group inference rules

  • Create and edit artefacts with doctor’s recommendations that form a decision support report as a result of a logical inference.

Fig. 5 shows the inference rule creation screen. A rule consists of several conditions connected by logical operators and a resulting artefact, which represents a text with recommendations to the patient. The artefacts can be created using the interface shown in Fig. 6.

Fig. 5 Create rule interface for an expert

Fig. 6 Expert’s interface for recommendations

Patient’s user interface

Patients have access to the test results through a web portal, where a list of available tests is provided. For each test, a patient can see an overview of the results (Fig. 7). The results are presented in a table with the following columns: Parameter name, My results, and Reference interval. A patient can click the “Generate report” button (the second button from the left, with a doctor icon, in Fig. 7) to open a decision support report (Fig. 8).

Fig. 7 Personal space for a patient on the online portal

Fig. 8 Report produced by the decision support system

Evaluation

Correctness

A sample of 1000 reports was assessed independently by two pathology experts. The results of the assessment for each criterion are shown in Table 2. The experts disagreed in the assessment of 2 reports.

Table 2 Reports’ quality evaluation

Acceptance

The mean values for BI, IM, PEOU, and PU (5.9, 6.2, 5.7, and 5.9 respectively) showed a high acceptance of the decision support system and the reports that it generates (Table 3).

Table 3 Acceptance criteria

Discussion

This paper describes the development of a patient-facing clinical decision support system, which provides interpretation of test results in natural language.

Notification ethics

We need to be very cautious when providing test results to patients by e-mail or on a web portal. We should assume that patients may not fully and properly comprehend the interpretation of the results. The ability to deliver results and their interpretation in a manner understood by the patient is therefore essential for motivating the patient to consult a health care professional, particularly when test results are abnormal. Our decision support system only interprets and sends test results that do not require in-person communication according to the standards of the laboratory service. Test results that can be communicated only in person include positive HIV tests, all kinds of hepatitis, and all kinds of positive cancer tests.
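
A minimal sketch of this safeguard is shown below; the category labels and the function are illustrative assumptions, not the laboratory’s actual classification.

```python
# Categories that the laboratory's standards require to be communicated in person
IN_PERSON_ONLY = {"HIV", "Hepatitis", "Oncology"}   # illustrative labels

def can_auto_report(category: str) -> bool:
    """True if results in this category may be interpreted and sent automatically."""
    return category not in IN_PERSON_ONLY

print(can_auto_report("Glucose"))  # True  -> automated report with interpretation
print(can_auto_report("HIV"))      # False -> results communicated in person only
```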

The decision support system was never intended to be prescriptive and to communicate a single possible clinical decision to a patient. Following this approach, we have implemented the reports so that they are descriptive and informative rather than prescriptive.

Correctness

The rule definition process must be controlled and always reviewed. The evaluation showed that the correctness of the generated reports is high. Seven mistakes out of 1000 analyzed reports were caused by human factors. The mistakes detected by the experts during the assessment were caused by inaccuracies of the experts when modelling the inference rules. This led to a change in the rule definition procedure, in which we now apply the four-eyes principle [34]: each rule goes to production only after review and acceptance by a second expert.

User acceptance

One measure of feasibility was the percentage of patients who agreed to take part in the testing of the decision support tool. The rate of 86% of patients who agreed to take part in the study shows high interest and motivation, which is supported by the quantitative measurements made within the study.

The user acceptance of the system was evaluated after 2 months of operation. Acceptance scores were high, all of them above 5.7 out of 7. Among older users (60+), the results were slightly lower than among the younger users. The maximum difference was 0.7 (10%), for PU. Older participants felt less motivated about storing their medical data in electronic format (5.0 versus 5.7 for the younger participants). Maximum rates were similar for each statement and age group, and a high median value also indicates a positive attitude towards the system. Minimum rates of 4 show encouragement towards the system, as all the rates were in the positive part of the scale. This is true for all age groups. This indicates high acceptance of the solution and of the way the notifications are delivered.

Partial correlations calculated using the scales derived for these dimensions suggested that Ease of Use and Usefulness influence one another, in that improvements in Ease of Use increase the Usefulness scores and vice versa. Both Ease of Use and Usefulness drive Satisfaction, with Usefulness having a comparatively smaller influence. Users are more flexible in their Usefulness scores when they have only limited experience with a system.

Unfortunately, we could not compare our results to those of similar studies, as we did not find a patient-oriented decision support system for which user acceptance had been evaluated. However, we tried to compare the results with similar systems that were not patient oriented.

Acceptance scores were relatively high compared to the evaluation results of previous studies [35,36,37,38]. This can be explained by the fact that most of the study participants had average or above-average IT skills. This contrasts with previous studies on patients’ acceptance of decision support tools and can be explained by increased computer literacy and changing IT habits. Our results mean that health care providers and EHR developers can move in the direction of electronic notification of patients. This will facilitate communication and decrease its costs.

Implications

The results of our study support other literature suggesting that patients want timely and detailed information and want to be notified of all laboratory test results, even if they are normal [39, 40]. However, our results contradict previous findings that patients prefer phone calls and sealed letters to web-based notification methods [40].

The findings of this research have valuable implications for the design and implementation of patient notification systems. We found that patients in general find detailed notifications useful, are motivated to use them, and do not face significant difficulties in adopting such solutions. When designing and implementing patient notification systems, it is very important to make them valid, easy, and simple to use. To achieve this, we advise that a pilot application of the decision support system be tested by experts for verification and validation of the rules, and by potential users for user acceptance, so that corrections can be made during the implementation phase to increase the system’s reliability and acceptance.

The results of our study suggest that patients’ habits have a major impact on the use of test result notifications. In addition to the direct and natural effect of habit on IT use, habit also functions as a stored intention path that affects behavior. Promotion of electronic test result notifications therefore still demands a major communication effort to strengthen both the stored intention and its relation to behavior.

Legal

It is important to mention that in Russia laboratory services are legally obliged to provide the results of laboratory tests to patients. Providing not only the results but also explanations of their meaning will enhance the notifications and make them more valuable for patients.

Limitations of the study and future work

Tool’s impact on patients’ decision-making

We did not thoroughly gather data on participants’ decisions to follow up laboratory tests or not, especially tests with abnormal results. This will become a major part of our next study, in which we will investigate how patients decide whether to follow up laboratory tests. A systematic review by Callen et al. found that, across 19 published studies, 6.8–62% of laboratory tests were not followed up on [41, 42]. We think that this rate will increase for patients who receive detailed information about the test results and their possible implications.

Evolving the decision support system

We are evolving the decision support system every day by adding new inference rules and optimizing its architecture. The next step will be to add the possibility of working with fuzzy rules [43] to make the inference more flexible. We are also redesigning the data storage architecture to move from a relational database to a graph database, which in our view is more suitable for modelling knowledge and inference rules.

A mobile application for patients is also under development; this can potentially attract younger users to the system.

Conclusions

The findings of this research provide a better understanding of how patients experience detailed notification of laboratory test results without a health care professional participating in the process. Detailed notification of laboratory service patients, with elements of decision support, is significant for laboratory data management and for patients’ empowerment and safety. We suppose that patients empowered in such a way can play a significant role in the process of delivering test results to physicians, which positively affects the efficiency of the diagnostics and treatment process.

Abbreviations

BI:

Behavioral intention to use

EHR:

Electronic health record

ICD:

International Statistical Classification of Diseases and Related Health Problems

IM:

Intrinsic motivation

IT:

Information technology

LIS:

Laboratory information system

PC:

Personal computer

PEOU:

Perceived ease-of-use

PU:

Perceived usefulness

References

  1. Semenov I, Kopanitsa G. Development of a clinical decision support system for the patients of a laboratory service. Stud Health Technol Inform. 2016;228:90–4.

  2. Arhangelskaya E. Laboratory services, how to enter the business that grows 20–45% a year. RBC; 2016. https://www.rbc.ru/magazine/2016/09/57bc29549a794702a314361f.

  3. Ahmadian L, van Engen-Verheul M, Bakhshi-Raiez F, et al. The role of standardized data and terminological systems in computerized clinical decision support systems: literature review and survey. Int J Med Inform. 2011;80:81–93.

  4. Jo S, Park HA. Development and evaluation of a smartphone application for managing gestational diabetes mellitus. Healthc Inform Res. 2016;22:11–21.

  5. Kopanitsa G. Standard based multiclient medical data visualization. Stud Health Technol Inform. 2012;180:199–203.

  6. Kopanitsa G. Evaluation study for a multi-user oriented medical data visualization method. Stud Health Technol Inform. 2014;200:158–60.

  7. Kopanitsa G, Tsvetkova Z, Veseli H. Analysis of metrics for the usability evaluation of EHR management systems. Stud Health Technol Inform. 2012;180:358–62.

  8. Kopanitsa G, Tsvetkova Z, Veseli H. Analysis of metrics for the usability evaluation of electronic health record systems. Stud Health Technol Inform. 2012;174:129–33.

  9. Lin YL, Guerguerian AM, Tomasi J, et al. Usability of data integration and visualization software for multidisciplinary pediatric intensive care: a human factors approach to assessing technology. BMC Med Inform Decis Mak. 2017;17:122.

  10. Skyttberg N, Vicente J, Chen R, et al. How to improve vital sign data quality for use in clinical decision support systems? A qualitative study in nine Swedish emergency departments. BMC Med Inform Decis Mak. 2016;16.

  11. Madkour M, Benhaddou D, Tao C. Temporal data representation, normalization, extraction, and reasoning: a review from clinical domain. Comput Methods Prog Biomed. 2016;128:52–68.

  12. Murff HJ, Gandhi TK, Karson AK, et al. Primary care physician attitudes concerning follow-up of abnormal test results and ambulatory decision support systems. Int J Med Inform. 2003;71:137–49.

  13. Poon EG, Gandhi TK, Sequist TD, et al. "I wish I had seen this test result earlier!": dissatisfaction with test result management systems in primary care. Arch Intern Med. 2004;164:2223–8.

  14. Montes A, Francis M, Ciulla AP. Assessing the delivery of patient critical laboratory results to primary care providers. Clin Lab Sci. 2014;27:139–42.

  15. Litchfield IJ, Bentham LM, Lilford RJ, et al. Adaption, implementation and evaluation of collaborative service improvements in the testing and result communication process in primary care from patient and staff perspectives: a qualitative study. BMC Health Serv Res. 2017;17:615.

  16. Casalino LP, Dunham D, Chin MH, et al. Frequency of failure to inform patients of clinically significant outpatient test results. Arch Intern Med. 2009;169:1123–9.

  17. Sung S, Forman-Hoffman V, Wilson MC, Cram P. Direct reporting of laboratory test results to patients by mail to enhance patient safety. J Gen Intern Med. 2006;21:1075–8.

  18. Matheny ME, Gandhi TK, Orav EJ, et al. Impact of an automated test results management system on patients' satisfaction about test result communication. Arch Intern Med. 2007;167:2233–9.

  19. Laxmisan A, Sittig DF, Pietz K, et al. Effectiveness of an electronic health record-based intervention to improve follow-up of abnormal pathology results: a retrospective record analysis. Med Care. 2012;50:898–904.

  20. Edmonds SW, Wolinsky FD, Christensen AJ, et al. The PAADRN study: a design for a randomized controlled practical clinical trial to improve bone health. Contemp Clin Trials. 2013;34:90–100.

  21. Main C, Moxham T, Wyatt JC, et al. Computerised decision support systems in order communication for diagnostic, screening or monitoring test ordering: systematic reviews of the effects and cost-effectiveness of systems. England: NHS R & D HTA Programme (Great Britain); National Co-ordinating Centre for HTA (Great Britain), NIHR Health Technology Assessment Programme; 2010. p. 1–227.

  22. Carmona-Cejudo JM, Hortas ML, Baena-García M, et al. DB4US: a decision support system for laboratory information management. Interact J Med Res. 2012;1:e16.

  23. Sepulveda JL, Young DS. The ideal laboratory information system. Arch Pathol Lab Med. 2013;137:1129–40.

  24. Wald JS, Burk K, Gardner K, et al. Sharing electronic laboratory results in a patient portal--a feasibility pilot. Stud Health Technol Inform. 2007;129:18–22.

  25. Haeckel R, Wosniok W, Arzideh F. Proposed classification of various limit values (guide values) used in assisting the interpretation of quantitative laboratory test results. Clin Chem Lab Med. 2009;47:494–7.

  26. Romatowski J. Problems in interpretation of clinical laboratory test results. J Am Vet Med Assoc. 1994;205:1186–8.

  27. Michalski RS. Pattern recognition as rule-guided inductive inference. IEEE Trans Pattern Anal Mach Intell. 1980;2:349–61.

  28. Kawada T. Sample size in receiver-operating characteristic (ROC) curve analysis. Circ J. 2012;76:768; author reply 769.

  29. Berry KJ, Johnston JE, Mielke PW Jr. Weighted kappa for multiple raters. Percept Mot Skills. 2008;107:837–48.

  30. Wilson EV, Lankton NK. Modeling patients' acceptance of provider-delivered e-health. J Am Med Inform Assoc. 2004;11:241–8.

  31. Davis FD, Bagozzi RP, Warshaw PR. User acceptance of computer technology: a comparison of two theoretical models. Manag Sci. 1989;35:982–1003.

  32. Drinkwater BL. A comparison of the direction-of-perception technique with the Likert method in the measurement of attitudes. J Soc Psychol. 1965;67:189–96.

  33. Rogel-Salazar J. Essential MATLAB and Octave. Boca Raton: Taylor & Francis, CRC Press; 2015.

  34. Hiebl MRW. Applying the four-eyes principle to management decisions in the manufacturing sector: are large family firms one-eye blind? Manag Res Rev. 2015;38:264–82.

  35. Kuo KM, Liu CF, Ma CC. An investigation of the effect of nurses' technology readiness on the acceptance of mobile electronic medical record systems. BMC Med Inform Decis Mak. 2013;13:88.

  36. Chen IJ, Yang KF, Tang FI, et al. Applying the technology acceptance model to explore public health nurses' intentions towards web-based learning: a cross-sectional questionnaire survey. Int J Nurs Stud. 2008;45:869–78.

  37. Guo SH, Lin YH, Chen RR, et al. Development and evaluation of theory-based diabetes support services. Comput Inform Nurs. 2013;31:17–26; quiz 27–18.

  38. Vallance JK, Courneya KS, Taylor LM, et al. Development and evaluation of a theory-based physical activity guidebook for breast cancer survivors. Health Educ Behav. 2008;35:174–89.

  39. Campbell L, Watkins RM, Teasdale C. Communicating the result of breast biopsy by telephone or in person. Br J Surg. 1997;84:1381.

  40. Baldwin DM, Quintela J, Duclos C, et al. Patient preferences for notification of normal laboratory test results: a report from the ASIPS collaborative. BMC Fam Pract. 2005;6:11.

  41. Callen JL, Westbrook JI, Georgiou A, Li J. Failure to follow-up test results for ambulatory patients: a systematic review. J Gen Intern Med. 2012;27:1334–48.

  42. Semenov I, Kopanitsa G, Karpov A, et al. Implementation of a clinical decision support system for interpretation of laboratory tests for patients. Stud Health Technol Inform. 2016;224:184–8.

  43. Boegl K, Adlassnig KP, Hayashi Y, et al. Knowledge acquisition in the fuzzy knowledge representation framework of a medical consultation system. Artif Intell Med. 2004;30:1–26.

Acknowledgements

Preliminary results of this work have been presented at a conference at Tomsk Polytechnic University.

Funding

The research was funded by the Russian Science Foundation (RSF). The research was carried out at Tomsk Polytechnic University within the framework of the Tomsk Polytechnic University Competitiveness Enhancement Program grant.

The funding bodies did not play any role in the design of the study; the collection, analysis, and interpretation of data; or the writing of the manuscript.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Author information

Authors and Affiliations

Authors

Contributions

GK was responsible for a literature review, setting up the concept and methods and writing the manuscript. IS was responsible for the implementation and evaluation of the system. Both authors read and approved the final version of the manuscript.

Corresponding author

Correspondence to Georgy Kopanitsa.

Ethics declarations

Ethics approval and consent to participate

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Written informed consent was obtained from all participants for being included in the study. Written consent for participation in the study has been signed and submitted by two independent experts. The study was approved by the ethics commission of the committee of healthcare of Saint-Petersburg, Russia.

Consent for publication

Not applicable

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:

Questionnaire. A questionnaire to study the acceptance of the system by patients. (DOCX 63 kb)

Additional file 2:

Input data for a decision support. Blood Sugar Test Results as an input for decision support. (DOCX 77 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Cite this article

Kopanitsa, G., Semenov, I. Patient facing decision support system for interpretation of laboratory test results. BMC Med Inform Decis Mak 18, 68 (2018). https://doi.org/10.1186/s12911-018-0648-0
