In this study, which compared a computerized decision support system with manual diagnosis, no major difference between the paper and pencil method and the computer support was found for the easy case. For the difficult case, however, a difference was found in favour of paper and pencil.
It is hard to draw any conclusion from this finding other than that traditional decision making is at least as effective as the computer support tested. Prior training and experience in the two methods, paper and pencil and computer support, were not extensive. However, the lack of training time and experience with the computer method was, to some degree, compensated for by the instructions the subjects were given in the actual trial. The clinicians were also supported by one of the study leaders with handling the program, although they were given no assistance in judging the individual criteria. On the other hand, working 'backwards' in the program, that is, revising and changing earlier answers, which is rather complicated in the program, caused problems for almost everyone. Such revision functions might be handled more easily in programs with a less complex structure, such as SCAN. However, these functions might not be used much there, because coding patients according to 'yes' or 'no' answers is not too difficult.
Revisiting and changing earlier judgments in CB-SCID1, on the other hand, is more complex because of its decision-tree structure. Moreover, CB-SCID1 might have a longer learning curve than more straightforward systems.
A common expectation is that computer support yields faster and easier decisions than paper and pencil. The finding that the paper and pencil method was faster for the difficult case can, to some degree, be explained by the fact that the CB-SCID1 is not wholly automatic: the system demands a thinking process, with movements back and forth during the decision process, that might be harder for a difficult case.
The finding that the correct diagnosis 'Depression' (one of the three diagnoses in the difficult case) was made more often with the paper and pencil method than with the computer support could depend on the thinking and navigation processes. It might simply be easier to think globally in the paper and pencil situation than in the step-by-step sequential thinking process required by the CB-SCID1.
The finding that the majority found the CB-SCID1 supportive and easy to use, even though it took longer and yielded fewer correct diagnoses than paper and pencil, needs some comment.
The CB-SCID1 does lend structure by presenting the next question according to the DSM decision tree, but it makes the processing aspects of global thinking, and of navigating back and forth in the program, difficult. The program might force thinking to become sequential, so that global thinking becomes difficult, the thinking process becomes fragmented, and navigation in the program becomes even harder.
CB-SCID1 might trigger automation-bias errors of commission, that is, following the direction given by the program regardless of whether the action is correct, or lead the user to apply only sequential rather than global thinking, resulting in incorrect diagnoses. Missing the diagnosis 'Depression' in the computer situation may likewise reflect automation-bias errors of omission, or merely sequential rather than global thinking. Probably, a free and flexible combination of sequential and global thinking, adapted to the demands of the situation, would be more advantageous.
It can also be discussed whether CDSS and paper and pencil should be seen as alternative methods, or whether a CDSS should instead take a complementary role in ordinary clinical work.
Of course, this depends on the degree of automation in the CDSS and the type of clinical task to be supported. Comparing the two methods, paper and pencil and computer, is to some degree a comparison of the human brain with the 'intelligence' built into the CDSS. The human brain is good at global information processing, while the computer requires the user to follow a logical sequential path and handle one piece of information at a time (in our case, to decide upon one criterion at a time in the CB-SCID1).
General practitioners in primary care settings have shown interest in the CB-SCID1.
Such a system might be of value to them, as they are responsible for first-line care in psychiatric issues in Sweden. The CB-SCID1 might support general practitioners, who often lack psychiatric domain knowledge.
The use of written text on paper when presenting the cases to be diagnosed has limitations concerning mode of interpretation. Some subjects may stick inflexibly to the written text; others may fill in the gaps using their clinical experience and imagination. Moreover, since live patients were not used, subjects may have been frustrated at not being able to ask follow-up questions to clarify the picture of the patient. The artificiality of the evaluation conditions is thus a limitation: the study could not test whether the software might work better with real live cases.
Another limitation of this study is that only one type of CDSS was studied. CDSSs vary in complexity, from categorized information that requires further processing to systems with self-learning capabilities. CB-SCID1 is characterized by deductive inference and automatic generation of a diagnosis, but it requires input judgments on the various criteria, presented sequentially according to a predetermined decision tree in the software.
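The decision-tree style of CDSS described above can be sketched schematically: each node presents one criterion at a time, and a 'yes' or 'no' answer determines the next node until a leaf yields (or rules out) a diagnosis. This is a minimal illustration only; the node names, tree shape, and criteria are invented for the example and do not reflect CB-SCID1's actual design or the DSM.

```python
# Minimal sketch of a sequential decision-tree CDSS: one criterion
# judged at a time, the answer selecting the branch to follow.
# All names and criteria here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    criterion: str                   # question put to the clinician
    yes: Optional["Node"] = None     # next node on a 'yes' answer
    no: Optional["Node"] = None      # next node on a 'no' answer
    diagnosis: Optional[str] = None  # set only on leaf nodes

def run(node: Node, answers: dict) -> Optional[str]:
    """Walk the tree one criterion at a time, as a sequential CDSS would."""
    while node.diagnosis is None:
        nxt = node.yes if answers[node.criterion] else node.no
        if nxt is None:
            return None              # path exhausted without a diagnosis
        node = nxt
    return node.diagnosis

# A tiny illustrative tree (deliberately simplified, not the DSM).
tree = Node(
    "depressed_mood_2_weeks",
    yes=Node("loss_of_interest",
             yes=Node("", diagnosis="Depression"),
             no=Node("", diagnosis="Subthreshold")),
    no=Node("", diagnosis="No mood disorder"),
)

print(run(tree, {"depressed_mood_2_weeks": True,
                 "loss_of_interest": True}))   # prints "Depression"
```

The sketch also makes the discussion's point concrete: the clinician can only answer the single criterion currently presented, which enforces the sequential, one-item-at-a-time processing contrasted above with global thinking.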
The fact that very few of the clinicians had tested the CDSS CB-SCID1 before the trial might, of course, have influenced the outcome of the study. However, very few had had any training in the paper and pencil SCID either. Furthermore, all subjects were instructed in how to use the CB-SCID1 system before the trial, and a CB-SCID1-trained person was available during all sessions to answer questions about the system and its functions.
In this study we defined case complexity as the number of diagnoses: the easy case had one diagnosis and the difficult case had three. There are of course other ways of defining case complexity, such as rarity.
Another limitation may be faults in the software itself, for instance errors missed in unit testing or design flaws in the software architecture. The design and evaluation processes behind CB-SCID1, from requirements analysis to assessment of outcomes, may have some drawbacks. Although this study did not focus on evaluating the software, we found some indications of such problems. The follow-up interview revealed, for instance, that at least 15 users rejected the CB-SCID1 diagnostic advice. Some of these rejections may be due to errors in the software, some to unskilled handling of the program, and some to architectural problems. The problems mentioned in the follow-up interview included missing the depressive disorder part, missing the alcohol and other substance-related disorder part, missing brief psychotic disorder, problems with tense in the questions, and problems with overdiagnosis. According to some participants in the study, the CB-SCID1 seems to generate a diagnosis of generalized anxiety syndrome or hypochondria after a 'yes' answer to just one criterion.
In summary, the greatest limitation of this study might be the unclear status of CB-SCID1 in terms of the information-systems life cycle. The CB-SCID1 has the status of a mature, commercialized product on the market. Yet one might wonder about the development process, from requirements analysis to outcomes assessment: what about architecture design, software programming, unit testing and acceptance testing? In this study it is difficult to weigh the importance of user training and familiarity with CB-SCID1 against probable software problems.