Table 1 Survey for learning effect evaluation

From: A prototype of knowledge-based patient safety event reporting and learning system

Scenario 1 (report queries)

Q1a. Do you think the query and its top three similar reports were annotated with correct contributing factors?

Q2a. Do you think the scenarios of the three reports were similar to that of the query report?

Scenario 2 (contributing factor queries)

Q1b. Do you think the top three recommended reports were annotated with correct contributing factors?

Q2b. Do you think the scenarios of the three reports matched your expectation?

Both Scenarios

Q3. Do you think the three similar/recommended reports and the solutions provided strong knowledge support for learning purposes?

Q4. Is there any other comment you would like to make?

  1. Q1-Q3 are single-choice questions with four scaled choices: 1) fully agree, 2) mostly agree, 3) mostly disagree, and 4) fully disagree, while Q4 is an open-ended question. Participants reviewed the materials and completed the survey individually. Fleiss' kappa, a statistical measure of the reliability of agreement among multiple raters, was calculated for the answers to Q1-Q3 from the five participants. To simplify the calculation, fully agree and mostly agree were treated as agree, while mostly disagree and fully disagree were treated as disagree
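
The footnote's agreement analysis can be illustrated with a minimal sketch of Fleiss' kappa over dichotomized agree/disagree counts. The response matrix below is hypothetical, for illustration only; the study's actual answers are not reproduced here.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an items-by-categories matrix of rating counts.

    counts[i, j] = number of raters assigning item i to category j;
    every row must sum to the same number of raters.
    """
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts[0].sum()

    # Proportion of all assignments falling in each category.
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    # Per-item agreement: fraction of rater pairs agreeing on that item.
    P_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))

    P_bar = P_i.mean()           # mean observed agreement
    P_e = np.square(p_j).sum()   # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical dichotomized answers from 5 participants to Q1-Q3
# (Q1a, Q2a, Q1b, Q2b, Q3); columns are [n_agree, n_disagree].
ratings = np.array([
    [5, 0],
    [4, 1],
    [1, 4],
    [3, 2],
    [5, 0],
])
print(f"Fleiss' kappa = {fleiss_kappa(ratings):.3f}")
```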