Mixed-design ANOVA results (F, p, partial η²) for the main effects of Time (week) and Explanation and the Explanation × Time interaction on each outcome measure. Source: *Examining the effect of explanation on satisfaction and trust in AI diagnostic systems*.
| Measure | Time (week) | Explanation | Explanation × Time |
|---|---|---|---|
| Satisfaction | F(2, 218) = 126.52, p < 0.001, ηp² = 0.25 | F(3, 109) = 1.97, p = 0.12, ηp² = 0.04 | F(6, 218) = 8.01, p < 0.001, ηp² = 0.06 |
| Sufficiency | F(2, 218) = 114.65, p < 0.001, ηp² = 0.25 | F(3, 109) = 3.38, p = 0.02, ηp² = 0.06 | F(6, 218) = 8.78, p < 0.001, ηp² = 0.07 |
| Completeness | F(2, 218) = 104.24, p < 0.001, ηp² = 0.24 | F(3, 109) = 4.85, p = 0.003, ηp² = 0.08 | F(6, 218) = 6.54, p < 0.001, ηp² = 0.06 |
| Usefulness | F(2, 218) = 110.36, p < 0.001, ηp² = 0.25 | F(3, 109) = 0.82, p = 0.49, ηp² = 0.02 | F(6, 218) = 5.06, p < 0.001, ηp² = 0.05 |
| Accuracy | F(2, 218) = 88.26, p < 0.001, ηp² = 0.20 | F(3, 109) = 9.95, p < 0.001, ηp² = 0.16 | F(6, 218) = 8.14, p < 0.001, ηp² = 0.07 |
| Trust | F(2, 218) = 64.71, p < 0.001, ηp² = 0.16 | F(3, 109) = 4.71, p < 0.001, ηp² = 0.08 | F(6, 218) = 4.10, p < 0.001, ηp² = 0.04 |
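The degrees of freedom are consistent with a 4 (Explanation, between-subjects) × 3 (Time, within-subjects) mixed design: the between-factor tests use (3, 109) df and the repeated-measures tests use (2, 218) and (6, 218) df. The sketch below shows how F, p, and partial η² for such a design could be obtained with the `pingouin` package. It is a minimal illustration only: the column names, the input file `ratings_long.csv`, and the choice of software are assumptions not stated in the paper, and the effect-size column (`np2`) need not reproduce the exact values reported above.

```python
# Minimal sketch of a between x within mixed ANOVA, assuming hypothetical
# long-format data; pingouin's mixed_anova() reports F, p-unc, and np2
# (partial eta squared) for each effect.
import pandas as pd
import pingouin as pg

# Hypothetical long-format file: one row per participant per week, with
# columns participant, explanation (4 conditions), week (3 time points),
# and one column per rated measure.
df = pd.read_csv("ratings_long.csv")

measures = ["satisfaction", "sufficiency", "completeness",
            "usefulness", "accuracy", "trust"]

for dv in measures:
    aov = pg.mixed_anova(
        data=df,
        dv=dv,                  # dependent measure
        within="week",          # repeated factor: 3 levels -> df = 2
        between="explanation",  # group factor: 4 levels -> df = 3
        subject="participant",  # participant identifier
    )
    # One row each for the within effect, the between effect, and their
    # interaction; the F, p-unc, and np2 columns correspond to the table.
    print(f"\n=== {dv} ===")
    print(aov.round(3))
```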