
Only the anxious ones? Identifying characteristics of symptom checker app users: a cross-sectional survey

Abstract

Background

Symptom checker applications (SCAs) may help laypeople classify their symptoms and receive recommendations on medically appropriate actions. Further research is necessary to estimate the influence of user characteristics, attitudes and (e)health-related competencies.

Objective

The objective of this study is to identify meaningful predictors for SCA use considering user characteristics.

Methods

An explorative cross-sectional survey was conducted to investigate German citizens’ demographics, eHealth literacy, hypochondria, self-efficacy, and affinity for technology using validated German-language questionnaires. A total of 869 participants were eligible for inclusion in the study. Because n = 67 SCA users were identified and matched 1:1 with non-users, the main analysis comprised a sample of n = 134 participants. A four-step analysis was conducted, involving explorative predictor selection, model comparisons, and parameter estimates for the selected predictors, including sensitivity and post hoc analyses.

Results

Hypochondria and self-efficacy were identified as meaningful predictors of SCA use. Hypochondria showed a consistent and significant effect across all analyses (OR 1.24–1.26; 95% CI 1.1–1.4). Self-efficacy (OR 0.64–0.93; 95% CI 0.3–1.4) showed inconsistent and nonsignificant results, leaving its role in SCA use unclear. Over half of the SCA users in our sample met the classification for hypochondria (cut-off of 5 on the Whiteley Index, WI).

Conclusions

Hypochondria emerged as a significant predictor of SCA use with a consistently stable effect, yet according to the literature, individuals with this trait may be less likely to benefit from SCAs despite their greater likelihood of using them. These users could be further unsettled by risk-averse triage and unlikely but serious diagnosis suggestions.

Trial Registration

The study was registered in the German Clinical Trials Register (DRKS) DRKS00022465, DERR1-https://doi.org/10.2196/34026.


Introduction

Symptom checker apps (SCAs) are eHealth applications designed to support laypeople in assessing their symptoms and receiving recommendations on medically appropriate actions related to their health [1]. Users can enter their health-related information into SCAs through a chatbot or search strings, and SCAs retrieve and categorize the input. Some SCAs are advertised as AI-based, and most generate healthcare-related information and recommendations for action based on user input [2].

Although SCAs are already in use, their impact on healthcare systems remains poorly understood. Recent scoping reviews described ambiguous effects of SCAs [1, 3], indicating that they could either reduce or induce oversupply. The effectiveness of SCAs in delivering adequate and precise information and recommendations must be considered. Additionally, the possible impact of SCAs on healthcare systems depends on several factors, including the characteristics of SCAs and how they are used. Finally, the impact of SCAs on users’ health-related behavior, such as seeking healthcare, must also be considered.

Recent studies have shown that the diagnostic accuracy and triage capabilities of SCAs are highly variable. A recent study reported a triage accuracy for primary conditions varying between 48.8% and 90.1% [1]. Additionally, a significant disparity in diagnostic accuracy between SCAs and emergency physicians has been reported: while SCAs correctly identified the primary diagnosis in only 30% of cases, emergency physicians achieved a much higher accuracy rate, successfully diagnosing 81% of cases [4]. Another study found that medical laypeople still outperformed SCAs [5]. Consequently, SCAs currently struggle to reliably assist patients in navigating healthcare and to provide adequate medical recommendations.

Understanding the impact of SCAs on the healthcare system requires consideration of user demographics, such as (e)health literacy and attitudes toward technology [3, 6, 7]. Research indicates that SCA users are often female, well educated, Caucasian, and have health insurance and a regular healthcare provider [8, 9]. Recent studies showed that health literacy levels in Germany have declined over the years, with the reported uncertainty being mainly related to online resources [10]. However, some users found SCAs useful for self-diagnosis and reported positive health effects [11], while others had problems giving and interpreting concrete information on symptom time patterns or severity [12]. Such difficulties may initiate unnecessary healthcare-seeking behavior, although the evidence remains inconclusive [13]. Additionally, increased eHealth literacy may lead to greater subjective trust in SCAs and the ability to critically evaluate their recommendations, but not necessarily to a change in actual trust-based behavior [14]. Lastly, user attitudes toward technology play a significant role, with “tech seekers” being more likely to use SCAs in the future compared to “tech rejectors” and “unsure acceptors” [15]. As with internet health research, SCA use may also magnify preexisting user characteristics associated with unwarranted healthcare-seeking tendencies rather than operating independently [16]. For example, SCAs may worsen hypochondria, as internet research is already known to do among vulnerable patient groups [16].

There is a research gap concerning the influence of concepts such as hypochondria, self-efficacy, technology affinity, and health literacy on the use of SCAs. Therefore, the aim of this explorative study was to identify meaningful predictors for SCA use considering user characteristics.

Methods

An explorative cross-sectional survey was conducted. The survey was available online or as a paper-and-pencil version. The STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) checklist [17] was applied.

Measurements

Due to the limited literature on SCAs, pilot interviews with SCA users and SCA experts were conducted to ensure a meaningful concept selection for the survey content. In addition, to identify potential characteristics of the user group, we drew on literature related to the use of health applications.

Thus, the following concepts were selected: eHealth literacy [18], hypochondria [19], self-efficacy [20], and affinity for technology [21]. Table 1 presents a comprehensive overview, detailing the reliability, validity, scale, and scoring of the evaluated scales (General Life Satisfaction Short Scale [22], German Version of the eHealth Literacy Scale [23], Whiteley Index [24, 25], General self-efficacy short scale [26], Ultra-Short Scale for Assessing Affinity for Technology Interaction [27]) used in this study.

Furthermore, the presence of chronic diseases and private screen time (as a potential indicator of smartphone use) were assessed. Sociodemographic variables such as age, gender, and school education were also assessed in this study.

Table 1 Overview, detailing the reliability, validity, scale, and scoring of the evaluated scales used in this study

Recruitment

The survey was conducted from November 2020 to June 2021. The sample comprised different recruiting strands to reach a wide variety of participants and to ensure a sufficient number of SCA users for the statistical analysis. In the first strand, n = 50,000 German citizens were contacted by mail to participate in the survey. The intended recipients were representatively selected by external partners (T + R Dialog Marketing, Berlin, Germany, and Acxiom, Neu-Isenburg, Germany). Further participants were recruited via mailing lists of the University of Tübingen and the University Hospital of Tübingen, social media, and cooperating GP practices. The second strand aimed to reach SCA users only; therefore, participants were only included if they had SCA experience. Targeted advertisements via social media, the social channels of the University Hospital of Tübingen, the homepage of a German newspaper, and the social channels of federal health insurance were used to recruit further SCA users.

Data exclusion

We assumed a missing completely at random mechanism (only single values were implausible or missing, omitted by chance). Participants with missing data on the primary outcome were excluded (n = 2). Furthermore, physicians (n = 19) were excluded due to the assumption that their medical knowledge would have a significant influence on SCA usage.

Statistical analysis

The primary outcome variable was whether participants had already used SCAs. Statistical analyses were conducted in four steps. The first step comprised variable selection using a least absolute shrinkage and selection operator (LASSO [28]) regularized logistic regression analysis considering nine predictors (as listed in Table 2). The second step compared an intercept-only model, a full model and a model with the selected predictors using conventional logistic regression. In the third step, we used the identified predictors to derive parameter estimates and p values, which constituted the main analysis. A post hoc analysis was conducted in the fourth step.

Propensity score matching

Users and non-users were matched using propensity score matching [29, 30] on an initial set of potential confounders [31]. Confounder covariates included school education and age, as we assumed that we reached a younger and better-educated user population due to our targeted recruiting strategy via social media and university mailing lists. A nearest neighbor matching algorithm [29] was applied. Missing data on the predictors were imputed using a random forest approach [32] that enables the imputation of missing information in mixed data (categorical and continuous). Out-of-bag errors were considered [32].
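
As an illustration of how such a step can be set up in R (the language used for all analyses in this study), the sketch below combines the cited missForest and MatchIt packages. It is not the authors’ original script, and the data frame dat and its column names (sca_use, age, education, wi, asku, eheals, ati) are hypothetical.

  # Illustrative sketch only: impute missing predictor values with missForest,
  # then match SCA users 1:1 to non-users on age and school education using
  # nearest-neighbor propensity score matching (MatchIt).
  library(missForest)
  library(MatchIt)

  # 'dat' holds one row per participant; sca_use is coded 0/1; categorical
  # columns are assumed to be factors, as required by missForest.
  imp <- missForest(dat[, c("age", "education", "wi", "asku", "eheals", "ati")])
  imp$OOBerror                              # out-of-bag imputation error
  dat[, colnames(imp$ximp)] <- imp$ximp     # write imputed values back

  m_near  <- matchit(sca_use ~ age + education, data = dat,
                     method = "nearest", ratio = 1)   # 1:1 nearest neighbor
  summary(m_near)                           # covariate balance (basis for a love plot)
  matched <- match.data(m_near)             # matched sample used in later steps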

Predictor selection using LASSO regularized logistic regression

The participants were divided into training (70%) and test (30%) data sets. The training data set was used to fit a model on the given data, and the test data set was used to evaluate the model [33]. A 0.632 bootstrap estimator [34] was applied as the resampling method for lambda selection. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and accuracy rate were calculated by applying the fitted model to the test data set. An overview of the included predictors can be found in Table 2.
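
The paper does not state which R packages were used for this step. Purely as an illustration, the sketch below reproduces the described workflow with caret (which offers the 0.632 bootstrap as a resampling method) and glmnet, continuing from the hypothetical matched data frame above; the predictor names in preds are placeholders for the nine candidates of Table 2.

  # Illustrative sketch only: 70/30 split, LASSO (alpha = 1) with lambda tuned
  # via the 0.632 bootstrap, and evaluation on the held-out test set.
  library(caret)
  library(glmnet)

  preds <- c("wi", "asku", "eheals", "ati", "life_sat", "subj_health",
             "chronic", "screen_time", "gender")   # hypothetical names for the nine candidates
  model_dat <- matched[, c("sca_use", preds)]
  model_dat$sca_use <- factor(model_dat$sca_use, labels = c("nonuser", "user"))

  set.seed(2021)
  idx       <- createDataPartition(model_dat$sca_use, p = 0.7, list = FALSE)
  train_dat <- model_dat[idx, ]
  test_dat  <- model_dat[-idx, ]

  ctrl <- trainControl(method = "boot632", classProbs = TRUE,
                       summaryFunction = twoClassSummary)
  fit  <- train(sca_use ~ ., data = train_dat, method = "glmnet",
                metric = "ROC", trControl = ctrl,
                tuneGrid = expand.grid(alpha = 1,    # LASSO penalty
                                       lambda = seq(0.001, 0.3, length.out = 50)))

  coef(fit$finalModel, s = fit$bestTune$lambda)              # nonzero coefficients = selected predictors
  confusionMatrix(predict(fit, test_dat), test_dat$sca_use)  # sensitivity, specificity, PPV, NPV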

Model comparison of an intercept-only model, a full model and a model with the identified predictors

A conventional logistic regression was fit on the complete matched data set to derive odds ratios (ORs) and confidence intervals (CIs). To identify potential multicollinearity, we employed the variance inflation factor (VIF), which assesses the variance of a coefficient within the full model in comparison to its variance when modeled independently [33]. A VIF value exceeding 5 was considered indicative of significant collinearity [33]. The Akaike information criterion (AIC) of a full model, an intercept-only model and the model with the LASSO-selected predictors was compared to assess model performance. The smaller the AIC, the better the performance of the model [33].
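
A brief sketch of this comparison, continuing the hypothetical objects from the sketches above; the car package for the VIF is an assumption, as the paper does not name it.

  # Illustrative sketch only: intercept-only model, full model with the nine
  # candidate predictors, and the model with the LASSO-selected predictors.
  library(car)   # vif()

  m_null <- glm(sca_use ~ 1, data = model_dat, family = binomial)
  m_full <- glm(reformulate(preds, response = "sca_use"),
                data = model_dat, family = binomial)
  m_sel  <- glm(sca_use ~ wi + asku, data = model_dat, family = binomial)

  AIC(m_null, m_full, m_sel)   # smaller AIC indicates better model performance
  vif(m_full)                  # values above 5 would flag substantial collinearity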

Parameter estimators, CI and p-values of the models, including the selected predictors

Parameter estimators were derived from conventional logistic regression. Two sensitivity analyses were conducted to ensure the robustness of the ORs, CIs and p values under different sample compositions. For the first sensitivity analysis, we applied a different matching algorithm (full optimal matching [29]). For the second sensitivity analysis, we used the whole sample without matching.
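
Continuing the sketch, odds ratios, confidence intervals and p values can be read off the selected-predictor model, and the two sensitivity analyses can be set up as follows; this remains illustrative, with hypothetical object names, not the authors’ code.

  # Illustrative sketch only: ORs, Wald 95% CIs and p values for the selected
  # predictors, plus the two alternative sample compositions.
  exp(coef(m_sel))                              # odds ratios for WI and ASKU-S
  exp(confint.default(m_sel))                   # Wald-type 95% confidence intervals
  summary(m_sel)$coefficients[, "Pr(>|z|)"]     # p values

  # Sensitivity analysis 1: full optimal matching instead of nearest neighbor
  # (weights from full matching; robust standard errors would typically be used).
  m_opt <- matchit(sca_use ~ age + education, data = dat, method = "full")
  sens1 <- glm(sca_use ~ wi + asku, data = match.data(m_opt),
               family = binomial, weights = weights)

  # Sensitivity analysis 2: the whole, unmatched sample.
  sens2 <- glm(sca_use ~ wi + asku, data = dat, family = binomial)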

Post hoc analysis: categorization of the WI

Finally, a post hoc analysis was conducted for a predictor identified in step 3. The variable was dichotomized to identify participants with clinically relevant scores, and a Pearson’s χ2 test was conducted.
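
A minimal sketch of this step, assuming the hypothetical matched data frame and wi column from above and a cut-off of more than five points on the WI sum score:

  # Illustrative sketch only: dichotomize the WI sum score at the clinical
  # cut-off and test the association with SCA use (Pearson's chi-square test).
  matched$wi_clinical <- factor(matched$wi > 5,
                                labels = c("below cut-off", "clinically relevant"))
  table(matched$sca_use, matched$wi_clinical)      # 2 x 2 counts (cf. Table 5)
  chisq.test(matched$sca_use, matched$wi_clinical) # Pearson's chi-square test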

Data Processing

Data processing and statistical analyses were conducted with R Version 4.1.1 [35, 36] and RStudio Version 1.4 [37].

Results

A total of 869 participants (n = 116 paper-pencil, n = 753 online) completed the survey. As participants were matched 1:1 and 67 users finished the survey, the final analysis included n = 134 participants. The median age of the population was 31 years (IQR 24–49), and 67% were female. The matched variables (age and school education) were well balanced between the user and non-user groups (love plot, Supplementary Fig. 1).

Table 2 describes the matched sample stratified by SCA use, including all predictors used in the LASSO regression. Univariate analyses were conducted for all predictors. In addition to subjectively rated health, hypochondria and self-efficacy showed a significant association with SCA use.

Table 2 Overview of the potential predictors stratified for SCA use and univariate analysis

Identification of meaningful predictors

The training data set comprised 93 participants, and the test data set comprised 41 participants. Nine variables were initially considered for predictor selection, as detailed in Table 2. The selection process, which involved a LASSO regularized logistic regression, identified two variables with nonzero coefficients: hypochondria (WI) and self-efficacy (ASKU). Consequently, these two variables were chosen as predictors in the conventional logistic regression model. The bootstrapped ROC curve for the regularization parameter λ is shown in Fig. 1. Figure 2 shows the LASSO coefficient profiles against log(λ); at λ = 0.112, the model error was minimized and two variables were selected. Sensitivity, specificity, negative/positive predictive values (NPV/PPV) and balanced accuracy can be found in Table 3.

Table 3 Model evaluation of the LASSO regression on the test data set
Fig. 1

Bootstrapped ROC curve for λ

Fig. 2

LASSO coefficient profiles against log(λ). At λ = 0.112, the model error is minimized and two variables are selected

Model comparisons

The AIC of the full model was 184.43, and the intercept-only model yielded an AIC of 187.76. The logistic regression model based on the results of the LASSO variable selection (WI and ASKU-S) had the lowest AIC of 172.21 and therefore showed improved performance compared to the full and intercept-only models. The VIF of 1.035 indicated no considerable multicollinearity.

Parameter estimators and predictor robustness

Table 4 shows the odds ratios, confidence intervals of the odds ratios, and p values of the logistic regression and its sensitivity analyses comprising the previously identified predictors, hypochondria and self-efficacy. The OR of the predictor hypochondria (WI) showed a similar value (1.24–1.26) in all three models and a significant p value (P < .001). The OR size corresponds to a small effect [38]. The ORs for self-efficacy, measured across the three models, were not statistically significant (P > .05). Additionally, the variance inflation factor for all models was low (VIF < 1.06), indicating no considerable multicollinearity.
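
As an illustrative reading of this point estimate (not a result reported by the study): because the OR applies per one-point increase on the WI, it compounds multiplicatively over larger score differences.

  # Worked example, illustrative only: with an OR of 1.24 per WI point,
  # a five-point difference on the WI corresponds to
  1.24^5   # ≈ 2.9-fold odds of SCA use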

Table 4 Results of the conducted logistic regression and the sensitivity analysis

Post hoc analysis

Pearson’s χ2 test revealed a significant difference between users and non-users in the proportion of participants with clinically relevant levels of hypochondria on the WI.

Over half of the SCA users had a WI sum score higher than the cut-off of five, indicating clinically relevant hypochondria (Table 5).

Table 5 Post hoc analysis with categorized WI stratified for the user group

Discussion

In this exploratory study, we identified WI-assessed hypochondria as a reliable predictor of SCA use. This predictor showed a consistent effect across all analyses, including the two sensitivity analyses. Furthermore, lower self-efficacy, assessed with the ASKU-S, was identified as a predictor of SCA use in the main analysis. The sensitivity analyses did not replicate the effect of this variable; thus, its role remains unclear, possibly due to the rather moderate sample size.

Comparison with prior work

Hypochondria was identified as a predictor of SCA use and showed a stable effect throughout our analyses. Over half of the SCA users had a WI sum score higher than the cut-off of five, indicating clinically relevant hypochondria (Table 5). This level of anxiety may affect a patient’s ability to adequately handle action recommendations and symptom classifications. Thus, these SCA users might be susceptible to the negative effects of SCA use. Hypochondria in the context of SCAs can be classified as cyberchondria, considering the working definition of Vismara [39]. A 2020 study discouraged self-diagnosis using SCAs among cyberchondriac patients and emphasized adjusting expectations accordingly when accessing health information online [40]. Another recent study revealed that some people with high WI (hypochondria) scores felt worse after online symptom checking, while others with low scores felt better [41]. Given this literature and our findings, it appears that patients with health anxiety are less likely to benefit from SCAs, despite being more inclined to use them. The transferability of findings from online health-related searching to SCAs should be considered, as these findings suggest that prolonged use is associated with increased functional impairment and anxiety both before and after checking [41]. The impact of using SCAs on health-anxious patients remains unclear and warrants further investigation.

Furthermore, we examined self-efficacy as a potentially meaningful predictor, since a recent study indicated an association between self-efficacy and the adoption of SCA use [20]. The results for the predictor self-efficacy were ambiguous, with differing effect sizes across our analyses. It is still uncertain how much self-efficacy contributes to determining the usage of SCAs.

Affinity for technology was another variable we considered, since the literature indicated a potential association [15]. A study that examined SCA user profiles with a latent class analysis revealed that the latent class of “tech seekers” showed the highest odds of using SCAs [15]. However, the results in our rather moderate sample do not suggest an association between affinity for technology and SCA use. Reasons for the discrepancy might be the different operationalization of the concepts (e.g., a scale rather than profiles) or the different study populations.

The broad use of SCAs can lead to effects at both the individual and systemic levels. SCAs could lead to a misuse of healthcare resources [4], such as users visiting emergency departments too early or too often. As a result, users with nonurgent conditions put further strain on the health system by possibly increasing costs and taking resources from patients who need emergency care [3, 42]. To mitigate these risks, software developers should provide transparent information about the potential dangers of using SCAs. This information could be presented in the form of an instruction leaflet, available after downloading an SCA or when using it in a browser. The instruction should clearly state that using SCAs may increase health anxiety. The language used in the instruction should be concise and easy to understand so that users can absorb the information and take appropriate action.

The existing knowledge about SCAs should be used to improve SCA design in the best possible way, minimizing negative effects and strengthening potential positive effects. Physicians should be trained to consider pre-informed patients and to promote dialog. It is necessary to better understand the relationships between cyberchondria, hypochondria, and eHealth literacy in the context of SCA use to derive recommendations for systemic interventions and to plan targeted and helpful interventions.

Strengths and limitations

In this study, we conducted an industry-independent investigation of SCA users. Furthermore, our research was not limited to a single SCA; instead, we examined usage patterns across various types of SCAs, enhancing the generalizability of our findings. Additionally, by matching users and non-users based on age and education, we controlled for these variables, thereby strengthening the reliability of our analysis.

A limitation of this study is the cross-sectional design we employed. This approach restricts our ability to infer causal relationships between variables, as it only provides a snapshot in time, thereby limiting our understanding of the dynamics and directionality of the observed relationships. Additionally, the recruitment strategy of this study, which led to a younger and better-educated sample, introduces a potential selection bias. Our study’s moderate sample size, while adequate for exploratory purposes, may not capture the full spectrum of SCA usage characteristics. Conducting this research on a larger scale would be beneficial to validate our findings and identify more nuanced predictors of SCA use. Moreover, our approach of additionally targeting SCA users to ensure a higher response rate might lead to response bias, potentially resulting in an overrepresentation of the views and behaviors of more engaged or interested users. In light of these limitations, future research should consider longitudinal studies involving more diverse and larger samples.

Conclusions

Hypochondria emerged as a significant predictor of SCA use in our sample, with a consistently stable effect. Over half of the SCA users had clinically relevant hypochondria based on their WI scores, which may impact their ability to handle SCAs effectively. According to the literature, persons with hypochondria are less likely to benefit from SCAs. These users could be further unsettled by risk-averse triage and unlikely but serious diagnosis suggestions. Software developers should provide transparent information about the potential dangers of using SCAs, including that SCA use may increase health anxiety. Individuals with higher levels of health anxiety (hypochondria) might experience increased anxiety or functional impairment due to SCA use. Users should be cautious about over-relying on SCAs for health information and diagnosis. For healthcare professionals, training in addressing patient concerns arising from SCA use may be beneficial, particularly for managing individuals with high health anxiety. Further, the widespread use of SCAs may potentially lead to the misuse of healthcare resources, with nonurgent cases increasing the burden on emergency services.

Data availability

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

SCAs:

Symptom Checker Apps

STROBE:

Strengthening the Reporting of Observational Studies in Epidemiology

DRKS:

German Clinical Trials Register

L1:

General Life Satisfaction Short Scale

G-eHeals:

German Version of the eHealth Literacy Scale

WI:

Whiteley Index

ASKU-S:

General self-efficacy short scale

ATI-S:

Ultra-Short Scale for Assessing Affinity for Technology Interaction

LASSO:

Least Absolute Shrinkage and Selection Operator

VIF:

Variance Inflation Factor

PPV:

Positive Predictive Value

NPV:

Negative Predictive Value

OR:

Odds Ratio

CI:

Confidence Interval

References

  1. Wallace W, Chan C, Chidambaram S, Hanna L, Iqbal FM, Acharya A, et al. The diagnostic and triage accuracy of digital and online symptom checker tools: a systematic review. NPJ Digit Med. 2022;5(1):118.

  2. Ada Health GmbH. For better health. 2023. Available from: https://ada.com/app/.

  3. Pairon A, Philips H, Verhoeven V. A scoping review on the use and usefulness of online symptom checkers and triage systems: how to proceed? Front Med (Lausanne). 2022;9:1040926.

  4. Vuillaume LA, Turpinier J, Cipolat L, Dumontier T, Peschanski N, Kieffer Y, et al. Exploratory study: evaluation of a symptom checker effectiveness for providing a diagnosis and evaluating the situation emergency compared to emergency physicians using simulated and standardized patients. PLoS ONE. 2023;18(2):e0277568.

  5. Schmieding ML, Mörgeli R, Schmieding MAL, Feufel MA, Balzer F. Benchmarking triage capability of symptom checkers against that of medical laypersons: survey study. J Med Internet Res. 2021;23(3):e24475.

  6. Aboueid S, Meyer S, Wallace JR, Mahajan S, Chaurasia A. Young adults’ perspectives on the Use of Symptom checkers for self-triage and Self-Diagnosis: qualitative study. JMIR Public Health and Surveillance. 2021;7(1):e22637.

  7. Kopka M, Scatturin L, Napierala H, Furstenau D, Feufel MA, Balzer F, et al. Characteristics of users and nonusers of Symptom checkers in Germany: cross-sectional survey study. J Med Internet Res. 2023;25:e46231.

  8. Morse KE, Ostberg NP, Jones VG, Chan AS. Use characteristics and triage acuity of a digital symptom checker in a large integrated health system: population-based descriptive study. J Med Internet Res. 2020;22(11):e20549.

  9. Carmona KA, Chittamuru D, Kravitz RL, Ramondt S, Ramirez AS. Health information seeking from an intelligent web-based symptom checker: cross-sectional questionnaire study. J Med Internet Res. 2022;24(8):e36322.

  10. Hurrelmann K, Klinger J, Schaeffer D. Gesundheitskompetenz der Bevölkerung in Deutschland im Zeitvergleich der Jahre 2014 und 2020. Gesundheitswesen [Internet]. 2022.

  11. Meyer AND, Giardina TD, Spitzmueller C, Shahid U, Scott TMT, Singh H. Patient perspectives on the usefulness of an artificial intelligence-assisted symptom checker: cross-sectional survey study. J Med Internet Res. 2020;22(1):e14679.

  12. Marco-Ruiz L, Bønes E, de la Asunción E, Gabarron E, Aviles-Solis JC, Lee E, et al. Combining multivariate statistics and the think-aloud protocol to assess human-computer interaction barriers in symptom checkers. J Biomed Inform. 2017;74:104–22.

  13. Winn AN, Somai M, Fergestrom N, Crotty BH. Association of use of online symptom checkers with patients’ plans for seeking care. JAMA Netw Open. 2019;2(12):e1918561.

  14. Kopka M, Schmieding ML, Rieger T, Roesler E, Balzer F, Feufel MA. Determinants of laypersons’ trust in medical decision aids: randomized controlled trial. JMIR Hum Factors. 2022;9(2):e35219.

  15. Aboueid S, Meyer SB, Wallace J, Chaurasia A. Latent classes associated with the intention to use a symptom checker for self-triage. PLoS ONE. 2021;16(11):e0259547.

  16. Starcevic V, Berle D, Arnáez S. Recent insights into Cyberchondria. Curr Psychiatry Rep. 2020;22(11).

  17. Von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP, et al. The strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. Int J Surg. 2014;12(12):1495–9.

  18. Tennant B, Stellefson M, Dodd V, Chaney B, Chaney D, Paige S, et al. eHealth literacy and web 2.0 health information seeking behaviors among baby boomers and older adults. J Med Internet Res. 2015;17(3):e70.

  19. Jungmann SM, Brand S, Kolb J, Witthöft M. Do Dr. Google and health apps have (comparable) side effects? An experimental study. Clin Psychol Sci. 2020;8(2):306–17.

  20. Balapour A, Reychav I, Sabherwal R, Azuri J. Mobile technology identity and self-efficacy: implications for the adoption of clinically supported mobile health apps. J Inf Manag. 2019;49:58–68.

  21. Aboueid S, Liu RH, Desta BN, Chaurasia A, Ebrahim S. The use of artificially intelligent self-diagnosing digital platforms by the general public: scoping review. JMIR Med Inform. 2019;7(2):e13445.

  22. Nießen D, Groskurth K, Rammstedt B, Lechner CM. General Life Satisfaction Short Scale (L-1). Zusammenstellung sozialwissenschaftlicher Items und Skalen (ZIS). 2020.

  23. Soellner R, Huber S, Reder M. The concept of eHealth literacy and its measurement: German translation of the eHEALS. J Media Psychol. 2014;26(1):29–38.

  24. Glöckner-Rist A, Barenbrügge J, Rist F. Deutsche Version des Whiteley Index (WI-d). Zusammenstellung sozialwissenschaftlicher Items und Skalen (ZIS). 2014.

  25. Speckens AE, Spinhoven P, Sloekers PP, Bolk JH, van Hemert AM. A validation study of the Whitely Index, the illness attitude scales, and the Somatosensory amplification scale in general medical and general practice patients. J Psychosom Res. 1996;40(1):95–104.

  26. Beierlein C, Kovaleva A, Kemper C, Rammstedt B, editors. Allgemeine Selbstwirksamkeit Kurzskala (ASKU). Zusammenstellung sozialwissenschaftlicher Items und Skalen (ZIS); 2014.

  27. Wessel D, Attig C, Franke T. ATI-S - an ultra-short scale for assessing affinity for technology interaction in user studies. Proceedings of Mensch und Computer 2019; Sep 2019; Hamburg, Germany: Association for Computing Machinery; 2019. p. 147–54.

  28. Tibshirani R. Regression shrinkage and selection via the lasso. J R Stat Soc B (Methodol). 1996;58(1):267–88.

  29. Ho D, Imai K, King G, Stuart E, Whitworth A. Package ‘MatchIt’. 2018.

  30. Ho DE, Imai K, King G, Stuart EA. Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. Political Anal. 2007;15(3):199–236.

  31. VanderWeele TJ. Principles of confounder selection. Eur J Epidemiol. 2019;34:211–9.

  32. Stekhoven DJ, Buhlmann P. MissForest–non-parametric missing value imputation for mixed-type data. Bioinformatics. 2012;28(1):112–8.

  33. James G, Witten D, Hastie T, Tibshirani R. An introduction to statistical learning: Springer; 2013.

  34. Efron B. Estimating the error rate of a prediction rule: improvement on cross-validation. J Am Stat Assoc. 1983;78(382):316–31.

  35. R Core Team. R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2016.

  36. R Core Team. R: a language and environment for statistical computing. 4.0.5 ed. Vienna, Austria: R Foundation for Statistical Computing; 2021.

  37. RStudio Team. RStudio: integrated development environment for R. Boston, MA: RStudio, PBC; 2021.

  38. Chen H, Cohen P, Chen S. How big is a big odds ratio? Interpreting the magnitudes of odds ratios in epidemiological studies. Commun Stat Simul Comput. 2010;39(4):860–4.

  39. Vismara M, Caricasole V, Starcevic V, Cinosi E, Dell’Osso B, Martinotti G, et al. Is cyberchondria a new transdiagnostic digital compulsive syndrome? A systematic review of the evidence. Compr Psychiatry. 2020;99:152167.

  40. Starcevic V, Berle D, Arnáez S. Recent insights into cyberchondria. Curr Psychiatry Rep. 2020;22(11):56.

  41. Arsenakis S, Chatton A, Penzenstadler L, Billieux J, Berle D, Starcevic V, et al. Unveiling the relationships between cyberchondria and psychopathological symptoms. J Psychiatr Res. 2021;143:254–61.

  42. Turner J, Knowles E, Simpson R, Sampson F, Dixon S, Long J, et al. Impact of NHS 111 online on the NHS 111 telephone service and urgent care system: a mixed-methods study. Health Serv Delivery Res. 2021;9(21):1–148.

Acknowledgements

We thank Maximilian Pilz and Fabian H. Klopfer for their advice. Further, we would like to thank the participants of the cross-sectional survey. We acknowledge the assistance of OpenAI’s ChatGPT language model in revising certain sections of this manuscript.

Funding

This study was funded by the German Federal Ministry of Education and Research (BMBF) [grant number: 01GP1907A]. The funder played no role in the study design, data collection, data analysis and interpretation, or manuscript writing.

The work of the Institute of Occupational and Social Medicine and Health Services Research Tübingen is supported by an unrestricted grant from the employers’ association of the metal and electric industry Baden-Württemberg (Südwestmetall).

Open Access funding enabled and organized by Projekt DEAL.

Author information

Contributions

JW, RK and SJ designed the study and the study materials. JW collected the data and performed the analyses. RK verified the data and contributed to the interpretation of the results. JW wrote the first draft of the manuscript, with critical input from RK, SJ, MR, MK and RM. All authors reviewed and approved the final manuscript.

Corresponding author

Correspondence to Anna-Jasmin Wetzel.

Ethics declarations

Ethics approval and consent to participate

Ethical approval for this study was obtained from the ethics committee of the University of Tübingen (ID: 464/2020BO). All data were collected anonymously, and participants provided informed consent before starting the survey. The present study was conducted in accordance with the Declaration of Helsinki.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Supplementary Material 3

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Wetzel, AJ., Klemmt, M., Müller, R. et al. Only the anxious ones? Identifying characteristics of symptom checker app users: a cross-sectional survey. BMC Med Inform Decis Mak 24, 21 (2024). https://doi.org/10.1186/s12911-024-02430-5

Keywords