
Validation of the German eHealth impact questionnaire for online health information users affected by multiple sclerosis



Persons with multiple sclerosis (MS) are confronted with an overwhelming amount of online health information, which can be valuable but varies in quality and aim. It is therefore of great importance for developers and providers of eHealth information to understand its impact on users. The eHealth Impact Questionnaire (eHIQ) was developed in the United Kingdom to measure the potential effects of websites containing health and experiential information. It captures users’ general attitudes towards using the Internet to gain health information and attitudes towards a specific health-related website. The self-complete questionnaire is divided into two independently administered and scored parts: the 11-item eHIQ part 1 and the 26-item eHIQ part 2. This study aimed to validate the psychometric properties of the German version of the eHealth Impact Questionnaire (eHIQ-G).


162 people with multiple sclerosis browsed one of two possible websites containing information on MS and completed an online survey. Internal consistency was assessed by Cronbach’s alpha and structural validity by Confirmatory Factor Analysis. Construct validity was examined by assessing correlations with the reference instruments eHealth Literacy Questionnaire and the General Self-Efficacy Scale measuring related, but dissimilar constructs. Moreover, we investigated the mean difference of the eHIQ-G score between the two websites. Data were analyzed using SPSS and AMOS software.


The eHIQ-G subscales showed high internal consistency, with Cronbach’s alpha ranging from 0.833 to 0.885. The 2-factor model of eHIQ part 1 achieved acceptable goodness-of-fit indices, whereas the fit for the 3-factor model of eHIQ part 2 was poor, as it was for the alternative modified models. Correlations with the reference instruments ranged from 0.08 to 0.62 and were largely as expected. Older age was related to a lower eHIQ part 1 score, whereas no significant effect was found for education on eHIQ part 1. Although the difference was not significant, the website ‘AMSEL’ reached higher mean scores on eHIQ part 2.


The eHIQ-G has good internal consistency, and sufficient structural and construct validity. This instrument will facilitate the measurement of the potential impact of eHealth tools.



Background

With increasing access to the Internet and technology, many easily accessible, low-cost opportunities arise for utilising electronic health (eHealth), i.e. technology-delivered health information and services, for the prevention and management of chronic diseases [1, 2]. Among chronically ill people, persons with multiple sclerosis (MS) are among the most frequent Internet users [3]. Thus, they are confronted with an overwhelming amount of online health information differing in quality and aim [3,4,5]. Methods to evaluate eHealth tools, such as mobile apps, online patient portals, and other Internet-based software or programs used to help patients monitor and manage their health, have recently started to emerge [6]. Allison et al. have proposed a framework suggesting the evaluation of website attributes such as usability, content, web design, and functionality [7]. Currently, there is limited but increasing research on the potential impact of websites presenting individual patient experiences [8], since patient experiences are increasingly exchanged between patients and more frequently incorporated into health information websites [9]. It is crucial for developers and providers of online health information, especially experiential information, to understand its potential impact on users’ knowledge, feeling of being supported, preparation for health decisions, and behavioural outcomes [10]. Therefore, instruments appropriate for assessing websites with different types of material, which can help to rate these dimensions from a user perspective, are urgently needed [11].

In 2013, the eHealth Impact Questionnaire (eHIQ) was developed to measure the self-reported impact of eHealth tools on the user [10], and an English version was validated in 2015 [12]. To our knowledge, only validated Dutch and Hebrew versions of the eHIQ exist [13, 14]. The eHIQ measures patients’ general attitudes towards using the Internet to obtain health information. Furthermore, it assesses impacts on patients after viewing a specific website containing different types of material (e.g., patients’ experiences, and scientific facts and figures). The impacts refer to (1) the extent to which patients gain confidence in discussing their health with others and the ability to identify with others; (2) the perceived ease of use, ease of understanding, trustworthiness and appropriateness of website content; and (3) the extent to which patients better understand their health condition, feel reassured, and are motivated to take action [10,11,12]. The eHIQ has already been used in several studies to evaluate eHealth tools, such as an online patient platform for communicating laboratory test results [15], a website to support families of burn-injured children [16], and a website providing narratives on prostate, breast and colorectal cancer [17]. The portal for laboratory test results achieved a high score on the usability scale of the eHIQ, but helped patients only slightly to take action in managing their own health [15]. The website providing burn-specific information was rated very positively, and slightly better than the former portal [16]. The information on the narrative website was considered valuable and trustworthy by the majority of participants [17].

This study aimed to validate the eHIQ in a German population of eHealth users with MS by using confirmatory factor analysis and comparing the ratings for two websites of different complexity.


Methods

Recruitment and procedure

As we intended to use the scale in a clinical setting [18], participants were German-speaking persons with MS or people with suspected MS who were aged ≥ 18 years, and who had access to the Internet. The necessary sample size for validating questionnaires is contentious [19]. In accordance with our resources and the COSMIN Guidelines (Consensus-based Standards for the selection of health status Measurement Instruments) [20], we aimed to reach a sample size of 150.

Open recruitment took place from November 2019 to May 2020 through newsletters of the MS day hospital at the University Medical Center Hamburg-Eppendorf and as part of regular newsletters of four regional associations of the German Multiple Sclerosis Society (DMSG): Hamburg, Baden-Württemberg, Schleswig–Holstein, and Lower Saxony.

Persons with MS were invited to access an anonymous online survey by clicking on an electronic link. After reading the patient information, and giving informed consent online, patients were asked to fill out the following measures: the eHealth Literacy Questionnaire (eHLQ), eHIQ-part 1 and the General Self-Efficacy Scale (GSE). Afterwards, participants were directed to spend at least 10 minutes browsing either the section ‘living with MS’ of the website of the DMSG Baden-Württemberg, called AMSEL [21] or the whole website of the DMSG Hamburg [22]. In addition to factual information, the AMSEL website also contained explanatory films about living with MS from patients and health professionals. The DMSG Hamburg website contained only factual information at the time of the study. The websites were chosen to test if there is variation in the items of the eHIQ-G part 2 when rating websites with different types of health information such as facts, figures and personal experiences. After browsing one of the websites, patients had to return to the online survey and answer the eHIQ-part 2, and demographic as well as MS-related questions.


The eHIQ is divided into two parts. The 11-item eHIQ part 1 must be completed before accessing the website to be evaluated. It consists of two subscales: (1) attitudes towards online health information and (2) attitudes towards sharing health experiences online. The 26-item eHIQ part 2 measures the impact of using a specific health-related website on three subscales: (1) confidence and identification, (2) information and presentation, and (3) understanding and motivation. It must be administered after accessing the website to be evaluated. Response options range from 1 (‘strongly disagree’) to 5 (‘strongly agree’). Scores are converted to a 0–100 metric. The total eHIQ score for each part is calculated as the sum of all subscale scores divided by the number of subscales. Higher scores correspond to more positive responses [10, 12].

The translation of the eHIQ into German was performed in the context of a medical dissertation [23]. It was carried out according to the TRAPD (Translation, Review, Adjudication, Pretesting, and Documentation) team translation model in accordance with the Cross-Cultural Survey Guidelines [24]. Three staff members of the Institute of General Practice Göttingen produced three full translations of the eHIQ [23]. In the review phase, a team of five staff members of the same institute agreed on a single common translation; the translation of each question either corresponded to one of the available suggestions or represented a new variant. This version was submitted for backward translation to a translator who had not been involved in any of the previous steps and was not familiar with the original English eHIQ. The comparison of the original questionnaire with the backward translation led to further changes, resulting in a preliminary version of the eHIQ-G. Afterwards, the eHIQ-G was pretested in a convenience sample of 25 participants.
The German version of the eHIQ can be found in the dissertation [23].
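The scoring rule described above (Likert sums rescaled to 0–100, then averaged across subscales) can be sketched in a few lines of Python. This is a minimal illustration only: the split of part 1’s 11 items across its two subscales is assumed here for the example, and all function and variable names are ours, not from the questionnaire manual.

```python
def rescale_0_100(raw_sum: float, n_items: int, lo: int = 1, hi: int = 5) -> float:
    """Convert a raw Likert sum (n_items items scored lo..hi) to a 0-100 metric."""
    min_sum, max_sum = n_items * lo, n_items * hi
    return 100.0 * (raw_sum - min_sum) / (max_sum - min_sum)

def part_score(subscale_sums: dict, subscale_sizes: dict) -> float:
    """Total part score = mean of the 0-100 subscale scores."""
    scores = [rescale_0_100(subscale_sums[k], subscale_sizes[k]) for k in subscale_sums]
    return sum(scores) / len(scores)

# Example: a respondent answering 'agree' (4) on every part 1 item.
# The 6/5 item split between the two subscales is an assumption for illustration.
sizes = {"attitudes_info": 6, "attitudes_sharing": 5}
sums = {"attitudes_info": 24, "attitudes_sharing": 20}  # 4 * n_items each
print(part_score(sums, sizes))  # 75.0 on the 0-100 metric
```

Because both subscales are rescaled before averaging, the total is not biased towards the subscale with more items.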

The eHLQ is a validated measure of eHealth literacy, available in English and Danish, covering user interaction with a given eHealth system and the user’s experience of engaging with it [25]. It is valuable for characterizing and understanding digital health literacy in a broad range of target groups. It contains 35 items in seven domains: (1) using technology to process health information, (2) understanding of health concepts and language, (3) ability to actively engage with digital services, (4) feel safe and in control, (5) motivated to engage with digital services, (6) access to digital services that work, and (7) digital services that suit individual needs [25]. Response options for all items range from 1 (strongly disagree) to 4 (strongly agree). Subscale scores are calculated by summing the item scores and dividing by the number of items [25]. We used the eHLQ from the German eHLQ validation study, produced after forward and backward translation and a qualitative pretest; that study has not yet been published.

The 10-item GSE scale was developed and validated to assess a general sense of perceived self-efficacy. Responses are made on a 4-point scale from 1 (not at all true) to 4 (exactly true). The total score is calculated by summing up all item scores. The total score ranges from 10 to 40, with a higher score indicating more self-efficacy [26].

Demographic data such as sex, age, educational level, and highest professional qualification were collected, as well as MS-related information, e.g. the disease course, years since diagnosis, and the 9-level ‘Patient Determined Disease Steps’ (PDDS), which asks for patient-reported walking ability and disability (from 0 = normal to 8 = bedridden) [27].

Data analysis

The analysis was performed in SPSS (version 25.0; IBM Corp.) and SPSS Amos (version 26.0; IBM Corp.) software. All analyses were carried out on complete cases. For sample description, continuous variables are described using mean and standard deviation (SD), and categorical items are presented as counts and percentages. To examine the internal consistency reliability of the five subscales, Cronbach’s alpha (α) was estimated. A Cronbach’s alpha value of > 0.7 was considered adequate [20].
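Cronbach’s alpha, the internal-consistency criterion used above, can be computed directly from an item-response matrix. The following is a hedged sketch with toy data, not the study’s analysis code (which was run in SPSS); variable names are ours.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy example: three 5-point items driven by a shared underlying response.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=100)
data = np.column_stack([np.clip(base + rng.integers(-1, 2, size=100), 1, 5)
                        for _ in range(3)])
alpha = cronbach_alpha(data)
print(round(alpha, 2))  # well above the 0.7 adequacy threshold, by construction
```

With the > 0.7 rule from the text, such a value would be judged adequate.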

Confirmatory factor analysis (CFA) was applied to investigate structural validity. Full Information Maximum Likelihood estimation was used to estimate model parameters and to examine the goodness-of-fit of all CFA models, with the Root Mean Square Error of Approximation (RMSEA) ≤ 0.06, Standardized Root Mean Square Residual (SRMR) ≤ 0.08, Tucker-Lewis Index (TLI) ≥ 0.95, and Comparative Fit Index (CFI) ≥ 0.95 judged as adequate. Additionally, the minimum discrepancy (chi-square) per degree of freedom (CMIN/DF) ≤ 3 rule was used [28, 29]. In case of inadequate model fit of the eHIQ-G, modification indices were assessed [30] and an exploratory factor analysis (EFA) using Oblimin rotation and principal component analysis [31] was run to investigate an alternative to the original structure.
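The fit criteria above amount to a simple set of threshold rules. As a sketch (the threshold values come from the text; the function and dictionary names are illustrative, not part of any CFA software):

```python
# Goodness-of-fit criteria from the Methods section, as predicates.
THRESHOLDS = {
    "RMSEA":   lambda v: v <= 0.06,
    "SRMR":    lambda v: v <= 0.08,
    "TLI":     lambda v: v >= 0.95,
    "CFI":     lambda v: v >= 0.95,
    "CMIN/DF": lambda v: v <= 3.0,
}

def judge_fit(indices: dict) -> dict:
    """Return which goodness-of-fit criteria a CFA model satisfies."""
    return {name: ok(indices[name]) for name, ok in THRESHOLDS.items() if name in indices}

# Applying the rules to the whole-scale eHIQ-G fit reported in the Results:
fit = judge_fit({"RMSEA": 0.09, "SRMR": 0.09, "CFI": 0.75, "TLI": 0.73, "CMIN/DF": 2.26})
print(fit)  # only the CMIN/DF criterion is met
```

This mirrors the Results, where only CMIN/DF indicated acceptable fit for the full 37-item model.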

Moreover, convergent validity was assessed by testing hypotheses about expected relationships with eHLQ and GSE by calculating Pearson correlation coefficients. Correlations with instruments measuring related, but dissimilar constructs (eHLQ, GSE) should be 0.30–0.50 [20]. Convergent validity was considered adequate if at least 75% of the correlations were as expected. P values less than 0.05 are interpreted as statistically significant.
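The convergent-validity check above combines a Pearson correlation with the 0.30–0.50 expectation for related-but-dissimilar constructs. A minimal sketch with toy data (names are ours; the study itself computed correlations in SPSS):

```python
import numpy as np

def pearson_r(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation coefficient between two score vectors."""
    return float(np.corrcoef(x, y)[0, 1])

def convergent_ok(r: float, lo: float = 0.30, hi: float = 0.50) -> bool:
    """Expected range for correlations with related, but dissimilar constructs."""
    return lo <= abs(r) <= hi

# Toy data: two moderately related scores.
rng = np.random.default_rng(1)
a = rng.normal(size=200)
b = 0.4 * a + rng.normal(size=200)
r = pearson_r(a, b)
print(round(r, 2), convergent_ok(r))
```

Under the decision rule in the text, convergent validity would be considered adequate if at least 75% of the prespecified correlations fall in this range.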

Hypothesis 1: Particular subscales of the eHIQ-G correlate with subscales of the eHLQ and with the GSE, which measure related, but dissimilar constructs such as the user’s interaction and experience with a given eHealth tool [13] and the perceived self-efficacy.

  • The eHIQ-G part 1 subscale (1) ‘attitudes towards online health information’ correlates positively with the eHLQ subscales (1), (3), (5), (6), and (7).

  • The eHIQ-G part 1 subscale (2) ‘attitudes towards sharing health experiences online’ correlates positively with the eHLQ subscales (1), (3), (5), (6), and (7).

  • The eHIQ-G part 2 subscale (1) ‘confidence and identification’ correlates positively with the eHLQ subscales (5) and (7).

  • The eHIQ-G part 2 subscale (2) ‘information and presentation’ correlates positively with the eHLQ subscale (2) and (4).

  • The eHIQ-G part 2 subscale (3) ‘understanding and motivation’ correlates positively with the eHLQ subscales (2) and (5) as well as with the GSE score.

Differences in the eHIQ-G according to participant characteristics were compared using t tests, analysis of variance (ANOVA) and analysis of covariance (ANCOVA) to demonstrate convergent and discriminant validity.

Hypothesis 2: Higher educational levels predict higher scores on the eHIQ part 1, as persons with lower education are less likely to seek health information online [32].

Hypothesis 3: Younger persons are more likely to search for health-related information on the Internet [32]. Therefore, younger age predicts higher scores on the eHIQ part 1.

Hypothesis 4: The AMSEL website, which contains factual and experiential information, receives a higher sum index score on the eHIQ part 2 than the website of DMSG Hamburg, which presents only factual information.


Results

Sample characteristics

163 persons with MS were enrolled in the study (Table 1). The response rate (59.3%) was calculated as the number of returned questionnaires (N = 163) divided by the total number who logged in to the survey (N = 275) (Fig. 1).

Table 1 Demographic and clinical characteristics of the study sample (N = 163)
Fig. 1 Participant flow chart

Most participants were female (71.0%). The mean age was 51 years. The level of education was high, with 65.4% reporting a high school degree. Nearly all participants had a definite MS diagnosis, while 4.9% were suspected of having MS; 54% of those with definite MS had a relapsing–remitting course. The patients had lived with the disease for 14 years on average and had an average PDDS of 3.1 (gait disability). Most participants (65.4%) spent time on the website of the DMSG Hamburg, while 34.6% browsed the AMSEL website. Regarding use of these websites before participation in this study, 77.4% had already used the DMSG Hamburg website and 80.4% the AMSEL website.

Internal reliability

All subscales showed good internal consistency with Cronbach’s α from 0.833 to 0.885 (Table 2). Overall internal consistency for the entire eHIQ-G was 0.926.

Table 2 Parameter estimates of the confirmatory factor analysis (N = 149)

Structural validity

The CFA was run on the 149 complete cases, i.e., participants with no missing data on the eHIQ-G scale (Fig. 1). The analysis was performed on the five factors and 37 items (Fig. 2).

Fig. 2 Path model showing relationships among latent variables and manifest variables of the original eHIQ-G

All items had a standardized factor loading (β) of ≥ 0.50 and a critical ratio (C.R.) of ≥ 1.96 (p < 0.05), indicating good model identification [33], except for eHIQ part 2 items 10 and 25 (Table 2). The CFA on 149 complete cases for the whole eHIQ-G, including parts 1 and 2, suggested poor fit to the data (RMSEA = 0.09, SRMR = 0.09, CFI = 0.75, TLI = 0.73); only CMIN/DF = 2.26 indicated an acceptable fit.

Additionally, two CFAs were run separately for each part of the eHIQ-G. The goodness-of-fit indices of the eHIQ-G part 1 model were as follows: RMSEA = 0.12, SRMR = 0.07, CFI = 0.91, TLI = 0.88, CMIN/DF = 2.63. All the fit indices ranged from satisfactory to poor. An EFA was run to investigate an alternative to the original structure of the eHIQ-G part 1. The Kaiser–Meyer–Olkin (KMO) measure verified the sampling adequacy for the analysis, KMO = 0.87. The EFA resulted in the same two factors with the same items (predictors) as the original model.

The fit for the eHIQ-G part 2 model was poor (RMSEA = 0.12, SRMR = 0.10, CFI = 0.74, TLI = 0.71, CMIN/DF = 3.00). Inspection of the modification index output (Additional file 1), which showed covariances that could be incorporated into a re-specified model to obtain superior goodness-of-fit, revealed covariances between items 14 and 15 as well as between items 20 and 23. After removing items 15 and 20 and incorporating item 10 into subscale eHIQ 2.3, the fit of the alternative model (Additional file 2) was better, but still not acceptable (RMSEA = 0.12, SRMR = 0.10, CFI = 0.77, TLI = 0.75, CMIN/DF = 2.75). The EFA for the eHIQ-G part 2 suggested five factors, which were not clearly interpretable and included many items with double loadings. Furthermore, applying the alternative Dutch 3-factor structure [13] (Additional file 3) in our CFA resulted in better, but also unacceptable fit indices (RMSEA = 0.11, SRMR = 0.10, CFI = 0.76, TLI = 0.74, CMIN/DF = 2.81). The path diagrams (Fig. 2, Additional files 2 and 3) showed partly high correlations between the subscales (latent factors). As this was also found in the English validation study [12], we decided to retain the original 3-factor model of the English eHIQ part 2.

Convergent and discriminant validity

Descriptive data for the GSE, the eHLQ subscales, and eHIQ-G part 1 are shown in Table 3. Participants considered their knowledge and skills related to eHealth literacy to be moderate. The mean score for participants’ general attitudes towards using the Internet to access health information and their ease with using online (experiential) information for learning and gaining support (eHIQ-G part 1) was medium. Despite the concentration of eHIQ-G scores at the positive end of the construct, the distributions were sufficiently symmetric. The eHLQ subscales (with subscale 1 concentrated at the positive end and subscale 4 at the negative end) as well as the GSE were approximately normally distributed.

Table 3 Descriptive data of outcome measures

Figure 3 illustrates the eHIQ-G part 2 subscale scores and the sum index score for each website. The boxplots display the minimum, maximum, median, mean, first and third quartiles, and outliers (dots). Both websites gained moderate sum index scores on the eHIQ-G part 2. The websites achieved the highest scores on subscale 2.2, which reflects users’ trust in and the suitability of the website content. Participants’ confidence to discuss their health condition with others after browsing the websites and their ability to identify with the websites (eHIQ-G 2.1) were moderate. The score on subscale 2.3, which reflects understanding and learning about relevant health information and motivation to act, was moderate as well. The t test indicated no significant mean difference in the eHIQ-G part 2 sum index score between the websites of DMSG Hamburg and AMSEL (t 149 = − 0.45, p = 0.35, mean difference = − 0.40, 95% CI − 6.08 to 4.04). Likewise, no significant mean differences were found for subscale 2.1 (t 154 = − 1.00, p = 0.32, mean difference = − 3.01, 95% CI − 9.08 to 3.00), subscale 2.2 (t 156 = 0.95, p = 0.35, mean difference = 2.35, 95% CI − 2.56 to 7.25) or subscale 2.3 (t 129.55 = − 1.08, p = 0.28, mean difference = − 3.05, 95% CI − 8.96 to 2.87). Descriptively, however, the average eHIQ-G part 2 sum index score and the scores on subscales 2.1 and 2.3 were higher for the AMSEL website than for DMSG Hamburg.
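The fractional degrees of freedom reported for subscale 2.3 (t 129.55) indicate a Welch correction for unequal variances. As a sketch of that statistic (toy data; function names are ours, not the study’s SPSS output):

```python
import math

def welch_t(x, y):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two independent samples with unequal variances."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)   # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sx, sy = vx / nx, vy / ny                       # squared standard errors
    t = (mx - my) / math.sqrt(sx + sy)
    df = (sx + sy) ** 2 / (sx ** 2 / (nx - 1) + sy ** 2 / (ny - 1))
    return t, df

# Equal means give t = 0; unequal spread shrinks df below nx + ny - 2.
t, df = welch_t([60.0, 70.0, 80.0, 90.0], [55.0, 72.0, 78.0, 95.0])
print(round(t, 3), round(df, 2))
```

The downward-adjusted df makes the test conservative when group variances differ, which is why the reported df is not a whole number.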

Fig. 3 Comparison of the eHIQ-G part 2 scores for the websites of DMSG Hamburg and AMSEL (N = 151)

Moreover, relationships (Pearson’s correlation coefficients) between the eHIQ-G subscales and the selected reference measures were examined to assess construct validity. The results confirmed our expectations that almost all scales were significantly related: a large positive correlation was found between eHIQ-G (1.1) and eHLQ (1) and (5), as well as moderate to small positive correlations between eHIQ-G (1.2) and eHLQ (3), (6), and (7). Correlations between eHIQ-G (2.1) and eHLQ (5) and (7) were small. There were either small (r < 0.20) or no significant (p = 0.17) correlations between eHIQ-G (2.2) and eHLQ (2) and (4). Only a small positive correlation was found between eHIQ-G (2.3) and eHLQ (5). The eHIQ-G (2.3) did not correlate significantly (p = 0.32) with the GSE score (Table 4).

Table 4 Pearson correlations among eHIQ-G, eHLQ and GSE

Older age was related to a lower eHIQ-G part 1 sum index score per 5 years [F (1, 154) = 10.00, B = − 2.29, p < 0.001, partial eta squared = 0.06]. In contrast, multilevel ANCOVA did not reveal an influence of education on the eHIQ-G part 1 [F (2, 154) = 1.14, p = 0.32, partial eta squared = 0.02]. Thus, discriminant validity could be shown at least partially.


Discussion

Web-based health information and interventions are a rapidly growing means of delivering treatment for various conditions and of supporting self-management [34,35,36], a development further accelerated by the COVID-19 pandemic [37]. However, critical appraisal of eHealth information is a challenge, and its real impact on health behaviour and health status is a matter of emerging research [11]. As a valid and reliable instrument was lacking in Germany, this study aimed to examine the structural validity, internal consistency, and construct validity of the eHIQ-G in a sample of persons with MS. The results show that the eHIQ-G has good internal consistency, sufficient construct validity, and conditionally sufficient structural validity.

The CFA analyses showed satisfactory fit indices for the original eHIQ-G part 1; similar results have been reported for the Dutch version of the original model. The fit for the eHIQ-G part 2 was poor, as it was for the alternative 3-factor model. Neijenhuijs et al. also reported poor fit indices for the original model of the Dutch eHIQ part 2; an alternative 3-factor structure they investigated resulted in a good model fit [13]. No model fit indices were reported for the Hebrew version [14]. We investigated an alternative model based on the modification index output as well as on the alternative Dutch factor structure, which resulted in better, but still unacceptable fit indices. The developers of the eHIQ found that 12 items of the eHIQ part 2 loaded on more than one subscale [12]; however, items were allocated to the subscale on which they loaded most highly and which made conceptual sense [12]. To obtain results comparable with other countries and to avoid a shift in the model’s meaning from a theoretical standpoint, we decided to retain the original factor structure of the eHIQ part 2.

Evidence for sufficient internal consistency of the eHIQ-G subscales was indicated by a Cronbach’s alpha of > 0.70 [20]. The internal consistency found in this study was comparable to the internal consistency found in previous studies [12,13,14].

As no validated reference inventory exists in German that measures the self-reported impact of eHealth tools on the user, we selected tools measuring related, but dissimilar constructs, namely the eHLQ [25] and the GSE [26]. Almost all expectations under hypothesis 1 were confirmed. However, many of the correlations were small. Our results are difficult to compare with those of the other validation studies [12, 13], since their authors used different reference measures and reported small to acceptable correlations for convergent and divergent validity [12, 13]. We were not able to use those questionnaires, as they were neither available in German nor validated. We could only partially confirm discriminant validity (hypotheses 2–3): older age was related to a lower eHIQ-G part 1 score, whereas no significant effect was found for education. Kelly et al. found no significant difference for age [12]. No significant difference was found between the browsed websites on any subscale score of the eHIQ-G part 2 (hypothesis 4). This did not match our assumption that AMSEL, being richer in different types of information, e.g., providing MS patients with videos on how to manage their daily lives, would achieve a higher average eHIQ-G part 2 sum index score.


Limitations

This study has several limitations. First, as with most web-based studies, a selection bias related to the recruitment of volunteer participants is present [38]. Almost two-thirds of the participants were highly educated; evidence for persons with low education is therefore limited. In fact, individuals with a low level of education are less likely to participate in health examination surveys [39, 40], and focussed efforts are necessary to include them in evaluations of eHealth tools. Second, although the sample size was adequate according to the KMO measure, other sources [31] suggest that more data are needed to perform CFA. Third, only persons with MS were chosen to represent the target population of the eHIQ-G. Like others with chronic conditions, persons with MS increasingly search for health information on the Internet [41]. Among chronically ill people, they are among the most frequent Internet users, as they are young, now mostly digital natives, and mostly not substantially impaired in the early phase of the disease [42, 43]. Therefore, persons with MS represent a prototypic, ideal eHealth population. However, other patient groups should be included to test whether the eHIQ-G is applicable to a range of different conditions [11]. Fourth, we could show discriminant validity only on the basis of differences in eHIQ-G performance across age groups; we did not assess discriminant validity against reference inventories regarded as measuring distinct constructs [44], as we lacked scales measuring related constructs based on theory or prior empirical observations. Finally, we used an anonymous survey for pragmatic and data protection reasons and therefore could not examine test–retest reliability, i.e., consistency over time.


Conclusions

The eHIQ-G is a reliable and valid inventory with acceptable psychometric properties, assessed in a group of patients with MS. However, the subscales of the eHIQ-G part 2 and the corresponding items have conceptual limitations. This is the first study evaluating the German version of the eHIQ in MS patients and suggesting a 37-item model that may describe important aspects of health websites with various styles of information from a user perspective. The eHIQ-G is proposed for use in future studies on the impact of websites containing various styles of health information. It can be used to reflect users’ general attitudes towards health-related websites (eHIQ part 1) and a website’s design, credibility, reputation, and possible impacts (eHIQ part 2), and can thus serve to test and improve websites with health information. Additionally, the inventory promotes research and comparison among different websites, since it has already been translated into several languages and validated in several groups. Despite the limitations of the eHIQ-G part 2, we recommend considering the eHIQ-G in future studies assessing the impact of eHealth tools on their users.

Availability of data and materials

The data are not publicly available due to privacy issues but will be available from the corresponding author on reasonable request.



Abbreviations

ANCOVA: Analysis of covariance
ANOVA: Analysis of variance
CFA: Confirmatory factor analysis
CFI: Comparative fit index
CMIN/DF: Minimum discrepancy (chi-square) per degree of freedom
DMSG: German Multiple Sclerosis Society
EFA: Exploratory factor analysis
eHIQ: eHealth Impact Questionnaire
eHLQ: eHealth Literacy Questionnaire
GSE: General self-efficacy
MS: Multiple sclerosis
PDDS: Patient Determined Disease Steps
PEx: Patient experiences
RMSEA: Root mean square error of approximation
SRMR: Standardized root mean square residual




  1. Madrigal L, Escoffery C. Electronic health behaviors among US adults with chronic disease: cross-sectional survey. J Med Internet Res. 2019;21(3):e11240.

    Article  PubMed  PubMed Central  Google Scholar 

  2. Rogers MA, Lemmen K, Kramer R, Mann J, Chopra V. Internet-delivered health interventions that work: systematic review of meta-analyses and evaluation of website availability. J Med Internet Res. 2017;19(3):e90.

    Article  PubMed  PubMed Central  Google Scholar 

  3. Brigo F, Lochner P, Tezzon F, Nardone R. Web search behavior for multiple sclerosis: an infodemiological study. Mult Scler Relat Disord. 2014;3(4):440–3.

    Article  PubMed  Google Scholar 

  4. Synnot AJ, Hill SJ, Garner KA, Summers MP, Filippini G, Osborne RH, Shapland SDP, Colombo C, Mosconi P. Online health information seeking: How people with multiple sclerosis find, assess and integrate treatment information to manage their health. Health Expect Int J Public Particip Health Care Health Policy. 2016;19(3):727–37.

    Article  Google Scholar 

  5. Beckett JM, Bird ML, Pittaway JK, Ahuja KD. Diet and multiple sclerosis: scoping review of web-based recommendations. Interact J Med Res. 2019;8(1): e10050.

    Article  PubMed  PubMed Central  Google Scholar 

  6. Lancaster K, Abuzour A, Khaira M, Mathers A, Chan A, Bui V, Lok A, Thabane L, Dolovich L. The use and effects of electronic health tools for patient self-monitoring and reporting of outcomes following medication use: systematic review. J Med Internet Res. 2018;20(12): e294.

    Article  PubMed  PubMed Central  Google Scholar 

  7. Allison R, Hayes C, McNulty CAM, Young V. A comprehensive framework to evaluate websites: literature review and development of GoodWeb. JMIR Form Res. 2019;3(4): e14372.

    Article  PubMed  PubMed Central  Google Scholar 

  8. Drewniak D, Glässel A, Hodel M, Biller-Andorno N. Risks and benefits of web-based patient narratives: systematic review. J Med Internet Res. 2020;22(3): e15772.

  9. Pulman A. A patient centred framework for improving LTC quality of life through Web 2.0 technology. Health Inform J. 2010;16(1):15–23.

  10. Kelly L, Jenkinson C, Ziebland S. Measuring the effects of online health information for patients: item generation for an e-health impact questionnaire. Patient Educ Couns. 2013;93(3):433–8.

  11. Ziebland S, Powell J, Briggs P, Jenkinson C, Wyke S, Sillence E, Harris P, Perera R, Mazanderani F, Martin A, Locock L, Kelly L, Booth M, Gann B, Newhouse N, Farmer A. Examining the role of patients’ experiences as a resource for choice and decision-making in health care: a creative, interdisciplinary mixed-method study in digital health. Programme Grants Appl Res. 2016;4(17):1–214.

  12. Kelly L, Ziebland S, Jenkinson C. Measuring the effects of online health information: scale validation for the e-Health impact questionnaire. Patient Educ Couns. 2015;98(11):1418–24.

  13. Neijenhuijs KI, van der Hout A, Veldhuijzen E, Scholten-Peeters GGM, van Uden-Kraan CF, Cuijpers P, Verdonck-de Leeuw IM. Translation of the eHealth impact questionnaire for a population of dutch electronic health users: validation study. J Med Internet Res. 2019;21(8): e13408.

  14. Zigdon A, Zigdon T, Moran DS. Attitudes of nurses towards searching online for medical information for personal health needs: cross-sectional questionnaire study. J Med Internet Res. 2020;22(3): e16133.

  15. Talboom-Kamp E, Tossaint-Schoenmakers R, Goedhart A, Versluis A, Kasteleyn M. Patients’ attitudes toward an online patient portal for communicating laboratory test results: real-world study using the eHealth impact questionnaire. JMIR Form Res. 2020;4(3): e17060.

  16. Heath J, Williamson H, Williams L, Harcourt D. Supporting children with burns: developing a UK parent-focused peer-informed website to support families of burn-injured children. Patient Educ Couns. 2019;102(9):1730–5.

  17. Engler J, Adami S, Adam Y, Keller B, Repke T, Fügemann H, Lucius-Hoene G, Müller-Nordhorn J, Holmberg C. Using others’ experiences. Cancer patients’ expectations and navigation of a website providing narratives on prostate, breast and colorectal cancer. Patient Educ Couns. 2016;99(8):1325–32.

  18. Barabasch A, Riemann-Lorenz K, Kofahl C, Scheiderbauer J, Eklund D, Kleiter I, Kasper J, Köpke S, Lezius S, Zapf A, Rahn AC, Heesen C. Impact of a multimedia website with patient experiences of multiple sclerosis (PExMS) on immunotherapy decision-making: study protocol for a pilot randomised controlled trial in a mixed-methods design. Pilot Feasibility Stud. 2021;7(1):16.

  19. Boateng GO, Neilands TB, Frongillo EA, Melgar-Quiñonez HR, Young SL. Best practices for developing and validating scales for health, social, and behavioral research: a primer. Front Public Health. 2018;6:149.

  20. Prinsen CAC, Mokkink LB, Bouter LM, Alonso J, Patrick DL, de Vet HCW, Terwee CB. COSMIN guideline for systematic reviews of patient-reported outcome measures. Qual Life Res Int J Qual Life Asp Treat Care Rehabil. 2018;27(5):1147–57.

  21. AMSEL. MS Videos. Leben mit MS. AMSEL, Aktion Multiple Sklerose Erkrankter, Landesverband der DMSG in Baden-Württemberg e.V.; 2021. Accessed 4 Apr 2020.

  22. DMSG-Hamburg. Die DMSG Hamburg ist eine gemeinnützige Selbsthilfeorganisation und unabhängige Interessenvertretung für Menschen mit Multipler Sklerose. DMSG Hamburg e.V.; 2020. Accessed 4 Apr 2020.

  23. Schierholz H. Krankheitserfahrungen als Informationsquelle und Hilfe im Umgang mit Krankheit: Evaluation der Internetseite [Med. Dissertation]. Göttingen; 2016. Accessed 11 Apr 2022.

  24. Mohler P, Dorer B, de Jong J, Hu M. Translation. In: Guidelines for best practice in cross-cultural surveys. Survey Research Center, Institute for Social Research, University of Michigan; 2016. Accessed 1 Mar 2022.

  25. Kayser L, Karnoe A, Furstrand D, Batterham R, Christensen KB, Elsworth G, Osborne RH. A multidimensional tool based on the eHealth literacy framework: development and initial validity testing of the eHealth literacy questionnaire (eHLQ). J Med Internet Res. 2018;20(2): e36.

  26. Schwarzer R, Jerusalem M. Generalized self-efficacy scale. In: Weinman J, Wright S, Johnston M, editors. Measures in health psychology: a user’s portfolio causal and control beliefs. NFER-NELSON: Windsor; 1995. p. 35–7.

  27. Learmonth YC, Motl RW, Sandroff BM, Pula JH, Cadavid D. Validation of patient determined disease steps (PDDS) scale scores in persons with multiple sclerosis. BMC Neurol. 2013;13:37.

  28. Kyriazos TA. Applied psychometrics: writing-up a factor analysis construct validation study with examples. Psychology. 2018;9:2503–30.

  29. Irwing PB, Booth T, Hughes DJ. The Wiley handbook of psychometric testing: a multidisciplinary reference on survey, scale and test development. New York: John Wiley & Sons Ltd; 2018.

  30. Byrne BM. Structural equation modeling with AMOS, EQS, and LISREL: comparative approaches to testing for the factorial validity of a measuring instrument. Int J Test. 2001;1(1):55–86.

  31. Field A. Discovering statistics using SPSS. London: SAGE Publications Ltd.; 2009.

  32. Bach RL, Wenz A. Studying health-related internet and mobile device use using web logs and smartphone records. PLoS ONE. 2020;15(6): e0234663.

  33. Hoyle RH. Structural equation modeling: concepts, issues, and applications. Thousand Oaks: Sage; 1995.

  34. Marziniak M, Brichetto G, Feys P, Meyding-Lamadé U, Vernon K, Meuth SG. The use of digital and remote communication technologies as a tool for multiple sclerosis management: narrative review. JMIR Rehabil Assist Technol. 2018;5(1): e5.

  35. Giunti G, Guisado Fernández E, Dorronzoro Zubiete E, Rivera RO. Supply and demand in mHealth apps for persons with multiple sclerosis: systematic search in app stores and scoping literature review. JMIR Mhealth Uhealth. 2018;6(5): e10512.

  36. Lavorgna L, Brigo F, Moccia M, Leocani L, Lanzillo R, Clerico M, Abbadessa G, Schmierer K, Solaro C, Prosperini L, Tedeschi G, Giovannoni G, Bonavita S. e-Health and multiple sclerosis: an update. Mult Scler J. 2018;24(13):1657–64.

  37. Budd J, Miller BS, Manning EM, Lampos V, Zhuang M, Edelstein M, Rees G, Emery VC, Stevens MM, Keegan N, Short MJ, Pillay D, Manley E, Cox IJ, Heymann D, Johnson AM, McKendry RA. Digital technologies in the public-health response to COVID-19. Nat Med. 2020;26(8):1183–92.

  38. Greenacre ZA. The importance of selection bias in internet surveys. Open J Stat. 2016;06(03):397–404.

  39. Spitzer S. Biases in health expectancies due to educational differences in survey participation of older Europeans: It’s worth weighting for. Eur J Health Econ. 2020;21(4):573–605.

  40. Reinikainen J, Tolonen H, Borodulin K, Härkänen T, Jousilahti P, Karvanen J, Koskinen S, Kuulasmaa K, Männistö S, Rissanen H, Vartiainen E. Participation rates by educational levels have diverged during 25 years in Finnish health examination surveys. Eur J Public Health. 2017;28(2):237–43.

  41. Colombo C, Mosconi P, Confalonieri P, Baroni I, Traversa S, Hill SJ, Synnot AJ, Oprandi N, Filippini G. Web search behavior and information needs of people with multiple sclerosis: focus group study and analysis of online postings. Interact J Med Res. 2014;3(3):e12.

  42. Holstiege J, Steffen A, Goffrier B, Bätzing-Feigenbaum J. Epidemiologie der Multiplen Sklerose – eine populationsbasierte deutschlandweite Studie. Versorgungsatlas-Report. 2018;17(09):1–16.

  43. Köpke S, Solari A, Khan F, Heesen C, Giordano A. Information provision for people with multiple sclerosis. Cochrane Database Syst Rev. 2014.

  44. Rönkkö M, Cho E. An updated guideline for assessing discriminant validity. Organ Res Methods. 2020.


Acknowledgements

We are grateful to all persons with MS who have made this research possible.


Funding

Open Access funding enabled and organized by Projekt DEAL. This research was funded by Roche Pharma AG (Grant No. 29623). The sponsor was kept apprised of the process but had no role in the study design or conduct, in order to uphold the scientific integrity and quality of the study.

Author information

Authors and Affiliations



Contributions

AS and CH designed the study with support from KR-L, JP and SL. Acquisition and analysis of data was conducted by AS. KR-L, JP, CH, RW, KD, EMB, CH and SL contributed to analysis discussions. AS wrote the main manuscript text. All authors critically reviewed the manuscript, contributed important intellectual content, and approved the final manuscript for publication.

Corresponding author

Correspondence to Anna Sippel.

Ethics declarations

Ethical approval and consent to participate

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the Hamburg Chamber of Physicians (protocol code PV5770, date of approval 22/05/2018). Informed consent was obtained from all subjects.

Consent for publication

Not applicable.

Competing interests

A.S. has received funding from Roche Pharma and salaries for talks from Novartis. C.H. has received research grants, congress travel compensations and salaries for talks from Biogen, Genzyme, Sanofi-Aventis, Bayer Healthcare, Merck, Teva Pharma, Roche Pharma, and Novartis. K.R-L., J.P., R.W., C.H., K.D., E.M.B., and S.L. declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1:

Modification Indices—Residual Covariances between eHIQ-G part 2 items.

Additional file 2:

Path model of factors and indicators of the alternative model of the eHIQ-G part 2.

Additional file 3:

Path model of factors and indicators of the Dutch alternative model of the eHIQ-G part 2.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Sippel, A., Riemann-Lorenz, K., Pöttgen, J. et al. Validation of the German eHealth impact questionnaire for online health information users affected by multiple sclerosis. BMC Med Inform Decis Mak 22, 219 (2022).


Keywords

  • eHealth
  • Empowerment
  • Patient information
  • Psychometrics
  • Factor analysis
  • Multiple sclerosis