Journal of Korean Medical Science. 2026 Jan 7;41(3):e42. doi: 10.3346/jkms.2026.41.e42

Reliability and Validity of the Korean Version of the Digital Health Literacy Instrument

Jeahyung Lee 1,*, Chulwoo Ahn 2,*, Jisun Park 3, Sooyoung Kim 4, Yerim Kim 2, Jiwon Park 3, Jaeyong Shin 5,6
PMCID: PMC12815901  PMID: 41555804

Abstract

Background

As digital transformation accelerates, individuals increasingly engage with digital health services and online medical resources, and digital health literacy (DHL) becomes increasingly important. While the Digital Health Literacy Instrument (DHLI) has been validated internationally, a Korean version applicable across all adult age groups remains limited. This study develops and validates the Korean DHLI, assessing its reliability and validity for broader applicability among Korean adults.

Methods

The DHLI was translated, culturally adapted, and reviewed by expert panels following World Health Organization guidelines. A cross-sectional nationwide web survey was conducted with 524 adults aged 19–69. A follow-up survey was administered to 134 participants after four weeks to assess test-retest reliability. Internal consistency and test-retest reliability were assessed. Construct validity was examined using confirmatory factor analysis (CFA) to determine the most appropriate factor structure in the Korean population. Convergent and discriminant validity were assessed, and hypothesis-testing construct validity was evaluated through correlations with age, health status, and eHealth literacy.

Results

The Korean version of the DHLI (K-DHLI) exhibited excellent psychometric properties. Internal consistency was high (α = 0.91), and test-retest reliability was acceptable (intraclass correlation coefficient = 0.70). CFA supported a 7-factor model, which showed superior fit (χ2/degrees of freedom = 2.14, comparative fit index = 0.966, root mean square error of approximation = 0.047) compared with the 5-factor model. Convergent and discriminant validity were confirmed, with average variance extracted (AVE) values ranging from 0.7218 to 0.8947 and construct reliability values between 0.8837 and 0.9622. Hypothesis-testing construct validity was supported by the hypothesized correlations between K-DHLI scores and age (r = −0.254, P < 0.001), health status (r = 0.252, P < 0.001), and eHealth literacy (r = 0.688, P < 0.001).

Conclusion

The K-DHLI is a reliable and valid instrument for assessing DHL in the Korean population. The findings confirm that the 7-factor structure demonstrates superior model fit and further support its applicability across diverse adult groups. This study provides a standardized instrument for assessing DHL in Korea and facilitates its use in national surveys, clinical screening, and research. It may help identify individuals with limited DHL, thereby supporting targeted interventions to reduce health disparities and improve access to care.

Keywords: Digital Health Literacy, Psychometrics, Instrument Adaptation, Digital Health, Health Literacy, K-DHLI, Korean Version

Graphical Abstract


INTRODUCTION

As digital transformation rapidly advances, the healthcare sector is experiencing profound changes. The way individuals seek health information has shifted from traditional media to online platforms,1,2,3,4 alongside the widespread adoption of health management applications, web-based resources, and digital therapeutics.5 The coronavirus disease 2019 (COVID-19) pandemic has further intensified this shift, heightening reliance on online health information and accelerating the adoption of telemedicine.6,7 As digital health services continue to expand beyond the pandemic, digital health literacy (DHL) is becoming an essential competency for navigating modern healthcare systems.8,9

A lack of DHL can exacerbate health disparities. During the COVID-19 pandemic, the rapid spread of misinformation led the World Health Organization (WHO) to warn of an ‘infodemic,’ where misinformation not only threatened individual health but also undermined public health strategies.10,11,12,13,14 With the growing dependence on digital health technologies, insufficient DHL may exacerbate health inequalities, especially among vulnerable populations with limited access to reliable digital resources. Studies have shown that DHL is closely associated with patient satisfaction in telemedicine and improved healthcare accessibility, further highlighting its importance in contemporary health systems.7,15,16

Assessing DHL requires a psychometrically valid and reliable instrument. The eHealth Literacy Scale (eHEALS), the most widely used measure internationally, has been criticized for failing to capture the complexities of modern digital health environments.1,17,18 To address these limitations, the Digital Health Literacy Instrument (DHLI) was developed in 2017 to assess not only basic information-seeking skills but also interactive and privacy-related competencies in digital health contexts.19 The DHLI has been translated and validated in various countries, including Japan, China, Brazil, and Türkiye.20,21,22,23

However, research on the adaptation of the DHLI in Korea remains limited. Previous studies in Korea have focused on specific subgroups or on versions of the DHLI adapted to the COVID-19 context.24,25 Notably, a prior study of Korean older adults identified a 5-factor structure, which differs from the 7-factor model reported in the original DHLI study.19,24 This discrepancy raises the question of whether differences in factor structure result from age-related variation in DHL or from cultural differences in engagement with digital health resources. Consequently, a standardized Korean version of the DHLI (K-DHLI) validated for the general adult population is still lacking.

This study addresses this gap by translating and culturally adapting the DHLI into Korean and evaluating its reliability and validity in adults aged 19 to 69. The findings will contribute to the development of a validated measure for DHL and improve understanding of its underlying structure across diverse populations.

METHODS

Translation and cultural adaptation of DHLI

The DHLI was translated and culturally adapted following WHO guidelines for instrument translation and adaptation, with permission from the original developers.26 The process consisted of sequential steps: forward translation, expert panel review, back-translation and discussion, pre-testing and cognitive interviewing, and finalization. Fig. 1 illustrates the procedure and describes each step. From K-DHLI version 1.0 to the finalized version 3.0, the instrument was iteratively refined, with specific modifications introduced at each stage of translation and adaptation. Supplementary Table 1 presents the original English DHLI items alongside the Korean wording at each revision stage. The final K-DHLI is provided in Supplementary Table 2.

Fig. 1. Translation and cultural adaptation process of the DHLI.


DHLI = Digital Health Literacy Instrument, K-DHLI = Korean version of the Digital Health Literacy Instrument.

Participants

A minimum sample size of 200 participants is required for confirmatory factor analysis (CFA).27 For test-retest reliability, the required sample size is five times the number of items,28 which corresponds to 105 participants for the 21-item DHLI. To account for potential item nonresponse, the target sample size for the retest was set at 130. In previous DHLI validation studies, the main survey sample has typically been set at 2 to 4 times the size of the retest sample.19,21,24 Based on a prior study that utilized an online survey and applied a fourfold ratio, a sample size of 520 was targeted for the main survey.
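The sampling arithmetic above is simple enough to make explicit. This sketch merely reproduces the rules cited in the text (five respondents per item for the retest, a target padded to 130 for nonresponse, and a fourfold ratio for the main survey):

```python
# Sample-size targets for the K-DHLI validation, per the rules cited in the text.
N_ITEMS = 21                        # items in the DHLI

retest_minimum = 5 * N_ITEMS        # five respondents per item -> 105
retest_target = 130                 # padded for potential item nonresponse

main_target = 4 * retest_target     # fourfold ratio from the prior online study

print(retest_minimum, retest_target, main_target)
```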

A nationwide web-based and mobile survey was conducted by Gallup Korea for a cross-sectional study among South Korean residents aged 19 to 69. To enhance national representativeness, proportional stratified sampling was applied based on gender, age, and region as of November 2023. The initial survey was conducted from December 5 to December 13, 2023. The survey originally recruited 531 participants who completed the 21-item DHLI. However, 7 participants (1.3%) were excluded due to missing responses, resulting in a final sample of 524. Four weeks later, a follow-up survey was distributed to participants from the main survey and was conducted until a sample size exceeding 130 was obtained. As a result, 134 participants completed the second assessment for test-retest reliability.

Measures

Sociodemographic and internet usage characteristics

Sociodemographic variables included gender, age, education level, residential area, occupation, and monthly household income. Internet usage characteristics were assessed through questions on the primary means of Internet access and frequency of Internet use.

DHL

DHL was assessed using the K-DHLI. It consists of 21 items, consistent with the original DHLI study. Based on the structure proposed in the original study, it comprises 7 factors: Operational Skills (D1–D3), Information Searching (D4–D6), Evaluating Reliability (D7–D9), Determining Relevance (D10–D12), Navigation Skills (D13–D15), Adding Self-generated Content (D16–D18), and Protecting Privacy (D19–D21). Responses were recorded on a 4-point Likert scale. Reverse coding was applied, with total score calculated as the mean of all items and subscale scores as the mean of each factor’s three items. The total score ranges from 1 to 4, with higher scores indicating higher DHL.
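The scoring rule described above can be sketched in code. This is illustrative only: the text does not list which items are reverse-coded, so the `reverse_items` argument is left for the user to supply.

```python
import numpy as np

# Scoring sketch for the K-DHLI: 21 items on a 1-4 Likert scale, grouped into
# 7 subscales of 3 items each (0-based item indices below).
SUBSCALES = {
    "Operational Skills": [0, 1, 2],
    "Information Searching": [3, 4, 5],
    "Evaluating Reliability": [6, 7, 8],
    "Determining Relevance": [9, 10, 11],
    "Navigation Skills": [12, 13, 14],
    "Adding Self-generated Content": [15, 16, 17],
    "Protecting Privacy": [18, 19, 20],
}

def score_kdhli(responses, reverse_items=()):
    """Return (total, subscale_means) for one respondent's 21 raw answers.

    `reverse_items` is illustrative: the set of reverse-coded items is not
    specified in the text.
    """
    r = np.asarray(responses, dtype=float)
    assert r.shape == (21,)
    r = r.copy()
    for i in reverse_items:
        r[i] = 5 - r[i]                       # flip 1<->4 and 2<->3 on a 1-4 scale
    subscale_means = {name: r[idx].mean() for name, idx in SUBSCALES.items()}
    return r.mean(), subscale_means           # total ranges from 1 to 4

total, subs = score_kdhli([3] * 21)
```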

Health status

To assess hypothesis-testing construct validity, health status was measured using the Korean version of the RAND 36-Item Health Survey.29 This five-item scale employs a 5-point Likert format; item responses were transformed to a 0–100 scale and then averaged. Higher scores indicate more positive health perceptions.
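The 0–100 transformation is a straightforward linear rescaling of each 5-point response; a minimal sketch (note that full RAND-36 scoring also reverse-codes some items, which is omitted here for brevity):

```python
def likert5_to_100(x):
    """Linearly rescale a 5-point Likert response (1-5) onto 0-100."""
    return (x - 1) * 25.0

def general_health_score(responses):
    """Mean of the rescaled items. Sketch only: RAND-36 reverse-codes some
    items before rescaling, which this illustration omits."""
    return sum(likert5_to_100(x) for x in responses) / len(responses)
```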

eHealth literacy

To assess hypothesis-testing construct validity, eHealth literacy was measured using the Korean version of the eHealth Literacy Scale (K-eHEALS), an eight-item scale rated on a 5-point Likert scale (total score: 8–40).30 Higher scores indicate greater self-perceived eHealth literacy.

Statistical analysis

Descriptive statistics summarized all variables, with categorical variables presented as frequencies and percentages and continuous variables as means and standard deviations. In addition, characteristics of retest participants and nonparticipants were compared using independent t-tests and χ2 tests to examine potential attrition bias.

The distributional properties of the K-DHLI and its subscales were assessed using the mean, standard deviation, skewness, and kurtosis. As the sample size exceeded 300, normality was judged from absolute skewness and kurtosis, with non-normality defined as skewness exceeding 2 or kurtosis exceeding 7.31,32 Ceiling and floor effects were defined as the proportions of respondents obtaining the highest and lowest possible scores, with values below 30% considered acceptable.33,34
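These distributional criteria are easy to operationalize. The sketch below assumes the kurtosis threshold refers to excess kurtosis (scipy's default convention); the text does not state which convention the study used.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def distribution_checks(scores, minimum=1.0, maximum=4.0):
    """Apply the criteria from the text: non-normality if |skewness| > 2 or
    |kurtosis| > 7, and ceiling/floor effects as the percentage of respondents
    hitting the maximum/minimum possible score (acceptable below 30%)."""
    s = np.asarray(scores, dtype=float)
    sk, ku = skew(s), kurtosis(s)   # kurtosis is excess kurtosis (normal = 0)
    return {
        "skewness": sk,
        "kurtosis": ku,
        "non_normal": abs(sk) > 2 or abs(ku) > 7,
        "ceiling_pct": 100 * np.mean(s == maximum),
        "floor_pct": 100 * np.mean(s == minimum),
    }
```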

Internal consistency was assessed using Cronbach’s α, with values between 0.70 and 0.95 considered acceptable.35 Corrected item-total correlation exceeding 0.30 was considered satisfactory.36 Test-retest reliability was evaluated using the intraclass correlation coefficient (ICC) with a 2-way random effects model and absolute agreement.37 ICC values were classified as poor (< 0.40), fair (0.40–0.59), good (0.60–0.74), and excellent (≥ 0.75).38
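Cronbach's α follows directly from the item and total-score variances; a minimal sketch (the ICC, which requires a two-way ANOVA decomposition, is omitted here — libraries such as pingouin provide it):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)        # per-item sample variances
    total_var = X.sum(axis=1).var(ddof=1)    # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

When every item is a perfect copy of the others, α equals 1; noisier, weakly correlated items pull it toward (or below) 0.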

Content validity was assessed using the Item-Level Content Validity Index (I-CVI) and Scale-Level Content Validity Index/Average (S-CVI/Ave). A panel of six experts, comprising two medical and four nursing professors, rated each item on a 4-point Likert scale. Items rated 3 or 4 by at least 78% of experts met the I-CVI threshold, while an S-CVI/Ave of ≥ 0.90 indicated acceptable content validity.39,40
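The CVI computations described above reduce to simple proportions; a sketch:

```python
def i_cvi(ratings):
    """Item-level CVI: proportion of experts rating the item 3 or 4
    on the 4-point relevance scale."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def s_cvi_ave(all_ratings):
    """Scale-level CVI/Average: mean of the I-CVIs across all items."""
    return sum(i_cvi(item) for item in all_ratings) / len(all_ratings)
```

With six experts, one rating of 1 or 2 yields an I-CVI of 5/6 ≈ 0.83, matching the value reported for item D19 below.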

CFA using maximum likelihood estimation was performed to assess construct validity, including model fit, convergent validity, and discriminant validity. Model fit was evaluated using χ2 divided by degrees of freedom (χ2/DF), goodness-of-fit index (GFI), comparative fit index (CFI), Tucker-Lewis index (TLI), root mean square error of approximation (RMSEA) with a 90% confidence interval (CI), and standardized root mean square residual (SRMR). Model fit was considered acceptable if χ2/DF was below 3, GFI, CFI, and TLI exceeded 0.90, and RMSEA and SRMR were ≤ 0.08.41,42 Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) were used for model selection, with differences of 10 or more indicating significant model superiority.43 Convergent validity was confirmed when standardized factor loadings and average variance extracted (AVE) exceeded 0.50, and construct reliability (C.R.) exceeded 0.70. Discriminant validity was established if each factor’s AVE was greater than the squared correlations between factors and if the 95% CI for construct correlations did not include 1.44,45
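For convergent validity, AVE and C.R. can be computed from the standardized loadings using the standard formulas: AVE is the mean squared loading, and C.R. is the squared sum of loadings divided by that quantity plus the summed error variances. A sketch:

```python
def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def construct_reliability(loadings):
    """Composite (construct) reliability:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where the error variance of a standardized item is 1 - loading^2."""
    s = sum(loadings)
    err = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + err)
```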

To assess hypothesis-testing construct validity, Pearson correlation coefficients were computed between K-DHLI scores and theoretically related variables: age, health status, and eHealth literacy. Based on prior research,5,19,21,24 a small to moderate negative correlation with age (−0.10 to −0.30), a low to moderate positive correlation with health status (0.10 to 0.30), and a strong positive correlation with eHealth literacy (0.50 to 0.70) were hypothesized.
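As an illustration of these hypothesis tests, Pearson's r can be computed with scipy. The toy numbers below are invented solely to show the hypothesized negative age trend; they are not data from the study.

```python
from scipy.stats import pearsonr

# Illustrative toy data only: DHL decreasing with age, as hypothesized.
age = [25, 35, 45, 55, 65]
dhl = [3.6, 3.4, 3.2, 3.0, 2.8]

r, p = pearsonr(age, dhl)   # r is negative for this decreasing trend
```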

All statistical analyses were conducted using SPSS 29.0 and AMOS 29.0 (IBM Corp., Armonk, NY, USA). A 2-tailed P value < 0.05 was considered statistically significant.

Ethics statement

This study was approved by the Institutional Review Board (IRB) of Yonsei University Health System, Severance Hospital (IRB No. 4-2023-1333). Informed consent was obtained from all participants following a detailed explanation. Participants received 1,500 KRW for completing the first survey and 3,000 KRW for the follow-up survey.

RESULTS

Characteristics of the participants

This study included 524 participants (mean age: 45.62 ± 13.18 years) with a nearly equal gender distribution (49.6% male, 50.4% female). The majority had a high level of education (72.7% attended college, 13.4% had graduate education). Most participants accessed the Internet via mobile phones (74.4%), and 71.2% used it more than 10 times per day. The test-retest group (n = 134) showed comparable characteristics to the overall study sample (Table 1). To examine the potential for attrition bias, characteristics were compared between participants who completed the retest (n = 134) and those who did not (n = 390), and no statistically significant differences were observed between the 2 groups (Supplementary Table 3).

Table 1. Characteristics of participants in the main study and test-retest groups.

Characteristics Main study (n = 524) Test-retest (n = 134)
Gender
Male 260 (49.6) 65 (48.5)
Female 264 (50.4) 69 (51.5)
Age group, yr
19–29 84 (16.0) 22 (16.4)
30–39 92 (17.6) 24 (17.9)
40–49 116 (22.1) 29 (21.6)
50–59 129 (24.6) 33 (24.6)
60–69 103 (19.7) 26 (19.4)
Age, yr 45.62 ± 13.18 45.51 ± 13.69
Education level
High school or less 73 (13.9) 16 (11.9)
College or college students 381 (72.7) 100 (74.6)
Graduate school students or graduates 70 (13.4) 18 (13.4)
Residential area
Capital 97 (18.5) 29 (21.6)
Metropolitan city 156 (29.8) 45 (33.6)
Other areas 271 (51.7) 60 (44.8)
Occupation
White-collar 289 (55.2) 65 (48.5)
Blue-collar 46 (8.8) 10 (7.5)
Other 39 (7.4) 14 (10.4)
Unemployed 150 (28.6) 45 (33.6)
Monthly household income (KRW)
≤ 2.99 million 86 (16.4) 26 (19.4)
3.00–4.99 million 162 (30.9) 36 (26.9)
≥ 5.00 million 276 (52.7) 72 (53.7)
Primary means of internet access
Mobile phone 390 (74.4) 93 (69.4)
Laptop 38 (7.3) 12 (9.0)
Desktop 92 (17.6) 29 (21.6)
Tablet 4 (0.8) 0 (0.0)
Frequency of internet use
Less than 1 time a day 3 (0.6) 1 (0.7)
1–5 times a day 68 (13.0) 22 (16.4)
6–9 times a day 80 (15.3) 19 (14.2)
More than 10 times a day 373 (71.2) 92 (68.7)
Health status 52.79 ± 16.00 51.94 ± 4.29
eHealth literacy 30.78 ± 4.63 30.75 ± 4.29

Values are presented as mean ± standard deviation or number (%).

Distributional properties and reliability

The total mean score was 3.23 ± 0.38, with subscale means ranging from 2.85 ± 0.59 (Evaluating Reliability, DHL 3) to 3.66 ± 0.47 (Operational Skills, DHL 1). The total and subscale scores met the normality assumption (Table 2). However, a ceiling effect was observed for DHL 1, with 59.4% (311/524) of respondents selecting the highest score, whereas no floor effects were detected in any subscale.

Table 2. Distributional properties and reliability of the K-DHLI.

Subscalea Mean ± SD Skewness Kurtosis Corrected item-total correlationb Cronbach’s α ICC P value
Total 3.23 ± 0.38 0.044 0.213 0.37–0.68 0.91 0.70 < 0.001***
DHL 1 3.66 ± 0.47 −1.260 1.479 0.74–0.79 0.86 0.50 < 0.001***
DHL 2 3.29 ± 0.53 −0.125 −0.630 0.69–0.74 0.85 0.52 < 0.001***
DHL 3 2.85 ± 0.59 0.105 −0.461 0.66–0.71 0.82 0.66 < 0.001***
DHL 4 2.91 ± 0.57 0.057 −0.194 0.65–0.69 0.82 0.48 < 0.001***
DHL 5 3.45 ± 0.45 −0.368 −0.656 0.47–0.61 0.72 0.50 < 0.001***
DHL 6 3.06 ± 0.59 −0.139 −0.244 0.69–0.74 0.85 0.61 < 0.001***
DHL 7 3.39 ± 0.53 −0.730 0.665 0.48–0.65 0.75 0.44 < 0.001***

K-DHLI = Korean version of the Digital Health Literacy Instrument, SD = standard deviation, ICC = intraclass correlation coefficient, DHL = digital health literacy.

aDHL 1 = Operational Skills (D1–3), DHL 2 = Information Searching (D4–6), DHL 3 = Evaluating Reliability (D7–9), DHL 4 = Determining Relevance (D10–12), DHL 5 = Navigation Skills (D13–15), DHL 6 = Adding Self-generated Content (D16–18), DHL 7 = Protecting Privacy (D19–21).

bRange from the first item to the last item.

***P < 0.001.

Cronbach’s α was 0.91, which confirms high internal consistency. Corrected item-total correlation ranged from 0.37 to 0.68, demonstrating acceptable reliability. Test-retest reliability was good, with an ICC of 0.70 (P < 0.001) for the total score. Subscale ICC values ranged from 0.44 to 0.66, indicating fair to good reliability (Table 2).

Content validity

Content validity was assessed by a panel of six experts using the Content Validity Index (CVI). All items, except for D19 (I-CVI = 0.83), attained an I-CVI of 1.0. The S-CVI/Ave was 0.99, exceeding the acceptable threshold of 0.90. These results support the content validity of the K-DHLI.

CFA

CFA was conducted to assess construct validity. The 7-factor model from the original DHLI study demonstrated good model fit (χ2/DF = 2.14, P < 0.001, GFI = 0.936, CFI = 0.966, TLI = 0.957, RMSEA = 0.047, SRMR = 0.0492), meeting all established criteria (Fig. 2, Table 3).

Fig. 2. Confirmatory factor analysis model of the K-DHLI with 7 factors.


K-DHLI = Korean version of the Digital Health Literacy Instrument, DHL = digital health literacy.

Table 3. Model fit indices from confirmatory factor analysis of the K-DHLI (N = 524).

Model χ2 DF P value GFI CFI TLI RMSEA (90% CI) SRMR
Seven-factor model 359.809 168 < 0.001*** 0.936 0.966 0.957 0.047 (0.040–0.053) 0.0492
Five-factor modela 823.113 179 < 0.001*** 0.842 0.884 0.864 0.083 (0.077–0.089) 0.0737

K-DHLI = Korean version of the Digital Health Literacy Instrument, DF = degrees of freedom, GFI = goodness-of-fit index, CFI = comparative fit index, TLI = Tucker-Lewis index, RMSEA = root mean square error of approximation, CI = confidence interval, SRMR = standardized root mean square residual.

aFive-factor Model: Operation Skills (D1–3), Information Searching (including Evaluating Reliability, and Determining Relevance, D4–12), Navigation Skills (D13–15), Adding Self-generated Content (D16–18), Protecting Privacy (D19–21).

***P < 0.001.

CFA was also performed on the 5-factor model proposed in a prior Korean DHLI study among older adults, where items 4 to 12 were grouped into a single factor.24 This model showed poor fit, with SRMR being the only index that met the threshold (Table 3). These findings support the 7-factor model as a superior representation of the dataset.

Further model comparisons using AIC and BIC reinforced this conclusion. The 7-factor model had significantly lower AIC (527.809) and BIC (535.186) values compared to the 5-factor model (AIC = 969.113, BIC = 975.524). These results provide empirical evidence that the 7-factor model more accurately represents the K-DHLI structure in this study sample.

Convergent and discriminant validity

Convergent validity was assessed using three criteria. Standardized factor loadings (β) ranged from 0.56 to 0.88, exceeding the threshold of 0.50 (Fig. 2). AVE values ranged from 0.7218 to 0.8947, and C.R. ranged from 0.8837 to 0.9622, both surpassing the respective cutoffs of 0.50 and 0.70 (Table 4).

Table 4. Convergent and discriminant validity of the K-DHLI (N = 524).

Subscalea Correlation coefficient (r2) [95% CI] AVE C.R.
DHL 1 DHL 2 DHL 3 DHL 4 DHL 5 DHL 6 DHL 7
DHL 1 1 0.8947 0.9622
DHL 2 0.583 (0.340) 1 0.8400 0.9403
[0.556–0.610]
DHL 3 0.254 (0.065) 0.660 (0.436) 1 0.7649 0.9069
[0.229–0.279] [0.623–0.697]
DHL 4 0.299 (0.089) 0.661 (0.437) 0.863 (0.748) 1 0.7770 0.9125
[0.275–0.323] [0.624–0.698] [0.820–0.906]
DHL 5 0.567 (0.321) 0.532 (0.283) 0.385 (0.148) 0.361 (0.130) 1 0.7411 0.8944
[0.545–0.589] [0.507–0.557] [0.361–0.409] [0.339–0.383]
DHL 6 0.391 (0.153) 0.675 (0.456) 0.646 (0.417) 0.638 (0.407) 0.551 (0.304) 1 0.8099 0.9274
[0.364–0.418] [0.636–0.714] [0.605–0.687] [0.599–0.677] [0.524–0.578]
DHL 7 0.361 (0.130) 0.358 (0.128) 0.299 (0.089) 0.263 (0.069) 0.502 (0.252) 0.333 (0.111) 1 0.7218 0.8837
[0.337–0.385] [0.329–0.387] [0.268–0.330] [0.236–0.290] [0.478–0.526] [0.302–0.364]

aDHL 1 = Operational Skills (D1–3), DHL 2 = Information Searching (D4–6), DHL 3 = Evaluating Reliability (D7–9), DHL 4 = Determining Relevance (D10–12), DHL 5 = Navigation Skills (D13–15), DHL 6 = Adding Self-generated Content (D16–18), DHL 7 = Protecting Privacy (D19–21).

K-DHLI = Korean version of the Digital Health Literacy Instrument, CI = confidence interval, DHL = digital health literacy, AVE = average variance extracted, C.R. = construct reliability.

Discriminant validity was evaluated based on 2 criteria. First, the squared correlations between latent variables (0.065–0.748) were lower than their respective AVE values (0.7218–0.8947), satisfying the necessary condition. Second, the 95% CIs of factor correlations did not include 1, further confirming discriminant validity (Table 4).

Hypothesis-testing construct validity

Table 5 presents the Pearson correlation coefficients between the total K-DHLI score and related variables. Age had a small negative correlation with DHL (r = −0.254, P < 0.001), which suggests that older individuals tend to have lower DHL. In contrast, health status (r = 0.252, P < 0.001) and eHealth literacy (r = 0.688, P < 0.001) showed small and strong positive correlations, respectively. These results indicate that better health status and higher eHealth literacy are associated with higher DHL. The findings align with the hypothesized correlations and are consistent with prior studies.19,21,24

Table 5. Pearson correlations between the K-DHLI and theoretically related variables (N = 524).

Variables Coefficient P value
Age −0.254 < 0.001***
Health status (RAND-36) 0.252 < 0.001***
eHealth Literacy (eHEALS) 0.688 < 0.001***

K-DHLI = Korean version of the Digital Health Literacy Instrument, RAND-36 = RAND 36-Item Health Survey, eHEALS = eHealth Literacy Scale.

***P < 0.001.

DISCUSSION

This study translated and culturally adapted the DHLI into Korean and evaluated its psychometric properties in adults aged 19 to 69. The findings confirm that the K-DHLI retains the original 7-factor structure and demonstrates strong reliability and validity, with all psychometric criteria met.

The Operational Skills subscale had the highest mean score (3.66 ± 0.47), with a ceiling effect observed, suggesting that most adults do not struggle with basic digital tasks. As the survey was administered online, respondents were likely to have a baseline level of digital proficiency. However, previous research has shown that individuals aged 50 and older, along with those in rural areas, are more susceptible to digital skill deficits.46 Therefore, retaining this subscale is essential to ensure that the DHLI remains applicable to populations where fundamental digital skills may still be limited. This finding is consistent with previous DHLI validation studies, in which both the original DHLI and the Japanese version showed a ceiling effect in this subscale but retained it for similar reasons.19,20

The DHLI was initially validated with a 7-factor structure, but an exploratory factor analysis (EFA) of the Korean version in older adults suggested a 5-factor structure.19,24 Because both prior studies derived their factor structures through EFA, this study used CFA to assess whether those structures could be replicated in the present sample. Given the differing factor structures reported in prior research, it was necessary to determine which model best fit this population and whether the discrepancy stemmed from cultural differences or from variation in sample age distribution.

The CFA results showed that the 7-factor model met all model fit criteria, while the 5-factor model failed most of them. Furthermore, comparisons of AIC and BIC provided additional support for the 7-factor model as the more appropriate structure. These findings suggest that differences in DHLI factor structures are more likely due to variations in sample age than to cultural differences. This interpretation is further supported by the Chinese version of the DHLI, which was tested among older adults and identified a structure in which items 4 to 12 loaded onto a single factor, matching the structure observed in the Korean study of older adults.21,24 In contrast, the original DHLI study and the Japanese version, which included a broader adult age range, consistently confirmed the 7-factor structure.19,20

However, this study has several limitations. As the survey was conducted online, there may be a selection bias toward individuals with higher levels of digital proficiency. In addition, adults aged 70 and older were excluded due to the constraints of the web-based format. These groups—older individuals and those with limited digital access or skills—may exhibit different levels and patterns of DHL, which could influence the measurement properties of the DHLI. Therefore, although the K-DHLI demonstrated strong psychometric validity and reliability in the 19–69 age group, caution should be exercised in generalizing the findings to populations beyond this demographic. Future studies should consider broader sampling strategies, including in-person or telephone-based surveys, to reach underrepresented groups. Including these populations is essential to validate the K-DHLI’s applicability across diverse user groups. Nevertheless, this study improved sample representativeness by employing proportional stratified sampling by sex, age, and region through a professional survey agency, and by securing sufficiently large samples for both the main survey and the retest. This study extends the applicability of the K-DHLI to younger and middle-aged adults, an age group underrepresented in previous Korean validation studies.

The validated K-DHLI has broad potential for national surveillance, clinical screening, and research. It may be used in large-scale population surveys to track DHL and identify vulnerable groups. For example, Health Plan Korea, a national initiative led by the Ministry of Health and Welfare, currently includes health literacy as a key indicator. As digital transformation progresses, this framework could expand to address DHL more specifically, with the K-DHLI serving as a suitable measure in national surveys such as the Korea National Health and Nutrition Examination Survey (KNHANES) or the Social Survey. In clinical settings, the K-DHLI can be used to screen for individuals who may face difficulties with digital health tools, especially as telemedicine and digital therapeutics become more common. Targeted education or alternative care models may then be provided. This approach may help reduce digital inequities that contribute to disparities in health outcomes and access to care. Finally, the K-DHLI may be used in future studies to assess the stability of its factor structure and its relationships with behavioral, demographic, or clinical variables. These applications can support evidence-based public health strategies and clarify the role of DHL in improving patient engagement. The full K-DHLI is available in Supplementary Table 2.

In conclusion, the K-DHLI is a psychometrically valid and reliable instrument for assessing DHL in Korean adults. By confirming the superiority of the 7-factor structure, this study advances the measurement of DHL and provides empirical support for its application. The K-DHLI holds practical value as a standardized tool for national health surveys, clinical screening, and research. Its use in public health monitoring and digital health services can inform data-driven policy development and contribute to reducing disparities in digital health access and engagement.

ACKNOWLEDGMENTS

We express our sincere gratitude to Research Assistant Professor Mingee Choi, Dr. Junbok Lee, Researcher Sungkyung Park, and Researcher Jimin Chung for their invaluable guidance on the study design and data analysis. We also thank the eight professors who participated in the expert review and evaluation during the translation and cultural adaptation of the DHLI.

Footnotes

Funding: This work was supported by “Research on the development of evaluation model and commercialization of digital health services for global markets” (RS-2024-00432987) funded by the Ministry of Trade Industry & Energy (MOTIE, Korea).

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:
  • Conceptualization:Lee J, Kim Y, Park J, Shin J.
  • Data curation:Lee J, Ahn C.
  • Formal analysis:Lee J, Ahn C.
  • Investigation:Lee J, Ahn C, Park J, Kim S, Kim Y, Park J.
  • Methodology:Lee J, Ahn C, Park J, Kim S.
  • Project administration:Shin J.
  • Supervision:Shin J.
  • Validation:Lee J, Ahn C.
  • Visualization:Lee J.
  • Writing - original draft:Lee J.
  • Writing - review & editing:Lee J, Ahn C, Park J, Kim S, Kim Y, Park J, Shin J.

SUPPLEMENTARY MATERIALS

Supplementary Table 1

Item-level translation and adaptation process of the K-DHLI

Supplementary Table 2

The Korean version of the Digital Health Literacy Instrument (K-DHLI)

Supplementary Table 3

Comparison of characteristics between retest participants and nonparticipants


References

  • 1. Lee J, Lee EH, Chae D. eHealth literacy instruments: systematic review of measurement properties. J Med Internet Res. 2021;23(11):e30644. doi: 10.2196/30644.
  • 2. Kim K, Shin S, Kim S, Lee E. The relation between eHealth literacy and health-related behaviors: systematic review and meta-analysis. J Med Internet Res. 2023;25:e40778. doi: 10.2196/40778.
  • 3. Gasparyan AY, Kumar AB, Yessirkepov M, Zimba O, Nurmashev B, Kitas GD. Global health strategies in the face of the COVID-19 pandemic and other unprecedented threats. J Korean Med Sci. 2022;37(22):e174. doi: 10.3346/jkms.2022.37.e174.
  • 4. Choung JT, Lee YS, Jo HS, Shim M, Lee HJ, Jung SM. What factors impact consumer perception of the effectiveness of health information sites? An investigation of the Korean National Health Information Portal. J Korean Med Sci. 2017;32(7):1077–1082. doi: 10.3346/jkms.2017.32.7.1077.
  • 5. Estrela M, Semedo G, Roque F, Ferreira PL, Herdeiro MT. Sociodemographic determinants of digital health literacy: a systematic review and meta-analysis. Int J Med Inform. 2023;177:105124. doi: 10.1016/j.ijmedinf.2023.105124.
  • 6. Li S, Cui G, Kaminga AC, Cheng S, Xu H. Associations between health literacy, eHealth literacy, and COVID-19–related health behaviors among Chinese college students: cross-sectional online study. J Med Internet Res. 2021;23(5):e25600. doi: 10.2196/25600.
  • 7. Dopelt K, Avni N, Haimov-Sadikov Y, Golan I, Davidovitch N. Telemedicine and eHealth literacy in the era of COVID-19: a cross-sectional study in a peripheral clinic in Israel. Int J Environ Res Public Health. 2021;18(18):9556. doi: 10.3390/ijerph18189556.
  • 8. World Health Organization. Future of digital health systems: report on the WHO symposium on the future of digital health systems in the European region. 2019. [Updated 2019]. [Accessed March 3, 2024]. https://www.who.int/publications-detail/future-of-digital-health-systems
  • 9. Ban S, Kim Y, Seomun G. Digital health literacy: a concept analysis. Digit Health. 2024;10:20552076241287894. doi: 10.1177/20552076241287894.
  • 10. Choukou MA, Sanchez-Ramirez DC, Pol M, Uddin M, Monnin C, Syed-Abdul S. COVID-19 infodemic and digital health literacy in vulnerable populations: a scoping review. Digit Health. 2022;8:20552076221076927. doi: 10.1177/20552076221076927.
  • 11. World Health Organization. Coronavirus disease (COVID-19) situation report – 13. 2020. [Updated February 2, 2020]. [Accessed March 3, 2024]. https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200202-sitrep-13-ncov-v3.pdf?sfvrsn=195f4010_6
  • 12. Dadaczynski K, Okan O, Messer M, Leung AYM, Rosário R, Darlington E, et al. Digital health literacy and web-based information-seeking behaviors of university students in Germany during the COVID-19 pandemic: cross-sectional survey study. J Med Internet Res. 2021;23(1):e24097. doi: 10.2196/24097.
  • 13. Chiou H, Voegeli C, Wilhelm E, Kolis J, Brookmeyer K, Prybylski D. The future of infodemic surveillance as public health surveillance. Emerg Infect Dis. 2022;28(13):S121–S128. doi: 10.3201/eid2813.220696.
  • 14. Choi M, Lee HJ, Yu SY, Kim J, Park J, Ryoo S, et al. Two years of experience and methodology of Korean COVID-19 living clinical practice guideline development. J Korean Med Sci. 2023;38(23):e195. doi: 10.3346/jkms.2023.38.e195.
  • 15. Cheng J, Arora VM, Kappel N, Vollbrecht H, Meltzer DO, Press V. Assessing disparities in video-telehealth use and eHealth literacy among hospitalized patients: cross-sectional observational study. JMIR Form Res. 2023;7:e44501. doi: 10.2196/44501.
  • 16. Latulippe K, Hamel C, Giroux D. Social health inequalities and eHealth: a literature review with qualitative synthesis of theoretical and empirical studies. J Med Internet Res. 2017;19(4):e136. doi: 10.2196/jmir.6731.
  • 17. Norman CD, Skinner HA. eHEALS: the eHealth Literacy Scale. J Med Internet Res. 2006;8(4):e27. doi: 10.2196/jmir.8.4.e27.
  • 18. Norman C. eHealth literacy 2.0: problems and opportunities with an evolving concept. J Med Internet Res. 2011;13(4):e125. doi: 10.2196/jmir.2035.
  • 19. van der Vaart R, Drossaert C. Development of the Digital Health Literacy Instrument: measuring a broad spectrum of health 1.0 and health 2.0 skills. J Med Internet Res. 2017;19(1):e27. doi: 10.2196/jmir.6709.
  • 20. Miyawaki R, Kato M, Kawamura Y, Ishikawa H, Oka K. Developing a Japanese version of the Digital Health Literacy Instrument. Nippon Koshu Eisei Zasshi. 2024;71(1):3–14. doi: 10.11236/jph.23-021.
  • 21. Xie L, Hu H, Lin J, Mo PKH. Psychometric validation of the Chinese Digital Health Literacy Instrument among Chinese older adults who have internet use experience. Int J Older People Nurs. 2024;19(1):e12568. doi: 10.1111/opn.12568.
  • 22. Barbosa MCF, Baldiotti ALP, Braga NS, Lopes CT, Paiva SM, Granville-Garcia AF, et al. Cross-cultural adaptation of the Digital Health Literacy Instrument (DHLI) for use on Brazilian adolescents. Braz Dent J. 2023;34(5):104–114. doi: 10.1590/0103-6440202305346.
  • 23. Çetin M, Gümüş R. Research into the relationship between digital health literacy and healthy lifestyle behaviors: an intergenerational comparison. Front Public Health. 2023;11:1259412. doi: 10.3389/fpubh.2023.1259412.
  • 24. Kim H, Yang E, Ryu H, Kim HJ, Jang SJ, Chang SJ. Psychometric comparisons of measures of eHealth literacy using a sample of Korean older adults. Int J Older People Nurs. 2021;16(3):e12369. doi: 10.1111/opn.12369.
  • 25. Chun H, Park EJ, Choi SK, Yoon H, Okan O, Dadaczynski K. Validating the Digital Health Literacy Instrument in relation to COVID-19 information (COVID-DHL-K) among South Korean undergraduates. Int J Environ Res Public Health. 2022;19(6):3437. doi: 10.3390/ijerph19063437.
  • 26. World Health Organization. Process of translation and adaptation of instruments. 2009. [Updated 2009]. [Accessed March 3, 2024]. https://www.who.int/substance_abuse/research_tools/translation/en/
  • 27. Kyriazos TA. Applied psychometrics: sample size and sample power considerations in factor analysis (EFA, CFA) and SEM in general. Psychology (Irvine). 2018;9(8):2207–2230.
  • 28. Park MS, Kang KJ, Jang SJ, Lee JY, Chang SJ. Evaluating test-retest reliability in patient-reported outcome measures for older people: a systematic review. Int J Nurs Stud. 2018;79:58–69. doi: 10.1016/j.ijnurstu.2017.11.003.
  • 29. Koh SB, Chang SJ, Kang MG, Cha BS, Park JK. Reliability and validity on measurement instrument for health status assessment in occupational workers. Korean J Prev Med. 1997;30(2):251–266.
  • 30. Chung S, Park BK, Nahm ES. The Korean eHealth Literacy Scale (K-eHEALS): reliability and validity testing in younger adults recruited online. J Med Internet Res. 2018;20(4):e138. doi: 10.2196/jmir.8759.
  • 31. Kim HY. Statistical notes for clinical researchers: assessing normal distribution (2) using skewness and kurtosis. Restor Dent Endod. 2013;38(1):52–54. doi: 10.5395/rde.2013.38.1.52.
  • 32. Hwang H, Lee T, Lee W, Kim KM, Heo K, Chu MK. Validity and reliability of the Korean version of reduced morningness–eveningness questionnaire: results from a general population-based sample. J Korean Med Sci. 2024;39(38):e257. doi: 10.3346/jkms.2024.39.e257.
  • 33. Kane R, editor. Understanding Health Care Outcomes Research. 2nd ed. Burlington, MA, USA: Jones & Bartlett Learning; 2006.
  • 34. Seo YJ, Kwak EM, Jo M, Ko AR, Kim SH, Oh H. Reliability and validity of the Korean version of short-form health literacy scale for adults. J Korean Acad Community Health Nurs. 2020;31(4):416–426.
  • 35. Tavakol M, Dennick R. Making sense of Cronbach’s alpha. Int J Med Educ. 2011;2:53–55. doi: 10.5116/ijme.4dfb.8dfd.
  • 36. Nunnally JC, Bernstein IH. Psychometric Theory. 3rd ed. New York, NY, USA: McGraw-Hill; 1994.
  • 37. Kong KA. Statistical methods: reliability assessment and method comparison. Ewha Med J. 2017;40(1):9–16.
  • 38. Cicchetti DV, Sparrow SA. Developing criteria for establishing interrater reliability of specific items: applications to assessment of adaptive behavior. Am J Ment Defic. 1981;86(2):127–137.
  • 39. Lynn MR. Determination and quantification of content validity. Nurs Res. 1986;35(6):382–385.
  • 40. Polit DF, Beck CT. The content validity index: are you sure you know what’s being reported? Critique and recommendations. Res Nurs Health. 2006;29(5):489–497. doi: 10.1002/nur.20147.
  • 41. Kline RB. Principles and Practice of Structural Equation Modeling. 4th ed. New York, NY, USA: Guilford Publications; 2023.
  • 42. Hu LT, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Modeling. 1999;6(1):1–55.
  • 43. Burnham KP, Anderson DR. Model Selection and Inference: A Practical Information-Theoretic Approach. New York, NY, USA: Springer; 1998. Practical use of the information-theoretic approach; pp. 75–117.
  • 44. Fornell C, Larcker DF. Evaluating structural equation models with unobservable variables and measurement error. J Mark Res. 1981;18(1):39–50.
  • 45. Anderson JC, Gerbing DW. Structural equation modeling in practice: a review and recommended two-step approach. Psychol Bull. 1988;103(3):411–423.
  • 46. Noh IK. Digital literacy status in the era of digital transformation [Internet]. [Updated 2023]. [Accessed March 3, 2024]. https://kostat.go.kr/board.es?mid=a90104010301&bid=12306&tag=&act=view&list_no=428621


Articles from Journal of Korean Medical Science are provided here courtesy of Korean Academy of Medical Sciences
