PLoS One. 2019 Nov 18;14(11):e0224557. doi: 10.1371/journal.pone.0224557

Table 3. Content validity by ten experts (n = 10).

Item  Ratings of 3 or 4 (relevance)  I-CVI^a  Pc^b   K*^c  Evaluation^d
1     9                              0.90     0.010  0.90  ****
2     9                              0.90     0.010  0.90  ****
3     7                              0.70     0.117  0.66  ***
4     10                             1.00     0.001  1.00  ****
5     7                              0.70     0.117  0.66  ***
6     7                              0.70     0.117  0.66  ***
7     8                              0.80     0.044  0.79  ****
8     10                             1.00     0.001  1.00  ****
9     8                              0.80     0.044  0.79  ****
10    10                             1.00     0.001  1.00  ****
11    7                              0.70     0.117  0.66  ***
12    10                             1.00     0.001  1.00  ****
13    9                              0.90     0.010  0.90  ****
14    9                              0.90     0.010  0.90  ****
15    9                              0.90     0.010  0.90  ****
16    7                              0.70     0.117  0.66  ***
17    9                              0.90     0.010  0.90  ****
18    9                              0.90     0.010  0.90  ****
19    8                              0.80     0.044  0.79  ****
20    9                              0.90     0.010  0.90  ****
21    9                              0.90     0.010  0.90  ****
22    9                              0.90     0.010  0.90  ****
23    8                              0.80     0.044  0.79  ****
24    7                              0.70     0.117  0.66  ***
25    8                              0.80     0.044  0.79  ****
26    9                              0.90     0.010  0.90  ****
27    5                              0.50     0.246  0.34  **
28    8                              0.80     0.044  0.79  ****
29    8                              0.80     0.044  0.79  ****
30    9                              0.90     0.010  0.90  ****
31    7                              0.70     0.117  0.66  ***
32    8                              0.80     0.044  0.79  ****
33    9                              0.90     0.010  0.90  ****

S-CVI/Ave^e  0.83
S-CVI/UA^f   0.12

^a I-CVI (item-level content validity index) = (number of experts providing a rating of 3 or 4) / (total number of experts)

^b Pc (probability of chance occurrence) = [N! / (A!(N-A)!)] * 0.5^N, where N = number of experts and A = number of experts agreeing on a rating of 3 or 4

^c K* (modified kappa) = (I-CVI - Pc) / (1 - Pc)

^d Evaluation of the level of content validity, based on the relationship between I-CVI and K*: excellent validity = I-CVI >= 0.78 and K* > 0.74 (****); good validity = 0.60 <= I-CVI < 0.78 and K* <= 0.74 (***); fair validity = 0.40 <= I-CVI < 0.60 and K* <= 0.59 (**); poor validity = I-CVI < 0.40 and K* < 0.40 (*)

^e S-CVI/Ave = scale-level content validity index, averaging method

^f S-CVI/UA = scale-level content validity index, universal agreement method.
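The indices defined above can be reproduced directly from the counts in Table 3. The sketch below (an illustration, not code from the study) computes I-CVI, Pc, and K* for each item, plus the two scale-level summaries, assuming N = 10 experts throughout:

```python
from math import comb

N_EXPERTS = 10  # ten experts rated each item (see table title)

def pc(n_experts: int, n_agree: int) -> float:
    """Probability of chance occurrence: C(N, A) * 0.5**N."""
    return comb(n_experts, n_agree) * 0.5 ** n_experts

def i_cvi(n_experts: int, n_agree: int) -> float:
    """Item-level CVI: proportion of experts rating the item 3 or 4."""
    return n_agree / n_experts

def kappa_star(n_experts: int, n_agree: int) -> float:
    """Modified kappa: I-CVI adjusted for chance agreement."""
    p = pc(n_experts, n_agree)
    return (i_cvi(n_experts, n_agree) - p) / (1 - p)

# Number of ratings of 3 or 4 for items 1..33, taken from Table 3
counts = [9, 9, 7, 10, 7, 7, 8, 10, 8, 10, 7, 10, 9, 9, 9, 7, 9,
          9, 8, 9, 9, 9, 8, 7, 8, 9, 5, 8, 8, 9, 7, 8, 9]

icvis = [i_cvi(N_EXPERTS, a) for a in counts]

# S-CVI/Ave: mean of the item-level CVIs
s_cvi_ave = sum(icvis) / len(icvis)
# S-CVI/UA: proportion of items on which all experts agreed (I-CVI = 1.00)
s_cvi_ua = sum(v == 1.0 for v in icvis) / len(icvis)

print(f"item 1: Pc = {pc(N_EXPERTS, 9):.3f}, K* = {kappa_star(N_EXPERTS, 9):.2f}")
print(f"S-CVI/Ave = {s_cvi_ave:.2f}, S-CVI/UA = {s_cvi_ua:.2f}")
```

Running this reproduces the tabulated values, e.g. Pc = 0.010 and K* = 0.90 for the nine-of-ten items, S-CVI/Ave = 0.83, and S-CVI/UA = 0.12 (4 of 33 items with unanimous agreement).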