Behav Sci (Basel). 2022 Jun 8;12(6):183. doi: 10.3390/bs12060183

Table 2. Relevance ratings of the scale items by ten experts and the content validity index calculations.

| Item No. | Expert 1 | Expert 2 | Expert 3 | Expert 4 | Expert 5 | Expert 6 | Expert 7 | Expert 8 | Expert 9 | Expert 10 | Experts in Agreement (A) | I-CVI | UA | Modified Kappa (K) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Q-1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 10 | 1 | 1 | 1 |
| Q-2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 10 | 1 | 1 | 1 |
| Q-3 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 10 | 1 | 1 | 1 |
| Q-4 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 9 | 0.90 | 0 | 0.90 |
| Q-5 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 10 | 1 | 1 | 1 |
| Q-6 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 10 | 1 | 1 | 1 |
| Q-7 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 10 | 1 | 1 | 1 |
| Q-8 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 10 | 1 | 1 | 1 |
| Q-9 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 10 | 1 | 1 | 1 |
| Q-10 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 10 | 1 | 1 | 1 |
| Q-11 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 10 | 1 | 1 | 1 |
| Q-12 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 9 | 0.90 | 0 | 0.90 |
| Q-13 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 10 | 1 | 1 | 1 |
| Q-14 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 7 | 0.70 | 0 | 0.69 |
| Q-15 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 10 | 1 | 1 | 1 |
| Q-16 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 8 | 0.80 | 0 | 0.80 |
| Average item relevance | 0.94 | 0.94 | 1 | 0.88 | 1 | 0.94 | 1 | 1 | 1 | 0.88 | 9.56 | S-CVI = 0.96 | S-CVI/UA = 0.75 | 0.96 |

Average relevance across the 10 experts = 0.96.

Q = question. Ratings: 1 = relevant, 0 = not relevant. I-CVI = item-level content validity index = the average of the experts' ratings for each item [53,56]. Universal agreement (UA) = 1 when all experts rate the item relevant, and 0 otherwise. S-CVI = scale-level content validity index = the average of the I-CVIs. S-CVI/UA = the average of the universal agreement scores across all items. Pc = probability of chance agreement: Pc = [N! / (A!(N − A)!)] × 0.5^N, where N = number of experts on the panel and A = number of panelists who agree that the item is relevant [53,56]. Modified kappa agreement: K = (I-CVI − Pc) / (1 − Pc) [55].
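For readers who wish to reproduce these calculations, the following Python sketch (illustrative only, not part of the source) applies the footnote's formulas to the ratings in Table 2. It recovers the published S-CVI of 0.96 and S-CVI/UA of 0.75; the modified kappa values it computes for Q-14 and Q-16 (≈0.66 and ≈0.79) come out slightly lower than the published 0.69 and 0.80, which may reflect rounding at intermediate steps in the source.

```python
from math import comb

# Relevance ratings transcribed from Table 2 (1 = relevant, 0 = not relevant).
# Each row lists the ten experts' ratings for one item.
ratings = {
    "Q-1":  [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "Q-2":  [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "Q-3":  [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "Q-4":  [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
    "Q-5":  [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "Q-6":  [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "Q-7":  [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "Q-8":  [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "Q-9":  [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "Q-10": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "Q-11": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "Q-12": [1, 0, 1, 1, 1, 1, 1, 1, 1, 1],
    "Q-13": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "Q-14": [0, 1, 1, 0, 1, 0, 1, 1, 1, 1],
    "Q-15": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "Q-16": [1, 1, 1, 0, 1, 1, 1, 1, 1, 0],
}

N = 10  # number of experts on the panel

i_cvis, uas = [], []
for item, row in ratings.items():
    A = sum(row)                       # panelists who agree the item is relevant
    i_cvi = A / N                      # item-level content validity index
    ua = 1 if A == N else 0            # universal agreement
    pc = comb(N, A) * 0.5 ** N         # probability of chance agreement
    kappa = (i_cvi - pc) / (1 - pc)    # modified kappa
    i_cvis.append(i_cvi)
    uas.append(ua)
    print(f"{item}: A={A}, I-CVI={i_cvi:.2f}, UA={ua}, K={kappa:.2f}")

s_cvi = sum(i_cvis) / len(i_cvis)      # scale-level CVI: average of the I-CVIs
s_cvi_ua = sum(uas) / len(uas)         # proportion of items with universal agreement
print(f"S-CVI = {s_cvi:.2f}, S-CVI/UA = {s_cvi_ua:.2f}")
```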