BMC Med Educ. 2016 Sep 6;16(1):237. doi: 10.1186/s12909-016-0759-1

Table 3.

Summary of consistency of agreement between participants' ratings of the completeness of the test study's reporting for items in the GREET checklist and the consensus criterion ratings

| GREET checklist item | n | Agreement^a, n (%) | Partial agreement^a, n (%) | No agreement^a, n (%) |
|---|---|---|---|---|
| 1. Title | 31 | 21 (68) | 8 (26) | 2 (6) |
| 2. Theory | 31 | 18 (58) | 7 (23) | 6 (19) |
| 3. Learning objectives | 31 | 18 (58) | 11 (35) | 2 (6) |
| 4. Steps of EBP | 31 | 6 (19) | 18 (58) | 7 (23) |
| 5. Materials | 31 | 8 (26) | 20 (65) | 3 (9) |
| 6. Learning strategies | 31 | 22 (71) | 8 (26) | 1 (3) |
| 7. Incentives | 31 | 24 (78) | 1 (3) | 6 (19) |
| 8. Instructors | 31 | 25 (81) | 6 (19) | 0 (0) |
| 9. Delivery | 31 | 16 (52) | 10 (32) | 5 (16) |
| 10. Environment | 31 | 15 (49) | 9 (28) | 7 (23) |
| 11. Schedule | 31 | 16 (52) | 14 (45) | 1 (3) |
| 12. Face to face time | 31 | 18 (58) | 8 (26) | 5 (16) |
| 13. Adaptations | 31 | 16 (52) | 9 (28) | 6 (20) |
| 14. Modifications | 31 | 26 (84) | 1 (3) | 4 (13) |
| 15. Attendance | 31 | 20 (64) | 7 (23) | 4 (13) |
| 16. Planned delivery | 31 | 29 (94) | 0 (0) | 2 (6) |
| 17. Actual schedule | 31 | 25 (81) | 4 (13) | 2 (6) |
Reliability:
Criterion validity, ICC (95 % CI): 0.73 (0.51–0.88), p < .0001
Inter-rater reliability, ICC (95 % CI): 0.96 (0.93–0.98), p < .0001

ICC, intraclass correlation coefficient

^a The participants' ratings for completeness of reporting were (1) Yes, fully reported; (2) Yes, partially reported; or (3) No, not reported or not clear. Consistency of agreement with the consensus criterion ratings was defined as: Agreement (the participant rating and the consensus criterion rating were in the same category), Partial agreement (one category of difference between the participant rating and the consensus criterion rating), or No agreement (two categories of difference between the participant rating and the consensus criterion rating).
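The agreement categories in footnote a reduce to the absolute difference between the ordinal rating codes (0 = agreement, 1 = partial agreement, 2 = no agreement). A minimal sketch of that classification rule is shown below; the rating codes and example ratings are hypothetical and are not data from the study.

```python
# Sketch of the agreement classification described in footnote a.
# Rating codes (1 = fully reported, 2 = partially reported, 3 = not reported/not clear)
# and the example ratings below are illustrative only.
from collections import Counter

CATEGORY = {0: "Agreement", 1: "Partial agreement", 2: "No agreement"}

def classify(participant_rating: int, consensus_rating: int) -> str:
    """Map the difference between rating categories to an agreement label."""
    return CATEGORY[abs(participant_rating - consensus_rating)]

# Hypothetical ratings for one GREET item from 31 participants, against a consensus rating of 2.
participant_ratings = [1, 2, 2, 3, 2, 1, 2, 2, 3, 2, 2, 1, 2, 2, 2,
                       3, 2, 2, 1, 2, 2, 2, 3, 2, 2, 1, 2, 2, 2, 3, 2]
consensus = 2

counts = Counter(classify(r, consensus) for r in participant_ratings)
n = len(participant_ratings)
for label in ("Agreement", "Partial agreement", "No agreement"):
    k = counts.get(label, 0)
    print(f"{label}: {k} ({100 * k / n:.0f}%)")
```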