Diagnostics. 2023 Sep 25;13(19):3044. doi: 10.3390/diagnostics13193044

Table 3.

Inter-rater agreement of different image sets.

| Reader 1 vs. Reader 2   | T1WIO (κ) | T1WIDL (κ) | T2WIO (κ) | T2WIDL (κ) | 3D FSPGR (κ) |
|-------------------------|-----------|------------|-----------|------------|--------------|
| Overall image quality   | 0.7       | 0.8        | 0.8       | 0.8        | 0.8          |
| Noise                   | 0.8       | 0.7        | 0.7       | 0.8        | 0.7          |
| Contrast                | 0.9       | 0.8        | 0.8       | 0.7        | 0.7          |
| Artifact                | 0.8       | 0.8        | 0.9       | 0.8        | 0.6          |
| Septal cartilage        | 0.8       | 0.6        | 0.9       | 0.7        | 0.7          |
| Upper lateral cartilage | 0.7       | 0.7        | 0.9       | 0.8        | 0.7          |
| Lower lateral cartilage | 0.7       | 0.6        | 0.6       | 0.7        | 0.6          |

Note. Inter-rater agreement of the image quality indices was evaluated with kappa statistics. κ = kappa value; values of 0.00–0.20, 0.21–0.40, 0.41–0.60, 0.61–0.80, and 0.81–1.00 indicated poor, fair, moderate, good, and excellent agreement, respectively. T1WIO = original T1-weighted FSE images; T1WIDL = deep learning–reconstructed T1-weighted FSE images; T2WIO = original T2-weighted FSE images; T2WIDL = deep learning–reconstructed T2-weighted FSE images; 3D FSPGR = three-dimensional fast spoiled gradient-recalled images.
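To illustrate how a table entry of this kind can be obtained, the following minimal sketch computes Cohen's kappa for two readers' ordinal quality scores and maps the result to the agreement categories defined in the note above. The reader scores are hypothetical, and the use of scikit-learn's cohen_kappa_score is an assumption for illustration, not the authors' stated analysis software.

```python
# Minimal sketch: inter-rater agreement via Cohen's kappa, with the result
# bucketed into the poor/fair/moderate/good/excellent bands from the table note.
from sklearn.metrics import cohen_kappa_score

def agreement_label(kappa: float) -> str:
    """Map a kappa value to the agreement categories used in Table 3."""
    if kappa <= 0.20:
        return "poor"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "good"
    return "excellent"

# Hypothetical 4-point quality scores assigned by two readers to the same image set
reader1 = [4, 3, 4, 2, 3, 4, 3, 2, 4, 3]
reader2 = [4, 3, 3, 2, 3, 4, 3, 2, 4, 4]

kappa = cohen_kappa_score(reader1, reader2)
print(f"kappa = {kappa:.2f} ({agreement_label(kappa)} agreement)")
```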