Author manuscript; available in PMC: 2009 Mar 23.
Published in final edited form as: J Am Acad Dermatol. 2007 Feb 5;56(5):774–780. doi: 10.1016/j.jaad.2006.12.013

Table II.

Interrater reliability for stringency scales and individual items for 10 state laws

| Scale/item | Agreement, % (two raters) | Item reliability* (two raters) | Item reliability‡ (three raters†) | ICC for scales§ |
|---|---|---|---|---|
| Minors’ access | | | | .91 |
|  Age prohibited | 100 | 1.00 | 1.00 | |
|  Age accompanied | 50 | .19 | .28 | |
|  Age must provide parental consent | 100 | 1.00 | 1.00 | |
| Customer notification of risks | | | | .95 |
|  Warning signs | 70 | .60 | .80 | |
|  Written warnings | 40 | .38 | .69 | |
|  Customer acknowledgment | 60 | .75 | .88 | |
|  Label of exposure schedule | 100 | 1.00 | .76 | |
| Customer UV exposure control | | | | .89 |
|  Limitations on frequency and duration | 80 | .76 | .89 | |
| Equipment standards | | | | .89 |
|  Timer system | 60 | .15 | .75 | |
|  Timer shut-off | 100 | 1.00 | 1.00 | |
|  Timer testing | 80 | .47 | .81 | |
|  Physical barriers | 90 | .74 | .81 | |
|  Bulb replacement | 80 | .91 | .90 | |
|  Eye protection | 70 | .47 | .70 | |
|  Stand-up booth safety | 80 | .41 | .58 | |
| Facility operations | | | | .97 |
|  Licensing/registration | 90 | .74 | .81 | |
|  License for each location | 80 | .60 | .58 | |
|  Records of customer session | 90 | .93 | .96 | |
|  Incident reporting | 100 | 1.00 | .95 | |
|  Restrictions on advertising | 100 | 1.00 | 1.00 | |
|  Restrictions on price packages | 80 | .41 | .62 | |
| Operator training and responsibilities | | | | .92 |
|  Presence of trained operator | 100 | 1.00 | .76 | |
|  Operation of timer | 100 | 1.00 | .96 | |
|  Extent of training | 60 | .74 | .89 | |
|  Proof of training | 100 | 1.00 | .85 | |
| Sanitation regulations | | | | .87 |
|  Eyewear | 100 | 1.00 | .76 | |
|  Floors | 70 | .29 | .25 | |
|  Towels | 100 | 1.00 | 1.00 | |
|  Bed/booth | 90 | .80 | .72 | |
| Enforcement/legal issues | | | | .60 |
|  Enforcement authority | 70 | .61 | .51 | |
|  Funding for enforcement | 70 | .40 | .46 | |
|  Inspections | 50 | .49 | .82 | |
|  Complaint investigation | 70 | -∥ | -∥ | |
|  Facility liability | 90 | .78 | .86 | |
| Penalties for violations | | | | .80 |
|  Penalties/fines for violations | 80 | .71 | .80 | |
| Overall | | | | .95 |

ICC, Intraclass correlation; UV, ultraviolet.

* Kappas are reported for nominal (dichotomous) items and weighted kappas for ordinal items.

† The 3 raters are the two independent raters and the project consensus group.

‡ Kappas are reported for nominal items, and Kendall’s coefficient of concordance is reported for ordinal ratings.

§ Raters and states were considered random variables (two-way random effects model). Single-measure reliability was used. For the two single-item subscales, Kendall’s coefficient of concordance is used instead of the ICC.

∥ Kappa could not be computed because there was no variation for one rater.
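To make the footnoted statistics concrete, here is a minimal sketch (not from the article, and not the study's data) of the two agreement measures the table reports for nominal items and scales: Cohen's kappa for two raters' dichotomous codings, and the single-measure, two-way random-effects intraclass correlation, ICC(2,1). The ratings below are invented for illustration.

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' nominal ratings of the same items."""
    n = len(r1)
    cats = set(r1) | set(r2)
    po = sum(a == b for a, b in zip(r1, r2)) / n                    # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)   # chance agreement
    return (po - pe) / (1 - pe)

def icc2_1(data):
    """ICC(2,1): rows = targets (e.g. state laws), cols = raters.
    Two-way random effects, single measure, absolute agreement."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_r = ss_rows / (n - 1)                                        # between-target MS
    ms_c = ss_cols / (k - 1)                                        # between-rater MS
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))     # residual MS
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical codings of one dichotomous item across 10 state laws:
rater1 = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
rater2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(round(cohens_kappa(rater1, rater2), 2))  # → 0.58

# Hypothetical scale scores (10 states x 2 raters):
scores = [[4, 5], [2, 2], [3, 4], [5, 5], [1, 2],
          [4, 4], [2, 3], [5, 4], [3, 3], [1, 1]]
print(round(icc2_1(scores), 2))  # → 0.88
```

Note the footnote ∥ in the table: kappa is undefined when one rater gives every item the same code, because the chance-agreement term makes the denominator 1 − pe equal to zero.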