Journal of Applied Behavior Analysis. 1981 Winter;14(4):479–489. doi: 10.1901/jaba.1981.14-479

The effects of instructions and calculation procedures on observers' accuracy, agreement, and calculation correctness

Ronald A Boykin 1, Rosemery O Nelson 1
PMCID: PMC1308235  PMID: 16795650

Abstract

Although the quality of observational data is generally evaluated by observer agreement, measures of both observer agreement and accuracy were available in the present study. Sixteen observers coded videotapes for which a criterion protocol existed. All observers calculated agreement scores on their own and their partner's data, as well as on a contrived data set misrepresented as data collected by other observers. Compared with agreement scores calculated by the experimenter, observers erroneously inflated their own agreement scores and deflated the agreement scores on the contrived data. Half of the observers (n = 8) had been given instructions emphasizing the importance of accuracy during observation, while the other half had been given instructions emphasizing interobserver agreement. Accuracy exceeded agreement for the former group, whereas agreement exceeded accuracy for the latter group. The implications are that agreement should be calculated by the experimenter and that the accuracy-agreement relationship can be altered by differential observer instructions.
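The abstract does not reproduce the agreement formula itself. A minimal sketch of one common method, interval-by-interval percent agreement (agreements divided by agreements plus disagreements, multiplied by 100), is given below; the function name and the example records are hypothetical, not taken from the article.

    # A minimal sketch of interval-by-interval percent agreement.
    # Records are hypothetical: 1 = behavior scored in that interval, 0 = not scored.
    def percent_agreement(record_a, record_b):
        """Return agreements / (agreements + disagreements) * 100."""
        if len(record_a) != len(record_b):
            raise ValueError("records must cover the same number of intervals")
        agreements = sum(a == b for a, b in zip(record_a, record_b))
        return 100.0 * agreements / len(record_a)

    # Example: two observers coding the same ten intervals.
    observer_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
    observer_2 = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
    print(percent_agreement(observer_1, observer_2))  # 80.0

Under the same assumption, observer accuracy can be computed with the identical calculation by substituting the criterion protocol for the second observer's record.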

Keywords: observational data, observer bias, observer agreement, observer accuracy, observer instructions



