Journal of Applied Behavior Analysis
1977 Spring;10(1):141–150. doi: 10.1901/jaba.1977.10-141

Artifact, bias, and complexity of assessment: the ABCs of reliability

Alan E. Kazdin
PMCID: PMC1311161; PMID: 16795543

Abstract

Interobserver agreement (also referred to here as “reliability”) is influenced by diverse sources of artifact, bias, and complexity of the assessment procedures. The literature on reliability assessment has frequently focused on the different methods of computing reliability and the circumstances under which these methods are appropriate. Yet the credence accorded estimates of interobserver agreement, computed by any method, presupposes eliminating sources of bias that can spuriously affect agreement. The present paper reviews evidence pertaining to various sources of artifact and bias, as well as characteristics of assessment that influence interpretation of interobserver agreement. These include reactivity of reliability assessment, observer drift, complexity of response codes and behavioral observations, and observer expectancies and feedback, among others. Recommendations are provided for eliminating or minimizing the influence of these factors on interobserver agreement.

Keywords: methodology, observational procedures, observational code, observer bias, expectancies, feedback, reliability, artifact
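The computations the abstract alludes to can be illustrated with a short sketch. The code below is not from the article; it is a minimal illustration, using hypothetical interval-recording data, of two common ways of computing interobserver agreement discussed in the reliability literature: point-by-point percent agreement and Cohen's kappa (agreement corrected for chance).

# Illustrative only: two-observer agreement on hypothetical interval records
# (1 = behavior scored as occurring in the interval, 0 = not occurring).

def percent_agreement(obs_a, obs_b):
    """Proportion of intervals scored identically by the two observers."""
    assert len(obs_a) == len(obs_b) and obs_a, "records must be nonempty and equal length"
    matches = sum(a == b for a, b in zip(obs_a, obs_b))
    return matches / len(obs_a)

def cohens_kappa(obs_a, obs_b):
    """Chance-corrected agreement: (p_o - p_e) / (1 - p_e)."""
    n = len(obs_a)
    p_o = percent_agreement(obs_a, obs_b)
    # Chance agreement expected from each observer's marginal occurrence rates.
    p_a = sum(obs_a) / n
    p_b = sum(obs_b) / n
    p_e = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical records for ten observation intervals.
observer_1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
observer_2 = [1, 0, 0, 0, 1, 0, 1, 1, 1, 0]

print(f"percent agreement: {percent_agreement(observer_1, observer_2):.2f}")  # 0.80
print(f"Cohen's kappa:     {cohens_kappa(observer_1, observer_2):.2f}")       # 0.60

In this hypothetical example, kappa falls below raw percent agreement because part of the observers' agreement would be expected by chance alone, which is one reason the choice of computation method matters for the interpretive issues the paper raises.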
