Abstract
This special issue of Perspectives on Behavior Science is a productive contribution to current advances in the use and documentation of single-case research designs. In this article we focus on major themes emphasized by the articles in the issue and suggest directions for improving professional standards for the design, analysis, and dissemination of single-case research.
Keywords: Single-case research design, Methods standards, Analysis advances
The application of single-case research methods is entering a new phase of scientific relevance. Researchers in an increasing array of disciplines are finding single-case methods useful for the questions they are asking and the clinical needs in their fields (Kratochwill et al., 2010; Maggin et al., 2017; Maggin & Odom, 2014; Riley-Tillman, Burns, & Kilgus, 2020). With this special issue, the editors have challenged authors to articulate the advances in research design and data analysis that will be needed if single-case methods are to meet these emerging expectations. Each recruited article delves into a specific avenue of concern for advancing the use of single-case methods. The purpose of this discussion is to integrate themes identified by the authors and to offer a perspective for advancing the application and use of single-case methods. We provide initial context and then focus on the unifying messages the authors provide for both interpreting single-case research results and designing studies that will be of greatest benefit.
Context
A special issue of Perspectives on Behavior Science focused on methodological advances needed for single-case research is a timely contribution to the field. There are growing efforts both to articulate professional standards for single-case methods (Kratochwill et al., 2010; Tate et al., 2016) and to advance new procedures for the analysis and interpretation of single-case studies (Manolov & Moeyaert, 2017; Pustejovsky et al., 2014; Riley-Tillman et al., 2020). Foremost among these trends is the goal of including single-case methods in the identification of empirically validated clinical practices (Slocum et al., 2014). The emerging message is that federal, state, and local agencies will join with professional associations in advancing investment in practices, often labeled “evidence-based practices,” that have empirically documented effectiveness, efficiency, and safety. This movement depends on each discipline defining credible protocols for identifying empirically validated procedures and, in the present context, on the use of single-case methods to achieve this goal.
This special issue comes to the field following the recent publication of the What Works Clearinghouse 4.1 standards for single-case design (Institute of Education Sciences, 2020). At this time, the repeated demonstrations that single-case methods are useful, valid, and increasingly well-defined hold great promise. For single-case methods to achieve the impact they promise, however, there remains a need for (1) professional acceptance of research design standards, (2) agreement on data analysis standards (both for interpreting individual studies and for conducting larger meta-analyses), and (3) incorporation of these standards in journal review protocols, grant review protocols, and university training programs targeting research design. This special issue offers a useful foundation for advancing the field in each of these areas.
A Role for Experimental Single-Case Designs
One important theme across the articles is recognition that the scientific community is coalescing around acceptance that the core features of experimental single-case designs allow credible documentation of functional relations (experimental control). This is a large message, and one that needs to be more overtly noted across disciplines where single-case methods are less often used. Of special value is the distinction between rigorous single-case experimental designs and clinical case studies or formal descriptive time-series analyses. The iterative collection of data across time with periodic experimenter manipulation of treatments is useful both as a clinical tool and, when this approach is linked with designs that control for threats to internal validity, as a contribution to the advancement of science.
Combining Visual Analysis and Statistical Analysis
Another major message from the recruited articles is that interpretation of single-case research designs will benefit from (and may even require) the incorporation of statistical tools. Single-case researchers have used visual analysis as the initial step in examining evidence (Parsonson & Baer, 1978; Ledford & Gast, 2018; Kazdin, 2021; Riley-Tillman et al., 2020). Rigorous use of visual analysis involves (1) examining the data from each phase of the study to define within-phase patterns, (2) comparing data patterns of adjacent phases, (3) comparing data patterns of similar phases, (4) examining the full set of data within a design to assess whether the design has been effective at controlling threats to internal validity and whether there are at least three demonstrations of effect (each at a different point in time), and (5) determining whether there are instances of noneffect or contraindicated effect.
When assessing a single phase (or similar phases) of a study, the researcher considers (1) the number of data points, (2) the level (mean) score, (3) the variability of scores, and (4) within-phase trend(s). When comparing adjacent phases, the researcher examines whether there is a change in the pattern of data following manipulation of the independent variable. Phase comparisons are made by simultaneously assessing (1) change in level, (2) change in variability, (3) change in trend, (4) immediacy of any change in pattern, (5) degree of overlap in data between the two phases, and (6) similarity in the patterns of data from similar phases (e.g., two baseline phases).
When assessing the overall design, the researcher looks at all the data to determine whether an effect (i.e., a change in the pattern of the dependent variable following manipulation of the independent variable) is observed at least three times, each at a different point in time. The researcher also examines whether there are manipulations of the independent variable where change in the dependent variable did not occur or occurred in the direction opposite that expected by the hypothesis under consideration.
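To make these within-phase features and adjacent-phase comparisons concrete, the following sketch is our own minimal illustration with hypothetical data; it is not drawn from any article in this issue, and it assumes higher scores are the desired direction of change.

```python
# A minimal sketch (ours, with hypothetical data) of the within-phase features
# and adjacent-phase comparisons described above.
import numpy as np

def phase_summary(y):
    """Number of points, level (mean), variability (SD), and linear trend."""
    x = np.arange(len(y))
    slope = np.polyfit(x, y, deg=1)[0]  # slope of the least-squares trend line
    return {"n": len(y), "level": np.mean(y),
            "sd": np.std(y, ddof=1), "trend": slope}

def compare_phases(baseline, treatment):
    """Change in level and proportion of treatment points overlapping baseline."""
    change_in_level = np.mean(treatment) - np.mean(baseline)
    # Share of treatment points that do not exceed the baseline maximum
    overlap = np.mean(np.asarray(treatment) <= max(baseline))
    return {"change_in_level": change_in_level, "overlap": overlap}

baseline = [3, 4, 3, 5, 4]     # hypothetical baseline (A) phase
treatment = [7, 8, 9, 8, 10]   # hypothetical intervention (B) phase
print(phase_summary(baseline), phase_summary(treatment))
print(compare_phases(baseline, treatment))
```

Quantities such as immediacy of change and similarity of patterns across similar phases would require additional comparisons (e.g., contrasting the last few baseline points with the first few treatment points), which is part of why visual analysis weighs these features jointly rather than through any single statistic.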
At present there is active discussion about the need for visual analysis as a component in the analysis protocol for single-case studies (Institute of Education Sciences, 2020). It is clear that the number of data points per phase, the mean of these points, variability, and within-phase trend are all easily calculable. As the authors of articles in this issue note, there also are creative approaches to examining whether there is change in the data patterns across adjacent phases. We view these approaches as major advances and positive assets to the task of interpreting single-case evidence. We also recognize, however, that none of the proposed statistical options simultaneously examines the full set of variables traditionally used to guide visual analysis (level, trend, variability, immediacy, overlap, similarity of pattern across similar phases), nor do they include protocols for adjusting the weight given to each variable when assessing an effect (e.g., level is weighted differently in phases with stable data patterns than in phases with strong trends). Most important, visual analysis offers a more nuanced interpretation of data patterns. The role of outliers, within-phase shifts in data patterns, and shifts in data patterns at similar times (e.g., within a multiple baseline design) are more apparent via visual analysis, and these are useful sources of information for assessing the stability and clinical relevance of effects. At this point we continue to see visual analysis as the appropriate first step in the assessment of single-case studies, but we strongly support the addition of statistical tools that yield valuable quantitative summaries of specific aspects of the analysis.
Align Data Analysis with Research Purpose
A theme that emerges in this special issue is the importance of aligning the aspects of the analysis that are quantified with the purposes of the study. We are fortunate to see in this special issue a variety of quantitative summaries that are tailored to meet a variety of purposes. There are methods helpful in estimating the size of the average treatment effect, and these vary depending on whether the focus is on quantifying, in a standardized way, the change in level (Cox et al., this issue) or a change in slope or variability (Manolov et al., this issue-a). In addition, there are methods to quantify the consistency of effects across replications (Manolov et al., this issue-b) and other methods to summarize the degree to which the size of effects relates to characteristics of the participants (Moeyaert et al., this issue). There are also estimates of the probability of the observed difference occurring in the absence of a treatment effect (Friedel et al., this issue; Manolov et al., this issue-b), methods that rely on a series of probability estimates to aid in the interpretation of functional analysis data (Kranak & Hall, this issue), and summaries used to identify stimulus overselectivity (Mason et al., this issue). In each case a strong rationale is available to support specific conditions where the proposed analysis would be useful. The important message is that no one analysis is applicable to all conditions, and clarifying the purpose and structure of a specific study is critical when deciding which analysis to implement.
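As one illustration of matching a quantification to a purpose, the sketch below computes a simple within-case standardized change in level (the difference in phase means divided by the baseline standard deviation). This is our illustrative example only; the estimators discussed in this issue address further complications such as trend, autocorrelation, and between-case standardization.

```python
# Illustrative sketch only: a simple within-case standardized change in level.
# The estimators discussed in this issue also address trend, autocorrelation,
# and between-case standardization.
import statistics

def standardized_level_change(baseline, treatment):
    """(Treatment mean - baseline mean) divided by the baseline SD."""
    return ((statistics.mean(treatment) - statistics.mean(baseline))
            / statistics.stdev(baseline))

print(standardized_level_change([3, 4, 3, 5, 4], [7, 8, 9, 8, 10]))  # ~5.5
```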
A related need, beyond aligning the statistical analysis with the study purpose, is alignment of the logic and assumptions underlying the quantitative summary with the design and data from the single-case study. For example, the interpretation of a change in level as a measure of the size of the effect is made more meaningful when the experimental design controls for threats to internal validity and a visual analysis reveals an absence of trends, an absence of level shifts that are not coincident with intervention, and a problematic level of baseline responding. Probabilities based on randomization tests (Manolov et al., this issue-b) are more meaningfully interpreted when the design incorporates randomization and the data permutations are restricted to the possible random assignments, whereas probabilities based on Monte Carlo resampling methods (Friedel et al., this issue) rest on an assumption of exchangeability and thus are more meaningful when the time series are stable.
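To illustrate the first of these two logics, the sketch below is our hypothetical example of a basic randomization test for an AB design in which the intervention start point was randomly selected from a set of eligible sessions; the reference distribution is restricted to those admissible start points, mirroring the restriction described above.

```python
# A minimal sketch (ours, with hypothetical data) of a randomization test for
# an AB design whose intervention start point was randomly chosen from a set
# of eligible sessions. The reference distribution is restricted to those
# admissible start points.
import numpy as np

def ab_randomization_test(y, actual_start, eligible_starts):
    """P-value: proportion of eligible start points whose mean difference
    (post-start minus pre-start) is at least as large as the observed one."""
    y = np.asarray(y, dtype=float)
    def mean_diff(start):
        return y[start:].mean() - y[:start].mean()
    observed = mean_diff(actual_start)
    return np.mean([mean_diff(s) >= observed for s in eligible_starts])

data = [3, 4, 3, 5, 4, 7, 8, 9, 8, 10]  # hypothetical session scores
print(ab_randomization_test(data, actual_start=5, eligible_starts=range(3, 8)))
```

Note that with only five admissible start points the smallest attainable p value is .20, a reminder that the randomization scheme built into the design bounds the sensitivity of the resulting test.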
Single-case researchers will increasingly be expected to integrate statistical analyses in their reporting of results. The number of statistical options will continue to expand, and the analyses will become increasingly easy to implement through software applications. For the field to capitalize on these advancements, it will be important for single-case researchers to be flexible, selecting quantifications that are well matched to their purposes, study design, data, and visual analyses. Because single-case researchers cannot routinely rely on one specific quantification, efforts have begun to provide guidance in selecting among quantitative options (Fingerhut et al., 2020; Manolov & Moeyaert, 2017; Manolov et al., this issue-a). These efforts will need to be extended so they include the techniques developed and illustrated by authors of this special issue, as well as methods that will be developed in the future to meet the varied needs of single-case researchers.
Computer Applications Supporting Analysis of Single-Case Designs
We also acknowledge the value of computer applications that can assist in the analysis of data from single-case designs and make the use of statistical tools more accessible. The ExPRT application developed by Joel Levin and colleagues is one such program that provides rapid interpretation of single-case designs that have incorporated randomization criteria (Gafurov & Levin, 2021). The logic used by ExPRT is consistent with the approach to analysis of alternating treatment designs (ATDs) proposed by Manolov et al. (this issue-b) and is likely to prompt single-case researchers to consider incorporating randomization options in the design of future experiments. The value of computer applications also is apparent in the Automated Nonparametric Statistical Analysis (ANSA) app offered by Kranak and Hall (this issue) as a tool for facilitating the interpretation of functional analysis data using alternating treatment designs. In addition, many of the effect sizes discussed by Manolov et al. (this issue-a) can be readily computed using computer applications, such as the Single-Case Effect Size Calculator (Pustejovsky & Swan, 2018) and scdhlm (Pustejovsky et al., 2021). We anticipate that an increasing number of computer applications for interpreting single-case data will become available as statistical strategies gain acceptance.
Falligant et al. (this issue) extend this theme by reviewing emerging statistical strategies for improving the analysis of time series data. They summarize data analytic methods that will both benefit experimental studies and be especially useful in the interpretation of clinical data (e.g., with designs that may not meet experimental requirements for control of threats to internal validity). Their message is joined by Cox et al. (this issue) in emphasizing the value of collecting rigorous time series data in clinical contexts even when experimental designs are contraindicated. The consistent message is that combining visual analysis with supplemental statistical assessment has value both for clinical decision making and for advancing the science within a discipline. The improved array of statistical options, and the increasing ease with which they can be applied to time series data, make integration of visual and statistical analysis a likely standard for the future.
Implications for Designing Single-Case Research
The articles in this special issue emphasize innovative approaches to the analysis of single-case research data. But the authors also offer important considerations for research designs. Two articles report procedures for identifying the role of intervention components (Cox et al., this issue; Mason et al., this issue). Too little emphasis has been given to the use of single-case designs to examine moderator variables, interaction effects, intervention components, and sustained impact. Few interventions are effective across all population groups, all contexts, and all challenges. Effective research designs need to allow identification not only of the impact of an intervention in a specific context but also of the conditions where the intervention is not effective. Likewise, a growing number of behavioral interventions include multiple procedures. Identifying the respective value of each procedural component, and the most efficient combination of components, is a worthy challenge for researchers and for the creative application of single-case designs.
Single-case studies designed to examine component interactions or setting specificity may benefit from complex single-case designs that combine multiple baseline, alternating treatment, and/or reversal elements (Kazdin, 2021). In other cases, analysis approaches may be helpful both to document effects and to guide future studies. Cox et al. (this issue) offer examples of separating the independent and combined effects of behavioral interventions and medication on the reduction of problem behavior for individuals with intellectual disabilities. Mason et al. (this issue) likewise document how statistical modeling can be used to isolate elements of stimulus control and to document with greater precision the presence of stimulus overselectivity.
Research Protocols
Three articles in this issue focus on research protocols that will facilitate the inclusion of single-case research in larger meta-analyses documenting evidence-based practices. Aydin and Yassikaya (this issue) focus on the need to transform graphed data into spreadsheets that can be used for statistical analysis. They report on the value of the PlotDigitizer application for extracting graphed data and provide documentation of the validity and reliability of this tool for delivering the data in a format needed for supplemental statistical analysis. Manolov et al. (this issue-a, b) propose procedures for both selecting and reporting the measures employed in any study to avoid measurement bias and misinterpretation, and Dowdy et al. (this issue) likewise encourage procedures to identify possible publication bias. These authors promote the prepublication of research plans (preregistration) as a growing option that is both practical and valuable for maximizing rigorous and ethically implemented research protocols.
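As a hypothetical illustration of this extraction workflow, the sketch below assumes the digitizer has exported (x, y) point coordinates to a CSV file; it snaps the extracted x values to integer session numbers so the resulting series can feed the statistical analyses discussed above. The file name and column labels are our assumptions, not features of the PlotDigitizer tool.

```python
# Hypothetical post-processing of digitizer output. The file name and column
# labels are our assumptions, not features of the PlotDigitizer tool itself.
import csv

sessions = {}
with open("digitized_points.csv", newline="") as f:
    for row in csv.DictReader(f):         # expects columns named "x" and "y"
        session = round(float(row["x"]))  # snap extracted x to integer sessions
        sessions[session] = float(row["y"])

# Ordered series ready for supplemental statistical analysis
series = [sessions[s] for s in sorted(sessions)]
print(series)
```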
Summary
The major message from the articles in this special issue is that single-case research designs are available and functional for advancing both our basic science and our clinical technology. The efforts over the past 15 years to define professional design and analysis standards for single-case methods have been successful. But as the articles in this special issue show, single-case research methods are continuing to evolve. Innovative statistical procedures are improving the precision and credibility of single-case research analysis and raising important considerations for novel research design options. These innovations will continue to challenge prior assumptions and open new opportunities. Each innovation will receive its own critical review, but collectively the field is benefiting from the creative recommendations exemplified by the authors of this special issue.
Declarations
Conflict of interest
We have no known conflict of interest to disclose.
References
- Aydin, O., & Yassikaya, M. Y. (this issue). Validity and reliability analysis of the Plot Digitizer software program for data extraction from single-case graphs. Perspectives on Behavior Science. Advance online publication. https://doi.org/10.1007/s40614-021-00284-0
- Cox, A., Pritchard, D., Penney, H., Eiri, L., & Dyer, T. (this issue). Demonstrating an analysis of clinical data evaluating psychotropic medication reductions and the ACHIEVE! Program in adolescents with severe problem behavior. Perspectives on Behavior Science. Advance online publication. https://doi.org/10.1007/s40614-020-00279-3
- Dowdy, A., Hantula, D., Travers, J. C., & Tincani, M. (this issue). Meta-analytic methods to detect publication bias in behavior science research. Perspectives on Behavior Science. Advance online publication. https://doi.org/10.1007/s40614-021-00303-0
- Falligant, J., Kranak, M., & Hagopian, L. (this issue). Further analysis of advanced quantitative methods and supplemental interpretative aids with single-case experimental designs. Perspectives on Behavior Science. Advance online publication.
- Fingerhut, J., Marbou, K., & Moeyaert, M. (2020). Single-case metric ranking tool (Version 1.2) [Microsoft Excel tool]. https://doi.org/10.17605/OSF.IO/7USBJ
- Friedel, J., Cox, A., Galizio, A., Swisher, M., Small, M., & Perez, S. (this issue). Monte Carlo analyses for single-case experimental designs: An untapped resource for applied behavioral researchers and practitioners. Perspectives on Behavior Science. Advance online publication.
- Gafurov, B. S., & Levin, J. R. (2021, June). ExPRT (Excel Package of Randomization Tests): Statistical analyses of single-case intervention data (Version 4.2.1) [Computer software]. https://ex-prt.weebly.com
- Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, & What Works Clearinghouse. (2020). What Works Clearinghouse procedures handbook (Version 4.1). https://ies.ed.gov/ncee/wwc/Docs/referenceresources/WWC-Procedures-Handbook-v4-1-508.pdf
- Kazdin, A. E. (2021). Single-case research designs: Methods for clinical and applied settings (3rd ed.). Oxford University Press.
- Kranak, M., & Hall, S. (this issue). Implementing automated nonparametric statistical analysis on functional analysis data: A guide for practitioners and researchers. Perspectives on Behavior Science. Advance online publication. https://doi.org/10.1007/s40614-021-00290-2
- Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case designs technical documentation. What Works Clearinghouse. https://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf
- Ledford, J. R., & Gast, D. L. (2018). Single case research methodology: Applications in special education and behavioral sciences (3rd ed.). Taylor & Francis.
- Maggin, D. M., & Odom, S. L. (2014). Evaluating single-case research data for systematic review: A commentary for the special issue. Journal of School Psychology, 52(2), 237–241. https://doi.org/10.1016/j.jsp.2014.01.002
- Maggin, D. M., Pustejovsky, J. E., & Johnson, A. H. (2017). A meta-analysis of school-based group contingency interventions for students with challenging behavior: An update. Remedial & Special Education, 38(6), 353–370. https://doi.org/10.1177/0741932517716900
- Manolov, R., & Moeyaert, M. (2017). Recommendations for choosing single-case data analytical techniques. Behavior Therapy, 48(1), 97–114. https://doi.org/10.1016/j.beth.2016.04.008
- Manolov, R., Moeyaert, M., & Fingerhut, J. (this issue-a). A priori justification for effect measures in single-case experimental designs. Perspectives on Behavior Science. Advance online publication. https://doi.org/10.1007/s40614-021-00282-2
- Manolov, R., Tanious, R., & Onghena, P. (this issue-b). Quantitative techniques and graphical representations for interpreting results from alternating treatment design. Perspectives on Behavior Science. Advance online publication. https://doi.org/10.1007/s40614-021-00289-9
- Mason, L., Otero, M., & Andrews, A. (this issue). Cochran’s Q test of stimulus overselectivity within the verbal repertoire of children with autism. Perspectives on Behavior Science. Advance online publication. https://doi.org/10.1007/s40614-021-00315-2
- Moeyaert, M., Yang, P., & Xu, X. (this issue). The power to explain variability in intervention effectiveness in single-case research using hierarchical linear modeling. Perspectives on Behavior Science. Advance online publication. https://doi.org/10.1007/s40614-021-00304-z
- Parsonson, B., & Baer, D. (1978). The analysis and presentation of graphic data. In T. R. Kratochwill (Ed.), Single subject research (pp. 101–166). Elsevier.
- Pustejovsky, J. E., & Swan, D. M. (2018). Single-case effect size calculator (Version 0.5.1) [Web application]. https://jepusto.shinyapps.io/SCD-effect-sizes/
- Pustejovsky, J. E., Hedges, L. V., & Shadish, W. R. (2014). Design-comparable effect sizes in multiple baseline designs: A general modeling framework. Journal of Educational & Behavioral Statistics, 39(5), 368–393. https://doi.org/10.3102/1076998614547577
- Pustejovsky, J. E., Chen, M., & Hamilton, B. (2021). scdhlm: A web-based calculator for between-case standardized mean differences (Version 0.5.2) [Web application]. https://jepusto.shinyapps.io/scdhlm
- Riley-Tillman, T. C., Burns, M. K., & Kilgus, S. (2020). Evaluating educational interventions: Single-case design for measuring response to intervention. Guilford Press.
- Slocum, T. A., Detrich, R., Wilczynski, S. M., Spencer, T. D., Lewis, T., & Wolfe, K. (2014). The evidence-based practice of applied behavior analysis. The Behavior Analyst, 37, 41–56. https://doi.org/10.1007/s40614-014-0005-2
- Tate, R. L., Perdices, M., Rosenkoetter, U., Shadish, W., Vohra, S., Barlow, D. H., Horner, R., Kazdin, A., Kratochwill, T., McDonald, S., Sampson, M., Shamseer, L., Togher, L., Albin, R., Backman, C., Douglas, J., Evans, J. J., Gast, D., Manolov, R., et al. (2016). The single-case reporting guideline in behavioural interventions (SCRIBE) 2016 statement. Physical Therapy, 96(7), e1–e10. https://doi.org/10.2522/ptj.2016.96.7.e1
