Published in final edited form as: Pediatr Phys Ther. 2017 Jul;29(Suppl 3, IV STEP 2016 Conference Proceedings):S57–S63. doi: 10.1097/PEP.0000000000000380

Research Design Options for Intervention Studies

Michele A Lobo 1, Sarah H Kagan 2, John D Corrigan 3
PMCID: PMC5488684  NIHMSID: NIHMS851161  PMID: 28654478

Introduction

The purpose of this article is to report on the proceedings from the Research Design Plenary Session from the IV STEP 2016 Conference on Prevention, Prediction, Plasticity, and Participation. The Academies of Pediatric and Neurologic Physical Therapy recognize that research in the field of rehabilitation would benefit from using a greater breadth of research designs and invited the authors to present three types of research designs that could advance rehabilitation science: 1) single-case, 2) observational, and 3) qualitative.

As the profession of physical therapy engages in evidence-based practice, a medical research model focused on randomized controlled trials (RCTs) has been promoted to justify and understand the cost and impact of interventions.1 In the common hierarchy of research designs, RCTs and meta-analyses of RCTs are at the top of a pyramidal structure, suggesting they are superior to designs positioned lower in the structure or omitted from it (Figure 1).2, 3

Figure 1. The common hierarchy of research design positions randomized controlled trials (RCTs) and studies analyzing and reviewing these trials at the top. Single-case and qualitative research designs are not depicted in this hierarchy, and observational designs are positioned lower, incorrectly suggesting these categories of studies cannot be as rigorous and informative as RCTs.

Although RCTs can be useful for answering many questions in rehabilitation, especially those related to pharmaceutical trials, other research questions can best be answered using less common research methods and designs. Publications using these designs have increased in disciplines outside the health sciences, such as education and psychology. Federal funding agencies like the Institute of Education Sciences (U.S. Department of Education) have an objective to support the development of improved statistical analysis techniques for these less commonly used methods, recognizing their value in helping us understand how best to optimize therapeutic outcomes. Measures to report and evaluate the quality of research using these designs have been developed.4–6

We define single-case, observational, and qualitative research designs and describe their use in order to inform researchers and clinicians in the rehabilitation science professions. Table 1 lists recommended readings.

Table 1.

Suggested Readings

Carter, R., Lubinsky, J. Rehabilitation research: Principles and applications, 5th edition. St. Louis, MO: Elsevier, Inc.; 2016.
Creswell, J.W. Qualitative inquiry and research design: Choosing among five approaches. Washington, DC: Sage Publications, Inc.; 2013.
Creswell, J.W. Research design: Qualitative, quantitative, and mixed methods approaches. Washington, DC: Sage Publications, Inc.; 2013.
Gast, D.L., Ledford, J.R. Single case research methodology: Applications in special education and behavioral sciences, second edition. New York, NY: Routledge; 2014.
Horn S.D., Corrigan J.D., Dijkers, M.P. Traumatic brain injury rehabilitation comparative effectiveness research: Introduction to the traumatic brain injury-practice based evidence archives supplement. Arch Phys Med Rehabil. 2015;96(8):S173–S177.
Kratochwill, T.R., Levin, J.R. Single-case intervention research. Washington, DC: American Psychological Association; 2014.

Single-case Research Designs

Single-case research designs (SCDs; also called single-subject, single-participant, n=1, or intra-subject designs) include the repeated observation of outcomes (dependent variables) across time under different levels of at least one intervention (manipulated independent variable).7 These studies include phases, which can vary in length from minutes to months or longer, during which the intervention condition is manipulated. Repeated measurements occur throughout each phase, and at least one of the phases is a baseline. Case studies and case series are not examples of SCDs because they do not typically involve controls for the levels of intervention across time.8, 9

SCDs should involve more than one participant to improve external validity, the ability to generalize the results to a larger population.8, 9 For this reason, statisticians and methodologists support use of the term SCD rather than terms suggesting a single participant as a key feature.

SCDs can be designed to test a causal relationship between an intervention and an outcome.8, 9 Therefore, SCDs can have both external validity and internal validity. Causality is demonstrated through repeatability. For instance, when the effects of an intervention are observed immediately after intervention and behavior reverts to baseline when the intervention is removed, repeatability is supported. Repetition can be demonstrated using multiple-phase designs such as ABA, ABAB, or ABABA, where A signifies a baseline phase (no intervention) and B signifies an intervention phase. This design is useful for studies using feedback, task modification, environmental supports, or technologies (e.g., mobility devices, robotics, exoskeletons) when rapid improvement during intervention and reversion to baseline function after the intervention is removed are both expected (Figure 2).

Figure 2. A fictional A1B1A2B2 single-case study design that tracks distance travelled in the community by an individual with movement impairments, with and without a powered mobility device. A1 represents the first and A2 the second baseline phase (distance travelled without the device); B1 represents the first and B2 the second intervention phase (distance travelled with the device). A clear difference in the level/amount of community mobility is observed between periods of device use and nonuse, suggesting the change in amount of mobility was caused by use of the device.
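
To make the replication logic concrete, the sketch below (in Python, with invented distances) summarizes a hypothetical A1B1A2B2 dataset like the one in Figure 2; if the device drives the change, the level shift from baseline to intervention should recur in the second replication.

```python
# A minimal, hypothetical sketch (invented data) of summarizing an
# A1B1A2B2 single-case dataset like the one depicted in Figure 2.
import numpy as np

# Repeated measures of community distance travelled (km/day) per phase.
phases = {
    "A1 (no device)": np.array([0.4, 0.5, 0.3, 0.4, 0.5]),
    "B1 (device)":    np.array([1.8, 2.1, 2.0, 2.2, 1.9]),
    "A2 (no device)": np.array([0.5, 0.4, 0.4, 0.6, 0.5]),
    "B2 (device)":    np.array([2.0, 2.3, 2.1, 2.2, 2.4]),
}

for name, data in phases.items():
    print(f"{name}: mean = {data.mean():.2f} km, sd = {data.std(ddof=1):.2f}")

# Repeatability check: the level shift observed from A1 to B1 should
# recur from A2 to B2 if device use, not a confound, drives the change.
shift_1 = phases["B1 (device)"].mean() - phases["A1 (no device)"].mean()
shift_2 = phases["B2 (device)"].mean() - phases["A2 (no device)"].mean()
print(f"level change, first replication:  {shift_1:.2f} km")
print(f"level change, second replication: {shift_2:.2f} km")
```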

For educational and procedural interventions, immediate effects are not typically observed, and return to baseline function at the end of the intervention phase is neither commonly observed nor desirable. Repeatability, and thus a determination of causality, can instead be achieved when the effects of the intervention are repeated across multiple participants.8, 9 For example, causality cannot be determined from an AB design with one participant, but if similar effects are observed across multiple participants using this design, the case for causality is strengthened. While participants are not randomly assigned to groups, they can be randomly assigned to intervention levels. A causal relationship can be inferred from a multiple-participant AB design if participants are randomly assigned to interventions in which baseline length or start time of the intervention phase is varied.10 This is important when improvements in function could otherwise be attributed to experience, maturation, or recovery. For example, an intervention to improve sitting with a baseline from 4 to 6 months of age and an intervention beginning at 6 months of age is confounded because independent sitting typically emerges at 6 months of age. The sitting intervention study could instead randomly assign participants to begin baseline at 4 months and begin intervention at 5, 6, or 7 months (Figure 3). Similarly, natural recovery, rather than the intervention, may account for changes in outcomes in a stroke rehabilitation study if the baseline runs from 0 to 30 days after stroke and the intervention begins at day 31. Here the baseline could begin immediately after stroke, with participants randomly assigned to begin intervention at 1, 2, or 3 months post stroke.

Figure 3. A fictional multiple-baseline AB single-case study design that tracks independent sitting time for infants at risk for delays. A represents the baseline phase, when no intervention is provided; B represents the intervention phase, when the sitting intervention is provided. All participants began the study at 4 months of age, but Participant 1 began the intervention at 5 months, Participant 2 at 6 months, and Participant 3 at 7 months of age. Independent sitting typically emerges around 6 months of age. Duration of independent sitting changed with respect to the onset of intervention rather than in relation to age, strengthening the suggestion that the intervention caused the change in sitting behavior.
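
The randomization step for a multiple-baseline study like the one in Figure 3 can be very simple. The sketch below is a hypothetical illustration (the participant labels and candidate ages are invented); a real study would document its randomization procedure formally.

```python
# Hypothetical sketch: randomly assigning participants in a multiple-
# baseline AB design to staggered intervention start ages (Figure 3).
import random

random.seed(2016)  # fixed seed so the assignment is reproducible

participants = ["Participant 1", "Participant 2", "Participant 3"]
start_ages = [5, 6, 7]  # candidate intervention onsets in months; baseline begins at 4 months

random.shuffle(start_ages)
for participant, age in zip(participants, start_ages):
    print(f"{participant}: baseline from 4 months; intervention begins at {age} months")
```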

A variety of statistical techniques can be used to analyze data from SCDs, and the U.S. Department of Education has funded the development of improved analysis techniques for these designs.8, 9, 11, 12 One more recent technique is randomization analysis, in which p values are determined for individual participants and then combined across multiple participants in the same study using meta-analytic techniques.13–15
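
The sketch below illustrates the general idea behind a start-point randomization test for one participant's AB data. It is a simplified stand-in for dedicated tools such as the SCRT R package cited above,13, 14 and the data and admissible start points are invented; per-participant p values obtained this way can then be combined across participants.

```python
# A minimal sketch of a single-participant AB randomization test,
# assuming the intervention start point was randomly chosen from a
# set of admissible start points. Data are invented for illustration.
import numpy as np
from scipy.stats import combine_pvalues

def ab_randomization_p(y, actual_start, admissible_starts):
    """p = proportion of admissible start points whose B-minus-A mean
    difference is at least as large as the one at the actual start."""
    def stat(start):
        return y[start:].mean() - y[:start].mean()
    observed = stat(actual_start)
    return np.mean([stat(s) >= observed for s in admissible_starts])

# Invented sitting durations (s/session); intervention began at index 6,
# which was randomly drawn from indices 4 through 9.
y = np.array([2, 3, 2, 4, 3, 4, 9, 11, 12, 14, 13, 15], dtype=float)
p = ab_randomization_p(y, actual_start=6, admissible_starts=range(4, 10))
print(f"participant p value: {p:.3f}")

# Per-participant p values can then be combined meta-analytically,
# e.g., with Fisher's method (the other values here are invented).
stat, p_combined = combine_pvalues([p, 0.10, 0.08], method="fisher")
print(f"combined p value across participants: {p_combined:.3f}")
```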

The key characteristics of RCTs can be incorporated into SCDs, including randomization of participants to conditions, comparison of intervention outcomes with control outcomes, identification of causal relations between interventions and outcomes, and generalization of the outcomes.8, 10, 16 SCDs can also address confounding variables, a key limitation of RCTs and other group designs.8, 9 When designing group studies, researchers must identify confounding variables that may impact the outcome; the list can be long, including strength, age, time since injury, gender, social support, medication use, and cognitive level of participants. Researchers aim to control these confounding variables by randomizing participants to groups and recruiting large samples, with the assumption that levels of confounding variables are evenly represented among the groups. However, it is not always possible to identify all confounding variables or to recruit sufficient sample sizes. An important benefit of SCDs is that each participant serves as his or her own control, so most confounding variables are the same across conditions for each participant.

SCDs require fewer participants and resources to answer rehabilitation questions, and they support high-quality, controlled research in real-world natural and clinical settings. SCDs also reduce the gap between research and practice by reporting individual changes, the level at which therapists serve people, rather than group averages that may not represent any one individual. By collecting data more frequently on fewer participants, SCDs provide details of process and change in relation to interventions. Finally, these designs offer an ethical option when withholding an intervention would compromise safety or ethics.8, 9

Observational Research Designs

There are a variety of observational research designs. Observational studies compare differences among people with a condition observed at a single point in time (i.e., cross-sectional designs), change over time among people with a condition (i.e., case series or, with large datasets, longitudinal designs), change over time among people with and without the condition of interest (i.e., retrospective or prospective cohort designs), and differences between people with a condition and people selected for specific similarities who do not have the condition (i.e., case-control designs). These types of studies can take advantage of large datasets.

One difference between observational designs and experimental designs (e.g., RCTs) is that causality cannot be concluded from relationships described in observational studies. Observed associations may be due to a variable that was not measured but is related to both predictor and outcome. Randomization in experimental designs accounts for this possibility in a way that observational designs cannot. However, causality can be suspected when an association has specific characteristics: the relationship is found consistently in multiple studies with varying samples, methods, and predictors; a monotonic relationship is observed between a variable and a hypothesized effect (sometimes called a "dose relationship" because the greater the magnitude of the predictor, the greater the effect on the outcome); or a study includes all known or suspected covariates in an analysis of the relationship, thus ruling them out as sources of the association. Some maintain, however, that causality cannot be established without an experimental design.

This decade has seen the creation and development of multiple large datasets with outcomes for populations with neurological conditions, including traumatic brain injury (TBI). Two of these datasets were specifically designed to evaluate rehabilitation outcomes following TBI: the TBI Model Systems National Database (TBIMS-NDB)17 and the TBI Practice Based Evidence (TBI-PBE) Study.18 Both datasets examine rehabilitation outcomes using observational designs, yet differ in structure and composition and, as a result, the observational methods used to examine treatment outcomes.

The TBIMS-NDB has been funded by the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR), U.S. Department of Health and Human Services, since 1987.17 It houses data on participants older than 16 years of age with moderate to severe TBI who received inpatient rehabilitation at one of the participating centers. Data are collected about pre-injury status, injury and inpatient rehabilitation, and longitudinal progress at 1, 2, and 5 years post-injury and every 5 years thereafter. As of March 31, 2016, the TBIMS-NDB included approximately 15,000 participants who had given 50,000 interviews, some as late as 25 years post-injury. Critical to its utility as a longitudinal dataset, the TBIMS-NDB has follow-up rates exceeding 82% across all years.19 Standardized protocols for data collection result in very high test-retest reliabilities for follow-up measures.20

The TBIMS-NDB has yielded hundreds of peer-reviewed articles. One important contribution is its longitudinal analyses. Analytic techniques, including individualized growth curve analysis, support a different understanding of long-term recovery after TBI. As the TBIMS-NDB gains cases at the 15-, 20-, and 25-year follow-ups, more insight is expected into interactions with comorbidities as well as the effects of aging. These discoveries are aided by increasingly sophisticated analytic techniques for longitudinal data.
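
As a rough illustration of the growth curve approach described above, the sketch below fits a linear mixed model with person-specific intercepts and slopes to simulated long-format follow-up data. All names and values here are invented; actual TBIMS-NDB analyses are considerably more sophisticated.

```python
# Hypothetical sketch of an individualized growth curve analysis on
# simulated follow-up data (not TBIMS-NDB data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
waves = [1, 2, 5, 10]  # follow-up years post-injury
rows = []
for pid in range(200):  # 200 simulated participants
    intercept = rng.normal(60.0, 10.0)  # person-specific starting level
    slope = rng.normal(0.5, 1.0)        # person-specific yearly change
    for yr in waves:
        rows.append({"id": pid, "years": yr,
                     "outcome": intercept + slope * yr + rng.normal(0.0, 3.0)})
df = pd.DataFrame(rows)

# Random intercepts and slopes let each participant follow an individual
# trajectory around the population-average growth curve.
model = smf.mixedlm("outcome ~ years", data=df, groups=df["id"],
                    re_formula="~years")
print(model.fit().summary())
```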

Another TBI database, the TBI-PBE, uses a "Practice-Based Evidence" (PBE) approach to observational research that was first used by Horn in acute hospital care and nursing homes.18 PBE studies take advantage of naturally occurring variations in treatment patterns.18, 21 Data are analyzed regarding the effect of interventions on outcomes, taking into account a myriad of patient differences. PBE studies differ from other observational studies in that they include detailed information about the participants, their disease states, and the interventions. This research technique was first extended to rehabilitation in a study of stroke,22 followed by studies of spinal cord injury23 and joint replacement.24

The TBI-PBE study was funded in 2007 to collect data on participants over 13 years of age with TBI treated at 10 rehabilitation centers in the United States and Canada.25 The dataset contains information about treatment during rehabilitation: point-of-care (POC) forms capture the details of each treatment episode, each day, provided by each member of the rehabilitation team. There is an average of 165 POC forms per patient, nearly 6 for each day of stay. The TBI-PBE dataset has yielded many results, and it continues to be analyzed.

Propensity score analysis,26 used in the TBI-PBE study, is an analytic technique that is gaining attention. A propensity score is calculated for each case in the dataset and represents the probability that a given individual would receive a specific treatment. The score is derived from prediction models applied to baseline characteristics known prior to initiation of the treatment. There are multiple approaches to matching samples on their propensity scores that, when successful, allow the researcher to compare outcomes for a group of participants who received a treatment with outcomes for a group that had an equal chance of receiving the treatment but did not. Propensity score analysis is still evolving and requires skilled statisticians to assist the researcher; however, the potential to simulate randomization in historical datasets could make it a powerful tool.
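
A minimal sketch of that workflow on simulated data follows: a logistic regression estimates each case's propensity score, and each treated case is matched to the untreated case with the nearest score before outcomes are compared. Everything here is invented for illustration; the published TBI-PBE analyses are far more elaborate.

```python
# Hypothetical sketch of propensity score matching on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 3))  # baseline covariates known before treatment
p_treat = 1.0 / (1.0 + np.exp(-(X @ np.array([0.8, -0.5, 0.3]))))
treated = rng.binomial(1, p_treat).astype(bool)
outcome = X @ np.array([1.0, 0.5, -0.2]) + 2.0 * treated + rng.normal(size=n)

# 1) Propensity score: modeled probability of receiving the treatment.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2) Match each treated case to the untreated case with the nearest score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))

# 3) Compare outcomes between treated cases and their matched controls.
effect = outcome[treated].mean() - outcome[~treated][idx.ravel()].mean()
print(f"matched treatment-effect estimate: {effect:.2f} (simulated truth: 2.0)")
```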

The TBIMS-NDB and TBI-PBE studies illustrate the application of observational techniques to the study of rehabilitation outcomes. Their strengths differ: the detailed treatment data in the TBI-PBE study provide opportunities for understanding inpatient TBI rehabilitation interventions, while the extensive longitudinal scope of the TBIMS-NDB offers an unprecedented window into the long-term effects of TBI. A characteristic of both datasets is the rigor used during data collection. As datasets become more available with the electronic capture of clinical records, we have greater opportunities to apply observational methods. However, our methods will be only as good as the data captured; greater prospective quality control in record keeping may be necessary to take full advantage of the electronic medical record for research.

Qualitative Research Designs

Designs in qualitative research are similar to those employed in positivist quantitative research (positivism being the scientific philosophy of measurable verification), but those familiar designs are used differently.16 Post-positivist qualitative research presumes and privileges multiple subjective truths in human experience, as opposed to the objectivist single truth of positivist quantitative research. Hence, within-group variation of an experience and subjective reflection on that experience are important in determining the nature and scope of an investigation.

The terms design and methodology are sometimes used synonymously, typically obscuring the full range of methods from design through analysis. Data collection techniques are sometimes misrepresented as a method, for example, reference to the focus group technique as a stand-alone qualitative method.28 Such use obscures understanding of the overarching methods employed, from approach through design to specific methodological traditions and techniques, limiting substantiation of scientific quality.27 Critically, the intent of a study, along with the specific research questions to be investigated, must guide selection of the study design as well as a specific method. Data collection techniques are matched to the method rather than to the design. Sources of text or visual data, such as videos or photographs, include spoken and written words from individuals and groups; data are obtained from interviews, events, documents, or other records of the experience under study. Clarity regarding alignment of approach and method with design and data collection is essential to successfully designed qualitative research and delivery of high-quality results.

Qualitative research uses a variety of specifically defined methods, anchored in specific behavioral, cultural, or social theories, in philosophy, or in an explicitly atheoretical orientation, to describe and interpret human experience. Where quantitative research most broadly aims to tell clinicians what to do under a specific set of circumstances, qualitative inquiry commonly attends more closely to illuminating how to be, what it is to experience a state, process, or situation, and how these experiences feel or what they mean. These human experiences represent various facets of being and meaning, with consequences for the person receiving physical therapy as well as for the physical therapist.

Qualitative research, like quantitative research, uses time as a prominent dimension of human experience to define research design. Time undergirds processes, relationships, and other experiences of interest in physical therapy research. Time is often represented in chronological terms, for example, at different points after therapy is initiated. However, gauging response to or understanding of an event like an injury or illness may be better represented in narrative or experiential time. Qualitative designs offer the capacity to use cross-sectional or longitudinal time perspectives by matching data collection methods (e.g., individual or group interviews as opposed to field observation) to the representation of time specified in the design of the study. Finally, any qualitative design is amenable to the different levels of analysis required to address an individual, group, community, or defined case as the focus of inquiry.

Time frames the possibilities for describing and interpreting phenomena of interest in qualitative research design. The phenomenon may be approached in "snapshot" form using cross-sectional designs. Cross-sectional design offers flexibility in accruing participants and can be used with the goal of understanding a single point or multiple points of interest in an experience. Isolating a point or points of interest in the phenomenon under study implies a chronological perspective and suggests that participants are recruited in relation to predetermined times. Alternatively, emphasizing narrative or experiential time implies using cross-sectional design to gather a variety of perspectives from different points across an experience. This sense of cross-sectional design enables participants to reflect on their experiences and allows investigators to analyze across an experience, capturing how it evolves over time across participants. In contrast, longitudinal design in qualitative work offers the potential to represent intra-individual experiences over time. Narrative methods, phenomenology, and participatory methods like photo-voice and appreciative inquiry typically value longitudinal design with engagement of participants over the duration of an experience. Qualitative case study design may make use of either chronological or narrative interpretations of time as the case is defined and data sources are identified.29

Qualitative research relies on participant engagement, over time, to delimit the scope and nature of an investigation. As a result, the role of participants in qualitative studies of any design is prioritized in relation to that of the investigator. Typically, the investigator becomes the agent of data collection, in contrast to positivist quantitative science, where measures and instruments are used. There is a range of roles for participants depending on the investigator's approach and on the method used. Participant involvement may appear similar to that in quantitative studies, where qualitative data are collected and analysis may be verified through member checking and other strategies. Conversely, participants may be engaged from the inception of a study, co-creating it and serving as researchers, as in participatory and action methods. Qualitative case study design uses participant engagement differently, depending on the nature of the case. Reflection on the extent of partnership with the participants to whom the phenomenon under investigation belongs helps refine design, leading to more precise identification of the method best suited to the phenomenon and the investigative partnership.

Qualitative designs, while similar to quantitative designs, differ in the way underlying principles, drawn from post-positivist paradigms of science, guide design selection and development. The role of participants, the representation of time, and the focus of analysis are critical to defining a design. Aligning design with aim, method, data collection, and analytic technique supports a successful qualitative study. Design is most broadly understood as cross-sectional or longitudinal, with either category requiring a match to the elements of the experience to be explored. The option of defining a case as the focus of a study further refines qualitative design when considering complex, multilayered phenomena. Determining the extent of partnership desired with participants further refines design and thus the commensurate identification of method and data collection techniques.

Discussion

The profession of physical therapy needs to acknowledge that many rehabilitation research questions can best be answered using single-case, observational, or qualitative research designs. We are a profession that values problem-solving and determining the optimal intervention for a person. We teach our students to avoid "one size fits all" approaches that claim to work for everyone. Rather, we educate students to think critically about each person's needs, to review the evidence on potential interventions, and to determine the optimal intervention for each client.

We propose that this same thought process should be applied when selecting the best design to answer a specific research question and when teaching students to review literature, articles for publication, and grants. A redesign of the common research design hierarchy would be useful. The current hierarchical ranking presumes that designs within each category are all of higher quality than any design in a lower-ranking category. A redesign should instead focus on the design characteristics that strengthen internal and external validity, with examples of a variety of designs that illustrate each characteristic. This approach would replace the ranking of specific design categories, emphasize research designs that now fall lower on the pyramid, and incorporate useful designs that are not currently listed in the pyramid.

Physical therapists are typically well educated to appreciate the advantages of RCTs and meta-analyses but may lack education in other types of research designs and methods. This can result in a variety of inaccurate assumptions. For instance, RCTs and meta-analyses of RCTs are generally considered optimal for answering most research questions related to intervention, and other designs are often dismissed as weaker or simply as a step in the process of moving toward an RCT.1, 2 Qualitative measures may be dismissed as not being objective.30 Basic statistics courses may lead students to dismiss studies with fewer than 30 participants as too small to be useful.31 These assumptions reflect a lack of understanding of the benefits of less commonly used research designs, and they may have negative consequences, such as enabling the dismissal of entire bodies of literature and research designs that can be informative for clinicians and deterring those without extensive resources from engaging in the research process.

We advocate for a problem-solving approach to research design and evidence review, whereby the research question and issues of feasibility, ethics, and safety are considered when determining the optimal research design. We propose that this will result in the implementation of a larger variety of research designs and will enhance the quality and depth of our understanding of the interventions and the individuals we serve.

Acknowledgments

Grant Support: Dr. Lobo’s contribution to this manuscript was supported by the Eunice Kennedy Shriver National Institute of Child Health & Human Development (1R21HD076092-01A1, Lobo, PI). Dr. Corrigan’s contribution was supported by a Traumatic Brain Injury Model System Centers grant from the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) to The Ohio State University (grant no. 90DP0040-01-00). This article does not reflect the official policy or opinions of NIDILRR or NICHD and does not constitute an endorsement of the individuals or their programs.

Footnotes

The authors declare no conflict of interest.

Contributor Information

Dr. Michele A. Lobo, Department of Physical Therapy, Biomechanics & Movement Science Program, University of Delaware, Newark, DE, USA.

Dr. Sarah H. Kagan, Department of Biobehavioral Health Sciences, School of Nursing, University of Pennsylvania, Philadelphia, PA, USA.

Dr. John D. Corrigan, Department of Physical Medicine & Rehabilitation, The Ohio State University, Columbus, OH, USA.

References

1. Jewell DV. Guide to evidence-based physical therapist practice. 3rd ed. Burlington, MA: Jones & Bartlett Learning; 2015.
2. Garattini S, Jakobsen JC, Wetterslev J, et al. Evidence-based clinical practice: Overview of threats to the validity of evidence and how to minimise them. Eur J Intern Med. 2016;32:13–21. doi: 10.1016/j.ejim.2016.03.020.
3. Graham JE, Kannarkar AM, Ottenbacher KJ. Small sample research designs for evidence-based rehabilitation: Issues and methods. Arch Phys Med Rehabil. 2012;93(8):S111–S116. doi: 10.1016/j.apmr.2011.12.017.
4. Tate RL, Perdices M, McDonald S, Togher L, Rosenkoetter U. The design, conduct and report of single-case research: Resources to improve the quality of the neurorehabilitation literature. Neuropsychol Rehabil. 2014;24(3–4):315–331. doi: 10.1080/09602011.2013.875043.
5. Tate RL, Perdices M, Rosenkoetter U, et al. The single-case reporting guideline in behavioural interventions (SCRIBE) 2016 statement (reprinted from Arch Sci Psychol. 2016;4:1–9). Phys Ther. 2016;96(7):932. doi: 10.2522/ptj.2016.96.7.e1.
6. Trainor AA, Graue E. Evaluating rigor in qualitative methodology and research dissemination. Rem Spec Educ. 2014;35(5):267–274.
7. Onghena P. Single case designs. In: Everitt BS, Howell DC, eds. Encyclopedia of Statistics in Behavioral Science. Vol 4. Chichester: John Wiley & Sons, Ltd; 2005:1850–1854.
8. Kratochwill TR, Levin JR. Single-case intervention research. Washington, DC: American Psychological Association; 2014.
9. Richards SB, Taylor RL, Ramasamy R. Single subject research: Applications in educational and clinical settings. Belmont, CA: Wadsworth; 2013.
10. Kratochwill TR, Levin JR. Enhancing the scientific credibility of single-case intervention research: Randomization to the rescue. Psychol Methods. 2010;15(2):124–144. doi: 10.1037/a0017736.
11. Levin JR, Ferron JM, Kratochwill TR. Nonparametric statistical tests for single-case systematic and randomized ABAB...AB and alternating treatment intervention designs: New developments, new directions. J School Psychol. 2012;50(5):599–624. doi: 10.1016/j.jsp.2012.05.001.
12. Parker RI, Hagan-Burke S. Useful effect size interpretations for single case research. Behav Ther. 2007;38(1):95–105. doi: 10.1016/j.beth.2006.05.002.
13. Bulte I, Onghena P. An R package for single-case randomization tests. Behav Res Meth. 2008;40(2):467–478. doi: 10.3758/brm.40.2.467.
14. Bulte I, Onghena P. Randomization tests for multiple-baseline designs: An extension of the SCRT-R package. Behav Res Meth. 2009;41(2):477–485. doi: 10.3758/BRM.41.2.477.
15. Gage NA, Lewis TJ. Hierarchical linear modeling meta-analysis of single-subject design research. J Spec Educ. 2014;48(1):3–16.
16. Ponterotto JG. Qualitative research in counseling psychology: A primer on research paradigms and philosophy of science. J Couns Psychol. 2005;52(2):126–136.
17. Dijkers MP, Harrison-Felix C, Marwitz JH. The traumatic brain injury model systems: History and contributions to clinical service and research. J Head Trauma Rehab. 2010;25(2):81–91. doi: 10.1097/HTR.0b013e3181cd3528.
18. Horn SD, Corrigan JD, Dijkers MP. Traumatic brain injury rehabilitation comparative effectiveness research: Introduction to the traumatic brain injury-practice based evidence archives supplement. Arch Phys Med Rehabil. 2015;96(8):S173–S177. doi: 10.1016/j.apmr.2015.03.027.
19. TBI Model Systems National Data and Statistical Center. Traumatic brain injury model systems national database. 2013. http://www.tbidsc.org. Accessed February 9, 2013.
20. Bogner J, Whiteneck G, MacDonald J, et al. Test-retest reliability of traumatic brain injury outcome measures: A TBI model systems study. J Head Trauma Rehab. In review. doi: 10.1097/HTR.0000000000000291.
21. Effgen SK, McCoy SW, Chiarello LA, Jeffries LM, Bush H. Physical therapy-related child outcomes in school: An example of practice-based evidence methodology. Pediatr Phys Ther. 2016;28(1):47–56. doi: 10.1097/PEP.0000000000000197.
22. Horn SD, DeJong G, Smout RJ, Gassaway J, James R, Conroy B. Stroke rehabilitation patients, practice, and outcomes: Is earlier and more aggressive therapy better? Arch Phys Med Rehabil. 2005;86(12):S101–S114. doi: 10.1016/j.apmr.2005.09.016.
23. Whiteneck GG, Gassaway J. SCIRehab uses practice-based evidence methodology to associate patient and treatment characteristics with outcomes. Arch Phys Med Rehabil. 2013;94(4):S67–S74. doi: 10.1016/j.apmr.2012.12.022.
24. DeJong G, Horn SD, Smout RJ, Tian WQ, Putman K, Gassaway J. Joint replacement rehabilitation outcomes on discharge from skilled nursing facilities and inpatient rehabilitation facilities. Arch Phys Med Rehabil. 2009;90(8):1284–1296. doi: 10.1016/j.apmr.2009.02.009.
25. Horn SD, Corrigan JD, Bogner J, et al. Traumatic brain injury-practice based evidence study: Design and patients, centers, treatments, and outcomes. Arch Phys Med Rehabil. 2015;96(8):S178–S196. doi: 10.1016/j.apmr.2014.09.042.
26. Horn SD, Kinikini M, Moore LW, et al. Enteral nutrition for patients with traumatic brain injury in the rehabilitation setting: Associations with patient preinjury and injury characteristics and outcomes. Arch Phys Med Rehabil. 2015;96(8):S245–S255. doi: 10.1016/j.apmr.2014.06.024.
27. Creswell JW, Hanson WE, Clark Plano VL, Morales A. Qualitative research designs: Selection and implementation. Couns Psychol. 2007;35(2):236–264.
28. Sim J. Collecting and analysing qualitative data: Issues raised by the focus group. J Adv Nurs. 1998;28(2):345–352. doi: 10.1046/j.1365-2648.1998.00692.x.
29. Baxter P, Jack S. Qualitative case study methodology: Study design and implementation for novice researchers. Qual Report. 2008;13(4):544–559.
30. Goodwin WL, Goodwin LD. Understanding quantitative and qualitative research in early childhood education. New York, NY: Teachers College Press; 1996.
31. Cohen J. Things I have learned (so far). Am Psychol. 1990;45(12):1304–1312.
