An adaptive intervention is a set of diagnostic, preventive, therapeutic, or engagement strategies that are used in stages, and the selection of the intervention at each stage is based on defined decision rules. At the beginning of each stage in care, treatment may be changed by the clinician to suit the needs of the patient. Typical adaptations include intensifying an ongoing treatment or adding or switching to another treatment. These decisions are made in response to changes in the patient’s status, such as a patient’s early response to, or engagement with, a prior treatment. The patient experiences an adaptive intervention as a sequence of personalized treatments.
Adaptive interventions are necessary because, for many disorders, the optimal sequence of interventions differs among patients. Not all patients respond the same way or have the same adverse event profile; not all patients engage with treatment in the same way; many disorders have a waxing and waning course; and comorbidities arise or become more salient during the course of care. Fortney et al1 constructed a two-stage, adaptive telecare intervention to treat complex psychiatric disorders in underserved, rural, primary care settings. The investigators used a sequential, multiple assignment, randomized trial (SMART)2 design to answer questions concerning the most effective mode of intervention delivery at two critical decision points in the adaptive telecare intervention.
Use of Method
Description of a SMART Design
A SMART is a type of multi-stage, factorial, randomized trial, in which some or all participants are randomized at two or more decision points. Whether a patient is randomized at the second or a later decision point, and the available treatment options, may depend on the patient’s response to prior treatment.
In a prototypical SMART (Figure 1), all participants are randomized at the beginning of stage 1 to treatment A or B. Responders continue to receive their assigned stage 1 treatment, and non-responders are re-randomized at the beginning of stage 2 to treatment C or D. SMARTs often include multiple adaptive interventions, each represented by a collection of treatment paths offered to individual patients across the multiple stages. The SMART in Figure 1 has four adaptive interventions, each based on a pair of the six treatment paths. One of them is: “Start with treatment A in stage 1 and assess the patient for early signs of response at the end of stage 1. If the patient is an early responder, continue treatment A in stage 2; whereas, if the patient is a non-responder, switch to treatment C in stage 2.” This could be represented as the pair (A → responder → A, A → non-responder → C). The other three adaptive interventions are (A → responder → A, A → non-responder → D); (B → responder → B, B → non-responder → C); and (B → responder → B, B → non-responder → D).
Figure 1: An example SMART; R denotes randomization.
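The structure of the prototypical SMART described above can be expressed as a short simulation. This is an illustrative sketch only: the response probability and the 1:1 randomization at each decision point are assumptions made for the example, not features reported in the trial.

```python
import random

random.seed(0)

def run_smart(n_patients, p_respond=0.4):
    """Simulate treatment paths through the prototypical two-stage SMART.

    p_respond is a hypothetical probability of early response;
    randomization is assumed to be 1:1 at both decision points.
    Returns a list of (stage1, responder, stage2) tuples.
    """
    paths = []
    for _ in range(n_patients):
        stage1 = random.choice(["A", "B"])       # first randomization
        responder = random.random() < p_respond  # assessed at end of stage 1
        if responder:
            stage2 = stage1                      # responders continue stage 1 treatment
        else:
            stage2 = random.choice(["C", "D"])   # non-responders re-randomized
        paths.append((stage1, responder, stage2))
    return paths

paths = run_smart(1000)
# Each patient's path is consistent with specific adaptive interventions,
# e.g. the pair (A -> responder -> A, A -> non-responder -> C):
consistent_with_AC = [p for p in paths
                     if p[0] == "A" and (p[2] == "A" if p[1] else p[2] == "C")]
print(len(consistent_with_AC), "of", len(paths),
      "patients followed paths consistent with the (A, C) adaptive intervention")
```

Note that every patient's observed path is consistent with one or more of the four adaptive interventions embedded in the design, which is what allows a single SMART to inform multiple adaptive interventions at once.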
SMART designs can be varied to address different scientific questions.3–5 For example, SMARTs can have more than two stages, and may randomize to more than two options at each stage. Further, they can be varied due to pragmatic considerations. For example, in Figure 1, stage 2 treatments C and D need not be the same for groups A and B; the definition of non-response need not be the same for groups A and B; and all individuals need not transition to stage 2 at the same time point.
Why Is a SMART Design Used?
A SMART is one type of trial that can be used to construct a high-quality adaptive intervention. SMARTs do this by answering multiple questions across multiple stages, such as: (i) What is the best stage 1 treatment? (ii) What baseline measures should be used to make stage 1 treatment decisions? (iii) At what points, and based on what measures, should the need for a change in treatment be assessed? (iv) For patients who do not respond to stage 1 treatment, should treatment be changed or intensified? and (v) What ongoing measures should be used to make subsequent-stage treatment decisions? To address questions concerning when to transition a patient to the next stage of treatment, the timing of the clinical assessments, or the duration of stage 1 treatment, could be randomized.
While any one of these questions may be answered in a one-stage randomized trial design, a SMART can answer multiple questions and provide evidence regarding interactions among sequential treatments, beneficial or harmful, that may otherwise be missed in a one-stage randomized trial design. For example, a treatment prescribed early during care may magnify the therapeutic effect of a treatment provided subsequently for non-responders (i.e., positive synergy). The analysis of SMART data is not complex, but researchers must take care to appropriately account for the effects of interventions across multiple stages6–7 and avoid causal bias.8
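One standard way to account for the multi-stage structure is inverse-probability weighting: because responders pass through one randomization and re-randomized non-responders pass through two, they are weighted by the inverse of their randomization probabilities when estimating the mean outcome under an adaptive intervention.6 The sketch below illustrates this on simulated data; the outcome model, effect sizes, and 1:1 randomization probabilities (giving weights of 2 for responders and 4 for non-responders) are assumptions made for the example.

```python
import random

random.seed(1)

def simulate_patient():
    """One patient's SMART data with a simulated continuous outcome.
    All effect sizes here are hypothetical, for illustration only."""
    stage1 = random.choice(["A", "B"])
    responder = random.random() < 0.4
    stage2 = stage1 if responder else random.choice(["C", "D"])
    # Hypothetical outcome model: A is slightly better in stage 1,
    # and C helps non-responders in stage 2.
    outcome = (random.gauss(50, 10)
               + (2 if stage1 == "A" else 0)
               + (3 if (not responder and stage2 == "C") else 0))
    return stage1, responder, stage2, outcome

def weighted_mean(data, first, second):
    """IPW estimate of the mean outcome under the adaptive intervention
    (first -> responder -> first, first -> non-responder -> second).
    With 1:1 randomization, responders get weight 2 and re-randomized
    non-responders get weight 4 (inverses of the probabilities of
    following their observed treatment paths)."""
    num = den = 0.0
    for s1, resp, s2, y in data:
        if s1 != first:
            continue                      # not consistent in stage 1
        if resp and s2 == first:
            w = 2.0                       # responder: one randomization
        elif (not resp) and s2 == second:
            w = 4.0                       # non-responder: two randomizations
        else:
            continue                      # path inconsistent with this intervention
        num += w * y
        den += w
    return num / den

data = [simulate_patient() for _ in range(20000)]
print("(A, C) estimate:", round(weighted_mean(data, "A", "C"), 1))
print("(B, D) estimate:", round(weighted_mean(data, "B", "D"), 1))
```

The weighting corrects for the fact that non-responders are spread across two second-stage options, so an unweighted mean over consistent patients would over-represent responders.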
Limitations
There are several common misconceptions regarding SMARTs. First, SMARTs generally do not provide definitive evidence of the effectiveness of an adaptive intervention. The objective of a SMART is to empirically construct a high-quality adaptive intervention. Following a SMART, researchers may decide to evaluate an adaptive intervention in a confirmatory randomized trial. Second, while the goal of a SMART is to develop an adaptive intervention, a SMART is generally not an adaptive randomized trial,9 namely a trial whose design characteristics (e.g., randomization probabilities) are varied based on accruing data. Third, it is common to conflate the variables used to make treatment decisions in an adaptive intervention (e.g., whether a patient is a responder in Figure 1) with the study’s research assessments. However, the measures used in an adaptive intervention are generally assessments that would be used in routine clinical practice, whereas not all research assessments would be.
How Was the SMART Design Used?
Fortney et al1 were interested in two sets of questions: (1) What is the effect of a teleclinician (i.e., an off-site psychiatrist/psychologist) intervening directly by video with a patient at the clinic versus indirectly by providing patient-specific support to the patient’s primary care physician? And are there baseline patient factors that ought to inform the decision to use a direct versus an indirect telecare approach as an initial approach? (2) Among patients who were offered direct video encounters but did not engage in them by month 6, what is the effect of offering treatment by telephone call to the patient’s home versus not? And are there ongoing patient factors that should be used to inform the decision to intervene by telephone versus not for those patients?
To answer these questions, patients were randomized to direct versus indirect telecare in stage 1. Direct telecare included patient monitoring for engagement. At the end of stage 1 (month 6), direct telecare patients were classified as engagers or non-engagers (≤2 telehealth encounters); non-engagers were re-randomized in stage 2 to receive a telephone call at their home versus not. The trial’s primary research outcome was the Mental Health Component Summary (MCS) at month 12. Patient factors were collected at baseline and throughout stage 1 to investigate their utility as additional tailoring variables.
How Should a SMART Design Be Interpreted?
In the study by Fortney et al,1 the findings did not support an effect of direct versus indirect telecare in stage 1 on MCS at month 12 (β = 1.0; 95% CI, −0.8 to 2.8). The article also reports results from a secondary aim: to compare the provision of telephone calls to the home versus not among non-engagers to direct telecare. The findings did not suggest an effect of home telephone calls on MCS at month 12 (β = 2.0; 95% CI, −1.7 to 5.7) among non-engagers. In observational analyses, mental health outcomes were observed to improve over time.
Not reported were results of analyses examining (1) whether certain baseline factors could be used to determine stage 1 direct versus indirect telecare, and (2) whether certain baseline and other factors collected during stage 1 direct care could be used to determine whether to make a telephone call to the home among non-engagers at month 6 in stage 2.
Acknowledgements:
The authors acknowledge the support of the National Institutes of Health (R01DA039901; R01MH114203; R01HD095973; R01DA047279; and P50DA054039), Institute for Education Sciences (R324B220001), and Patient Centered Outcomes Research Institute (ME-2020C3-20925). The authors would like to thank Ian Burnette, from the Data Science for Dynamic Intervention Decision-making Center (d3c) at the University of Michigan, for their helpful suggestions, and for creating the artwork used in this article.
References
1. Fortney JC, Bauer AM, Cerimele JM, et al. Comparison of teleintegrated care and telereferral care for treating complex psychiatric disorders in primary care: a pragmatic randomized comparative effectiveness trial. JAMA Psychiatry. 2021;78(11):1189. doi:10.1001/jamapsychiatry.2021.2318
2. Murphy SA. An experimental design for the development of adaptive treatment strategies. Stat Med. 2005;24(10):1455–1481. doi:10.1002/sim.2022
3. Lei H, Nahum-Shani I, Lynch K, Oslin D, Murphy SA. A “SMART” design for building individualized treatment sequences. Annu Rev Clin Psychol. 2012;8:21–48. doi:10.1146/annurev-clinpsy-032511-143152
4. Projects Using SMARTs. d3center. Accessed August 8, 2022. https://d3c.isr.umich.edu/experimental-designs/sequential-multiple-assignment-randomized-trials-smarts/projects-using-smarts/
5. Tsiatis AA, Davidian M, Holloway ST, Laber EB. Sequential multiple assignment randomized trials. In: Dynamic Treatment Regimes: Statistical Methods for Precision Medicine. 2020. Accessed August 8, 2022. https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&db=nlabk&AN=2345849
6. Nahum-Shani I, Qian M, Almirall D, et al. Experimental design and primary data analysis methods for comparing adaptive interventions. Psychol Methods. 2012;17(4):457–477. doi:10.1037/a0029372
7. Nahum-Shani I, Qian M, Almirall D, et al. Q-learning: a data analysis method for constructing adaptive interventions. Psychol Methods. 2012;17(4):478–494. doi:10.1037/a0029373
8. Holmberg MJ, Andersen LW. Collider bias. JAMA. 2022;327(13):1282–1283. doi:10.1001/jama.2022.1820
9. Berry DA. Adaptive clinical trials: the promise and the caution. J Clin Oncol. 2011;29(6):606–609. doi:10.1200/JCO.2010.32.2685
