Author manuscript; available in PMC: 2022 Nov 27.
Published in final edited form as: J Clin Epidemiol. 2021 Nov 8;142:152–160. doi: 10.1016/j.jclinepi.2021.11.007

Sequential multiple assignment randomized trial studies should report all key components: a systematic review

Theophile Bigirumurame a,*, Germaine Uwimpuhwe b, James Wason a
PMCID: PMC7613855  EMSID: EMS157114  PMID: 34763037

Abstract

Objective

Sequential Multiple Assignment Randomized Trial (SMART) designs allow multiple randomizations of participants; this allows assessment of stage-specific questions (individual randomizations) and adaptive interventions (i.e. treatment strategies). We assessed the quality of reporting of the information required to design SMART studies.

Study design and setting

We systematically searched four databases (PubMed, Ovid, Web of Science and Scopus) for all trial reports, protocols, reviews, and methodological papers which mentioned SMART designs up to June 15, 2020.

Results

Of the 157 selected records, 12 (7.64%) were trial reports, 24 (15.29%) were study protocols, 91 (57.96%) were methodological papers, and 30 (19.11%) were review papers. All the trials were powered using stage-specific aims. Only four (33.33%) of the trials reported the parameters required for sample size calculations. A small number of the trials (16.67%) were interested in determining the best embedded adaptive interventions. Most of the trials did not report information about multiple testing adjustment.

Furthermore, most records reported designs that mainly focused on stage-specific aims.

Conclusions

Some features of SMART designs are seldom reported and/or used. Furthermore, studies using this design tend not to adequately report information about all the design parameters, limiting their transparency and interpretability.

Keywords: Sequential multiple assignment randomized trial, Adaptive intervention, Dynamic treatment regimens, Adaptive treatment strategies, Multistage treatment strategies, Treatment policies

1. Introduction

An adaptive intervention (AI), sometimes referred to as a dynamic treatment regimen (DTR), is a treatment strategy that formalizes the personalization of treatment through established decision rules that recommend when and how the treatment should change. This strategy accounts for the patient’s treatment history and response to those treatments [1–4]. Often there will be a very large number of potential AIs that could be used, and it is difficult to collect robust information on which perform best.

An AI consists of four key elements: 1) critical decision point(s), at which it must be decided which intervention to begin with, when and how to measure signs of response/non-response, how to maintain the success of the initial intervention, and which interventions may be used for non-responders; 2) intervention component(s), a set of intervention/treatment options at each critical decision point; 3) tailoring variable(s), early indicators of the overall outcome (success or failure of the intervention); 4) decision rule(s), linking the tailoring variable(s) to the intervention components at each critical decision point.
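To make these elements concrete, a decision rule can be written as a small function. This is a hypothetical sketch, not taken from any reviewed trial: the treatment names, the week-8 tailoring variable, and the response threshold are all invented for illustration.

```python
# Hypothetical two-stage adaptive intervention expressed as a decision rule.
# All names, treatments and thresholds are illustrative assumptions.

def second_stage_treatment(initial_treatment: str, week8_score: float,
                           response_threshold: float = 0.5) -> str:
    """Decision rule applied at the critical decision point (week 8).

    The tailoring variable is `week8_score`, an early indicator of
    response to the initial intervention.
    """
    if week8_score >= response_threshold:
        # Responder: maintain the success of the initial intervention.
        return f"continue {initial_treatment}"
    # Non-responder: switch to one of the second-stage intervention options.
    return f"augment {initial_treatment} with adjunct therapy"
```

In a SMART, non-responders reaching this decision point would typically be re-randomized among the second-stage options rather than assigned deterministically; the function above encodes one embedded AI that such a trial could evaluate.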

The sequential multiple assignment randomized trial (SMART) is a design that involves multiple stages of randomization and is used to develop and refine effective AIs, also known as dynamic treatment regimens (DTRs), adaptive treatment strategies (ATS), multistage treatment strategies (MTS), proportionate interventions, or treatment policies [1,5–8].

AIs are said to be embedded in a SMART study. Each patient is randomly assigned to initial treatment and subsequent treatments are based on the intermediate outcome and patient’s characteristics. Each stage in a SMART design corresponds to one of the critical decisions involved in the AI. Investigators can build more complex AIs (more deeply tailored AIs) using extra information collected on potential moderators (e.g., baseline characteristics of the individual and/or context, adherence to and/or side effects from prior treatment stages) [5,9]. These tailored AIs are also called deeply embedded AIs.

SMART designs are a generalization of conventional randomized controlled trials (RCTs) that allow participants to be randomized more than once, depending on intermediate outcomes. RCTs are designed to test the efficacy or effectiveness of an intervention compared to a control condition, while SMARTs allow identification of treatment strategies that would otherwise take multiple RCTs to evaluate. Despite the use of the word ‘adaptive’ in AIs, SMARTs differ from adaptive designs, in which each stage involves different participants (between-participants adaptations) [10]; in a SMART design, adaptation is made at the participant level (within-participant adaptations) [6,11]. Compared to crossover trials (CTs), SMART designs are used to develop AIs, whereas CTs are used to contrast the effects of stand-alone treatments: in a CT, patients receive the assigned sequence irrespective of their intermediate outcomes.

Like a traditional RCT, a SMART study must be appropriately designed to meet clearly defined objectives. The CONSORT statement [12] and the International Conference on Harmonization E9 (ICH E9) guideline [13] provide guidance on reporting and on justifying sample size determination for RCTs. Candlish et al. (2019) [8] reviewed how trials of multi-component and multi-stage interventions (i.e., stepped care, AI, DTR) were designed in practice, including the statistical designs and analysis methods used in trials involving staged AIs and how AIs were analyzed compared with trials of non-proportionate interventions. Miller (2019) [14] reviewed the unique characteristics of three types of adaptive intervention (Just-In-Time Adaptive Interventions, SMART, and stepped care) in behavioral interventions. To our knowledge, no paper has explored the reporting of trials using SMART designs. To address this knowledge gap, we conducted a systematic review.

In this systematic review, we aimed to assess the quality of reporting of the information required to design SMART studies in published reports. More specifically, our review questions were grouped into basic trial characteristics (research area, sample size calculation information), aims considerations (stage-specific aims, AI-related aims), and analysis considerations (use of multiple testing adjustments, analysis methods, and software for sample size calculations and analysis).

2. Material and methods

This systematic review adheres to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [15]. A protocol and a completed PRISMA checklist are given in Appendices A and B.

2.1. Literature search strategy

PubMed (including Medline), Ovid (including Embase, Medline, and APA PsycINFO), Web of Science (Web of Science Core Collection, KCI-Korean Journal Database, Medline), and Scopus were searched between 12th May and 15th June 2020. Moreover, some reference lists of the studies included in the final analysis were checked, and grey literature was searched via the Methodology Center webpage at Pennsylvania State University [16]. The search strategy is given in Appendix C.

2.2. Inclusion criteria

We included studies of any design, including qualitative and quantitative studies reporting data, review papers, protocols, and methodological papers. We excluded feasibility and pilot studies, book chapters, abstracts and conference papers, studies reporting data from observational studies, and proof-of-concept papers. Systematic reviews and meta-analyses were excluded, but their reference lists were checked for eligible papers.

2.3. Paper screening and data extraction

All papers identified through the search were imported into EndNoteX9 and duplicates were deleted. The screening was undertaken using Rayyan [17] in three stages. In the first stage, titles of all the papers were doubly screened (T.B. and G.U.) and any paper failing to meet the inclusion criteria was excluded. In the second stage, abstracts were screened and reasons for exclusion recorded (T.B. and G.U.). Lastly, full texts of the remaining papers were screened for inclusion. T.B. and G.U. independently checked whether to include a study, and consensus was sought where there was a discrepancy. A data extraction tool was developed for this review in an Excel spreadsheet. The basic study characteristics, study aims, and analysis considerations were recorded wherever possible. More details about the recorded items are given in Appendix C.

2.4. Quality control

We did not undertake a quality assessment of the identified studies, as the aim of this systematic review was to assess the quality of reporting of the information required to design SMART studies in published reports.

3. Results

3.1. Search results

Figure 1 presents the process of study selection in this systematic review. Four databases were searched, identifying a total of 10,055 records. An additional 21 records were identified from the grey literature search (the Pennsylvania State Methodology Centre webpage [16]). After removing 3,940 duplicate records, 6,136 unique records were considered for title and abstract screening.

Figure 1. PRISMA study flow diagram. Number of records identified, included and excluded during the literature search.


A total of 285 records were considered for full-text inclusion; 128 did not meet the inclusion criteria: secondary analyses (n = 11), pilot studies (n = 15), conference papers (n = 26), not related to SMART designs (n = 67), observational studies (n = 5), proof-of-concept papers (n = 2), and feasibility studies (n = 2).

A total of 157 records were included in the systematic review: 24 (15.3%) protocols, 91 (58%) methodological papers, 12 (7.6%) trials, and 30 (19.1%) review papers.

Table 1 summarizes the research areas in which SMART designs have been used, the countries in which the studies were carried out, and the sources of funding. Most of the studies were conducted in health and social related research. This may be partly a consequence of the databases used in our search. However, within health-related research, there was an extremely broad selection of domains, including for instance cancer [18–20], toxicology [21], mental health [22,23], HIV [24], and weight loss [3].

Table 1. Characteristics of included studies.

Characteristic N (%)
Area
Education 3 (1.91)
Health:
Psychiatry 30 (19.11)
Mood disorders 20 (12.74)
Cancer 23 (14.65)
HIV/AIDS 9 (5.73)
Sport and exercises 8 (5.10)
Other areas 20 (12.74)
Social 12 (7.64)
Not specific 32 (20.38)
Country
Australia 1 (0.64)
Belgium 1 (0.64)
Brazil 2 (1.27)
Canada 10 (6.37)
China 1 (0.64)
Germany 1 (0.64)
Kenya 2 (1.27)
Singapore 1 (0.64)
South Africa 1 (0.64)
Spain 1 (0.64)
United Kingdom 1 (0.64)
United Kingdom-France 1 (0.64)
USA 134 (85.35)
Funding
Other 21 (13.37)
Private 13 (8.28)
Public 123 (78.35)

Other: this category comprises studies for which no funding information was given, and studies without clear funding. Other areas: this group comprises health related areas such as: surgery, cardiology, rheumatology, nutrition, addiction, dentistry.

Thirty-two (20.38%) records were not specific to any research area, since we included methodological papers treating research questions that can be applied to any area. Three (1.91%) of the records considered research from educational trials [25–27]. Most of the studies were conducted in the USA (134, 85.35%) and Canada (10, 6.37%).

A large majority of the studies were supported by public funds (123, 78.35%). Twenty-one (13.37%) of the records did not provide enough information about the source of funding.

3.2. Trials review

The characteristics of the 12 (7.6%) trials found in the systematic review are summarized in Table 2. A list of selected trials is given in Appendix D.

Table 2. Trials and protocols characteristics.

Characteristic Trials (N = 12) Protocols (N = 24)
N (%) N (%)
Area
   Education 1 (8.33)
   Health 11 (91.67) 21 (87.50)
   Social 3 (12.5)
Number of initial Treatments
   1 1 (8.33) 2 (8.33)
   2 8 (66.67) 18 (75)
   3 1 (8.33) 4 (16.67)
   4 1 (8.33)
   5 1 (8.33)
Multicentre trial
   No 4 (33.33) 13 (54.17)
   Yes 8 (66.67) 11 (45.83)
Sample calculation size reporting
   No 8 (66.67) 2 (8.33)
   Yes 4 (33.33) 22 (91.67)
Stage-specific aim
   First stage 7 (58.33) 12 (50)
   Second stage 5 (41.67) 12 (50)
Non-responders comparison
   No 3 (25) 10 (41.67)
   Yes 9 (75) 14 (58.33)
Embedded AIs comparison
   No 9 (75) 18 (75)
   Yes 3 (25) 6 (25)
Selecting interesting AIs
   No 10 (83.33) 22 (91.67)
   Yes 2 (16.67) 2 (8.33)
Selecting deeply embedded AIs
   No 12 (100) 21 (87.50)
   Yes 3 (12.50)
Finding optimal AIs
   No 11 (91.67) 21 (87.50)
   Yes 1 (8.33) 3 (12.50)
Use of multiple testing
   No 9 (75) 19 (79.17)
   Yes 3 (25) 5 (20.83)
Analysis method
   ANOVA 1 (8.33)
   Cox PH 3 (25.00)
   GEE 1 (8.33)
   LMM 6 (50.00)
   Negative binomial regression 1 (8.33)
Embedded AIs as secondary aim
   No 9 (75) 19 (79.17)
   Yes 3 (25) 5 (20.83)
Sample size software
   Not reported 11 (91.67) 20 (83.33)
   R 1 (8.33) 1 (4.17)
   PS 2 (8.33)
   STPLAN 1 (4.17)
Analysis software
   Not reported 7 (58.33)
   R 1 (8.33)
   SAS 1 (8.33)
   R+SAS 1 (8.33)
   SPLUS 1 (8.33)
   SPSS 1 (8.33)
Country
   Australia 1 (4.17)
   Brazil 1 (8.33) 1 (4.17)
   Germany 1 (4.17)
   Kenya 2 (8.33)
   South Africa 1 (4.17)
   USA 11 (91.67) 18 (75)
Funding
   Private 3 (25) 2 (8.33)
   Public 8 (66.67) 22 (91.67)
   Other 1 (8.33)

ANOVA: Analysis of Variance; Cox PH: Cox Proportional Hazard; GEE: Generalized Estimating Equations; LMM: Linear Mixed Model; PS: Power and Sample size calculation; SAS: Statistical Analysis System; STPLAN: Study Planning calculations; SPSS: Statistical Package for the Social Sciences.

* Sums may not add to 100% due to rounding.

Basic trial characteristics

Most of the trials, 11 (91.67%), were conducted in health-related areas (cancer [18,19], mental health [22,23,28], toxicology [21]).

One of the 12 trials started with one treatment option for all patients [29]; the remaining 11 (91.67%) started with at least two treatment options. Five (41.67%) of the 12 trials were conducted in child and adolescent populations. Eight (66.67%) were conducted in multicentre settings. Seven (58.33%) trials had continuous primary endpoints, 3 (25%) used time-to-event primary endpoints, and the remaining 2 (16.67%) had binary and count primary outcomes, respectively. Only 4 (33.33%) of the trials reported all the parameters required for an a priori sample size calculation (i.e., such that we were able to replicate their results using the provided information). The smallest trial recruited 83 patients [22], while the largest recruited 2,000 patients [29]; the median sample size was 171 patients. Most of the trials were publicly funded (66.67%) and conducted in the USA (91.67%).

Trial aims considerations

All the trials considered stage-specific aims in their analyses. All 12 trials (100%) considered the first-stage aim as their primary aim, and 5 (41.67%) also included second-stage aims within their primary aims. One key component of a SMART design is the re-randomization of all or a certain group of patients, mostly non-responders to the initial treatment option. In this review, 9 (75%) of the trials made treatment comparisons among non-responders to first-stage treatment options. Another advantage of the SMART design over a standard RCT is the possibility of comparing embedded AIs. Although none of the trials considered this comparison as a primary aim, some considered it as a secondary aim: 3 (25%) of the trials performed some comparisons among embedded AIs. One reason to compare may be to identify/test some pre-specified AIs; this type of analysis was performed in 2 (16.67%) of the 12 trials [23,30]. Another type of analysis, if there is interest in AIs, is to build AIs that optimize an outcome of interest using methods such as Q-learning [9,31]; this was performed in one (8.33%) of the 12 trials [32].
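To indicate what a Q-learning analysis involves, the sketch below runs the two-stage backward-induction recipe on simulated data. Everything here is an illustrative assumption (the data-generating model, the ±1 treatment coding, the linear Q-functions); it is a sketch in the spirit of the method described in [9], not the analysis of any reviewed trial.

```python
# Two-stage Q-learning sketch on simulated SMART-like data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 500
a1 = rng.choice([-1, 1], n)           # stage-1 treatment (coded -1/+1)
s = 0.5 * a1 + rng.normal(size=n)     # intermediate outcome (tailoring variable)
a2 = rng.choice([-1, 1], n)           # stage-2 treatment
y = s + a2 * s + rng.normal(size=n)   # final outcome: best a2 matches sign of s

# Stage 2: regress Y on (1, s, a2, a2*s) via least squares.
X2 = np.column_stack([np.ones(n), s, a2, a2 * s])
beta2, *_ = np.linalg.lstsq(X2, y, rcond=None)

def q2(s_val, a2_val, b=beta2):
    """Estimated stage-2 Q-function."""
    return b[0] + b[1] * s_val + b[2] * a2_val + b[3] * a2_val * s_val

# Optimal stage-2 rule: for each patient, pick the a2 with the larger
# predicted outcome; this maximum is the stage-1 pseudo-outcome.
v = np.maximum(q2(s, 1), q2(s, -1))

# Stage 1: regress the pseudo-outcome on (1, a1) and pick the best a1.
X1 = np.column_stack([np.ones(n), a1])
beta1, *_ = np.linalg.lstsq(X1, v, rcond=None)
best_a1 = 1 if beta1[1] > 0 else -1
```

The stage-2 regression estimates which second-stage option is best given the tailoring variable `s`; its predicted maximum then serves as the outcome for the stage-1 regression, so the estimated AI is built backwards from the final stage.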

Analysis considerations

Once the trial aims are clearly defined, the next point to consider is the type of analysis to be performed to answer the research questions. The statistical methods used to answer the primary research questions were the Cox proportional hazards model (for time-to-event outcomes), used in 3 (25%) trials, and the negative binomial model, generalized estimating equations, and analysis of variance, each used in one (8.33%) trial; linear mixed-effects models were used in the remaining six (50%) trials. Only one trial (8.33%) reported the software used for sample size calculations. In the case of multiple primary aims, it may be appropriate to apply a multiple testing adjustment to control the overall chance of making a type I error; the planned adjustment method should generally be pre-specified and reported. In our review, 3 (25%) of the 12 trials mentioned that they applied a multiple testing adjustment, using closed testing procedures and Bonferroni methods [33]. The last point covered during this systematic review was the software used for the final analyses. Five (41.67%) trials reported this information; each of the following software packages was used once (8.33%): R [34], SAS, R + SAS, SPLUS, and SPSS.
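For illustration, a Bonferroni adjustment (one of the approaches reported above) simply divides the overall significance level by the number of primary comparisons. The p-values below are hypothetical, not from any reviewed trial.

```python
# Hypothetical Bonferroni adjustment for multiple primary comparisons in a
# SMART (e.g., two stage-specific aims plus one embedded-AI comparison).

def bonferroni(p_values, alpha=0.05):
    """Return the per-comparison alpha and a rejection decision per test."""
    m = len(p_values)
    adjusted_alpha = alpha / m
    return adjusted_alpha, [p <= adjusted_alpha for p in p_values]

# Three illustrative p-values for three primary comparisons.
adj_alpha, reject = bonferroni([0.010, 0.020, 0.049])
# adj_alpha = 0.05 / 3 ≈ 0.0167; only the first comparison is rejected.
```

Bonferroni is conservative when tests are correlated (as comparisons sharing first-stage data in a SMART often are), which is one reason less conservative procedures such as closed testing are also used.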

3.3. Protocol review

We noted that some of the information could not be collected since the protocols were published before completion of the studies. In total, we retrieved 24 protocols. Their characteristics are displayed in Table 2. A list of selected protocols is given in Appendix D.

Basic protocol characteristics

Most of the protocols, 21 (87.50%), concerned studies conducted in health-related areas (cancer [35], chronic pain [36], mental health [37,38], HIV [39–41], tobacco cessation [42,43]). Eleven (45.83%) of the studies were conducted in multicentre settings. Most of the protocols (18 of 24, 75%) described studies that started with two initial treatment options. Fourteen (58.33%) described studies whose primary endpoints were continuous outcomes, and 19 of 24 (79.17%) concerned studies conducted mainly in adult populations. Twenty-two (91.67%) protocols provided all the parameters required for sample size calculations. Twenty-two studies (91.67%) were publicly funded and 18 (75%) were conducted in the USA.

Trial aims considerations

Twenty-three (95.83%) of the studies were powered to answer their primary research questions using stage-specific aims. One study compared an AI against a control treatment [37]. Thirteen (54.17%) based their primary aims on first-stage goals, while 12 (50%) included second-stage goals within their primary aims; two (8.33%) of the studies had both first- and second-stage-specific aims [43,44]. In 14 (58.33%) studies, it was planned to compare outcomes among patients who did not respond to their first-stage treatment options. From Table 2, one can also see that few protocols (6, 25%) were interested in comparing embedded AIs in order to identify the best one, or in selecting some pre-specified AIs to be compared with the rest. A small number of studies (3 of 24, 12.5%) were interested in deeply embedded AIs and in the estimation of optimal AIs [35,36,45]. Lastly, 5 (20.83%) studies considered comparisons of embedded AIs as secondary aims.

Analysis considerations

The reporting of analysis considerations differed between protocols and trials. For instance, the analysis methods used to answer the primary research question(s) and the software used were generally not specified at the time of writing the protocols; these details are mainly provided in statistical analysis plan documents [46]. Five (20.83%) protocols mentioned that multiple comparisons would be adjusted for, but only two of these specified which methods would be used [43,47]. Most of the protocols (20 of 24, 83.33%) did not report the software used for sample size calculation. The Power and Sample size calculation software (PS) [48] was mentioned in two protocols, followed by R [34] and STPLAN [49], each mentioned once.

4. Discussion

In this systematic review, we identified 157 records, of which 36 (22.93%) contained most of the information needed to answer our research questions (12 trial reports and 24 protocols). The highest number of records were published in the USA. Most of the studies were conducted in multicentre settings and started with two treatment options in the first stage. Even though one can answer more research questions using a SMART design than a standard RCT, researchers mainly powered their studies using stage-specific aims. Statistically speaking, this may be because it is easier to power and analyze studies whose primary aims are based on the first stage; however, study design should not rely solely on the ease of statistical methods. Most of the published trials did not provide the parameters required for sample size calculations (we could not replicate their results using the provided information), whereas this was not the case for the retrieved protocols. The problem of reporting the parameters required for sample size calculations has also been reported in the literature for RCTs [50].

One of the appealing features of the SMART design is the possibility of re-randomizing all or a group of patients, mainly non-responders to the initial treatment, based on the response, patient characteristics, or behaviors observed during the previous treatment. This has advantages over more traditional designs such as a “one-stage-at-a-time” randomized trial (RT) and an RT in which a fully formed AI is compared to an appropriate control [51]. The first alternative treats each intervention stage as a different trial, hence requiring more participants than a SMART design. The use of tailoring variables allows more clinical and theoretical questions to be answered than with the second alternative design.

It was observed that most of the published records had planned some comparisons among non-responders to first-stage treatment options. The treatment options at each stage and the criteria on which randomizations are based determine a set of AIs said to be embedded in the SMART design. Researchers can power their study to address primary research questions concerning the AIs embedded in the design. Some publications used embedded AIs as secondary aims; however, a low proportion of trials used analyses involving embedded AIs. Those analyses may involve hypothesis tests aiming to identify the best-performing AI, compared with a reference AI or within a group of AIs; another approach is to identify an AI leading to an optimal outcome. AI comparisons may lead to many tests requiring multiplicity adjustment, yet not many publications provided information about the multiple testing procedures that were or would be used in the analyses. We also noted that most of the records did not report the statistical software used for sample size calculations and final analyses (especially trials). Software tools for final analyses are not always accessible to potential users without a strong programming background. The Pennsylvania State Methodology Centre webpage [16] provides a wide range of software tools to compute sample sizes for SMART studies.

It was observed that most of the SMART studies were publicly funded. One explanation may be that fully powered SMART designs are large, expensive and generally multisite studies [52]. Another reason may be that SMARTs consider complex interventions or long-term treatment strategies whereas the pharmaceutical industry is focused on efficacy trials for new drug(s) for a particular stage of a condition. Pharmaceutical companies may be reluctant to fund studies which may combine or compare drugs in sequences with other companies’ drugs [53]. We also observed that publicly funded studies were larger compared to privately funded ones.

4.1. Reporting of SMART studies

Studies should clearly state their primary aim, as one of: a) comparison of different intervention options at different stages of the intervention, b) comparison of AIs that are embedded within the SMART design, or c) construction of more deeply embedded AIs. Reports should show how the sample size calculation links to the primary aim. When the primary aim is to compare first-stage interventions, the standard sample size calculation formula can be used (the significance level, power, and effect size should be provided). When the comparison concerns second-stage intervention options, postulated information about the overall initial (non-)response rate is also required. If simulations are used for sample size calculations, all assumptions and scenarios should be clearly reported. Reports should also clearly state the analysis considerations: whether multiple comparisons were performed, the adjustment method used, the primary analysis method, and software availability for primary analyses and sample size calculations. An example of good reporting, and results for methodological and review papers, are given in Appendix E.
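A minimal sketch of these two calculations, assuming a continuous endpoint and the standard two-arm normal-approximation formula; the effect size and initial response rate below are hypothetical placeholders that a report would need to state and justify.

```python
# Sketch of the sample size logic described above: the standard formula
# n per arm = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2 for a first-stage
# comparison, and inflation by the postulated initial response rate for a
# second-stage comparison among non-responders. All inputs are hypothetical.
from math import ceil
from statistics import NormalDist

def n_per_arm(d, alpha=0.05, power=0.80):
    """Patients per arm to detect standardized effect d (two-sided test)."""
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2)

# First-stage aim: detect a standardized effect of 0.5 between the two
# initial treatment options.
n_stage1 = 2 * n_per_arm(0.5)

# Second-stage aim among non-responders: if 50% are postulated to respond
# initially, inflate recruitment so enough non-responders are re-randomized.
response_rate = 0.50
n_stage2 = ceil(n_stage1 / (1 - response_rate))
```

The inflation step shows why the postulated (non-)response rate must be reported: without it, the second calculation cannot be reproduced from the stated effect size, power, and significance level alone.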

Our review had some limitations. Due to resource limitations, it was not possible to supplement the database search by checking all reference lists or trial registries; further work may include a supplementary search. We included only articles published in English. We may have missed studies published in journals not included in the databases we searched, or studies that neither described themselves as using a SMART design nor mentioned adaptive interventions or other keywords. A good reporting example can be found in O’Keefe et al. (2019) [47].

5. Conclusions

Despite the increase in the use of SMART designs, some appealing features of this design are still not widely used or well reported. Many methodological publications have explored different aspects of the SMART design. Trials implementing SMART designs should report all key components wherever possible, to better aid understanding and ongoing research.

Supplementary Material

Supplementary File

What is new?

Key findings:

  • Most of the trials using SMART designs did not provide the parameters required for sample size calculations and were mainly powered using stage-specific aims.

  • Very few studies considered analyses involving embedded adaptive interventions (AIs).

  • Some attractive features of the SMART design were rarely used in reviewed studies.

What this adds to what is known?

  • The CONSORT statement and the International Conference on Harmonisation E9 (ICH E9) provide guidelines on reporting and justifications on sample size calculation for randomised controlled trials (RCT).

  • In a sequential multiple assignment randomised trial (SMART) design, some or all patients can be randomised more than once, especially non-responders to the first-stage treatment option. One can answer more research questions using a SMART design than with a classical RCT design.

  • While there are some reviews which summarised how multistage interventions were designed in practice, none of them covered the reporting of information required to design SMART studies.

What is the implication, what should change now?

  • Researchers designing studies using SMART design should report all parameters required for sample size calculations and information related to multiple testing procedures.

  • They should also consider aims involving embedded AIs rather than relying only on stage-specific aims. Increased availability of software tools to perform advanced analyses would also make SMART designs more appealing.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Footnotes

CRediT author statement

Theophile Bigirumurame: Conceptualization, Methodology, Data curation, Formal analysis, Writing- Original draft preparation. Germaine Uwimpuhwe: Data curation, Validation, Writing-Reviewing and Editing. James Wason: Writing-Reviewing and Editing.

Ethical approval

Not required

Declaration of conflicting interests: The Author(s) declare(s) that there is no conflict of interest

References

  • [1].Nahum-Shani I, Ertefaie A, Lu X, Lynch KG, McKay JR, Oslin DW, et al. A SMART data analysis method for constructing adaptive treatment strategies for substance use disorders. Addiction. 2017;112(5):901–9. doi: 10.1111/add.13743. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [2].Tamura RN, Krischer JP, Pagnoux C, Micheletti R, Grayson PC, Chen YF, et al. A Small n Sequential Multiple Assignment Randomized Trial Design For Use in Rare Disease Research. Contemp Clin Trials. 2016;46:48–51. doi: 10.1016/j.cct.2015.11.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [3].Naar-King S, Ellis DA, Idalski Carcone A, Templin T, Jacques-Tiura AJ, Brogan Hartlieb K, et al. Sequential multiple assignment randomized trial (SMART) to construct weight loss interventions for African American adolescents. J Clin Child Adolesc Psychol. 2016;45(4):428–41. doi: 10.1080/15374416.2014.971459. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [4].Lavori PW, Dawson R. Adaptive treatment strategies in chronic disease. Ann Rev Med. 2008:443–53. doi: 10.1146/annurev.med.59.062606.122232. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [5].Nahum-Shani I, Qian M, Almirall D, Pelham WE, Gnagy B, Fabiano GA, et al. Experimental design and primary data analysis methods for comparing adaptive interventions. Psychol Methods. 2012;17(4):457. doi: 10.1037/a0029372. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [6].Chakraborty B, Murphy SA. Dynamic Treatment Regimes. Annu Rev Stat Appl. 2014;1:447–64. doi: 10.1146/annurev-statistics-022513-115553. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [7].Zhong X, Cheng B, Qian M, Cheung YK. A gate-keeping test for selecting adaptive interventions under general designs of sequential multiple assignment randomized trials. Contemporary Clin Trials. 2019;85:105830. doi: 10.1016/j.cct.2019.105830. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [8].Candlish J, Teare MD, Cohen J, Bywater T. Statistical design and analysis in trials of proportionate interventions: a systematic review. Trials. 2019;20(1):1–20. doi: 10.1186/s13063-019-3206-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [9].Nahum-Shani I, Qian M, Almirall D, Pelham WE, Gnagy B, Fabi-ano GA, et al. Q-learning: A data analysis method for constructing adaptive interventions. Psychol Methods. 2012;17(4):478. doi: 10.1037/a0029373. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [10].Pallmann P, Bedding AW, Choodari-Oskooei B, Dimairo M, Flight L, Hampson LV, et al. Adaptive designs in clinical trials: why use them, and how to run and report them. BMC Med. 2018;16(1):1–15. doi: 10.1186/s12916-018-1017-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [11].Song M-K, Dabbs AD, Ward SE. A SMART design to optimize treatment strategies for patient and family caregiver outcomes. Nurs Outlook. 2016;64(4):299–305. doi: 10.1016/j.outlook.2016.04.008. [DOI] [PubMed] [Google Scholar]
  • [12].CONSORT Group (Consolidated Standards of Reporting Trials) Moher D. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. Ann Intern Med. 2001;134:657–62. doi: 10.7326/0003-4819-134-8-200104170-00011. [DOI] [PubMed] [Google Scholar]
  • [13]. ICH Steering Committee. Statistical principles for clinical trials. Stat Med. 1999;18:1905–42.
  • [14]. Miller CK. Adaptive intervention designs to promote behavioral change in adults: what is the evidence? Curr Diab Rep. 2019;19(2):7. doi: 10.1007/s11892-019-1127-4.
  • [15]. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. doi: 10.1136/bmj.n71.
  • [16]. The Methodology Center, Penn State University. Projects using SMARTs. 2016 [accessed 15 May 2020]. Available from: https://www.methodology.psu.edu/ra/adap-inter/projects/
  • [17]. Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan—a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):1–10. doi: 10.1186/s13643-016-0384-4.
  • [18]. Powell BL, Moser B, Stock W, Gallagher RE, Willman CL, Stone RM, et al. Arsenic trioxide improves event-free and overall survival for adults with acute promyelocytic leukemia: North American Leukemia Intergroup Study C9710. Blood. 2010;116(19):3751–7. doi: 10.1182/blood-2010-02-269621.
  • [19]. Thall PF, Logothetis C, Pagliaro LC, Wen S, Brown MA, Williams D, et al. Adaptive therapy for androgen-independent prostate cancer: a randomized selection trial of four regimens. J Natl Cancer Inst. 2007;99(21):1613–22. doi: 10.1093/jnci/djm189.
  • [20]. Kelleher SA, Dorfman CS, Vilardaga JCP, Majestic C, Winger J, Gandhi V, et al. Optimizing delivery of a behavioral pain intervention in cancer patients using a sequential multiple assignment randomized trial (SMART). Contemp Clin Trials. 2017;57:51–7. doi: 10.1016/j.cct.2017.04.001.
  • [21]. Petry NM, Alessi SM, Rash CJ, Barry D, Carroll KM. A randomized trial of contingency management reinforcing attendance at treatment: do duration and timing of reinforcement matter? J Consult Clin Psychol. 2018;86(10):799. doi: 10.1037/ccp0000330.
  • [22]. Fatori D, de Bragança Pereira CA, Asbahr FR, Requena G, Alvarenga PG, de Mathis MA, et al. Adaptive treatment strategies for children and adolescents with obsessive-compulsive disorder: a sequential multiple assignment randomized trial. J Anxiety Disord. 2018;58:42–50. doi: 10.1016/j.janxdis.2018.07.002.
  • [23]. Karp JF, Zhang J, Wahed AS, Anderson S, Dew MA, Fitzgerald GK, et al. Improving patient reported outcomes and preventing depression and anxiety in older adults with knee osteoarthritis: results of a Sequenced Multiple Assignment Randomized Trial (SMART) study. Am J Geriatr Psychiatry. 2019;27(10):1035–45. doi: 10.1016/j.jagp.2019.03.011.
  • [24]. Jiang R, Lu W, Song R, Davidian M. On estimation of optimal treatment regimes for maximizing t-year survival probability. J R Stat Soc Series B Stat Methodol. 2017;79(4):1165. doi: 10.1111/rssb.12201.
  • [25]. Chow JC, Hampton LH. Sequential multiple-assignment randomized trials: developing and evaluating adaptive interventions in special education. Remed Special Educ. 2019;40(5):267–76.
  • [26]. August GJ, Piehler TF, Miller FG. Getting "SMART" about implementing multi-tiered systems of support to promote school mental health. J Sch Psychol. 2018;66:85–96. doi: 10.1016/j.jsp.2017.10.001.
  • [27]. Kim JS, Asher CA, Burkhauser M, Mesite L, Leyva D. Using a sequential multiple assignment randomized trial (SMART) to develop an adaptive K-2 literacy intervention with personalized print texts and app-based digital activities. AERA Open. 2019;5(3):1–18.
  • [28]. Pelham WE Jr, Fabiano GA, Waxmonsky JG, Greiner AR, Gnagy EM, Pelham WE III, et al. Treatment sequencing for childhood ADHD: a multiple-randomization study of adaptive medication and behavioral interventions. J Clin Child Adolesc Psychol. 2016;45(4):396–415. doi: 10.1080/15374416.2015.1105138.
  • [29]. Rush AJ, Fava M, Wisniewski SR, Lavori PW, Trivedi MH, Sackeim HA, et al. Sequenced treatment alternatives to relieve depression (STAR*D): rationale and design. Control Clin Trials. 2004;25(1):119–42. doi: 10.1016/s0197-2456(03)00112-0.
  • [30]. McKay JR, Drapkin ML, Van Horn DH, Lynch KG, Oslin DW, DePhilippis D, et al. Effect of patient choice in an adaptive sequential randomization trial of treatment for alcohol and cocaine dependence. J Consult Clin Psychol. 2015;83(6):1021. doi: 10.1037/a0039534.
  • [31]. Chakraborty B. Statistical methods for dynamic treatment regimes. Springer; 2013.
  • [32]. Swartz MS, Stroup TS, McEvoy JP, Davis SM, Rosenheck RA, Keefe RS, et al. Special section on implications of CATIE: what CATIE found: results from the schizophrenia trial. Psychiatr Serv. 2008;59(5):500–6. doi: 10.1176/ps.2008.59.5.500.
  • [33]. Shaffer JP. Multiple hypothesis testing. Annu Rev Psychol. 1995;46(1):561–84.
  • [34]. R Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2017.
  • [35]. Sikorskii A, Wyatt G, Lehto R, Victorson D, Badger T, Pace T. Using SMART design to improve symptom management among cancer patients: a study protocol. Res Nurs Health. 2017;40(6):501–11. doi: 10.1002/nur.21836.
  • [36]. Flynn D, Eaton LH, Langford DJ, Ieronimakis N, McQuinn H, Burney RO, et al. A SMART design to determine the optimal treatment of chronic pain among military personnel. Contemp Clin Trials. 2018;73:68–74. doi: 10.1016/j.cct.2018.08.008.
  • [37]. Kilbourne AM, Smith SN, Choi SY, Koschmann E, Liebrecht C, Rusch A, et al. Adaptive School-based Implementation of CBT (ASIC): clustered-SMART for building an optimized adaptive implementation intervention to improve uptake of mental health interventions in schools. Implement Sci. 2018;13(1):1–15. doi: 10.1186/s13012-018-0808-8.
  • [38]. Johnson JE, Wiltsey-Stirman S, Sikorskii A, Miller T, King A, Blume JL, et al. Protocol for the ROSE sustainment (ROSES) study, a sequential multiple assignment randomized trial to determine the minimum necessary intervention to maintain a postpartum depression prevention program in prenatal clinics serving low-income women. Implement Sci. 2018;13(1):1–12. doi: 10.1186/s13012-018-0807-9.
  • [39]. Comins CA, Schwartz SR, Phetlhu DR, Guddera V, Young K, Farley JE, et al. Siyaphambili protocol: an evaluation of randomized, nurse-led adaptive HIV treatment interventions for cisgender female sex workers living with HIV in Durban, South Africa. Res Nurs Health. 2019;42(2):107–18. doi: 10.1002/nur.21928.
  • [40]. Belzer ME, MacDonell KK, Ghosh S, Naar S, McAvoy-Banerjea J, Gurung S, et al. Adaptive antiretroviral therapy adherence interventions for youth living with HIV through text message and cell phone support with and without incentives: protocol for a sequential multiple assignment randomized trial (SMART). JMIR Res Protoc. 2018;7(12):e11183. doi: 10.2196/11183.
  • [41]. Inwani I, Chhun N, Agot K, Cleland CM, Buttolph J, Thirumurthy H, et al. High-yield HIV testing, facilitated linkage to care, and prevention for female youth in Kenya (GIRLS Study): implementation science protocol for a priority population. JMIR Res Protoc. 2017;6(12):e179. doi: 10.2196/resprot.8200.
  • [42]. Fu SS, Rothman AJ, Vock DM, Lindgren B, Almirall D, Begnaud A, et al. Program for lung cancer screening and tobacco cessation: study protocol of a sequential, multiple assignment, randomized trial. Contemp Clin Trials. 2017;60:86. doi: 10.1016/j.cct.2017.07.002.
  • [43]. Fernandez ME, Schlechter CR, Del Fiol G, Gibson B, Kawamoto K, Siaperas T, et al. QuitSMART Utah: an implementation study protocol for a cluster-randomized, multi-level sequential multiple assignment randomized trial to increase reach and impact of tobacco cessation treatment in community health centers. Implement Sci. 2020;15(1):1–13. doi: 10.1186/s13012-020-0967-2.
  • [44]. Schmitz JM, Stotts AL, Vujanovic AA, Weaver MF, Yoon JH, Vincent J, et al. A sequential multiple assignment randomized trial for cocaine cessation and relapse prevention: tailoring treatment to the individual. Contemp Clin Trials. 2018;65:109–15. doi: 10.1016/j.cct.2017.12.015.
  • [45]. Buchholz SW, Wilbur J, Halloway S, Schoeny M, Johnson T, Vispute S, et al. Study protocol for a sequential multiple assignment randomized trial (SMART) to improve physical activity in employed women. Contemp Clin Trials. 2020;89:105921. doi: 10.1016/j.cct.2019.105921.
  • [46]. Hemming K, Kearney A, Gamble C, Li T, Jüni P, Chan A-W, et al. Prospective reporting of statistical analysis plans for randomized controlled trials. Springer; 2020.
  • [47]. O'Keefe VM, Haroz EE, Goklish N, Ivanich J, Cwik MF, Barlow A. Employing a sequential multiple assignment randomized trial (SMART) to evaluate the impact of brief risk and protective factor prevention interventions for American Indian youth suicide. BMC Public Health. 2019;19(1):1–12. doi: 10.1186/s12889-019-7996-2.
  • [48]. Dupont WD, Plummer WD Jr. Power and sample size calculations for studies involving linear regression. Control Clin Trials. 1998;19(6):589–601. doi: 10.1016/s0197-2456(98)00037-3.
  • [49]. The University of Texas MD Anderson Cancer Center. STPLAN: double precision study planning calculations. 2010 [accessed 23 Feb 2021]. Available from: https://biostatistics.mdanderson.org/SoftwareDownload/SingleSoftware/Index/41.
  • [50]. Charles P, Giraudeau B, Dechartres A, Baron G, Ravaud P. Reporting of sample size calculation in randomized controlled trials. BMJ. 2009;338:b1732. doi: 10.1136/bmj.b1732.
  • [51]. Lei H, Nahum-Shani I, Lynch K, Oslin D, Murphy SA. A "SMART" design for building individualized treatment sequences. Annu Rev Clin Psychol. 2012;8:21–48. doi: 10.1146/annurev-clinpsy-032511-143152.
  • [52]. Pistorello J, Jobes DA, Compton SN, Locey NS, Walloch JC, Gallop R, et al. Developing adaptive treatment strategies to address suicidal risk in college students: a pilot sequential, multiple assignment, randomized trial (SMART). Arch Suicide Res. 2018;22(4):644–64. doi: 10.1080/13811118.2017.1392915.
  • [53]. Kidwell KM. SMART designs in cancer research: past, present, and future. Clin Trials. 2014;11(4):445–56. doi: 10.1177/1740774514525691.

Associated Data
Supplementary Materials

Supplementary File