2017 Jul 11;14:91. doi: 10.1186/s12966-017-0546-3

Table 1.

Control Condition Treatments

Control Condition Treatment | Pros (+) and Cons (−)
Inactive Control: Control group receives no comparison treatment at all during the study or receives treatment after the study ends.
(+) No resource input for control condition development [a].
(+) Increased likelihood of yielding a large effect size, because control participants are least likely to change targeted cognitions or behaviors without treatment, or these may even worsen [1].
(+) Can help identify potential adverse effects of intervention [1].
(+) May be useful for pilot testing new interventions [1].
(−) Potential to overstate outcome of intervention because nearly all interventions are more effective in changing outcomes than simple passage of time [6, 85].
(−) Increased risk of control group refusal to participate.
(−) Increased risk of attrition and/or seeking alternate source of treatment during the waiting period [10].
(−) Generally considered a weak design [7, 17].
No Treatment Control: Control participants receive no treatment. Additional Points:
(−) Ethical issues when depriving a group in need of intervention of help when a suitable standard treatment/usual care is available; ethical problem lessens when no immediate risks (e.g., disease treatment) [20, 90].
(−) Vulnerable to treatment fidelity issues (temptation of research staff/clinicians to offer some treatment to needy participant) [1].
Wait-list (delayed treatment) Control: Closely related to no treatment control; control participants wait until the study concludes to receive treatment. During waiting period, wait list control participants may receive standard treatment/usual care which may impact study outcomes. Additional Points:
(+) No additional input for control condition development, but implementation costs must be considered.
(+) All participants receive the “active ingredient” treatment.
(−) Ethical issues are lessened, unless the control group is in immediate need of treatment and available standard treatment/usual care is not provided.
Active Control: Control group receives a different treatment contemporaneously with the experimental group [4, 17].
(+) Considered a strong design [7].
(+) If control group is given a bona fide treatment, possibility of ethical issues diminished.
(+) Controlling non-specific treatment effects (e.g., participant burden, activity, and data collection format and scheduling, attention from researchers) [85] minimizes threats to internal validity and permits effects of the intervention to be more accurately attributed to the “active ingredient” hypothesized to affect the dependent variables [7].
(−) Creating a credible control treatment that is equally preferred by participants is difficult [20].
(−) Detrimental effects may occur if active control treatments lead participants to inaccurate conclusions about their personal health or other conditions and/or to a lack of action to improve a health or other condition. See Pagoto et al. [91] for more detailed discussion.
Usual or Standard Treatment: Control participants receive a treatment that is typically offered. Additional Points:
(+) Limited additional resource input for control condition development.
(+) Provides opportunity to investigate whether the new intervention is superior to an existing treatment.
(−) Non-specific treatment effects likely different from intervention (e.g., differs in frequency of contact, type of intervention [e.g., passive vs active], time commitment, and/or provider qualifications, experience, and/or researcher/clinician allegiance to the protocol) [6]. For instance, if the experimental condition requires greater effort, experimental group completers likely will be more motivated than control participants and confound results [6].
(−) “Usual” treatment interventions often do not exist in nutrition education, negating this as an option.
(−) “Usual” treatment intervention components often insufficiently described (e.g., in peer-reviewed articles or implementation manuals) to permit comparison by external reviewers [1, 6, 25].
(−) Often no verification of fidelity of usual treatment to protocol implementation (e.g., process evaluation, manual, or oversight of providers) [1].
(−) Lack of equipoise (sincere uncertainty of whether intervention will be beneficial over usual practices) may affect research staff interactions with participants [1] (detailed implementation manuals, frequent process evaluation, and strong supervision can mitigate this) [1].
(−) Research staff personality differences and variations (even inadvertently) affect their behavior toward and expectations of control vs. experimental participants [6].
(−) Comparing “usual” practices to experimental condition is reasonable only if experimental participants are blind to the novelty of the experimental condition [2].
Alternative Active Treatment: Control group receives an alternative treatment equal in non-specific treatment effects (e.g., participant burden, activity, and data collection format and scheduling, attention from researchers) to the experimental group and differs only in the non-receipt of the “active ingredient” of the intervention hypothesized to affect the dependent variables (e.g., only the subject matter content of the intervention differs) [4, 6]. Additional Points:
(+) Controlling for non-specific treatment effects enhances the ability to ascribe efficacy to the experimental treatment [7].
(−) Control treatment components often insufficiently described (e.g., in peer-reviewed articles or implementation manuals) to permit comparison by external reviewers [1, 6].
(−) Often no verification of fidelity to protocol implementation (e.g., process evaluation, manual, or oversight of providers) for either experimental or control groups [1].
(−) Research staff personality differences and variations (even inadvertently) affect their behavior toward and expectations of control vs. experimental participants [6].
(−) Comparing control and experimental condition is reasonable only if both are blind to treatment group assignment [2, 92].
(−) Additional resource input for control condition development; using alternative active treatment when the effect of attention on participant outcome is unknown may be an unnecessary expense.
(−) Rigorous control of non-specific treatment effects tends to improve control participants’ outcomes, shrinking the between-group difference; thus larger sample sizes, or acceptance of an increased risk of Type I error (e.g., an alpha level set higher than the typical 0.05), are needed to avoid erroneously rejecting effective interventions as ineffective and to detect potentially small yet clinically important effect sizes [1, 9, 13, 17, 24].
Dismantling (or Additive) Component Attention Control: Typically used with a multi-part intervention where the individual parts are separated to identify which are most salient to the outcomes (often with the goal of increasing cost-effectiveness by paring down intervention parts) [7]. Example: a study of the effectiveness of a self-instructional guide accompanied by telephone counseling compared to the guide alone. Additional Points:
(+) Method is well suited if “usual” care is effective and desire is to improve on it; also overcomes ethical issue of denying treatment to those in need [93].
(−) Adequate sample size needed for each part of the multi-part intervention [1].
(−) Outcomes may be confounded if effect is due to differing exposure levels rather than the added component itself [24].
(−) Lower statistical power if added parts have small effect compared to existing intervention [7].
(−) Lack of equipoise (genuine uncertainty of whether individual intervention parts will be beneficial alone and/or better than usual practices) may affect research staff interactions with participants [1] (detailed implementation manuals, frequent process evaluation, and strong supervision can mitigate this) [1].

[a] Resources include time investment by participant and/or researcher, money, and research staff expertise
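The sample-size penalty noted for rigorously controlled comparisons (smaller between-group differences require more participants) can be illustrated with a standard two-sample power calculation. This is a minimal sketch using the common normal-approximation formula, n ≈ 2(z₁₋α/₂ + z₁₋β)²/d² per group; it is not from the source article, and the effect sizes shown are hypothetical values chosen for illustration.

```python
import math
from statistics import NormalDist  # standard library, Python 3.8+

def per_group_n(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate participants needed per group for a two-sided,
    two-sample comparison (normal approximation; no t correction)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A small between-group effect (typical when the control is itself an
# active treatment) demands far more participants than a large one:
print(per_group_n(0.8))  # large effect: 25 per group
print(per_group_n(0.5))  # medium effect: 63 per group
print(per_group_n(0.2))  # small effect: 393 per group
```

In practice, researchers would use a dedicated power-analysis tool (e.g., G*Power or statsmodels) that applies the t-distribution correction, yielding slightly larger numbers.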