Abstract
Studies follow a hierarchy in terms of the quality of evidence that they can provide. Randomized double-blind placebo-controlled (RDBPC) studies are considered the “gold standard” of epidemiologic studies. This design is discussed at length in this paper using the example of a real journal article that employed it to answer the research question: “Does a once-daily dose of valacyclovir reduce the risk of transmission of genital herpes to a susceptible partner?” RDBPC studies remain the most convincing research design, in which random assignment of the intervention can eliminate the influence of unknown or immeasurable confounding variables that may otherwise lead to biased and incorrect estimates of the treatment effect. Randomization eliminates confounding by baseline variables, and blinding eliminates confounding by co-interventions, thus removing the possibility that the observed effects of the intervention are due to differential use of other treatments. The best comparison is a placebo control, which allows participants, investigators and study staff to be blinded. The advantage of a trial over an observational study is the ability to demonstrate causality. It is hoped that this will be useful to neophyte researchers in understanding the causal hierarchy when critically evaluating epidemiologic literature.
Keywords: Blinding, clinical trials, placebo, randomization, randomized controlled trials
INTRODUCTION
Causal Hierarchy: Epidemiologists evaluate evidence to determine whether an exposure is directly responsible for an outcome. Studies follow a hierarchy in terms of the quality of evidence that they can provide. The strongest study design is the “Randomized Controlled Trial” (RCT). This design is discussed at length in this paper using the example of a real journal article that employed it to answer the research question: “Does a once-daily dose of valacyclovir reduce the risk of transmission of genital herpes to a susceptible partner?” The study population investigated was heterosexual couples who were serologically discordant for HSV-2 infection, drawn from 96 study sites.[1] RCTs are experimental studies, also called intervention studies. The two major types of planned experimental studies are randomized controlled trials (RCTs/clinical trials) and community trials (community intervention trials). The basic difference between them is the unit of analysis: in RCTs, this unit is the individual, whereas in community trials it is the group.
It is important to note that in preventive trials (primary prevention), the participants are healthy individuals in whom preventive therapies are tested. The unit of analysis can be either individuals or populations (for example, the polio vaccine field trial and the fluoride community trial). In contrast, in therapeutic trials (secondary or tertiary prevention), the participants have a disease or condition, and therapies are tested for benefit (efficacy). Some examples are: a new vs. an old diet in diabetes or in cancer treatment, surgical vs. medical treatment (coronary bypass vs. drug treatment), and surgical vs. surgical treatment (radical vs. limited mastectomy in breast cancer).
In intervention-based clinical studies (clinical trials), the investigator applies an intervention and measures its effect on outcomes. Randomized double-blind placebo-controlled (RDBPC) studies are considered the “gold standard” of epidemiologic studies. If well designed, they provide the strongest possible evidence of causation.[2,3] To understand this clearly, it is necessary to elaborate upon the key words used in the above statement. To start with, these are prospective studies, also known as analytical studies. The investigator selects the exposure of interest (say, a therapeutic regimen or a preventive measure), subjects are assigned at random to the exposure or the control, and both groups are then followed so that the occurrence of the outcome can be compared between them. The combination of randomization and blinding is the best design, but at times it can raise ethical issues.
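The comparison of the occurrence of the outcome between the two groups is typically summarized as a relative risk or an absolute risk reduction. The following is a minimal sketch using purely hypothetical counts; the arm sizes and event numbers are assumptions for illustration, not the actual results of the cited trial.

```python
# Illustrative comparison of outcome occurrence between two randomized arms.
# All counts below are hypothetical; they are NOT the results of the
# valacyclovir trial discussed in the text.

def risk(events: int, n: int) -> float:
    """Proportion of participants in an arm who develop the outcome."""
    return events / n

# Assumed example: 750 couples per arm, 14 transmissions in the treatment
# arm and 27 in the placebo arm (made-up numbers).
risk_treatment = risk(14, 750)
risk_placebo = risk(27, 750)

relative_risk = risk_treatment / risk_placebo           # RR < 1 favours treatment
absolute_risk_reduction = risk_placebo - risk_treatment
number_needed_to_treat = 1 / absolute_risk_reduction    # couples treated to avert one transmission

print(f"Risk (treatment): {risk_treatment:.4f}")
print(f"Risk (placebo):   {risk_placebo:.4f}")
print(f"Relative risk:    {relative_risk:.2f}")
print(f"ARR: {absolute_risk_reduction:.4f}   NNT: {number_needed_to_treat:.0f}")
```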
The key words: Clinical trials
Clinical trials are prospective studies in which humans are exposed to “something” at the discretion of the investigator and followed for an outcome. The purpose is to draw inferences about the potential effect of the “something” on a target population represented by the trial participants. To explain in detail, a “clinical trial” is a planned experiment (1) designed to assess the efficacy [or effectiveness] (2) of a treatment [or intervention] (3) in humans (4) by assessing the outcomes (5) in a group of patients [or participants] (6) treated with the test treatment, usually by comparing these outcomes with those observed in a comparable group (7) of patients receiving a control (8) treatment. The key words numbered serially in this definition need further explanation.
1. Planned experiment: the word experiment means that the exposure is determined by the investigator; “planned” is the relevant term. If one draws from a database all patients with disease X who were on drug A or drug B and then compares the outcomes associated with the two drugs, this is not a clinical trial.
2. Efficacy: refers to the effect of a treatment or intervention under ideal conditions, e.g., all patients are compliant with the full dosage regimen, and there are no concurrent illnesses or other drugs that interfere with the outcome, whereas effectiveness refers to the effect of a treatment or intervention under usual conditions.
3. Treatment: this is simply an exposure. A patient can be exposed to a drug, a type of surgery, an exercise plan or a diagnostic device (e.g., a new way of doing mammography).
4. In humans: whereas epidemiology can be stretched to include study populations of animals, a clinical trial by definition refers to an experiment conducted in humans.
5. Outcomes: examples are resolution of disease, increased survival rate, and improvement in quality of life. All clinical trials are prospective studies in which individuals are exposed (or not) and followed for an outcome (or a few different outcomes). The outcomes must be clearly defined.
6. Group of patients: this is a sample from a target population. Inferences will be drawn about the target population, not about a specific individual studied.
7. Comparable group: as in any hypothesis-testing epidemiologic study, a reference group is necessary. In a clinical trial, there is a need for comparability among study groups, because lack of comparability leads to confounding (explained later). The best way to ensure comparability is randomization.
8. Control: in clinical trial jargon, the term “control” refers to a person unexposed to the test treatment or intervention under study. A control may be on a placebo or on a reference treatment. It is important to clarify that an explicit control group is not always necessary to meet the definition of a clinical trial. So, is a control mandatory? The answer is: studies of potential curative agents (e.g., antibiotics) for highly fatal diseases do not require a control, because the untreated outcome, i.e., death, is already known. In all other cases, a control is necessary.
Randomization
The history dates back to Sir Ronald Aylmer Fisher, the father of modern statistics, who contributed to the understanding of randomization. He also made great contributions to the understanding of confounding and created designs to handle the problems posed by confounding.[4] The terms random and haphazard are sometimes used interchangeably, but they differ: in literal terms, haphazard describes a process occurring without any apparent order or pattern, whereas the statistical definition of random is an assignment resulting from a chance process in which the probability of any given assignment is known. Randomization forms the basis for the derivation of statistical tests. Very importantly, randomization avoids the selection bias that could occur if either the physician or the patient chose the treatment. Randomization also removes most confounding by all known and unknown factors, because it prevents an association between the treatment and any other known or unknown factor. In other words, it minimizes the possibility that the observed association between the exposure and the outcome is really caused by a third factor. Here, it is important to understand that, in order to be labeled a confounder, the potential confounding factor (PCF) must satisfy three conditions: it is associated with the study exposure, it is a risk factor for the disease/outcome of interest independently of the exposure of interest, and it is not an intermediate step in the causal pathway between the exposure and the outcome. Randomization with blinding (discussed later) avoids reporting bias, since no one knows who is treated and who is not, and therefore all treatment groups should be treated the same. In the study article taken as the example,[1] the HSV-2-seropositive partners were randomly assigned, in a 1:1 ratio, to 500 mg of valacyclovir once daily or to matching placebo. At each visit, safer sex practices, including the use of condoms during sexual intercourse, were discussed with each partner, and standardized counseling was provided when signs and symptoms of genital herpes were recognized. Randomization was performed at a central site in blocks of 10 to ensure balance between the groups and was stratified according to the sex and HSV-1 status of the susceptible partners. Thus, potential confounding variables minimized by randomization here include frequency of sexual contact, frequency of condom usage, sex of the susceptible partner, duration of the relationship, and duration of infection in the source partner.
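As a concrete illustration of the allocation procedure described above, the following is a minimal sketch of stratified permuted-block randomization with a 1:1 ratio and blocks of 10, stratified by the susceptible partner's sex and HSV-1 status. It is an illustrative reconstruction of the general technique under these assumptions, not the trial's actual allocation software, and the field names used are hypothetical.

```python
import random

def permuted_block(block_size=10, arms=("valacyclovir", "placebo")):
    """One block containing each arm equally often (1:1), in random order."""
    assert block_size % len(arms) == 0
    block = list(arms) * (block_size // len(arms))
    random.shuffle(block)
    return block

def stratified_assignments(participants, block_size=10):
    """Assign each participant within his/her stratum using permuted blocks.

    `participants` is a list of dicts with keys 'id', 'sex', 'hsv1'
    (hypothetical field names used only for this sketch).
    """
    blocks = {}       # current (partially used) block for each stratum
    assignments = {}
    for p in participants:
        stratum = (p["sex"], p["hsv1"])
        if not blocks.get(stratum):           # start a new block when the old one is exhausted
            blocks[stratum] = permuted_block(block_size)
        assignments[p["id"]] = blocks[stratum].pop()
    return assignments

# Example usage with made-up participants
people = [{"id": i, "sex": random.choice("MF"),
           "hsv1": random.choice([True, False])} for i in range(40)]
alloc = stratified_assignments(people)
print(list(alloc.items())[:5])
```

Within each completed block of 10, the two arms are exactly balanced, so the treatment groups remain comparable within every stratum even if enrollment stops mid-study.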
Placebo controlled
One more important keyword to understand is placebo controlled. A placebo is an “inert” substitute for a treatment or intervention; “inert” means the compound has no known activity that would be expected to affect the outcome. Factually, a placebo effect is a psychosomatic effect brought about by relief of fear, anxiety or stress because of study participation. A component of every specific treatment effect can be attributed to the placebo response. The question that a study should be asking is whether the treatment has any effect on the outcome aside from the stress-relieving effect of study participation. It is important to note that “no treatment” is not the same as placebo treatment. To determine whether improvement in the treated group is due to a drug effect rather than to the act of being treated, a placebo must be used. In the study article taken as the example,[1] the HSV-2-seropositive partners were randomly assigned, in a 1:1 ratio, to 500 mg of valacyclovir once daily or to matching placebo. An active control (also called a positive control) is another treatment that is known to have efficacy and serves as an alternative to placebo when the use of a placebo is deemed unethical, namely when withholding treatment from a patient could produce irreversible harm; for example, in HIV infection the control group may be given AZT. Because the effect of both treatments may partly be due to a placebo effect, the new treatment must be shown to be better than the active control.
Blinding, also called Masking
This is another important keyword to understand. When the outcome can conceivably be affected by the patient's or the investigator's expectations, blinding is important. Blinding is of three types: single blind, when the patient is blind; double blind, when the patient and the investigator are blind; and triple blind, when the patient, the investigator and those handling the data are blind. The statistician can only be partially blinded, since he/she has to know which patients are in the same treatment group. In the study article taken as the example,[1] an end-points committee, whose members were blinded to the treatment assignment, reviewed all cases of genital herpes clinically diagnosed during the study. This committee also reviewed all cases in which the susceptible partner had an abnormal genital symptom or sign during the study, as well as all cases of genital herpes confirmed by laboratory analysis.
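One common way of operationalizing blinding is to keep the treatment assignments under neutral codes (e.g., “A” and “B”), with the unblinding key stored separately and opened only after the database is locked. The sketch below is a generic illustration of that idea, using assumed file names; it is not the procedure used in the cited study.

```python
import csv
import random

# Hypothetical allocation list produced by the randomization centre.
allocation = {pid: random.choice(["valacyclovir", "placebo"]) for pid in range(1, 21)}

# Neutral codes so that the analysis dataset never names the drug.
codes = {"valacyclovir": "A", "placebo": "B"}

# Blinded dataset shared with investigators/analysts: only codes A/B appear.
with open("blinded_assignments.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["participant_id", "group_code"])
    for pid, arm in allocation.items():
        writer.writerow([pid, codes[arm]])

# Unblinding key kept separately (e.g., by the data centre) and opened
# only at the planned unblinding.
with open("unblinding_key.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["group_code", "treatment"])
    for arm, code in codes.items():
        writer.writerow([code, arm])
```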
An important thing to understand is what conducting an RCT involves. As mentioned earlier, the combination of randomization and blinding characterizes the best study design, but at times it can raise ethical concerns; hence, review and approval by an Institutional Review Board (IRB) is necessary.
Some limitations of RDBPC trials
As all studies have their own limitations and strengths, clinical trials are not bereft of limitations. They are expensive and time-consuming. At times, failure to blind loses the benefits of randomization. Some of the biases to which such a study is prone are: non-compliance, withdrawals after randomization, attrition/losses to follow-up, enrollment of ineligible patients, and misclassification of the outcome.
To conclude, the major advantage of a trial over an observational study is the ability to demonstrate causality, i.e., a cause-effect relationship. When RDBPC trials are compared with other research designs, they provide the highest level of evidence and hence are considered the “gold standard” against which other designs are judged.
ACKNOWLEDGMENT
I acknowledge the support from Patricia J. Emmanuel, M.D., Professor of Paediatrics and Associate Dean for Clinical Research, University of South Florida, and Director of the online course in "Clinical Investigation". The course described was supported by Award Number D43TW006793 from the Fogarty International Centre, National Institutes of Health, USA. I also acknowledge learning from Dr. Shyam S. Mohapatra, Professor of Medicine and Director of Basic Research, Joy McCann Culverhouse Airway Disease Centre, USF. I render my sincere thanks to Dr. R. K. Baxi, Professor, Department of PSM, and acknowledge his incessant motivation and encouragement to prepare this article.
Footnotes
Source of Support: Nil.
Conflict of Interest: None declared.
REFERENCES
- 1. Corey L, Wald A, Patel R, Sacks SL, Tyring SK, Warren T, et al. Once-daily valacyclovir to reduce the risk of transmission of genital herpes. N Engl J Med. 2004;350:11–20. doi:10.1056/NEJMoa035144.
- 2. Hulley SB, Cummings SR, Browner WS, Grady DG, Newman TB. Designing Clinical Research. 3rd ed. Philadelphia, PA: Lippincott Williams & Wilkins; 2007. pp. 251–65.
- 3. Oleckno WA. Essential Epidemiology: Principles and Applications. Long Grove, IL: Waveland Press; pp. 147–59.
- 4. Fisher RA. The Design of Experiments. Macmillan; 1971. ISBN 0-02-844690-9.