Am J Respir Crit Care Med. 2021 Jan 1;203(1):14–23. doi: 10.1164/rccm.202010-3943ST

Table 2.

Perceptions and Generalizations Surrounding Observational Research by Some in the Medical Community

Perception/Generalization: Study quality can be determined by the "hierarchy of evidence," in which observational studies are always of inferior quality compared with RCTs.
Reality: Study design is only one factor that determines study quality.
Additional Comments: The traditional hierarchy of evidence has been updated by more accurate frameworks that consider study design (e.g., GRADE) and other factors. Different study designs are suited to studying different aspects of medicine.

Perception/Generalization: Observational studies cannot determine causal association.
Reality: Minimal risk-of-bias associations shown by observational studies support causal association.
Additional Comments: Methods are available to determine how well a study establishes causal effect, regardless of study type. For example, GRADE recognizes that an observational study supports causal association if there is a large effect size, a "dose–response" gradient, and/or if all plausible residual confounding results in an underestimate of an apparent association (10).

Perception/Generalization: Because randomization does not occur, unmeasured confounding limits the interpretability of observational studies.
Reality: Confounding can be minimized through careful study design and appropriate analyses and can be further addressed through sensitivity analyses.
Additional Comments: Assessing the quality of study designs means scrutinizing them for different types of bias. Sensitivity analyses offer ways to address the likelihood of bias if it exists (12); an illustrative sketch of one such analysis follows this table.

Perception/Generalization: Conflicting results from observational studies and RCTs that address similar research questions prove that observational studies are of poor quality.
Reality: Differences between observational studies and RCTs addressing similar research questions are commonly explained by factors other than study design, such as differences in the types of patients studied, the definitions of study variables, and/or the study settings (ideal vs. real-world conditions) (28).
Additional Comments: Disagreement rates between RCTs and observational studies are no greater than disagreement rates between different RCTs addressing the same research question (12–14).

Perception/Generalization: Observational studies, unlike RCTs, can be manipulated to produce results of interest.
Reality: Both observational studies and RCTs can be manipulated. Researchers are encouraged to submit study protocols before analyses begin (e.g., to clinicaltrials.gov or to the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance).
Additional Comments: The development of tools to ensure reliability and prespecification of study procedures in observational studies lags behind that in RCTs, but such tools do exist in observational research. For example, the STROBE statement and the RECORD statement are tools for assessing the completeness of reporting of observational studies (41).

Perception/Generalization: Because of randomization, RCTs are free from bias.
Reality: RCTs can have many biases.
Additional Comments: Possible biases in RCTs include selection bias, performance bias, detection bias, attrition bias, and reporting bias (42).

Definition of abbreviations: GRADE = Grading of Recommendations Assessment, Development and Evaluation; RECORD = Reporting of Studies Conducted Using Observational Routinely-Collected Health Data; RCT = randomized controlled trial; STROBE = Strengthening the Reporting of Observational Studies in Epidemiology.
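
As an illustration of the kind of sensitivity analysis referenced in the confounding row above, the sketch below computes the E-value of VanderWeele and Ding (Ann Intern Med, 2017): the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need to have with both exposure and outcome to fully explain away an observed association. The E-value is not named in the table or its references; the function name and example values here are purely illustrative.

```python
import math

def e_value(rr: float) -> float:
    """Illustrative E-value calculation (VanderWeele & Ding, 2017).

    Returns the minimum risk-ratio strength of association an unmeasured
    confounder would need with both exposure and outcome to fully explain
    away an observed risk ratio `rr`.
    """
    if rr < 1:
        # For protective associations, invert the risk ratio first.
        rr = 1.0 / rr
    return rr + math.sqrt(rr * (rr - 1.0))

# Example (hypothetical numbers): an observed risk ratio of 2.5 would require
# an unmeasured confounder associated with both exposure and outcome by a
# risk ratio of at least ~4.4 to explain the association away entirely.
print(round(e_value(2.5), 2))  # 4.44
```

A larger E-value means the observed association is more robust to unmeasured confounding, which is one quantitative way of addressing the concern raised in that row of the table.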