Letter. Indian J Anaesth. 2024 Jun 7;68(7):662–663. doi: 10.4103/ija.ija_443_24

Reviewing research reporting in randomised controlled trials – the sample size calculation

Venkata Ganesh, Neeru Sahni
PMCID: PMC11285892  PMID: 39081916

Dear Editor,

In our previous attempt to characterise the patterns of research reporting in Indian anaesthesia, as published in the Indian Journal of Anaesthesia (IJA), we restricted our discourse, for brevity, to confidence intervals and P values. In this letter, we would like to clarify the elements of sample size calculation in the context of the same cross-sectional exploration.[1]

In clinical research, sample size calculations are required to establish the number of units/participants needed to detect a clinically meaningful treatment effect with reliable certainty and an acceptably low error rate. In other words, to calculate the sample size, one needs to choose a well-defined primary outcome, a clinically relevant magnitude of difference (the target effect size), a method of analysis for that outcome, and levels of statistical power and alpha error. Our exploration found that only about 14% of the trials reported replicable sample size calculations.[1] This reporting frequency was lower than the roughly 30% documented in the seminal 1994 paper that emphasised the importance of including these particulars.[2]
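As a concrete illustration of how these ingredients combine (ours, not part of the original exploration), the textbook formula for comparing the means of two equal-sized groups with a two-sided test gives the minimum sample size per group as

n = \frac{2\,\sigma^{2}\,\left(z_{1-\alpha/2} + z_{1-\beta}\right)^{2}}{\Delta^{2}},

where \Delta is the target effect size (the minimum clinically important difference), \sigma is the assumed standard deviation of the primary outcome, and the z terms are standard normal quantiles. With \alpha = 0.05 and 90% power, z_{1-\alpha/2} = 1.96 and z_{1-\beta} \approx 1.28, so n \approx 21\sigma^{2}/\Delta^{2}; for \Delta = 10 and \sigma = 20, this works out to roughly 85 participants per group, before any allowance for dropouts.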

A probable reason sample size calculations are not adequately performed or reported is a lack of understanding of the term “effect size”. Social science research has long used standardised effect sizes to interpret findings and calculate sample sizes.[3] In psychology papers, where prior treatment effect data are often non-existent or measured on different scales, determining a clinically significant measure across studies is not easy. Using standardised effect estimates, such as Cohen’s d, with arbitrary rules may be acceptable in these cases.[4] However, “effect size” also refers to the simpler, more clinically interpretable measures of treatment effect, such as the mean, median, proportion, or relative inter-group differences. The chosen effect size should reflect the minimum meaningful effect that has biological relevance.
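For reference, Cohen’s d is simply the raw mean difference rescaled by the pooled standard deviation,

d = \frac{\bar{x}_{1} - \bar{x}_{2}}{s_{\mathrm{pooled}}},

and the arbitrary rules alluded to above are Cohen’s own benchmarks (d of 0.2 as small, 0.5 as medium and 0.8 as large), which carry no intrinsic clinical meaning for any particular outcome.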

We must understand that sample size calculation is essentially a reverse calculation to arrive at a minimum required sample size (N) using the target effect size (Δ), power, and alpha (the significance threshold for the P value) in the statistical model/test intended for analysing the primary outcome. In other words, this N is powered only to detect the intended Δ at the said power and alpha for the intended primary outcome. Hence, conclusions should be based mainly on the primary outcome (for which the sample size was calculated) and not on unmeasured or ancillary outcomes solely because they returned a P value less than 0.05.
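As a minimal sketch of this reverse calculation, assuming Python with the statsmodels library (neither of which is prescribed in this letter), one can solve for N from Δ, power and alpha for a two-sided, two-sample t-test:

```python
# Minimal sketch of the "reverse calculation": solve for the minimum N per
# group given the target effect size, power and alpha. The numbers are
# illustrative assumptions, not values from any particular trial.
import math
from statsmodels.stats.power import TTestIndPower

delta = 10.0    # target effect size: minimum clinically important difference
sd = 20.0       # assumed standard deviation of the primary outcome
d = delta / sd  # solve_power expects a standardised effect size (Cohen's d)

n_per_group = TTestIndPower().solve_power(effect_size=d,
                                          alpha=0.05,   # two-sided type I error
                                          power=0.90,   # 1 - beta
                                          alternative='two-sided')
# Round up; the exact t-test result slightly exceeds the normal approximation
print(f"Minimum sample size per group: {math.ceil(n_per_group)}")
```

The same Δ, alpha, power and tails must then be carried forward into the analysis of the primary outcome.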

In addition to mentioning the software or the formula used for the sample size calculation, authors should state the outcome and the analysis procedure (e.g., t-test, ANOVA, Chi-square test) on which the calculation was based, and whether the test was one-tailed or two-tailed. The outcome should then be analysed with that same test/analysis procedure.
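Continuing the hypothetical sketch above, the analysis step would then use the very test the calculation assumed, with the same tails; the scipy call below is our illustration, not a procedure named in the letter:

```python
# Hypothetical analysis step: the primary outcome is analysed with the same
# test assumed during the sample size calculation (two-sided two-sample t-test).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=1)  # simulated data, purely for illustration
control = rng.normal(loc=50.0, scale=20.0, size=86)
treated = rng.normal(loc=60.0, scale=20.0, size=86)

# 'two-sided' must match the tails declared in the sample size calculation
t_stat, p_value = ttest_ind(treated, control, alternative='two-sided')
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
```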

Negative results in a clinical trial are interpretable only if the sample size was calculated to detect a clinically significant effect; the interpretation is then that the treatment failed to produce an effect at least as substantial as the one deemed clinically relevant.[2] When this targeted effect size is not mentioned at the stage of sample size calculation, readers may misinterpret the findings as “no difference”, given the persistent trend of considering only the P value in the results section. It is pertinent to mention here that there are also ethical reasons to elaborate the details of the sample size calculation: no participant beyond the number required should receive a potentially ineffective or harmful experimental intervention.[5]
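A hypothetical numerical illustration of this point (ours, not drawn from the trials reviewed): suppose a trial was powered for a clinically relevant difference of Δ = 10 and observed a difference of 4 with a 95% confidence interval of −3 to 11. Because the interval still contains 10, the result is inconclusive rather than evidence of “no difference”; only if the entire interval excluded 10 (say, −1 to 6) could an effect of the targeted magnitude be ruled out.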

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

REFERENCES

1. Ganesh V, Sahni N. Reviewing research reporting in randomised controlled trials: Confidence and P values. Indian J Anaesth. 2024;68:492–5. doi: 10.4103/ija.ija_189_24.
2. Moher D, Dulberg CS, Wells GA. Statistical power, sample size, and their reporting in randomised controlled trials. JAMA. 1994;272:122–4.
3. Rubio-Aparicio M, Marín-Martínez F, Sánchez-Meca J, López-López JA. A methodological review of meta-analyses of the effectiveness of clinical psychology treatments. Behav Res Methods. 2018;50:2057–73. doi: 10.3758/s13428-017-0973-8.
4. Davidson IJ. The ouroboros of psychological methodology: The case of effect sizes (mechanical objectivity vs. expertise). Rev Gen Psychol. 2018;22:469–76.
5. Gopinath R. To reveal or to conceal: Appropriate statistical analysis is a moral obligation for authors in modern medicine. Indian J Anaesth. 2023;67:323–5. doi: 10.4103/ija.ija_221_23.
