Author manuscript; available in PMC: 2016 May 1.
Published in final edited form as: Osteoarthritis Cartilage. 2015 May;23(5):677–685. doi: 10.1016/j.joca.2015.03.011

Table I.

Recommendations to address methodologic challenges in osteoarthritis clinical trials.

Domain / Design recommendation / Analysis recommendation
Selection of primary endpoint
Design: Clearly define the primary outcome (endpoint), the means of measuring it, and the timing of assessment. Ensure that the selected outcome is acceptable to regulatory organizations and the scientific community. Do not select outcomes that have not undergone thorough evaluation of validity and reliability.
Analysis: Plan and execute the primary analysis focused on the a priori primary outcome, which should be clearly and unambiguously defined. Always report results for the primary outcome.
Choice of study design
Design: The decision between superiority and non-inferiority designs should be governed by the novelty of the treatment modality and the choice of control group. If the treatment modality under investigation is compared to placebo, a superiority design is the design of choice. In situations where the treatment under investigation is compared to another active treatment, a non-inferiority design is chosen if the new treatment is likely to have similar efficacy but offers a better tolerability and safety profile.
Analysis: Reporting should be consistent with the chosen study design. The results of superiority and, in particular, non-inferiority trials are best presented using two-sided 95% confidence intervals.
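As a concrete illustration of the confidence-interval presentation, the sketch below (Python; the summary statistics, pain-scale units, and the non-inferiority margin are all invented for illustration) checks whether the lower bound of a two-sided 95% CI for the treatment difference lies above the margin.

```python
from math import sqrt
from statistics import NormalDist

def ninety_five_ci(diff, se):
    """Two-sided 95% CI for a difference in means (normal approximation)."""
    z = NormalDist().inv_cdf(0.975)  # critical value, approximately 1.96
    return diff - z * se, diff + z * se

# Hypothetical summary statistics: mean improvement on a 0-10 pain scale
mean_new, mean_ctrl = 2.0, 1.8   # new treatment vs active control
sd, n = 1.0, 100                 # common SD and per-group sample size
diff = mean_new - mean_ctrl
se = sqrt(sd**2 / n + sd**2 / n)

lo, hi = ninety_five_ci(diff, se)
margin = -0.3  # hypothetical non-inferiority margin

# Non-inferiority is concluded when the entire CI lies above the margin
print(f"95% CI: ({lo:.3f}, {hi:.3f}); non-inferior: {lo > margin}")
```

Presenting the full interval, rather than a bare p-value, lets readers judge both the non-inferiority conclusion and whether superiority can be ruled out.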
Blinding and Allocation Concealment
Design: When possible, both participants and the research team should be blinded to treatment assignment. When this is not possible, those ascertaining the outcome and analyzing the data should be blinded. Design features such as a varying block randomization scheme and sequentially numbered, opaque, sealed envelopes should be used to ensure allocation concealment.
Analysis: Unblinding of data should not take place until the trial is terminated and the data are cleaned to an acceptable level of quality. If an interim analysis is performed, it must be a completely confidential process carried out by independent statisticians and conducted with blinded data.
Randomization
Design: Randomization should be performed using a computer random number generator, incorporating strategies to ensure concealment (e.g., varying random block sizes and sequentially numbered, opaque, sealed envelopes).
Analysis: The primary analysis should follow the intention-to-treat principle, at least in superiority trials. The intention-to-treat population plays a slightly different role in non-inferiority trials; it is usually good practice to present results for both the intention-to-treat population and the per-protocol population.
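A minimal sketch of a varying-block randomization scheme (Python; the function name, seed, and block-size choices are illustrative, not from the source):

```python
import random

def blocked_allocation(n, arms=("A", "B"), block_multiples=(1, 2), seed=2015):
    """Permuted-block randomization with varying block sizes.

    Each block contains every arm an equal number of times, so the
    sequence stays balanced, while the randomly varying block length
    makes the next assignment hard to predict.
    """
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n:
        block = list(arms) * rng.choice(block_multiples)  # 2 or 4 slots here
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n]

# Hypothetical allocation list for 24 participants; in practice each entry
# would go into a sequentially numbered, opaque, sealed envelope.
alloc = blocked_allocation(24)
print(alloc.count("A"), alloc.count("B"))
```

Truncating the final block can leave at most a small imbalance (bounded by the block size), which is the usual trade-off for keeping block lengths unpredictable.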
Sample size
Design: Careful sample size calculations should be made before the start of the RCT, taking into account the hypothesized difference in the primary outcome between the treatment and control groups and the variability of the outcome. It is important that investigators consider an effect size that is realistic but also clinically important. Sample size calculations should also account for secondary outcomes and subgroup analyses.
Analysis: In some cases analytic sample size calculations cannot be easily performed because of methodological complexity; sample size simulation may then be a good alternative.
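For a simple two-group comparison of means, the analytic calculation is short; the sketch below (Python; the effect size of 0.5 is an invented planning value) uses the standard normal-approximation formula n = 2(z_{1-alpha/2} + z_{1-beta})^2 / d^2 per group. When the design is too complex for a closed form, the same ingredients can instead drive a simulation that repeatedly generates and analyzes trial data at candidate sample sizes.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sample comparison of
    means (normal approximation, two-sided test, equal allocation).

    effect_size is the standardized difference: (mu1 - mu2) / common SD.
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # approximately 1.96 for alpha = 0.05
    z_beta = z(power)           # approximately 0.84 for 80% power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# Hypothetical planning scenario: standardized effect size of 0.5
print(n_per_group(0.5))  # 63 per group by the normal approximation
```

Note that this approximation slightly understates the sample size a t-test-based calculation would give; it is a planning sketch, not a substitute for a full power analysis.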
Missing data
Design: Strategies should be put in place at the design phase to minimize missing data. These include limiting the burden of data collection on study participants, offering incentives, and giving study staff clear protocols to follow for contacting participants. Detailed reasons for dropout should be recorded, and participants wishing to discontinue their assigned intervention should be given the opportunity to continue study assessments.
Analysis: Last observation carried forward (LOCF) and complete-case analyses require strong assumptions and should be avoided unless there is strong justification. Likelihood-based approaches such as mixed-effects models, as well as multiple imputation, are valid when the data are missing at random (MAR). Sensitivity analyses should be used to assess the robustness of the results to this assumption.
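A toy simulation (Python; all numbers invented, dropout here is independent of outcome for simplicity) of why carrying the last observation forward can distort results when participants improve over time: every dropout's carried-forward pain score is worse than the score they would have reached, so the LOCF mean at the final visit is biased upward.

```python
import random

def simulate_locf_bias(n=200, visits=4, dropout_p=0.2, seed=42):
    """Toy simulation: pain scores strictly improve (decrease) at every
    visit, some participants drop out, and LOCF carries their last,
    worse value to the final visit.  Returns (full-data mean, LOCF mean)
    of the final-visit score."""
    rng = random.Random(seed)
    full_final, locf_final = [], []
    for _ in range(n):
        score = rng.uniform(6, 9)           # baseline pain, 0-10 scale
        last_seen = score
        dropped = False
        for _ in range(visits - 1):
            score -= rng.uniform(0.5, 1.5)  # strict improvement each visit
            if not dropped and rng.random() < dropout_p:
                dropped = True              # last_seen frozen at prior visit
            elif not dropped:
                last_seen = score
        full_final.append(score)            # what complete data would show
        locf_final.append(last_seen)        # what LOCF analysis would use
    mean = lambda xs: sum(xs) / len(xs)
    return mean(full_final), mean(locf_final)

full_mean, locf_mean = simulate_locf_bias()
print(f"LOCF overstates final pain by {locf_mean - full_mean:.2f} points")
```

The same simulation machinery, extended with outcome-dependent dropout, is one way to carry out the sensitivity analyses the recommendation calls for.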
When to consider design changes
Design: Design changes should be kept to a minimum and be pre-specified in the study protocol. Such pre-specified changes are usually triggered by the results of an interim analysis, especially changes to the sample size or termination of the trial.
Analysis: Interim analyses may be logistically complicated, as they usually require independent statistical analysis. Avoid interim analyses, especially if they are not pre-specified.
Reporting
Design: The study protocol can be published in a journal, and a short version of it should be registered in a public trial registry accepted by the ICMJE and WHO before the first patient is randomized.
Analysis: The statistical analysis and reporting of results should comply with the CONSORT statement's requirements and with methodological recommendations provided by regulatory authorities. When submitting a manuscript to a scientific journal, include a copy of the study protocol and a completed CONSORT checklist and flow chart.