Author manuscript; available in PMC 2015 Aug 5.
Published in final edited form as: Sex Transm Infect. 2014 May;90(3):172–173. doi: 10.1136/sextrans-2013-051426

Relative or Absolute? A Significant Intervention for Chlamydia Screening with Small Absolute Benefit

William C Miller 1,2, Nadia L Nguyen 1,2
PMCID: PMC4526139  NIHMSID: NIHMS709642  PMID: 24719029

“This complex intervention within the English chlamydia screening programme led to a 76% increase in chlamydia screening test rates across all practices offered the intervention, with a 40% increase in infections detected.”[1] Thus begins the Discussion section of the article by McNulty et al. in this issue of Sexually Transmitted Infections. On the surface, the finding appears dramatic, offering hope for a major advance in the coverage of screening for chlamydial infection under the United Kingdom’s National Chlamydia Screening Programme. Look a little deeper, however, and chlamydia screening clearly remains a major challenge.

McNulty and colleagues conducted a cluster randomized controlled trial using a “modified Zelen” design to assess the impact of an intervention developed to alter the screening behaviors of general practices in the southwest of England.[1] In total, 160 practices were randomized: 77 to receive the intervention and 83 to serve as control practices. The intervention, developed using the Theory of Planned Behavior,[2] incorporated an outreach educational workshop, posters, invitation cards, feedback on practice performance, and ongoing support to the practices. Grounding the intervention in the Theory of Planned Behavior provided a strong framework and justification for each component of the intervention and is a fundamental strength of this study.

The uptake of the intervention was modest: only 62% of intervention practices agreed to three contacts with a chlamydia support worker and 17% refused all contacts. Additionally, only 45% of practices used computer prompts and about two-thirds used invitation cards. Use of posters was more common, with about 80% of practices using posters.

Given the modest uptake of the intervention by the intervention practices, what was its impact? During the intervention period, the incidence rate ratio for chlamydia testing was 1.76 (i.e. a 76% relative increase) comparing the intervention practices to the control practices. During the 9 months after the intervention, the incidence rate ratio for chlamydia testing was 1.57. These increases are statistically significant, but are they clinically significant? We would argue only weakly so. The rate of testing in both groups was remarkably low, and the absolute difference between the intervention and control practices was very small. The rate of testing was 4.34 per 100 patients 15–24 years of age in the intervention practices, compared to 3.00 per 100 patients 15–24 years of age in the control practices, an absolute difference of less than 1.5 per 100 patients.
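To make the relative-versus-absolute distinction concrete, the arithmetic below is a minimal illustration using the crude testing rates reported above; note that the published incidence rate ratio of 1.76 comes from the trial’s adjusted, clustered analysis, so it differs from the simple quotient of these two rates.

```python
# Illustrative arithmetic only, using the crude testing rates reported by McNulty et al.
intervention_rate = 4.34   # chlamydia tests per 100 patients aged 15-24, intervention practices
control_rate = 3.00        # chlamydia tests per 100 patients aged 15-24, control practices

crude_ratio = intervention_rate / control_rate           # ~1.45 (crude, unadjusted ratio)
absolute_difference = intervention_rate - control_rate   # ~1.34 per 100 patients

print(f"Crude rate ratio:    {crude_ratio:.2f}")
print(f"Absolute difference: {absolute_difference:.2f} extra tests per 100 patients")
# A relative increase that sounds large corresponds here to fewer than
# 1.5 additional tests per 100 young patients.
```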

Ultimately, the effectiveness of a chlamydia screening program is judged not by the number of persons tested, but by the number of infections identified. In both the intervention and control practices, the number of infections identified was low, with only about two infections per practice detected during the intervention period. In the intervention group, 164 episodes of chlamydia were detected, corresponding to a rate of 2.5 infections per 1000 patients 15–24 years of age. The corresponding findings in the control group were 182 episodes of chlamydia with a rate of 2.3 per 1000 patients 15–24 years of age.
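The “about two infections per practice” figure follows directly from the reported counts; the quick check below simply divides detected episodes by the number of randomized practices in each arm.

```python
# Illustrative arithmetic only: detected chlamydia episodes per randomized practice.
intervention_episodes, intervention_practices = 164, 77
control_episodes, control_practices = 182, 83

print(f"Intervention arm: {intervention_episodes / intervention_practices:.1f} infections per practice")
print(f"Control arm:      {control_episodes / control_practices:.1f} infections per practice")
# Both arms work out to roughly two detected infections per practice
# over the intervention period.
```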

Furthermore, we would expect the rate of infection to correspond roughly to the population prevalence. However, the observed rate is about 10–20 times lower than would be expected based on population-based or clinic-based studies.[3–7] For example, in the recently released results of Natsal 3, the prevalence of chlamydial infection in the UK was 3.1% in women and 2.3% in men aged 16–24 years.[7]
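The rough size of that gap can be checked against the Natsal 3 figures; the sketch below is illustrative only, since detected infections over the trial period and population point prevalence are not strictly comparable quantities.

```python
# Illustrative arithmetic only: detected-infection rates in the trial versus
# Natsal 3 prevalence estimates for 16-24 year olds.
detected_intervention = 2.5 / 1000   # detected infections per patient aged 15-24, intervention arm
detected_control = 2.3 / 1000        # detected infections per patient aged 15-24, control arm
natsal_women = 0.031                 # 3.1% prevalence, women aged 16-24
natsal_men = 0.023                   # 2.3% prevalence, men aged 16-24

print(f"Women vs intervention arm: {natsal_women / detected_intervention:.0f}-fold gap")
print(f"Men vs control arm:        {natsal_men / detected_control:.0f}-fold gap")
# These crude ratios are roughly 10- to 12-fold; clinic-based estimates that
# run higher than Natsal 3 account for the upper end of the 10-20-fold range
# cited above.
```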

Overall, McNulty et al. view their results much more positively than we do, despite acknowledging the small absolute intervention effect. Their view appears to rest on the relative impact of the intervention and the statistical significance of the effect. But despite the affection of many clinical researchers, epidemiologists, and biostatisticians for relative measures such as the incidence rate ratio, the absolute effect is often substantially more important from a clinical or public health perspective. In this particular case, baseline rates of screening were so low that an intervention would need an effect many times larger to achieve a meaningful impact at the community level.

Taken together, we have an intervention that yielded a modest relative increase in screening, a small absolute increase in screening, and a minimal increase in the number of infected persons identified. These effects varied somewhat with uptake of the intervention: practices that adopted more components of the intervention tended to have higher screening rates. Even considering this variation, however, the maximum absolute increase was only 29% from baseline.

Given the small effects, what are the primary take-home messages from the study? Simply put, provider behavior is exceedingly difficult to modify. Screening rates were exceptionally low before the study began, the intervention had a small effect, and the rates, although remaining “significantly higher” in the intervention practices than in the control practices, dropped after the intervention period. The intervention used here – straightforward, practical, and theory-based – would seemingly be ideal for boosting chlamydia screening. But in many ways, the results are quite disappointing. The small effect and the limited number of infections identified suggest that chlamydia screening is not a priority for these practices – even with a carefully developed and well-planned intervention. This observation implies that changing chlamydia screening behavior to achieve a reasonable level of coverage will require a substantially more intensive and sustained intervention or, perhaps, an alternative economically based motivation.

Before concluding, we wish to bring attention to one unique feature of the study design – the “modified Zelen” design, an approach that is unlikely to be familiar to many researchers. The traditional Zelen design, also known as the randomized consent design, randomizes prospective participants in a randomized controlled trial before consent is obtained and then seeks consent only from those assigned to the intervention group.[8,9] With this design, persons in the standard of care arm are not aware of their participation in a trial. One modification of the Zelen design allows consent for participation in an observational study first, followed by a second consent for the intervention for participants assigned to that group.[8,9]

McNulty et al. used a further modification of the Zelen design: no informed consent was obtained from the practices in either the intervention or standard of care group.[1] Thus, the clinicians were unaware that they were participating in a research study. The practitioners could, however, accept or refuse any component of the intervention. The design received appropriate ethical review and approval.

While a full review of the ethical issues of this modification of Zelen’s design is beyond the scope of this commentary, we believe two points are worth considering. First, a key question in any research study involving deception is whether the deception was necessary.[10,11] The rationale for the design in this case was that providers’ behavior might change if they knew their clinical practice was under study. Was the deception necessary? Possibly – whether screening rates would have been substantially higher in the intervention or control groups had the study been disclosed is unclear. Second, one function of informed consent may be to build and maintain trust between participant and researcher.[12] In this case, the research team included both academics and government employees. Loss of providers’ trust because of concerns about being studied or “manipulated” is one plausible outcome of this study. Whether that trust has been lost is a question only the providers who were studied, and others who subsequently learn of the study, can answer.

In conclusion, this carefully designed intervention resulted in modest increases in chlamydia screening among general practices in southwest England. The emphasis on relative effects tends to overstate the small gains in screening and identified infections. The small absolute effects reflect the difficulty in changing clinicians’ behavior. Undoubtedly, greater coverage of young persons at risk for chlamydial infection will require creative solutions.

Acknowledgments

Grant support: National Institutes of Health, T32 AI007001

References

1. McNulty CA, Hogan AH, Ricketts EJ, et al. Increasing chlamydia screening tests in general practice: a modified Zelen prospective cluster randomised controlled trial evaluating a complex intervention based on the Theory of Planned Behaviour. Sex Transm Infect. 2013. doi: 10.1136/sextrans-2013-051029.
2. Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Processes. 1991;50:179–211.
3. Datta SD, Sternberg M, Johnson RE, et al. Gonorrhea and chlamydia in the United States among persons 14 to 39 years of age, 1999 to 2002. Ann Intern Med. 2007;147:89–96. doi: 10.7326/0003-4819-147-2-200707170-00007.
4. Datta SD, Torrone E, Kruszon-Moran D, et al. Chlamydia trachomatis trends in the United States among persons 14 to 39 years of age, 1999–2008. Sex Transm Dis. 2012;39:92–6. doi: 10.1097/OLQ.0b013e31823e2ff7.
5. LaMontagne DS, Fenton KA, Randall S, et al. Establishing the National Chlamydia Screening Programme in England: results from the first full year of screening. Sex Transm Infect. 2004;80:335–41. doi: 10.1136/sti.2004.012856.
6. Miller WC, Ford CA, Morris M, et al. Prevalence of chlamydial and gonococcal infections among young adults in the United States. JAMA. 2004;291:2229–36. doi: 10.1001/jama.291.18.2229.
7. Sonnenberg P, Clifton S, Beddows S, et al. Prevalence, risk factors, and uptake of interventions for sexually transmitted infections in Britain: findings from the National Surveys of Sexual Attitudes and Lifestyles (Natsal). Lancet. 2013;382:1795–806. doi: 10.1016/S0140-6736(13)61947-9.
8. Homer CS. Using the Zelen design in randomized controlled trials: debates and controversies. J Adv Nurs. 2002;38:200–7. doi: 10.1046/j.1365-2648.2002.02164.x.
9. Schellings R, Kessels AG, ter Riet G, et al. Randomized consent designs in randomized controlled trials: systematic literature search. Contemp Clin Trials. 2006;27:320–32. doi: 10.1016/j.cct.2005.11.009.
10. Wendler D, Miller FG. Deception in the pursuit of science. Arch Intern Med. 2004;164:597–600. doi: 10.1001/archinte.164.6.597.
11. 2010 Amendments to the 2002 “Ethical principles of psychologists and code of conduct”. Am Psychol. 2010;65:493. doi: 10.1037/a0020168.
12. Eyal N. Using informed consent to save trust. J Med Ethics. Published Online First: 8 December 2012. doi: 10.1136/medethics-2012-100490.
