Editorial
J Natl Cancer Inst. 2019 Apr 11;111(11):1116–1117. doi: 10.1093/jnci/djz047

Reporting Standards for Patient-Reported Outcomes in Clinical Trial Protocols and Publications

Ethan Basch, Allison Barz Leahy
PMCID: PMC6855962; PMID: 30959517

Over the past 20 years, interest in including patient-reported outcomes (PROs) in cancer clinical trials has grown, as evidenced by guidance documents from international regulatory authorities (1) and a steady increase in PRO-based endpoints in protocols (eg, pain improvement) (2). But is the inclusion of PRO endpoints rigorous and systematic? Concerns have been raised previously about incomplete reporting of PRO results in clinical trial publications, which may reflect underlying design weaknesses (3). These concerns have prompted recent international collaborative efforts to standardize expectations for describing PRO endpoints in protocols and publications.

For protocols, as an adjunct to the well-established Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) checklist, the SPIRIT-PRO Extension was developed in 2018 (4). For publications, an adjunct to the Consolidated Standards of Reporting Trials (CONSORT) checklist, the CONSORT-PRO Extension, was issued in 2013 (5). Example items in these extension checklists include a clear description of the PRO hypothesis, evidence of the validity and reliability of the PRO questionnaire, and a planned approach to missing data. The intention of these extensions, like that of the core SPIRIT and CONSORT checklists, is to improve rigor and transparency through standardization of reporting requirements.

Leaders of these PRO standardization efforts provide insights into the status of PRO reporting in cancer clinical trials in the current issue of the Journal (6). Kyte and colleagues analyzed 228 publicly funded randomized controlled trials in the United Kingdom that included primary or secondary PRO endpoints and had closed to accrual by 2014 or published results by 2017.

Protocols were available for slightly less than one-half of studies and generally did poorly on checklist criteria: Protocols adhered to a mean of about two-thirds of core SPIRIT recommendations and about one-third of PRO-specific recommendations—regardless of whether PROs were a primary or secondary endpoint (6). For publications reporting on these studies, findings were similar in an analysis of adherence to core and PRO-specific CONSORT recommendations.

How should these findings be interpreted? Many of the SPIRIT and CONSORT PRO extension items seem intuitive (eg, describe PRO data collection plan), and it is difficult to understand how such details could be omitted in scientifically reasonable reporting, with or without access to a checklist. That said, mandatory adherence to SPIRIT and CONSORT will ideally compel consideration of these details, prompt consultation of PRO experts if needed, and improve rigor. Indeed, the authors found better adherence to standards with more recent trials (6). This is reassuring and makes sense; standardization of PRO clinical research methodology is relatively nascent, and the SPIRIT-PRO and CONSORT-PRO extensions are recent developments.

Moving forward, wide adoption of the SPIRIT-PRO and CONSORT-PRO extensions will face the same challenges as adoption of any reporting standards. Incentives and disincentives are necessary. Only some journals require authors to submit their protocols for review with manuscripts, and most journals lack the resources to systematically assess protocols for checklist compliance. Requiring authors to complete checklists and cite supporting manuscript page numbers and paragraphs is a step in the right direction. But given the mission of scientific journals to disseminate meaningful and rigorous discovery, simple efforts to assure checklist adherence seem like a reasonable priority.

Funding agencies like the one supporting the trials in the Kyte et al. article (6) (the UK National Institute for Health Research) should also play a role at the front end by requiring adherence to reporting standards. Arguably, that agency has failed in this regard. An alternative model is provided by the Patient-Centered Outcomes Research Institute, a funding agency that requires grantees to comply with general study design methodological standards, including several Patient-Centeredness Standards (7).

Another key PRO standards effort is Setting International Standards in Analyzing Patient-Reported Outcomes and Quality of Life Endpoints Data (SISAQOL), which is focused on statistical methods for analysis and reporting of PROs in clinical trials, through a consortium of industry, regulatory, patient, and biostatistician participants (8). Together with the CONSORT and SPIRIT extensions, these directives provide a clear path forward for improving the quality of data collection, analysis, and reporting.

Ultimately, assurance of rigor in clinical research reporting is a collective responsibility including stakeholders at every step of research development, conduct, analysis, and reporting. The quality of PRO endpoint reporting has improved of late but still has far to go. Kyte and colleagues (6) are to be commended for their efforts to raise the bar.

Notes

Affiliations of authors: Cancer Outcomes Research Program, Lineberger Comprehensive Cancer Center, University of North Carolina, Chapel Hill, NC (EB); Division of Oncology, Children’s Hospital of Philadelphia, Philadelphia, PA (ABL).

EB receives research funding from the National Cancer Institute and Patient-Centered Outcomes Research Institute. He is an Associate Editor for JAMA and a scientific advisor to Sivan, CareVive, and Self Care Catalysts. ABL has no disclosures.

References

1. Kluetz PG, Slagle A, Papadopoulos EJ, et al. Focusing on core patient-reported outcomes in cancer clinical trials: symptomatic adverse events, physical function, and disease-related symptoms. Clin Cancer Res. 2016;22(7):1553–1558.
2. Basch E. High compliance rates with patient-reported outcomes in oncology trials submitted to the US Food and Drug Administration. J Natl Cancer Inst. 2019;111(5):djy183. doi:10.1093/jnci/djy183.
3. Efficace F, Fayers P, Pusic A, et al. Quality of patient-reported outcome reporting across cancer randomized controlled trials according to the CONSORT patient-reported outcome extension: a pooled analysis of 557 trials. Cancer. 2015;121(18):3335–3342.
4. Calvert M, Kyte D, Mercieca-Bebber R, et al. Guidelines for inclusion of patient reported outcomes in clinical trial protocols: the SPIRIT-PRO extension. JAMA. 2018;319(5):483–494.
5. Calvert M, Blazeby J, Altman DG, Revicki DA, et al. Reporting of patient-reported outcomes in randomized trials: the CONSORT PRO extension. JAMA. 2013;309(8):814–822.
6. Kyte D, Retzer A, Ahmed K, Keeley T, et al. Systematic evaluation of patient-reported outcome protocol content and reporting in cancer trials. J Natl Cancer Inst. 2019;111:djz038.
7. Methodology Committee of the Patient-Centered Outcomes Research Institute (PCORI). Methodological standards and patient-centeredness in comparative effectiveness research: the PCORI perspective. JAMA. 2012;307(15):1636–1640.
8. Bottomley A, Pe M, Sloan J, et al. Moving forward toward standardizing analysis of quality of life data in randomized cancer clinical trials. Clin Trials. 2018;15(6):624–630.
