Abstract
The SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) 2013 Statement was developed to provide guidance for inclusion of key methodological components in clinical trial protocols. However, these standards do not include guidance specific to pathology input in clinical trials. This systematic review aims to synthesise existing recommendations specific to pathology practice in clinical trials for implementation in trial protocol design. Articles were identified from database searches and deemed eligible for inclusion if they: (1) contained guidance and/or a checklist, which was (2) pathology‐related, with (3) content relevant to clinical trial protocols or that could influence clinical trial protocol design from a pathology perspective; and (4) were published in 1996 or later. The quality of individual papers was assessed using the AGREE‐GRS (Appraisal of Guidelines for REsearch & Evaluation – Global Rating Scale) tool, and the confidence in cumulative evidence was evaluated using the GRADE‐CERQual (Grading of Recommendations Assessment, Development and Evaluation–Confidence in Evidence from Reviews of Qualitative research) approach. Extracted recommendations were synthesised using the best fit framework method, which includes thematic analysis followed by a meta‐aggregative approach to synthesis within the framework. Of the 10 184 records screened and 199 full‐text articles reviewed, only 40 guidance resources met the eligibility criteria for inclusion. Recommendations extracted from 22 guidance documents were generalisable enough for data synthesis. Seven recommendation statements were synthesised as follows: (1) multidisciplinary collaboration in trial design with early involvement of pathologists, particularly with respect to the use of biospecimens and associated biomarker/analytical assays and in the evaluation of pathology‐related parameters; (2) funding and training for personnel undertaking trial work; (3) selection of an accredited laboratory with suitable facilities to undertake scheduled work; (4) quality assurance of pathology‐related parameters; (5) transparent reporting of pathology‐related parameters; (6) policies regarding informatics and tracking biospecimens across trial sites; and (7) informed consent for specimen collection and retention for future research.
Keywords: clinical trial, pathology, systematic review, checklist, guideline, protocol, recommendations, SPIRIT
Introduction
Randomised controlled trials are considered the ‘gold standard’ in medical research for comparing the safety and effectiveness of novel interventions as they minimise bias compared to other empirical study designs, thus delivering a high level of evidence [1, 2]. The confidence in the quality of evidence drawn from the conclusions of clinical trials is ultimately only as good as the robustness of the study design and methodology, which encompasses the quality assurance of detailed processes involved in the delivery of the trial [2, 3]. Poorly planned clinical trials result in misleading findings from suboptimal trial execution and analysis, thus wasting resources and potentially harming patients [4]. Such issues can be mitigated by methodologically rigorous clinical trial protocols to facilitate the design, conduct, analysis, and reporting of reliable clinical trials [3, 4, 5]. For this reason, the SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) 2013 Statement was developed to provide standardised guidance for inclusion of key methodological components in clinical trial protocols [6]. However, these standards do not include guidance specific to pathology input in clinical trials.
To attain methodological rigour in trials, there is increasing recognition of the need for pathologists to be involved early in trial planning and design [7, 8, 9, 10]. Current literature contains a considerable number of reviews and perspective papers offering opinions from various pathologists and biomedical scientists on different specific aspects of the laboratory workflow that could improve clinical trial quality [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42]. However, there is no single comprehensive guidance document covering all aspects of pathology workflow feeding into various stages of the clinical trial process. Furthermore, trialists writing protocols typically have a limited understanding of the role of the laboratory and pathologist in their studies and may overlook key issues that need to be addressed during the design phase.
Aims and objectives
This systematic review aims to synthesise existing guidance and recommendations for pathology practice in clinical trials for implementation in trial protocol design.
Based on recent systematic review typology recommendations by Munn et al [43] from the Joanna Briggs Institute (JBI), this is a systematic review of expert opinion. The review question was therefore developed based on the recommended PICo (Population, phenomena of Interest, Context) framework, as follows:
P (Population): individuals involved in the design, conduct, and analysis of clinical trials requiring pathology input.
I (phenomena of Interest): recommendations or guidelines specific to pathology input (within the ‘context’ below).
Co (Context): the entire clinical trial process, from the design and conduct of the trial to the analysis and dissemination of trial findings.
Review question: What recommendations are available to guide cellular and molecular pathology input in clinical trials?
Materials and methods
Protocol and registration
The protocol was prospectively registered on the Open Science Framework online repository (Registration DOI: https://doi.org/10.17605/osf.io/jeqtx) [44]. This review was reported in accordance with the 2009 PRISMA (Preferred Reporting Items for Systematic Reviews and Meta‐Analyses) statement [45].
Eligibility criteria
Resources were deemed eligible for inclusion if they contained: (1) guidance (in the form of guidelines or expert recommendations) and/or a checklist, which was (2) pathology‐related, with (3) content relevant to clinical trial protocols or that could influence clinical trial protocol design from a pathology perspective. In addition, the year of publication was restricted to resources published in 1996 or later, after the publication of the CONSORT (Consolidated Standards of Reporting Trials) Statement guidelines [46]. There were no other restrictions by language or publication type.
Information sources and search strategy
The search strategy was devised by initially scoping the literature on MEDLINE and EMBASE via the Ovid platform, as well as an internet search on Google Scholar to identify a comprehensive set of relevant search terms. The full search strategies for all databases and web searches are available in the review protocol [44]. Free‐text terms such as ‘(histolo*; OR patholo*)’ AND ‘(checklist; OR guideline; OR recommendation)’ AND ‘(clinical trial; OR protocol)’, along with equivalent controlled vocabulary terms, were used in the search of the MEDLINE (Ovid), EMBASE (Ovid), and Cochrane Library databases. Additional search terms such as ‘biomarker*’, ‘molecular diagnos*’, ‘practice guid*’, and ‘study design’ were also applied across the MEDLINE and EMBASE databases. Web searches on Google and Google Scholar were performed using the advanced search function with the keywords ‘(Pathology; OR Histology; OR Biomarkers)’ AND ‘(Guideline; OR Checklist)’ AND ‘Clinical trial’. Only the first three pages (30 results) from each internet search were screened as lower‐ranked results are less relevant to the search query [47]. The databases and search engines were searched from 1 January 1996 to 13 January 2020. Besides the year of publication, no other limitations (such as language restrictions or restrictions by publication status) were placed on the searches.
Selection of guidance resources
All citations were imported into Mendeley Desktop software (Elsevier, London, UK; Version 1.19.5/2019). The titles and abstracts of the records were screened by two reviewers; SJL screened all the records, and TJK and MR acted as second reviewers. Inter‐rater agreement between the two reviewers was measured using Fleiss' kappa (κ), and the strength of agreement was interpreted according to Landis and Koch [48]. In cases where an abstract was not available, the full text of the article was retrieved. Resources that both reviewers selected for inclusion were subjected to full‐text review, while any disagreements were arbitrated by a third reviewer (TJK or MR). Two reviewers independently reviewed the full texts; any disagreements were discussed with a third reviewer and resolved by consensus. Resources not meeting the inclusion criteria upon review of full texts were excluded with reasons provided.
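For readers who wish to reproduce this type of agreement statistic, the sketch below is a minimal illustration of how screening agreement could be computed with Fleiss' kappa using the open-source statsmodels package; the decision data are hypothetical and this is not the code used by the review team.

```python
# Illustrative sketch (not the review team's code): computing inter-rater
# agreement for title/abstract screening with Fleiss' kappa, assuming each
# record is coded 1 (include) or 0 (exclude) by each of two reviewers.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical screening decisions: one row per record, one column per reviewer.
decisions = np.array([
    [1, 1],  # both reviewers include
    [0, 0],  # both exclude
    [1, 0],  # disagreement, arbitrated by a third reviewer
    [0, 0],
])

# Convert rater-level labels into a records x categories count table,
# then compute Fleiss' kappa; the value would be interpreted against
# the Landis and Koch agreement bands cited in the text.
table, _ = aggregate_raters(decisions)
kappa = fleiss_kappa(table, method='fleiss')
print(f"Fleiss' kappa = {kappa:.2f}")
```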
Quality assessment of guidance resources
Each eligible guidance resource was appraised by two reviewers independently using the AGREE‐GRS quality assessment instrument [49]. The final scores for each domain of the AGREE‐GRS tool were calculated based on the method recommended in the AGREE II User's Manual [50]. Scores of 33% and below for each domain were considered to be of low quality, scores between 34% and 66% were considered to be of moderate quality, and scores of 67% and more were considered to be of high quality. Guidance resources were not excluded based on the quality scores. The scores were used to compare the variation in methodological quality across guidance resources, which were then categorised as low, moderate, and high quality based on the AGREE‐GRS scores. This was used to inform judgement on the level of confidence in the evidence contributing to the final recommendation statements using principles from the GRADE‐CERQual (Grading of Recommendations Assessment, Development and Evaluation–Confidence in Evidence from Reviews of Qualitative research) approach [51, 52, 53, 54, 55, 56].
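For reference, the scaled domain score calculation described in the AGREE II User's Manual [50] takes the form below; the worked numbers are hypothetical and assume two appraisers rating a single seven‐point AGREE‐GRS item.

$$\text{Scaled domain score} = \frac{\text{obtained score} - \text{minimum possible score}}{\text{maximum possible score} - \text{minimum possible score}} \times 100\%$$

For example, with two appraisers rating one item on a 1–7 scale, the minimum possible score is 2 and the maximum is 14; ratings of 5 and 7 give a scaled score of (12 − 2)/(14 − 2) × 100% = 83%, which falls within the high‐quality band (67% and above).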
Data extraction and data items
Using best practice guidance from the JBI Manual for Evidence Synthesis [57], a pre‐piloted digital data extraction form was customised in Excel, which was adapted from the JBI ‘text and opinion data extraction tool’ [58] to incorporate data fields specific to systematic reviews of textual opinion‐based evidence. The following specific data items were extracted from each resource:
population – target audience for which guidance is developed;
context within trial – areas of pathology input within clinical trial;
phenomena of interest – variables associated with pathology‐specific trial guidance (guidance development methodology, organisational and geographical representation contributing to guidance development, reference to clinical research regulatory authority, clinical specialty focus, presence of a pathology‐specific trial protocol checklist);
verbatim extracts of guidance statements; and
interpretation of guidance statements, classifying each statement as ‘explicit’ or ‘implicit’ (explicit – ‘distinctly expressing all that is meant; leaving nothing merely implied or suggested’; implicit – ‘suggested or understood but not directly expressed’) [59].
Data from each selected resource were independently extracted by two reviewers onto the data extraction form, followed by checks for consistency. Any discrepancies were first discussed between the two reviewers, and any disagreements were resolved by a third reviewer.
Data synthesis and analysis
We used the best fit framework synthesis method [60, 61, 62], which incorporates all elements of the JBI meta‐aggregation approach [63, 64], for the synthesis and analysis of descriptive qualitative data. This method combines techniques from both framework synthesis and thematic analysis. We followed its five stages, beginning with ‘familiarisation’ with the literature to select an appropriate a priori framework (the SPIRIT 2013 Statement [6]). This framework was then applied to the ‘indexing’, ‘charting’, and ‘mapping and interpretation’ of extracted guidance statements during the data extraction, synthesis, and analysis stages of the review, respectively.
Results
Selection of guidance resources
The database searches identified 12 403 records: 2706 from MEDLINE (Ovid), 5264 from EMBASE (Ovid), and 4433 from Cochrane Library. The web searches identified another 56 unique records. The supplementary search of reference lists yielded an additional 48 unique records, of which 12 full‐text articles were eligible for inclusion. No grey literature was identified. The titles and abstracts of 10 184 records were screened after de‐duplication. A total of 9985 records were excluded by screening titles and abstracts (κ = 0.76, substantial agreement). From the 199 full‐text articles retrieved and assessed for eligibility, 40 articles were included in the review (κ = 0.65, substantial agreement). There were no foreign language articles. The other 159 full‐text articles were excluded with reasons stated in Figure 1.
Figure 1.

PRISMA flow diagram of selection process of guidance resources.
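As a simple cross-check of the record flow reported above and summarised in Figure 1, the short sketch below re-derives the stage counts; the numbers are taken directly from the text and the snippet is illustrative only.

```python
# Illustrative tally of the record flow reported above (numbers taken from the text).
identified = {
    "MEDLINE (Ovid)": 2706,
    "EMBASE (Ovid)": 5264,
    "Cochrane Library": 4433,
    "Web searches": 56,
    "Reference lists": 48,
}
total_identified = sum(identified.values())            # 12 507 records before de-duplication
screened = 10_184                                      # titles/abstracts screened after de-duplication
excluded_at_screening = 9_985
full_text_assessed = screened - excluded_at_screening  # 199 full-text articles
included = 40                                          # guidance resources included in the review
excluded_at_full_text = full_text_assessed - included  # 159 excluded with reasons (Figure 1)
print(total_identified, full_text_assessed, excluded_at_full_text)
```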
Characteristics of guidance resources
Of the 40 guidance resources included [7, 9, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 34, 35, 36, 37, 38, 39, 40, 41, 42, 65, 66, 67, 68, 69, 70, 71], around half (n = 21, 53%) offered recommendations pertaining to all aspects of pathology input within clinical trials (patient selection, risk stratification, and outcome assessment) [7, 9, 11, 13, 17, 20, 26, 27, 28, 29, 30, 35, 36, 38, 41, 42, 65, 67, 68, 69, 70]. Seven (17%) of the included guidance resources contained explicit recommendations [7, 18, 31, 39, 41, 66, 71], 24 (60%) contained implicit recommendations [9, 11, 12, 13, 14, 15, 16, 19, 22, 23, 24, 25, 27, 28, 29, 30, 34, 35, 36, 37, 42, 67, 69, 70], and the remaining 9 (23%) offered a combination of explicit and implicit recommendations [17, 20, 21, 26, 32, 38, 40, 65, 68]. Only four guidance resources had a pathology‐specific trial protocol checklist [17, 18, 32, 34]. A third of the guidance resources (n = 13) referenced one or more clinical research regulatory authority [9, 11, 18, 22, 25, 26, 32, 38, 40, 65, 67, 68, 69]. Detailed characteristics of individual guidance resources are provided in supplementary material, Table S1.
Quality assessment
The quality scores for each AGREE‐GRS domain (out of 100%) for individual guidance resources are available in supplementary material, Table S2. All guidance resources were considered to be highly relevant and applicable to clinical trial practice (Figure 2). The majority of the guidance resources were assessed as high quality (AGREE‐GRS score ≥67%). However, quality was low (AGREE‐GRS score ≤33%) in the AGREE‐GRS domain of ‘completeness of reporting’, which maps to the GRADE‐CERQual domain of ‘adequacy of data’.
Figure 2.

Proportion of guidance resources showing low, moderate, and high quality across AGREE‐GRS domains.
Heterogeneity across guidance resources
Sources of heterogeneity that could contribute to bias across guidance resources are listed in Table 1. The majority (58%) of the included guidance resources were discussion papers (including perspective or opinion papers), while the remainder consisted of a mixture of literature reviews, primary research papers, letters to editors, regulatory authority guidance documents, and methodology papers. Most resources (73%) were published in the last decade, whereas only two (5%) were published between 1996 and 2005; the remaining nine were published between 2006 and 2010. In terms of the guidance development process, nearly half (45%) of the resources employed a rigorous development methodology involving a formal working group at a national or international level, while 15 (38%) did not report a formal methodology; most of these (n = 11) were based on the opinions, perspectives, and experiences of a selected group of authors. Nearly half of the guidance resources (45%) were recommendations developed in the United States and/or Canada, whereas only a quarter were developed in the United Kingdom (n = 5) and other European countries (n = 5). Only 10 resources (25%) included recommendations developed by an internationally representative group of experts.
Table 1.
Heterogeneity across guidance resources.
| Sources of heterogeneity | | Number of resources (%) |
|---|---|---|
| Type of publication | Discussion paper | 23 (57.5) |
| | Literature review | 4 (10.0) |
| | Research paper (primary studies) | 3 (7.5) |
| | Short communication (letter) | 2 (5.0) |
| | Regulatory authority guidance document | 4 (10.0) |
| | Methods paper | 3 (7.5) |
| | Book chapter | 1 (2.5) |
| Year of publication | 1996–2000 | 1 (2.5) |
| | 2001–2005 | 1 (2.5) |
| | 2006–2010 | 9 (22.5) |
| | 2011–2015 | 16 (40.0) |
| | 2016–2020 | 13 (32.5) |
| Methodology used in guidance development | Formal consensus process involving international stakeholders | 9 (22.5) |
| | National Task Force consensus or formal working group consensus | 9 (22.5) |
| | Review of literature | 4 (10.0) |
| | Conclusions from primary studies | 3 (7.5) |
| | Guideline development methodology unreported | 15 (37.5) |
| | Opinions or perspectives of a single author or few authors | 11 (27.5) |
| | Regulatory agency guidance document | 3 (7.5) |
| | Book chapter | 1 (2.5) |
| Geographical representation of experts involved in guidance development | USA only | 15 (37.5) |
| | North America only | 3 (7.5) |
| | UK only | 5 (12.5) |
| | The Netherlands only | 1 (2.5) |
| | Europe only | 4 (10.0) |
| | North America and Europe | 2 (5.0) |
| | International | 10 (25.0) |
Results of individual guidance resources
The verbatim extracts of recommendation statements from each individual guidance resource, mapped onto the a priori framework (the SPIRIT 2013 Statement) [6], followed by its associated interpretation and ‘charting’ are available on the Open Science Framework online repository [72]. During the charting stage, the recommendation statements from the individual guidance resources were coded into themes that best describe the nature and type of recommendations offering best practice guidance at various stages of the clinical trial process. The charting process revealed that the themes could be broadly categorised according to stages of the clinical trial process, namely, pre‐analytical, analytical, and post‐analytical phases, as well as across all phases of the trial. Table 2 shows the frequency and distribution of resources offering guidance for each identified theme.
Table 2.
Results of individual guidance resources showing types of recommendations for pathology input in clinical trials.
| Recommendation category | Specific recommendation | Number of resources | % | References |
|---|---|---|---|---|
| Pre‐analytical phase | Personnel | | | |
| | Accreditation and training of pathologists | 8 | 20 | [9, 11, 13, 25, 38, 65, 68, 69] |
| | Accreditation and training of laboratory staff | 4 | 10 | [9, 13, 65, 68] |
| | Statistical and technical laboratory expertise for pathology‐based parameters | 8 | 20 | [11, 16, 17, 18, 26, 32, 42, 69] |
| | General laboratory systems and facilities | | | |
| | Laboratory accreditation | 10 | 25 | [7, 11, 13, 18, 25, 28, 38, 65, 68, 69] |
| | Assay validation and performance testing | 4 | 10 | [13, 17, 18, 25] |
| | Standardisation of laboratory processes (including standard operating procedures) | 4 | 10 | [11, 13, 65, 68] |
| | Rationale for pathology‐specific criteria | | | |
| | Use of biospecimens, specific biomarker, or associated analytical assay | 8 | 20 | [17, 18, 22, 24, 25, 32, 67, 71] |
| | Inclusion criteria or risk stratification | 3 | 7.5 | [22, 25, 32] |
| | Outcome measurement | 3 | 7.5 | [22, 32, 41] |
| | Biospecimens | | | |
| | Standardisation of sample collection and handling procedures | 7 | 17.5 | [7, 17, 19, 25, 28, 67, 70] |
| | Sample storage conditions | 5 | 12.5 | [19, 28, 67, 68, 70] |
| | Sample transport conditions across sites | 7 | 17.5 | [17, 19, 25, 28, 67, 68, 70] |
| | Biobank facilities | 4 | 10 | [19, 28, 67, 70] |
| | Disease‐specific pre‐analytical sampling and processing methods (see supplementary material, Table S3) | 4 | 10 | |
| | Multidisciplinary collaboration among all parties involved in trial | 10 | 25 | [7, 9, 11, 17, 18, 24, 25, 26, 65, 67] |
| | Funding of materials, laboratory staff, and pathologists | 4 | 10 | [11, 18, 32, 38] |
| Analytical phase | Microscopic assessment methods | | | |
| | Use of artificial intelligence for microscopic analysis | 2 | 5 | [29, 38] |
| | Central pathology review | 3 | 7.5 | [11, 35, 66] |
| | Histopathology reporting | 2 | 5 | [7, 68] |
| | Disease‐specific analytical methods (see supplementary material, Table S3) | 8 | 20 | [12, 14, 15, 23, 31, 34, 37, 40] |
| | Auditing and data validation | 8 | 20 | [11, 13, 18, 30, 35, 38, 66, 68] |
| Post‐analytical phase | Dissemination of results | | | |
| | Data sharing | 3 | 7.5 | [19, 28, 70] |
| | Transparent reporting | 3 | 7.5 | [27, 28, 70] |
| Across all trial phases | Data monitoring and validation | 5 | 12.5 | [18, 30, 42, 65, 68] |
| | Informed consent materials and supporting documentation to be given to participants | 8 | 20 | [11, 17, 18, 24, 39, 41, 67, 69] |
| | Confidentiality and data protection | 5 | 12.5 | [18, 25, 41, 67, 68] |
| | Ethics surrounding biospecimens collection, handling, storage, and transport | 4 | 10 | [24, 39, 41, 67] |
| | Ethics surrounding genetic testing and data sharing | 2 | 5 | [25, 67] |
The recommendation statements categorised according to recurring themes form the basis of synthesised recommendations. Other guidance resources offering standalone recommendations do not contribute to any recurring themes and are therefore not generalisable enough for synthesis; these are discussed below.
Five of the guidance resources contain recommendations specific only to certain types of clinical trials. These include best practice guidance for specific laboratory techniques such as tissue microarray construction and evaluation [42], consideration of laboratory technical parameters in omics‐based clinical trials [26], and the technical aspects of biomarker development [16] and biomarker integration [18, 25] in early‐phase clinical trials.
Eleven of the guidance resources offer recommendations focused on the pre‐analytical and analytical phases of clinical trials specific to a clinical specialty, such as breast cancer [12, 20, 21, 31, 40], non‐small cell lung cancer [34], paediatric neuroblastoma [37], paediatric rheumatology [36], prostate cancer [14], and lymphoma [15, 23]. Details of these disease‐specific recommendations are shown in supplementary material, Table S3.
The guidance documents from the European Medicines Agency [65] and the Medicines and Healthcare products Regulatory Agency [68] stipulate the minimum regulatory standards to which all laboratories supporting clinical trial work should adhere. As meeting these standards is a legal requirement, these two resources have not been included in the recommendation synthesis.
Synthesis of recommendations
Best practice recommendation statements that contributed to the recurring themes were gathered and synthesised from 22 of the guidance resources [7, 9, 11, 13, 17, 19, 22, 24, 27, 28, 29, 30, 32, 35, 38, 39, 41, 66, 67, 69, 70, 71] using the JBI meta‐aggregation approach (see supplementary material, Table S4). The GRADE‐CERQual evidence profile (see supplementary material, Table S5) shows the CERQual assessment details, with reasons for reaching the judgements, for each of the four GRADE‐CERQual components. The synthesised pathology‐specific recommendations at different stages of the clinical trial process, along with the overall confidence in each synthesised statement, are as follows:
Pre‐analytical phase
Recommendation 1 (R1): The responsibilities and level of involvement in clinical trial work should be agreed upon among all multidisciplinary collaborators and formally documented prior to trial opening. Input from pathologists and other relevant personnel with technical laboratory and statistical expertise and experience should be sought during the development of the trial protocol, trial design, and implementation, in particular for justifying the use of biospecimens and/or a specific biomarker and associated analytical assays in the clinical trial, as well as for reaching a consensus on pre‐specified definitions of pathology‐related parameters when interpreting findings for trial inclusion or risk stratification. (Moderate confidence).
Recommendation 2 (R2): All personnel undertaking any aspect of clinical trial work should have proper accreditation and sufficient funding and training corresponding to their involvement and role in the clinical trial. (High confidence).
Recommendation 3 (R3): The laboratory site selected to carry out trial work should have:
appropriate accreditation, with practices of laboratory management and operations complying with standards of regular external quality assurance schemes;
capacity to adhere to trial‐specific standard operating procedures for the testing platforms and the preparation and storage of samples; and
suitable facilities required for the trial (e.g. accredited digital pathology platforms). (Moderate confidence).
Analytical phase
Recommendation 4 (R4): Plans to ensure the completeness and accuracy of pathology‐related data sets in clinical trials should be clearly documented and should address the following:
a system for prospective rapid real‐time central pathology review or double reporting with consensus to achieve uniformity in diagnosis;
use of standardised digital pathology platforms where appropriate;
data quality review by a trained pathology quality manager or review committee to ensure adherence to standardised pathology examination and interpretation procedures; and
regular analytical audits of internal testing platforms and assay performance testing and validation of inter‐pathologist reproducibility and inter‐laboratory repeatability analysis. (High confidence).
Post‐analytical phase
Recommendation 5 (R5): The pathology‐relevant aspects of clinical trials should be transparently reported according to the BRISQ (Biospecimen Reporting for Improved Study Quality) checklist and REMARK (REporting recommendations for tumour MARKer prognostic studies) criteria, where appropriate, to include specific details relevant to biospecimen procurement, type, anatomical site, and associated patients' clinical details, as well as protocols for tissue preparation, preservation, and biomarker staining parameters. These standard operating procedures for the management of biospecimens during the clinical trial should be registered on a publicly accessible database, with digital location cited in the research publication. (Low confidence).
Across all trial phases
Recommendation 6 (R6): Laboratories and biorepositories should have policies and procedures with secure informatics systems in place to minimise risks of harm to participants and to protect the confidentiality and data of participants, including anonymising collected biospecimens, tracking the movement of biospecimens within and across sites, and ensuring consent for biospecimen retention is valid prior to storage. (Moderate confidence).
Recommendation 7 (R7): Patient information sheets and consent forms should be reviewed by pathologists and should include information pertaining to the rationale of the use of biospecimen within the context of the clinical trial; the risks and benefits involved; how data will be analysed, stored, transferred between institutions, and shared with their healthcare providers; the details of biospecimen collection (type, frequency, volume, or size of sample); and specimen retention policies for future research. In studies involving genomics or genetics, participants should specifically be counselled on the implications of a positive result on themselves and their relatives. (Moderate confidence).
Discussion
Applicability of synthesised findings to clinical trial practice
This review yielded seven synthesised recommendation statements covering the pre‐analytical, analytical, and post‐analytical phases of clinical trials, as well as guidance pertaining to biospecimen ethics and informed consent documentation, which spans all phases of the clinical trial process. The GRADE‐CERQual Summary of Qualitative Findings (see supplementary material, Table S6) summarises the synthesised recommendations and their respective overall quality. It also shows the applicability of each of the seven synthesised recommendation statements to particular aspects of the clinical trial process, their relevance to different groups of key clinical trial stakeholders, and their implications for changing clinical trial practice in trial protocol development and trial reporting, within the context of the SPIRIT Statement [6] and CONSORT Statement [73] checklist items.
Strengths and limitations
This systematic review has been transparently reported according to the PRISMA 2009 Statement. Selection bias of resources was minimised by placing no restrictions on language or publication type and by having two reviewers independently select the studies for inclusion. Publication bias was mitigated by searching several databases and the web. Furthermore, this systematic review was conducted in accordance with the international best practice guidance provided by the JBI Manual for Evidence Synthesis [57] for reviews of text and opinion, except for the use of alternative tools for appraising the quality of individual studies (AGREE‐GRS instead of the JBI Qualitative Critical Appraisal Checklist) and for establishing confidence in the synthesised evidence (GRADE‐CERQual instead of the ConQual approach). The AGREE‐GRS tool was used because its checklist questions are better suited than those of the JBI Qualitative Critical Appraisal Checklist [57] to evaluating the type of content (recommendations and guidelines) presented in the papers included in this systematic review. The GRADE‐CERQual approach was subsequently adopted because the AGREE‐GRS domains map more directly onto the GRADE‐CERQual assessment domains than onto the ConQual domains. During data synthesis, the formulation of recommendation statements and the quality assessment of cumulative evidence using the CERQual approach were determined through the consensus of three reviewers.
Individual guidance resources were found to be heterogeneous. The majority of publications were from the last decade, suggesting relevance to current pathology‐related practices within clinical trials, such as the use of digital pathology in data quality assurance and central pathology review. However, more than half of the included resources were discussion papers in the form of opinion and perspective papers, which is the main limitation of this review in terms of position on the evidence hierarchy. Indeed, for most of the synthesised recommendations downgraded to ‘moderate’ overall confidence, the downgrading reflected ‘moderate methodological limitations’ on the CERQual assessment, arising from incomplete documentation of the guideline development process by the working groups that published the perspective papers. Furthermore, the preponderance of publications with organisational representation from Western countries, particularly North America, may introduce bias and result in synthesised recommendations that are not generalisable to trials conducted in other geographical locations.
Suggestions for future guidance developments
Evidence from this systematic review suggests that the current literature contains recommendations addressing most aspects of pathology input into clinical trials. However, it has not kept pace with domains of pathology that have evolved with the recent rapid advances in technologies underpinning personalised medicine, which have revolutionised conventional diagnostic pathology.
There is scope for future development of pathology‐specific guidance for clinical trials in terms of transparent reporting of technology‐dependent pathology parameters, particularly pertaining to the use of standardised digital platforms for histopathological assessment, as well as techniques for biomarker analysis in the molecular classification of diseases. Recommendations for genomic profiling methods used in molecular diagnostics, which have clinical implications for treatment, disease monitoring, and prognostication, are also currently lacking. These gaps in the literature can be effectively addressed by appropriate guideline development initiatives [74].
Conclusions
This systematic review to synthesise pathology‐specific recommendations within clinical trials is the first of its kind. The findings will inform an international effort to develop a pathology extension of the SPIRIT Statement [6], called SPIRIT‐Path [74], using the Delphi consensus method involving stakeholders from diverse backgrounds. Implementation of pathology‐specific best practice recommendations into clinical trial protocols will enhance the methodological and scientific rigour of trial delivery, improve the reliability of evidence, and translate into rational healthcare improvements for the benefit of patients worldwide.
Author contributions statement
DO conceived the idea. All the authors contributed to the development of the protocol. KG provided methodological advice. SJL, TJK and MR extracted, analysed, and synthesised the data. SJL and MR wrote the first draft of the manuscript, and all authors critically reviewed and approved the final version of the manuscript.
Supporting information
Table S1. Characteristics of included guidance resources
Table S2. Quality assessment of individual guidance resources
Table S3. Clinical specialty‐specific recommendations for biospecimen collection, processing, and histopathological assessment in clinical trials
Table S4. Meta‐aggregative synthesis flowchart of pathology‐specific recommendations according to different stages of the clinical trial process
Table S5. GRADE‐CERQual evidence profile
Table S6. GRADE‐CERQual summary of qualitative findings
Acknowledgements
The work was supported by the National Cancer Research Institute Cellular Molecular Pathology Initiative (NCRI CM‐Path), which is funded by a collaborative venture between 10 organisations: Bloodwise, Breast Cancer Now, Cancer Research UK, the Chief Scientist Office (Scotland), the Department of Health (England), Health and Care Research Wales (Welsh Government), Health and Social Care (N Ireland), the Medical Research Council, Prostate Cancer UK, and Tenovus Cancer Care.
No conflicts of interest were declared.
References
- 1. Hariton E, Locascio JJ. Randomised controlled trials – the gold standard for effectiveness research. BJOG 2018; 125: 1716.
- 2. National Institute for Health Research. Clinical Trials Guide. 2019. [Accessed 7 July 2020]. Available from: https://www.nihr.ac.uk/documents/clinical-trials-guide/20595
- 3. Bhide A, Shah PS, Acharya G. A simplified guide to randomized controlled trials. Acta Obstet Gynecol Scand 2018; 97: 380–387.
- 4. Hartling L, Hamm M, Klassen T, et al. Standard 2: containing risk of bias. Pediatrics 2012; 129 (Suppl 3): S124–S131.
- 5. Joseph PD, Caldwell PHY, Barnes EH, et al. Completeness of protocols for clinical trials in children submitted to ethics committees. J Paediatr Child Health 2019; 55: 291–298.
- 6. Chan AW, Tetzlaff JM, Altman DG, et al. SPIRIT 2013 statement: defining standard protocol items for clinical trials. Ann Intern Med 2013; 13: 1501–1507.
- 7. Nagtegaal ID, West NP, van Krieken JHJM, et al. Pathology is a necessary and informative tool in oncology clinical trials. J Pathol 2014; 232: 185–189.
- 8. Röcken C. Quality assurance in clinical trials—the role of pathology. Virchows Arch 2016; 468: 83–92.
- 9. Rees G, Salto‐Tellez M, Lee J, et al. Training and accreditation standards for pathologists undertaking clinical trial work. J Pathol Clin Res 2019; 5: 100–107.
- 10. Provenzano E, Driskell OJ, O'Connor DJ, et al. The important role of the histopathologist in clinical trials: challenges and approaches to tackle them. Histopathology 2020; 76: 942–949.
- 11. Robinson M, James J, Thomas G, et al. Quality assurance guidance for scoring and reporting for pathologists and laboratories undertaking clinical trial work. J Pathol Clin Res 2019; 5: 91–99.
- 12. Salgado R, Denkert C, Demaria S, et al. The evaluation of tumor‐infiltrating lymphocytes (TILs) in breast cancer: recommendations by an International TILs Working Group 2014. Ann Oncol 2015; 26: 259–271.
- 13. Sarzotti‐Kelsoe M, Cox J, Cleland N, et al. Evaluation and recommendations on good clinical laboratory practice guidelines for phase I‐III clinical trials. PLoS Med 2009; 6: e1000067.
- 14. Scher HI, Morris MJ, Stadler WM, et al. Trial design and objectives for castration‐resistant prostate cancer: updated recommendations from the Prostate Cancer Clinical Trials Working Group 3. J Clin Oncol 2016; 34: 1402–1418.
- 15. Spurgeon SE, Till BG, Martin P, et al. Recommendations for clinical trial development in mantle cell lymphoma. J Natl Cancer Inst 2017; 109: djw263.
- 16. Stewart DJ, Kurzrock R. Fool's gold, lost treasures, and the randomized clinical trial. BMC Cancer 2013; 13: 193.
- 17. Hall JA, Daidone MG, Peters GJ, et al. Integrating collection of biospecimens in clinical trials: the approach of the European Organization for Research and Treatment of Cancer. Biopreserv Biobank 2011; 9: 181–186.
- 18. Hall JA, Salgado R, Lively T, et al. A risk‐management approach for effective integration of biomarkers in clinical trials: perspectives of an NCI, NCRI, and EORTC working group. Lancet Oncol 2014; 15: e184–e193.
- 19. LaBaer J. Improving international research with clinical specimens: 5 achievable objectives. J Proteome Res 2012; 11: 5592–5601.
- 20. Leyland‐Jones BR, Ambrosone CB, Bartlett J, et al. Recommendations for collection and handling of specimens from group breast cancer clinical trials. J Clin Oncol 2008; 26: 5638–5644.
- 21. Loi S, Symmans WF, Bartlett JMS, et al. Proposals for uniform collection of biospecimens from neoadjuvant breast cancer clinical trials: timing and specimen types. Lancet Oncol 2011; 12: 1162–1168.
- 22. LoRusso PM, Boerner SA, Seymour L. An overview of the optimal planning, design, and conduct of phase I studies of new therapeutics. Clin Cancer Res 2010; 16: 1710–1718.
- 23. Maddocks K, Barr PM, Cheson BD, et al. Recommendations for clinical trial development in follicular lymphoma. J Natl Cancer Inst 2017; 109: djw255.
- 24. Makhlouf H, Watson MA, Lankes HA, et al. Toward improving practices for submission of diagnostic tissue blocks for National Cancer Institute Clinical Trials. Am J Clin Pathol 2019; 153: 149–155.
- 25. Marton MJ, Weiner R. Practical guidance for implementing predictive biomarkers into early phase clinical studies. Biomed Res Int 2013; 2013: 891391.
- 26. McShane LM, Cavenagh MM, Lively TG, et al. Criteria for the use of omics‐based predictors in clinical trials. Nature 2013; 502: 317–320.
- 27. McShane LM, Hayes DF. Publication of tumor marker research results: the necessity for complete and transparent reporting. J Clin Oncol 2012; 30: 4223–4232.
- 28. Moore HM, Kelly AB, Jewell SD, et al. Biospecimen Reporting for Improved Study Quality (BRISQ). J Proteome Res 2011; 10: 3429–3438.
- 29. Mroz P, Parwani AV, Kulesza P. Central pathology review for phase III clinical trials: the enabling effect of virtual microscopy. Arch Pathol Lab Med 2013; 137: 492–495.
- 30. Nagtegaal I, Klein Kranenbarg E, Hermans J, et al. Pathology data in the central databases of multicenter randomized trials need to be based on pathology reports and controlled by trained quality managers. J Clin Oncol 2000; 18: 1771–1779.
- 31. Bossuyt V, Provenzano E, Symmans WF, et al. Recommendations for standardized pathological characterization of residual disease for neoadjuvant clinical trials of breast cancer by the BIG‐NABCG collaboration. Ann Oncol 2015; 26: 1280–1291.
- 32. Dancey JE, Dobbin KK, Groshen S, et al. Guidelines for the development and incorporation of biomarker studies in early clinical trials of novel agents. Clin Cancer Res 2010; 16: 1745–1755.
- 33. Driskell OJ, Lee J, Oien K, et al. Pathology and regulation for research in the UK: an overview. F1000Res 2019; 8: 975.
- 34. Eberhard DA, Giaccone G, Johnson BE. Biomarkers of response to epidermal growth factor receptor inhibitors in non‐small‐cell lung cancer working group: standardization for use in the clinical trial setting. J Clin Oncol 2008; 26: 983–994.
- 35. Vujanić GM, Sandstedt B, Kelsey A, et al. Central pathology review in multicenter trials and studies: lessons from the nephroblastoma trials. Cancer 2009; 115: 1977–1983.
- 36. Yeung R, Albani S, Feldman B, et al. Enhancing translational research in paediatric rheumatology through standardization. Nat Rev Rheumatol 2016; 12: 684–690.
- 37. Park JR, Bagatell R, Cohn SL, et al. Revisions to the International Neuroblastoma Response Criteria: a consensus statement from the National Cancer Institute Clinical Trials Planning Meeting. J Clin Oncol 2017; 35: 2580–2587.
- 38. Pell R, Oien K, Robinson M, et al. The use of digital pathology and image analysis in clinical trials. J Pathol Clin Res 2019; 5: 81–90.
- 39. Peppercorn J, Shapira I, Collyar D, et al. Ethics of mandatory research biopsy for correlative end points within clinical trials in oncology. J Clin Oncol 2010; 28: 2635–2640.
- 40. Provenzano E, Bossuyt V, Viale G, et al. Standardization of pathologic evaluation and reporting of postneoadjuvant specimens in clinical trials of breast cancer: recommendations from an international working group. Mod Pathol 2015; 28: 1185–1201.
- 41. Ricci DS, Franc M. Implementation of pharmacogenomic sample collection in clinical trials. In Pharmacogenomics and Personalized Medicine, Methods in Pharmacology and Toxicology, C Nadine (Ed.). Humana Press: Totowa, 2008; 27–54. Available from: https://link.springer.com/book/10.1007/978-1-59745-439-1.
- 42. Rimm DL, Nielsen TO, Jewell SD, et al. Cancer and Leukemia Group B Pathology Committee guidelines for tissue microarray construction representing multicenter prospective clinical trial tissues. J Clin Oncol 2011; 29: 2282–2290.
- 43. Munn Z, Stern C, Aromataris E, et al. What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences. BMC Med Res Methodol 2018; 18: 1–9.
- 44. Lim SJ. The SPIRIT‐Path Extension (Updated). Open Science Framework. 2020. [Accessed 4 September 2020]. Available from: https://osf.io/jeqtx
- 45. Moher D, Liberati A, Tetzlaff J, et al. Preferred reporting items for systematic reviews and meta‐analyses: the PRISMA statement. BMJ 2009; 339: b2535.
- 46. CONSORT. Consort – History. 2013. [Accessed 7 July 2020]. Available from: http://www.consort-statement.org/about-consort/history
- 47. iPROSPECT. iProspect Search Engine User Behavior Study. 2006. [Accessed 7 July 2020]. Available from: http://www.iprospect.com/premiumPDFs/WhitePaper_2006_SearchEngineUserBehavior.pdf
- 48. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics 1977; 33: 159–174.
- 49. Brouwers MC, Kho ME, Browman GP, et al. The Global Rating Scale complements the AGREE II in advancing the quality of practice guidelines. J Clin Epidemiol 2012; 65: 526–534.
- 50. AGREE. AGREE II User Manual. 2017; 9. [Accessed 9 July 2020]. Available from: https://www.agreetrust.org/wp‐content/uploads/2017/12/AGREE‐II‐Users‐Manual‐and‐23‐item‐Instrument‐2009‐Update‐2017.pdf
- 51. Colvin CJ, Garside R, Wainwright M, et al. Applying GRADE‐CERQual to qualitative evidence synthesis findings‐paper 4: how to assess coherence. Implement Sci 2018; 13 (Suppl 1): 13.
- 52. Glenton C, Carlsen B, Lewin S, et al. Applying GRADE‐CERQual to qualitative evidence synthesis findings‐paper 5: how to assess adequacy of data. Implement Sci 2018; 13 (Suppl 1): 14.
- 53. Lewin S, Bohren M, Rashidian A, et al. Applying GRADE‐CERQual to qualitative evidence synthesis findings‐paper 2: how to make an overall CERQual assessment of confidence and create a Summary of Qualitative Findings table. Implement Sci 2018; 13 (Suppl 1): 10.
- 54. Lewin S, Booth A, Glenton C, et al. Applying GRADE‐CERQual to qualitative evidence synthesis findings: introduction to the series. Implement Sci 2018; 13 (Suppl 1): 1–10.
- 55. Munthe‐Kaas H, Bohren MA, Glenton C, et al. Applying GRADE‐CERQual to qualitative evidence synthesis findings‐paper 3: how to assess methodological limitations. Implement Sci 2018; 13 (Suppl 1): 9.
- 56. Noyes J, Booth A, Lewin S, et al. Applying GRADE‐CERQual to qualitative evidence synthesis findings‐paper 6: how to assess relevance of the data. Implement Sci 2018; 13 (Suppl 1): 4.
- 57. McArthur A, Klugarova J, Yan H, et al. Chapter 4: systematic reviews of text and opinion. In JBI Manual for Evidence Synthesis, Aromataris E, Munn Z (Eds). JBI: Adelaide, 2020; 134–174.
- 58. McArthur A, Klugarova J, Yan H, et al. 4.3.4.5.2 Textual Data Extraction – Systematic Reviews of Text and Opinion. JBI Manual for Evidence Synthesis. 2020. [Accessed 18 July 2020]. Available from: https://wiki.joannabriggs.org/display/MANUAL/4.3.4.5.2+Textual+data+extraction
- 59. Oxford University Press. Oxford English Dictionary. 2021. [Accessed 7 July 2020]. Available from: https://www.oed.com/
- 60. Carroll C, Booth A, Cooper K. A worked example of ‘best fit’ framework synthesis: a systematic review of views concerning the taking of some potential chemopreventive agents. BMC Med Res Methodol 2011; 11: 29.
- 61. Carroll C, Booth A, Leaviss J, et al. ‘Best fit’ framework synthesis: refining the method. BMC Med Res Methodol 2013; 13: 37.
- 62. Brunton G, Oliver S, Thomas J. Innovations in framework synthesis as a systematic review method. Res Synth Methods 2020; 11: 316–330.
- 63. McArthur A, Klugarova J, Yan H, et al. 4.3.4.5.3 Textual Data Synthesis – Systematic Reviews of Text and Opinion. JBI Manual for Evidence Synthesis. 2020. [Accessed 20 July 2020]. Available from: https://wiki.joannabriggs.org/display/MANUAL/4.3.4.5.3+Textual+data+synthesis
- 64. McArthur A, Klugarova J, Yan H, et al. 4.3.4.6.3 Findings of the Review – Systematic Reviews of Text and Opinion. JBI Manual for Evidence Synthesis. 2020. [Accessed 20 July 2020]. Available from: https://wiki.joannabriggs.org/display/MANUAL/4.3.4.6.3+Findings+of+the+review
- 65. European Medicines Agency. Reflection paper for laboratories that perform the analysis or evaluation of clinical trial samples. EMEA Guidance Document. European Medicines Agency: London, 2012; 1–19. [Accessed 8 August 2020]. Available from: https://www.ema.europa.eu/en/documents/regulatory‐procedural‐guideline/reflection‐paper‐laboratories‐perform‐analysis‐evaluation‐clinical‐trial‐samples_en.pdf
- 66. Pollack IF, Boyett JM, Yates AJ, et al. The influence of central review on outcome associations in childhood malignant gliomas: results from the CCG‐945 experience. Neuro Oncol 2003; 5: 197–207.
- 67. National Cancer Institute. NCI best practices for biospecimen resources. National Institutes of Health National Cancer Institute: Bethesda, 2016; 1–81. [Accessed 8 August 2020]. Available from: https://biospecimens.cancer.gov/bestpractices/2016-NCIBestPractices.pdf
- 68. Medicines and Healthcare products Regulatory Agency (MHRA). Guidance on the maintenance of regulatory compliance in laboratories that perform the analysis or evaluation of clinical trial samples. 2009. [Accessed 3 April 2020]. Available from: http://www.mhra.gov.uk/home/groups/is-insp/documents/websiteresources/con051910.pdf
- 69. The Association of the British Pharmaceutical Industry (ABPI). Guidelines for Phase I Clinical Trials. 2018. [Accessed 3 April 2020]. Available from: https://www.abpi.org.uk/media/4992/guidelines‐for‐phase‐i‐clinical‐trials‐2018‐edition‐20180626.pdf
- 70. LaBaer J, Miceli JF, Freedman LP. What's in a sample? Increasing transparency in biospecimen procurement methods. Nat Methods 2018; 15: 303–304.
- 71. Moorthy T. Targeted therapy: a potential oversight in trial protocol. Lancet Oncol 2018; 19: e439.
- 72. Lim SJ. Full findings of individual resources: verbatim extracts of recommendation and associated interpretation. 2020. [Accessed 31 August 2020]. Available from: https://osf.io/k7bt3
- 73. Schulz KF, Altman DG, Moher D. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials. BMJ 2010; 340: 698–702.
- 74. National Cancer Research Institute. SPIRIT‐Path – NCRI CMPath. 2020. [Accessed 8 August 2020]. Available from: https://cmpath.ncri.org.uk/our-priorities/spirit-path/