Abstract
Background
The field of pathology has not yet fully realized the potential of artificial intelligence (AI) and digital pathology. Adoption must be driven by demonstrable utility, and successful implementation depends on interoperability and sustainability, which require established standards. We address the imminent challenges facing the field of AI in digital pathology, which currently suffers from a lack of coordinated and adopted standards.
Methods
We conducted several roundtable discussions with key opinion leaders from multiple sectors across the healthcare ecosystem. Based on how standards are used, we distinguish different areas of practice (relevance, endorsement, and utility) and use them to frame the importance of standards.
Results
Our roundtable discussions centered on one key theme: successfully implementing AI in digital pathology depends on achieving a certain level of uniformity across practices. We derive an approach to describe the critical role of standards consisting of seven interdependent areas of practice: value recognition, existing standards, dependencies for AI, failures, management of standards, trends, and a roadmap for accelerated and sustainable adoption. Both the promise of standards and our approach are best understood through the interconnection of these areas. We address imminent challenges surrounding the field of digital pathology by providing an approach for the interoperable, coordinated, and sustainable use of standards across diverse practice settings.
Conclusion
With the concepts and frameworks outlined in this article, we highlight the importance of standards in pathology and their crucial role in driving computational advances and enabling AI solutions to enhance patient care.
Keywords: DICOM, FHIR, HL7, Biomarker
Introduction
Standards are essential building blocks of modern healthcare.1,2 In lab medicine, a standard is typically characterized by: (i) a consensus-driven specification, (ii) an established governance or maintenance process, and (iii) clear expectations for conformance or implementation. In pathology, standards enable consistency in slide size, staining protocols,3 magnification levels, and image formats.4, 5, 6, 7 These conventions are so embedded in daily workflows that they often go unnoticed.8 Yet, despite this reliance, the configuration and implementation of standards remain highly variable across institutions and geographies, shaped more by local precedent than by shared frameworks.9, 10 This variability poses increasing challenges as the field undergoes a digital transformation11, 12, 13 (Fig. 1).
Fig. 1.
Pathologists use many “Standards”. Standards in pathology fall into three broad groups: structural (e.g., setting or size), process (e.g., standardized activities, standard operating procedures), or outcomes (e.g., standardized evaluation of an activity). Each group is guided by foundational, governance, and interoperability standards. However, here we focus on standards directly related to realizing AI in digital pathology.
The rationale for broad adoption is not technological inevitability, but rather the longstanding principle that standardized and reproducible practices (central to clinical practice, clinical trials, and diagnostic pathology alike) are what allow innovations such as AI-derived biomarkers to deliver measurable utility for patients and society.14, 15, 16, 17
Given the proliferation of annotation formats across the field (e.g., CSV, XML, JSON, or GeoJSON), establishing clear, harmonized guidelines for the encoding, exchange, and provenance of annotations is essential to enable interoperability, reproducibility, and scalable AI development in digital pathology.
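To make this concrete, the following minimal sketch shows what a harmonized annotation record could look like, expressed as a GeoJSON Feature built in Python. The property names (label, slide_id, annotator, software) are hypothetical illustrations of provenance fields, not drawn from any ratified standard.

```python
import json

# A minimal sketch of a harmonized slide annotation as a GeoJSON Feature.
# The provenance property names below are hypothetical illustrations,
# not part of any ratified standard.
annotation = {
    "type": "Feature",
    "geometry": {
        "type": "Polygon",
        # Vertex coordinates in base-layer (level 0) pixel space;
        # the ring closes by repeating the first vertex.
        "coordinates": [[[1200, 3400], [1850, 3400], [1850, 4100],
                         [1200, 4100], [1200, 3400]]],
    },
    "properties": {
        "label": "tumor",               # free-text or coded term
        "slide_id": "SLIDE-0001",       # hypothetical slide identifier
        "annotator": "pathologist_01",  # who created the annotation
        "software": "viewer-x/2.3",     # authoring tool and version
    },
}

# Plain-text serialization makes the record exchangeable across tools.
print(json.dumps(annotation, indent=2))
```

Because such records are plain text, they can be exchanged, versioned, and validated with general-purpose tooling, which is precisely what harmonized guidelines would need to specify.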
Standards are best adopted in incremental, value-aligned steps, not as a single comprehensive package. However, not all parameters require domain-specific standards; where higher-precedence standards such as SI units already provide an adequate, continuous representation of a variable, additional standardization may be unnecessary. In practice, a lab director can distinguish a true standard from a “standard-adjacent” tool by assessing whether it reflects formal consensus, includes explicit conformance requirements, and is maintained by a recognized standards body; benchmarking utilities such as MedHELM or HistoQC support evaluation and quality monitoring but do not constitute validation standards themselves.
Digital pathology and the integration of artificial intelligence (AI) have introduced new complexities to the discipline18 that traditional standards are not fully equipped to address.11, 12, 13, 19 AI, and in particular machine learning, depends on structured, high-quality data to train, validate, and deploy algorithms that emulate diagnostic decision-making.20, 21 This dependency intensifies the need for interoperable formats, semantic consistency, and quality control across the data lifecycle.22, 23 However, current data collection practices in pathology are fragmented, inconsistent, and largely unstructured, creating a fundamental misalignment between what AI requires and what routine workflows produce.24
The transformative potential of AI in pathology is widely acknowledged, yet the underlying infrastructure required to support it remains underdeveloped. Whereas individual standards (e.g., DICOM for imaging,25, 26 ICC for color calibration,27, 28 or IHE/PaLM for data workflows29) have made progress in specific domains,23, 30, 31, 32 there is no comprehensive, up-to-date resource that maps out the standards ecosystem in computational pathology, particularly with respect to AI implementation.1
We considered this absence a critical gap. Unlike in radiology or genomics, where standards have been widely adopted and are actively maintained, pathology lacks a unified reference framework that links standards development to AI-readiness, regulatory alignment, and clinical implementation. Without such a resource, institutions, vendors, and regulators may struggle to coordinate efforts, assess compliance, or scale solutions across systems.
The aim of this work is to address this gap by providing a structured overview of standards in computational pathology, with a focus on their relevance for AI. We examine the types, functions, and current state of applicable standards, highlight critical areas for development, and outline how standards can serve as enablers (i.e., not barriers) to safe, scalable, and interoperable AI integration. We aim to support informed decision-making by stakeholders across domains and contribute to a shared foundation for innovation in digital pathology and AI in research and patient care.
Methods
This work employed a structured narrative review approach to characterize existing and emerging standards relevant to computational pathology, with an emphasis on their role in enabling AI. The objective was not to evaluate diagnostic performance or conduct a systematic review, but rather to map the standards landscape and highlight design, implementation, and maintenance considerations.
Scope and selection criteria
We focused on standards that are directly or indirectly applicable to digital pathology workflows, image analysis, and AI model development. Inclusion criteria for standards were: (1) relevance to data capture, processing, or interoperability in pathology, (2) endorsement or recognition by regulatory, professional, or standards development organizations, and (3) potential utility for AI-enabled applications.
Data sources and expert input
Sources included peer-reviewed literature, publicly available technical specifications, regulatory guidance documents (e.g., FDA-recognized consensus standards), and outputs from international standards bodies (e.g., ISO, DICOM, ICC, and IHE). Additional insights were provided by domain experts through collaborative contributions from members of professional societies, industry, and regulatory science communities involved in standard-setting or AI implementation.
Organization and analysis
Identified standards were grouped into four categories based on function: (1) foundational (e.g., data structure and image formats), (2) governance (e.g., quality management and validation protocols), (3) interoperability (e.g., communication between systems), and (4) application-specific standards (e.g., AI explainability, performance monitoring). For each, we assessed scope, adoption status, relevance to AI, and known challenges or limitations. The analysis also considered alignment with regulatory frameworks and future trends such as federated learning and synthetic data use.
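As a minimal sketch of how this grouping can be operationalized, the following Python snippet mirrors a few entries from Table 1; the structure is ours, for illustration, and not a schema published by any standards body.

```python
from enum import Enum

class StandardCategory(Enum):
    FOUNDATIONAL = "foundational"          # data structures, image formats
    GOVERNANCE = "governance"              # quality management, validation
    INTEROPERABILITY = "interoperability"  # communication between systems
    APPLICATION_SPECIFIC = "application"   # e.g., AI explainability

# Example entries mirroring Table 1; the mapping is illustrative only.
registry = {
    "DICOM": StandardCategory.INTEROPERABILITY,
    "OME-TIFF": StandardCategory.FOUNDATIONAL,
    "ICC profile": StandardCategory.FOUNDATIONAL,
    "CPT": StandardCategory.GOVERNANCE,
    "AJCC/TNM": StandardCategory.APPLICATION_SPECIFIC,
}

for name, category in registry.items():
    print(f"{name}: {category.value}")
```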
Results
Existing standards in digital pathology
A range of foundational and interoperability-focused standards already exist within digital pathology. Among the most widely recognized is the Digital Imaging and Communications in Medicine (DICOM) standard, originally developed for radiology and later extended by Working Group 26 to address pathology-specific requirements. These include support for whole-slide imaging (WSI), image metadata, and hierarchical image tiling. DICOM enables standardized image exchange across systems and has become increasingly important with the growth of telepathology and AI validation studies that rely on multicenter image sharing. DICOM leverages SNOMED CT as a standardized ontology of medical concepts to support semantic interoperability by enabling consistent annotation and tagging of clinical content.
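As an illustration, the following minimal sketch inspects a DICOM WSI file, assuming the open-source pydicom library and a hypothetical local file path; the attribute names are standard DICOM keywords from the VL Whole Slide Microscopy Image object.

```python
import pydicom

# Read metadata only; the pixel data of a WSI level can be very large.
ds = pydicom.dcmread("slide_level0.dcm", stop_before_pixels=True)

# Standard DICOM keywords from the VL Whole Slide Microscopy Image object.
print("SOP Class UID:", ds.SOPClassUID)
print("Total pixel matrix:",
      ds.TotalPixelMatrixColumns, "x", ds.TotalPixelMatrixRows)
print("Image type:", ds.ImageType)  # e.g., VOLUME, LABEL, or OVERVIEW

# Specimen metadata, where coded concepts (e.g., SNOMED CT) may appear.
if "SpecimenDescriptionSequence" in ds:
    specimen = ds.SpecimenDescriptionSequence[0]
    print("Specimen ID:", specimen.SpecimenIdentifier)
```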
The International Color Consortium (ICC) standard plays a key role in ensuring consistent color representation across devices, which is critical for reproducibility in digital image analysis.33, 34 ICC profiles are used to align the color output from scanners and displays, thereby reducing variability in computational image interpretation, particularly in tasks such as PD-L1 or HER2 scoring or tumor segmentation.35, 36
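The snippet below sketches how an embedded ICC profile might be applied in practice, assuming the Pillow library and a tile exported as a standard image file; the file names are placeholders.

```python
import io
from PIL import Image, ImageCms

img = Image.open("tile.png")  # placeholder path to an exported tile

icc_bytes = img.info.get("icc_profile")  # embedded profile, if present
if icc_bytes:
    src_profile = ImageCms.ImageCmsProfile(io.BytesIO(icc_bytes))
    dst_profile = ImageCms.createProfile("sRGB")
    # Map scanner-specific color into a common sRGB representation so
    # downstream analysis sees consistent color across devices.
    img = ImageCms.profileToProfile(img, src_profile, dst_profile)

img.save("tile_srgb.png")
```

Converting scanner output into a common working space such as sRGB is one pragmatic way to reduce device-dependent color variability before quantitative analysis.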
The Integrating the Healthcare Enterprise – Pathology and Laboratory Medicine (IHE PaLM) initiative focuses on workflow and data integration, leveraging existing standards whenever possible to solve interoperability challenges. This initiative has established technical frameworks that enable interoperability between laboratory information systems (LISs), electronic health records (EHRs), WSI scanners, digital pathology viewers, image archives, and image analysis software.37 These integration profiles support standardized communication protocols and play a pivotal role in the scalable deployment of AI tools.
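The actual transactions are defined in the IHE PaLM technical frameworks and commonly build on HL7 messaging; as one illustration of the kind of structured payload such integration implies, the following minimal HL7 FHIR R4 DiagnosticReport skeleton is expressed as a Python dict. The references and codes are placeholders, and this is not itself an IHE PaLM transaction.

```python
import json

# A minimal HL7 FHIR R4 DiagnosticReport skeleton for a pathology result.
# All identifiers and codes below are placeholders, not real values.
report = {
    "resourceType": "DiagnosticReport",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "<report-type-code>",      # placeholder LOINC code
        }]
    },
    "subject": {"reference": "Patient/<id>"},  # placeholder reference
    "imagingStudy": [{"reference": "ImagingStudy/<wsi-study-id>"}],
    "conclusion": "Example narrative diagnosis.",
    "conclusionCode": [{
        "coding": [{
            "system": "http://snomed.info/sct",
            "code": "<snomed-code>",           # placeholder SNOMED CT code
        }]
    }],
}

print(json.dumps(report, indent=2))
```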
An overview of relevant standards in digital pathology is provided in Table 1. While we cannot cover every standard, one key result is the recognition that applying existing standards delivers more value than reinventing them.
Table 1.
Overview of standards in digital pathology (selected).
| Standard | Description | Application | Example use case | Organization | Category |
|---|---|---|---|---|---|
| DICOM | DICOM is the international standard to transmit, store, retrieve, print, process, and display medical imaging information. The DICOM Working Group 26 supports and develops the standards for the pathology domain. | Storage and communication of digital pathology images | Supplement 145; slide exchange in multisite AI model validation | DICOM (WG-26) | Interoperability |
| OME-TIFF | OME-TIFF is an open standard file format developed for storing, sharing, and analyzing biological image data, including multidimensional image formats. | Image data structuring and sharing | Multichannel microscopy data in AI training | OME Consortium | Foundational |
| ICC profile | Defines color characteristics of input and output devices and ensures consistent image appearance across devices and institutions. | Color normalization and display consistency | Standardized PD-L1 image review across platforms | ICC | Foundational |
| JPEG | Standardized image compression format widely used in medical imaging systems for efficient storage and transfer. | Image compression and exchange | Efficient WSI storage with minimal loss | JPEG (ISO/IEC) | Foundational |
| CPT | CPT codes are used to report medical, surgical, and diagnostic procedures and services. | Billing and reimbursement | AI-driven billing code generation for digital reads | AMA | Governance |
| ICD, SNOMED | ICD and SNOMED CT are used for standardized disease classification and clinical terminology. | Clinical terminology and disease classification | Standardized annotation of training datasets | WHO/IHTSDO | Application-specific |
| Antibody classification | Defines categories and nomenclature for antibodies used in diagnostics and therapeutics (e.g., PD-L1 classification standards). | Biomarker identification and categorization | PD-L1 scoring in immunotherapy response prediction | WHO/commercial panels | Application-specific |
| AJCC/TNM | AJCC/TNM is a global standard for cancer staging based on tumor size, lymph node involvement, and metastasis. | Cancer staging and treatment planning | TNM classification in prognostic AI tools | AJCC | Application-specific |
| HGNC | HGNC provides unique symbols and names for human gene nomenclature, including variants relevant to molecular pathology. | Genomic variant reporting | Gene panel output mapping in variant databases | HGNC | Application-specific |
| Staining | Standard protocols for hematoxylin and eosin staining, ensuring reproducibility and diagnostic consistency. | Staining consistency | H&E-based AI model for cancer detection | CAP/pathology consensus | Foundational |
| Processing | FFPE processing standardizes tissue preparation for microscopy and molecular analysis. | Tissue preparation | FFPE-based image archive for algorithm validation | ASCO/pathology guidelines | Foundational |
| Magnification | Magnification standards (e.g., 20× vs. 200×) ensure consistent interpretation and computational analysis of image data. | Image interpretation consistency | Scaling ML algorithms across magnifications | CAP/DICOM | Foundational |
Abbreviations: DICOM: Digital Imaging and Communications in Medicine; OME: Open Microscopy Environment; ICC: International Color Consortium; CPT: Current Procedural Terminology; AMA: American Medical Association; ICD: International Classification of Diseases; SNOMED: Systematized Nomenclature of Medicine; WHO/IHTSDO: World Health Organization/International Health Terminology Standards Development Organization; AJCC: American Joint Committee on Cancer; HGNC: HUGO Gene Nomenclature Committee; CAP: College of American Pathologists; FFPE: formalin-fixed, paraffin-embedded; ASCO: American Society of Clinical Oncology.
AI-specific requirements
For AI to be successfully implemented in pathology, standardization is not optional; it is foundational. AI model training requires large volumes of consistent, well-annotated data. Variability in staining protocols, tissue processing, and scanner configurations can introduce confounders that limit model generalizability. Algorithms trained on data from a single institution may perform poorly when applied to external datasets, underscoring the need for harmonized pre-analytical and analytical procedures. In other words, we identified concrete dependencies that AI must satisfy to avoid failure.
WSIs are sensitive to artifacts such as blur, tissue folds, or air bubbles, which can degrade model performance. Several groups have proposed quality control standards for WSIs to address these issues,38 enabling cleaner training datasets and more reliable model outputs. Standardized formats such as OME-TIFF and DICOM facilitate structured data capture and metadata embedding, which are essential for version control and reproducibility in AI workflows. Furthermore, interoperability standards allow for the aggregation and harmonization of data from multiple sites; this is a prerequisite for federated learning and multi-institutional validation studies. Without standardization, the lack of cross-site generalizability remains a persistent barrier to clinical deployment.
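As a simplified illustration of one such quality control heuristic, the snippet below flags likely out-of-focus tiles using the variance of the Laplacian, assuming OpenCV and a tile extracted from a WSI; the threshold is hypothetical, and dedicated tools such as HistoQC38 implement far broader checks.

```python
import cv2

tile = cv2.imread("tile.png")  # placeholder path to an extracted tile
gray = cv2.cvtColor(tile, cv2.COLOR_BGR2GRAY)

# Low variance of the Laplacian indicates little high-frequency content,
# i.e., a likely out-of-focus (blurred) region.
focus_score = cv2.Laplacian(gray, cv2.CV_64F).var()

THRESHOLD = 100.0  # hypothetical cutoff; must be tuned per scanner/stain
if focus_score < THRESHOLD:
    print(f"Tile flagged as blurred (score={focus_score:.1f})")
else:
    print(f"Tile passes focus check (score={focus_score:.1f})")
```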
Regulatory and organizational alignment
The regulatory ecosystem is increasingly acknowledging the importance of standards in the deployment of AI tools in pathology. The U.S. Food and Drug Administration (FDA), through its Digital Health Center of Excellence, has issued guidance emphasizing the role of standards in evaluating software-based medical devices, particularly those involving continuous learning or real-world data.
The Digital Pathology Association (DPA) also plays a pivotal role in advancing the standardization and regulatory clarity of digital pathology tools.39 Through its Regulatory and Standards Task Force, the DPA actively engages with stakeholders (including industry, academia, and regulatory agencies) to promote interoperability, clarify approval pathways, and support the adoption of standards that facilitate safe and effective use of AI in clinical practice (https://digitalpathologyassociation.org/digital-pathology-association-regulatory-committee; last accessed on 07/02/25).
The College of American Pathologists (CAP) has developed digital pathology checklists and validation protocols that cover aspects such as pre-analytic image handling, image analysis validation, and quality control.10, 40 These resources41, 42 are especially relevant for labs implementing AI-assisted diagnostics and seeking regulatory approval or accreditation. The CAP also convenes both the DICOM Working Group 26 and the IHE/PaLM initiative.
Several organizations are directly involved in standard development and ecosystem coordination. The Pathology Innovation Collaborative Community (PIcc), formerly known as the Alliance for Digital Pathology, operates in collaboration with the FDA and other stakeholders to advance regulatory science through shared standards.43 Additional contributors include the Medical Device Innovation Consortium (MDIC) and the CAP Council on Informatics and Pathology Innovation (CIPI), both of which support the development, dissemination, and adoption of computational pathology standards. These organizations collectively emphasize that standards require management through sustainable organizations.
The duality of standards
This landscape reflects a dichotomy in how solutions are developed and adopted: the customized approach versus the standards-based approach (Table 2). Customized approaches offer flexibility, speed, and alignment with local workflows, often favoring rapid innovation and iterative development. However, they tend to be difficult to scale and sustain over time. In contrast, standards-based approaches require greater upfront investment, coordination, and domain expertise, but yield long-term benefits through interoperability, auditability, and ecosystem compatibility. Whereas the former may appear more agile, the latter provides the structural foundation necessary for scalable AI integration across diverse clinical settings, again emphasizing the need to develop and manage standards over time.
Table 2.
The dual nature of working with standards.
| Customized approach | Standard-based approach |
|---|---|
| Highly individualized to local setting and question | Generalized solution that applies to other settings and questions |
| Short-term solution | Long-term solution |
| Example: .svs files and .xls | Example: DICOM |
| Learning while doing | Learning then doing |
| Heavy emphasis on functional relationships of components | Heavy emphasis on technical specifications of the components and their interaction |
| Initial low cost and fast adoption | Larger initial effort |
| Fast changes and iterative development | Changes of the standard take time (e.g., workgroups, backwards compatibility, and vocabulary) |
| Rarely sustainable over a long-time frame | Sustainable and large set of tools |
| Commitment to an approach for a solution | Commitment to “a” standard requires domain knowledge |
| Non-standardized solutions appear more innovative at first but cause long-term problems | Standards appear restrictive and constraining at first but result in long-term pay-off |
| Emphasis on quick adoption | Emphasis on interoperability |
| Organization: local control | Organization: committee/nominations |
| Strengths: flexibility, simplicity, and low-cost | Strengths: compliance, auditability, and compatibility |
Emerging needs and future trends
As AI applications mature, new challenges are emerging that demand dynamic and forward-compatible standards. The integration of real-world data into AI model development and regulatory evaluation introduces a need for consistent data structures, semantic frameworks, and performance monitoring systems. Tools that adapt over time, such as those based on continuous learning, require formal mechanisms for post-market surveillance, auditability, and explainability.

Federated learning, in which models are trained across decentralized data sources without sharing raw data, offers an attractive path forward for collaborative AI development (a minimal sketch follows below). However, this paradigm depends heavily on common data formats, aligned ontologies, and standardized performance metrics across institutions.

The use of synthetic data to augment real-world datasets is another area of rapid evolution. Trials such as the VICTRE study in radiology have demonstrated the feasibility and value of synthetic cohorts for regulatory evaluation. In pathology, the generation, labeling, and validation of synthetic images will similarly require standard frameworks to ensure utility and comparability.

Lastly, the growing emphasis on explainable AI (i.e., where algorithmic outputs are interpretable and traceable to input data) will necessitate standards for output formatting, provenance tracking, and model transparency. These features are essential not only for regulatory approval but also for clinician and patient trust. Following these trends and developing a roadmap for adoption is key.
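As a minimal sketch of the federated averaging step at the heart of this paradigm, the following snippet aggregates parameters from three hypothetical sites. It assumes every site reports weights in an identical, pre-agreed layout, which is exactly the kind of alignment that shared standards must guarantee.

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """Average site parameters, weighted by local dataset size (FedAvg)."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

# Hypothetical parameter vectors from three institutions; only these
# parameters, never raw slides, leave each site.
site_weights = [np.array([0.20, 0.50]),
                np.array([0.30, 0.40]),
                np.array([0.25, 0.45])]
site_sizes = [1200, 800, 2000]  # local training-set sizes

global_weights = federated_average(site_weights, site_sizes)
print("Aggregated global model parameters:", global_weights)
```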
To illustrate the multifaceted role of standards in these pathology innovations, we developed a conceptual overview of the foundational dimensions of effective standards (Fig. 2); it is not a stepwise adoption roadmap. In practice, labs adopt standards iteratively, prioritizing those with the highest regulatory, operational, or interoperability value while remaining adaptable as standards evolve.
Fig. 2.
Core functional dimensions of standards as a concept. Standards in digital pathology and AI span technical, regulatory, operational, and ethical dimensions. This figure illustrates 11 foundational principles that guide the development, implementation, and sustainability of standards. From governance and regulatory alignment to harmonization, versioning, and equity, the diamond-shaped pathway highlights how standards must be grounded in existing frameworks, evolve through experience, and offer a clear value proposition. Effective standards require both top-down oversight and bottom-up practicality to ensure they are adopted, maintained, and trusted across diverse clinical and technological ecosystems.
Discussion
Here, we provide a structured synthesis of existing standards relevant to digital pathology and their role in enabling the adoption and scalability of AI. By categorizing standards into foundational, interoperability, governance, and application-specific domains, we illustrate how standards underpin every stage of AI deployment, from data acquisition and model training to regulatory validation and real-world implementation. We derive an approach to describe the critical role of standards consisting of seven interdependent areas of practice: value recognition, existing standards, dependencies for AI, failures, management of standards, trends, and a roadmap for accelerated and sustainable adoption. This overview highlights both the technical and organizational scaffolding needed to translate digital pathology innovation into sustainable clinical practice.
Our work fills a notable gap: whereas various standards have been proposed or adopted across domains, no single resource has consolidated and analyzed their relevance specifically for computational pathology and AI. The novelty of this work lies in its integrative scope. Prior reviews have focused on individual components (e.g., imaging formats, data exchange, quality control, or color calibration); however, to our knowledge, none have examined the broader system of interdependent standards necessary for scalable, interoperable AI. Furthermore, this overview is grounded not only in technical documentation but also in regulatory frameworks, real-world deployment scenarios, and collaborative initiatives like DPA, PIcc, MDIC, and CAP-CIPI. By doing so, we bridge the conceptual divide between technical specifications and clinical implementation.
There are limitations to this work. As a narrative review, our findings are subject to selection bias and may not capture every emerging or regional standard. The field is rapidly evolving, and certain initiatives (e.g., in synthetic data governance or explainable AI) are still under development. Additionally, standards are often embedded in institutional workflows or vendor systems in undocumented ways, making comprehensive coverage difficult. We also acknowledge that the utility of a given standard can vary by region, resource level, and clinical application. We consider these limitations acceptable when considering the main aim: providing a concise and meaningful starting point.
Looking ahead, the future of AI in pathology will increasingly depend on dynamic, multipurpose standards capable of supporting both innovation and regulation. As AI models evolve from locked to continuously learning systems, there will be a growing demand for standards that enable real-time auditing, version control, and model explainability. Federated learning and synthetic data offer tremendous promise but will require harmonized data structures and robust ontologies. Importantly, these standards must not only be technically sound but also accessible to developers, regulators, and clinicians alike. The convergence of imaging, genomics, and spatial biology will further accelerate the need for cross-domain standards that support integrated diagnostics.
In summary, the development, implementation, and maintenance of standards are critical to realizing the promise of AI in digital pathology. Whereas the creation and adoption of standards may appear restrictive at first, they serve as long-term enablers of interoperability, scalability, and trust. We hope our work serves as a foundational reference for stakeholders across industry, academia, and regulation, encouraging informed participation in the standardization process and aligning innovation with a sustainable, positive impact on patient care.
Funding
Supported by the Digital Pathology Association.
Declaration of competing interest
L. P. is on the medical advisory board for Ibex and NTP, consults for Hamamatsu and AiXMed, and is a cofounder of LeanAP and Placenta AI. M. Herrmann is an employee and shareholder of Roche. All other authors have no conflicts of interest to disclose.
References
- 1.He J., Baxter S.L., Xu J., et al. The practical implementation of artificial intelligence technologies in medicine. Nat Med. 2019;25:30–36. doi: 10.1038/s41591-018-0307-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.World Health Organization (WHO). Monitoring the building blocks of health systems: a handbook of indicators and their measurement strategies. Geneva: World Health Organization; 2010.
- 3.Azar J.C., Busch C., Carlbom I.B. Histological stain evaluation for machine learning applications. J Pathol Inform. 2013;4:S11. doi: 10.4103/2153-3539.109869. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Daniel C., Macary F., García Rojo M., et al. Recent advances in standards for collaborative digital anatomic pathology. Diagn Pathol. 2011;6(suppl 1):S17. doi: 10.1186/1746-1596-6-S1-S17. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.Burnett D., Blair C., Haeney M.R., et al. Clinical pathology accreditation: standards for the medical laboratory. J Clin Pathol. 2002;55:729–733. doi: 10.1136/jcp.55.10.729. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6.Sadofsky M., Knollmann-Ritschel B., Conran R.M., Prystowsky M.B. National Standards in pathology education: developing competencies for integrated medical school curricula. Arch Pathol Lab Med. 1 March 2014;138(3):328–332. doi: 10.5858/arpa.2013-0404-RA. [DOI] [PubMed] [Google Scholar]
- 7.Daniel C., Booker D., Beckwith B., et al. Standards and specifications in pathology: image management, report management and terminology. Stud Health Technol Inform. 2012;179:105–122. PMID: 22925792. [PubMed] [Google Scholar]
- 8.Brown R.W., Della Speranza V., Alvarez J.O., et al. Uniform labeling of blocks and slides in surgical pathology: guideline from the College of American Pathologists Pathology and Laboratory Quality Center and the National Society for Histotechnology. Arch Pathol Lab Med. 2015;139(12):1515–1524. doi: 10.5858/arpa.2014-0340-SA. [DOI] [PubMed] [Google Scholar]
- 9.Kayser K., Görtler J., Goldmann T., et al. Image standards in tissue-based diagnosis (diagnostic surgical pathology). Diagn Pathol. 2008;3:17. doi: 10.1186/1746-1596-3-17. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Williams B.J., Knowles C., Treanor D. Maintaining quality diagnosis with digital pathology: a practical guide to ISO 15189 accreditation. J Clin Pathol. 2019;72:663–668. doi: 10.1136/jclinpath-2019-205944. [DOI] [PubMed] [Google Scholar]
- 11.Madabhushi A. Digital pathology image analysis: opportunities and challenges. Imaging Med. 2009;1(1):7–10. doi: 10.2217/IIM.09.9. PMID: 30147749; PMCID: PMC6107089. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Bruce C., Prassas I., Mokhtar M., et al. Transforming diagnostics: the implementation of digital pathology in clinical laboratories. Histopathology. 2024;85:207–214. doi: 10.1111/his.15178. [DOI] [PubMed] [Google Scholar]
- 13.Dawson H. Digital pathology – rising to the challenge. Front Med. 2022;9 doi: 10.3389/fmed.2022.888896. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Thomas D.W., Burns J., et al. Clinical Development Success Rates 2006–2015. San Diego: Biomedtracker; 2016. [Google Scholar]
- 15.Hay M., Thomas D.W., et al. Clinical development success rates for investigational drugs. Nat Biotechnol. 2014;32:40–51. doi: 10.1038/nbt.2786. [DOI] [PubMed] [Google Scholar]
- 16.Wong C.H., Siah C.H., Lo A.W. Estimation of clinical trial success rates and related parameters. Biostatistics. April 2019;20(2):273–286. doi: 10.1093/biostatistics/kxx069. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.D’Avó Luís A.B., Seo M.K. Has the development of cancer biomarkers to guide treatment improved health outcomes? Eur J Health Econ. 2021 Jul;22(5):789–810. doi: 10.1007/s10198-021-01290-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Farahani N., Parwani A., Pantanowitz L. Whole slide imaging in pathology: advantages, limitations, and emerging perspectives. Pathol Lab Med Int. 2015;7:23–33. doi: 10.2147/PLMI.S59826. [DOI] [Google Scholar]
- 19.Romanchikova M., Thomas S.A., Dexter A., et al. The need for measurement science in digital pathology. J Pathol Inform. 2022;13 doi: 10.1016/j.jpi.2022.100157. ISSN 2153-3539. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Meyer J., Khademi A., Têtu B., Han W., Nippak P., Remisch D. Impact of artificial intelligence on pathologists’ decisions: an experiment. J Am Med Inform Assoc. October 2022;29(10):1688–1695. doi: 10.1093/jamia/ocac103. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Moxley-Wyles B., Colling R., Verrill C. Artificial intelligence in pathology: an overview. Diagn Histopathol. 2020;26(11):513–520. doi: 10.1016/j.mpdhp.2020.08.004. [DOI] [Google Scholar]
- 22.Rehburg F., Graefe A., Hübner M., Thun S. How interoperability can enable artificial intelligence in clinical applications. Stud Health Technol Inform. 2024;316:596–600. doi: 10.3233/SHTI240485. [DOI] [PubMed] [Google Scholar]
- 23.Marble H.D., Huang R., Dudgeon S.N., et al. A Regulatory Science Initiative to Harmonize and Standardize Digital Pathology and Machine Learning Processes to Speed up Clinical Innovation to Patients. Open Access Publications, Washington University School of Medicine; 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.Wang F., Kong J., Gao J., et al. A data model and database for high-resolution pathology analytical image informatics. J Pathol Inform. 2011;2:32. doi: 10.4103/2153-3539.85060. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.DICOM Working Group 26 (Image standard for pathology) https://www.dicomstandard.org/wgs/wg-26/
- 26.Herrmann M.D., Clunie D.A., Fedorov A., et al. Implementing the DICOM standard for digital pathology. J Pathol Inform. 2018;9:37. Published 2018 Nov 2. doi: 10.4103/jpi.jpi_42_18. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.International Color Consortium Specification ICC.1:2010 (Profile version 4.3.0.0): Image technology colour management — Architecture, profile format, and data structure [Revision of ICC.1:2004–10] 2010. https://www.color.org/specification/ICC1v43_2010-12.pdf
- 28.Silverstein L.D., Hashmi S.F., Lang K., Krupinski E.A. Paradigm for achieving color-reproduction accuracy in LCDs for medical imaging. J Soc Inf Disp. 2012;20(1):53–62. doi: 10.1889/JSID20.1.53. [DOI] [Google Scholar]
- 29.IHE PaLM https://wiki.ihe.net/index.php/Pathology_and_Laboratory_Medicine_(PaLM)
- 30.Singh R., Chubb L., Pantanowitz L., Parwani A. Standardization in digital pathology: supplement 145 of the DICOM standards. J Pathol Inform. 2011;2(1):23. doi: 10.4103/2153-3539.80719. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Daniel C., Macary F., García Rojo M., et al. Recent advances in standards for collaborative digital anatomic pathology. Diagn Pathol. 2011;6(suppl 1):S17. doi: 10.1186/1746-1596-6-S1-S17. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Yagi Y., Gilbertson J.R. Digital imaging in pathology: the case for standardization. In: Proceedings of the International Telecommunication Union Workshop on Standardization in E-health, Geneva, Switzerland, May 23–25, 2003. [Google Scholar]
- 33.Shrestha P., Hulsken B. Color accuracy and reproducibility in whole slide imaging scanners. In: Medical Imaging 2014: Digital Pathology; 2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.Kruse C., Goswamy R., Raval Y., Marawi S. Challenges and opportunities of big data in health care: a systematic review. JMIR Med Inform. 2016;4(4):e38. doi: 10.2196/medinform.5359. URL: https://medinform.jmir.org/2016/4/e38. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35.Bui M.M., Riben M.W., Allison K.H., et al. Quantitative image analysis of human epidermal growth factor receptor 2 immunohistochemistry for breast cancer: guideline from the College of American Pathologists. Arch Pathol Lab Med. 2019 Oct;143(10):1180–1195. doi: 10.5858/arpa.2018-0378-CP. PMID: 30645156; PMCID: PMC6629520. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36.Evans A.J., Brown R.W., et al. Validating whole slide imaging systems for diagnostic purposes in pathology: guideline update from the College of American Pathologists in collaboration with the American Society for Clinical Pathology and the Association for Pathology Informatics. Arch Pathol Lab Med. 2022;146(4):440–450. doi: 10.5858/arpa.2020-0723-CP. [DOI] [PubMed] [Google Scholar]
- 37.Dash R.C., Jones N., Merrick R., et al. Integrating the health-care enterprise pathology and laboratory medicine guideline for digital pathology interoperability. J Pathol Inform. 2021 Mar 24;12:16. doi: 10.4103/jpi.jpi_98_20. PMID: 34221632; PMCID: PMC8240547. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38.Janowczyk A., Zuo R., Gilmore H., Feldman M., Madabhushi A. HistoQC: an open-source quality control tool for digital pathology slides. JCO Clin Cancer Inform. 2019;3:1–7. doi: 10.1200/CCI.18.00157. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39.Digital Pathology Association. Task Force for Standardization and Regulatory https://digitalpathologyassociation.org/digital-pathology-association-regulatory-committee
- 40.College of American Pathologists. Digital Pathology Resource Guide (Version 7.0, Issue 3.0). College of American Pathologists; 2023. [Google Scholar]
- 41.College of American Pathologists. Digital Pathology Topic Center. https://www.cap.org/member-resources/councils-committees/digital-pathology-topic-center
- 42.Healthcare Quality Improvement Partnership. HQIP Whole Slide Image Quality Improvement Program (HQWSI). https://www.hqip.org.uk/
- 43.Marble H.D., Huang R., Dudgeon S.N., et al. A regulatory science initiative to harmonize and standardize digital pathology and machine learning processes to speed up clinical innovation to patients. J Pathol Inform. 2020 Aug 6;11:22. doi: 10.4103/jpi.jpi_27_20. PMID: 33042601; PMCID: PMC7518200. [DOI] [PMC free article] [PubMed] [Google Scholar]


