Abstract
Images document scientific discoveries and are prevalent in modern biomedical research. Microscopy imaging in particular is currently undergoing rapid technological advancements. However, for scientists wishing to publish the obtained images and image analysis results, there are to date no unified guidelines. Consequently, microscopy images and image data in publications may be unclear or difficult to interpret. Here, we present community-developed checklists for preparing light microscopy images and image analyses for publication. These checklists offer authors, readers, and publishers key recommendations for image formatting and annotation, color selection, data availability, and reporting of image analysis workflows. The goal of our guidelines is to increase the clarity and reproducibility of image figures and thereby heighten the quality and explanatory power of microscopy data in publications.
Introduction
Images and their analyses are widespread in life science and medicine. Microscopy imaging is a dynamic area of technology development, in terms of both hardware and software. This is especially true in the area of light microscopy, with great recent improvements in sensitivity and in spatial and temporal resolution. Resources developed by scientists help researchers navigate the design of microscopy experiments and the acquisition of image data 1–3, and cover aspects such as sample preparation 1, microscope usage 1,4, method reporting 5–8, and fluorophore and filter usage 9,10. Despite the widespread adoption of microscopy as a tool for biology and biomedical research, the resulting image figures in publications at times fail to fully communicate results or are not entirely understandable to audiences. This may be because authors do not include comprehensive imaging method statements 11, or because they omit basic information in figures, such as specimen size or color legends 12, which is key to fully understanding the data. To ensure that images are presented in a clear, standardized, and reproducible manner, it is essential that the scientific community establishes unified and harmonized guidelines for image communication in publications.
Images document biological samples and the ranges of their phenotypes. Increasingly, microscopy images are also a source of quantitative biological data, with variables measured by a growing number of image analysis software packages, such as FIJI/ImageJ 13, CellProfiler 14, KNIME 15, Python software libraries 16 (scikit-image.org 17), and commercial packages such as ZEN, LAS X, NIS-Elements, Amira-Avizo, Imaris, Arivis Vision 4d, and Huygens 18. Image analysis is often a workflow of many steps, such as image reconstruction, segmentation, processing, rendering, visualization, and statistical analysis, many of which require expert knowledge 19,20. A comprehensive publication of quantitative image data should therefore include not only basic specimen and imaging information, but also the image processing and analysis steps that produced the data plots and statistics. For fully reproducible image analysis, it is also essential that images and workflows are available to the community, e.g., in image repositories or archives 21–23 and code repositories such as GitHub 24.
To ensure that image figures provide insights to their readership, any supportive experimental metadata and image analysis workflows must be clear and understandable (“what is the pixel size”, “what does the arrow mean”), accessible (“are colors visible to colorblind audiences”), representative (no cherry picking), and reproducible (“how were the data processed”, “can one access and re-analyze the images”). In the framework of the initiative for ‘Quality Assessment and Reproducibility for Instruments and Images in Light Microscopy’, QUAREP-LiMi 25,26, the ‘Image Analysis and Visualization workgroup’ established community consensus checklists to help scientists publish understandable and reproducible light microscopy images and image analysis procedures. Where applicable, the checklists are aligned with the FAIR principles, which were developed as recommendations for research data (Findability, Accessibility, Interoperability, and Reusability 27).
Scope of checklists
The scope of the checklists is to help scientists publish fully understandable and interpretable images and results from image analysis (Fig. 1). In this work, the term ‘images’ includes raw or essentially unprocessed light microscope data, compressed or reconstructed images, and quantification results obtained through image analysis (see glossary). While the focus of QUAREP-LiMi is on light microscopy images in life sciences, the principles may also apply to figures with other images (photos, electron micrographs, medical images) and to image data beyond the life sciences. The intended audience of the checklists comprises novices and non-experts who occasionally use light microscopy, as well as experts (core facility staff, the global bioimage community) who review image data or teach image handling.
Fig. 1. Scope of the checklists.
The checklists present easy-to-use guidelines for publishing microscopy image figures and image analysis workflows.
The checklists do not include principles for designing imaging experiments or recommendations to avoid image manipulation. Previous literature covers experimental design for microscopy images, including accurate and reproducible image acquisition and ensuring image quality 2,28, examples and recommendations for avoiding misleading images 1,29–33, detection of image manipulation 34–37, appropriate image handling and analysis 7,19,38,39, guidelines for writing materials and methods sections for images 40, and recommendations for general figure preparation 41. These topics are therefore not covered in the checklists.
The checklists cover image (Fig. 2) and image analysis (Fig. 6) publication and are structured into three levels that prioritize legibility and reproducibility.
The first reporting level (“Minimal”) describes necessary, non-negotiable requirements for the publication of image data (microscopy images, data obtained through image analysis). Scientists can use these minimal criteria to identify crucial gaps before publication.
The second reporting level (“Recommended”) defines measures that ensure the understandability of images and aims to reduce the effort required to evaluate image analyses. We encourage scientists to aim for the “Recommended” level as their image publication goal. However, we acknowledge that some aspects (e.g., depositing large data in repositories) may be unattainable today for some authors.
The third reporting level (“Ideal”) comprises recommendations that we encourage scientists to consider adopting in the future.
Fig. 2. Checklist for image publication.
Includes points to be addressed on image format, image colors and channels, image annotations, and image availability.
Fig. 6. Checklist for publication of image analysis workflows.
Checklists for image publication
Image Formatting.
After exploring, processing, and analyzing image data, authors communicate their insights in publications with figures as visual evidence. Preparing a figure begins with the selection of representative images from the dataset that illustrate the message. When quantitative measurements are reported in a chart, an example of the input image should be shown; when ranges of phenotypes are described, several images may be necessary to illustrate the diversity. To quickly focus the audience on key structures in the image, it is permitted to crop areas without data or with non-relevant data (Fig. 3A). As a rule, cropping, similar to selecting the field-of-view on the microscope, is allowed as long as it does not change the meaning of the conveyed image.
Fig. 3.
Image formatting may include (A) image cropping, rotation, and resizing, (B) image spacing in the figure, and (C) presenting several magnifications (zoom, inset) of images. Image colors and channels. (D) Adjust brightness/contrast to achieve good visibility of the imaging signal. (E) Channel information should be annotated and visible to audiences (high contrast to background color, visible to color-blind audiences). (F) Image details are most unbiased in grayscale. (G) Publishing legends for color intensities (intensity/calibration scales) alongside images is best practice and is particularly recommended for pseudo-color scales.
Next, specimens are often presented in a consensus orientation in figures (e.g., apical side of cells upwards, tree top upwards), which may require image rotation. When such rotation is done in vector software, pixel interpolation is not necessary, since the square representing a pixel can be rotated and resized as is, so the underlying pixel data are not modified. However, when rotation is done in pixel-based image processing software, any rotation that is not a multiple of 90 degrees changes the intensity values through interpolation and therefore alters the information in the image 31,42,43. The effect of interpolation, while it may be negligible in large images composed of many pixels, can greatly distort the information in small or zoomed images composed of fewer than 100x100 pixels.
When cropping and rotating, authors should ensure that the operation does not affect the original information contained in the image; while loss in image quality may be acceptable, quantifications, especially intensity measurements, should be performed beforehand 39. In a figure, individual images should be well separated (spacing, border, see Fig. 3B) to avoid misleading image-splicing 29,31.
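As an illustration, cropping and right-angle rotation can be scripted losslessly, whereas arbitrary-angle rotation cannot; the following minimal Python sketch (assuming NumPy and scikit-image, with a randomly generated placeholder image) demonstrates the difference:

```python
import numpy as np
from skimage.transform import rotate

img = np.random.randint(0, 4096, size=(512, 512)).astype(np.float64)  # placeholder 12-bit image

# Cropping by slicing; .copy() leaves the original array untouched.
crop = img[100:356, 50:306].copy()

# Rotations by multiples of 90 degrees only rearrange pixels (lossless).
rot90 = np.rot90(crop)
print(np.array_equal(np.rot90(rot90, k=-1), crop))   # True: fully reversible

# Any other angle interpolates and thereby changes intensity values.
rot17 = rotate(crop, angle=17, preserve_range=True)
back = rotate(rot17, angle=-17, preserve_range=True)
print(np.allclose(back, crop))                       # False: data were altered
```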
When presenting two magnification views of the same image (e.g., a full view and a zoomed/inset view), the position of the inset in the full-view image should be made clear; if the inset is placed on top of the full-view image, e.g., to save space, it should not obstruct key data (Fig. 3C). If an inset is digitally zoomed, the original pixels should not be interpolated but enlarged (“resized”) as is, to preserve the original pixel information. Overall, the image should be sufficiently large that audiences can identify all relevant details.
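In scripting terms, such interpolation-free zooming corresponds to nearest-neighbor scaling; a small sketch, again assuming scikit-image and a placeholder image:

```python
import numpy as np
from skimage.transform import rescale

img = np.random.randint(0, 4096, size=(512, 512)).astype(float)  # placeholder image
inset = img[200:264, 200:264]           # 64 x 64 region shown as a zoomed inset

# order=0 selects nearest-neighbor resizing: pixels are enlarged as blocks
# and no new (interpolated) intensity values are created.
zoom = rescale(inset, 4, order=0)
assert set(np.unique(zoom)) <= set(np.unique(inset))
```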
Image Colors and Channels.
Fluorescence light microscopes use a range of wavelengths to generate images of specimens. In the images, the light intensity recorded for each wavelength, most often in grayscale, is assigned or mapped to a visible color scheme. In multi-colored images, several channels are overlaid to allow direct comparison of their data.
Microscopy images often must be processed to adapt the bit depth to the visible range 2,44. In many software platforms (e.g., ImageJ/FIJI), brightness/contrast is usually adjusted for each channel independently by defining the minimum and maximum displayed intensity values before converting the images to 8 bit (for screen display or printing). Intensity range adjustments should be monitored (e.g., with the image histogram) and performed with care: too wide an intensity range results in ‘faded’ images that lack detail, while too narrow an intensity range removes data (Fig. 3D). Scientists must be especially attentive with auto-contrast/auto-level, image intensity normalization, non-linear adjustments (‘gamma,’ histogram equalization, local contrast, e.g., CLAHE 45), image filters, and image restoration methods (e.g., deconvolution, Noise2Void, CARE 46–49), as their improper application may result in misleading images. When images are quantitatively compared in an experiment, the same adjustments and processing steps must be applied to all of them. If deemed critical for understanding the image data, advanced image processing steps (e.g., deconvolution, Noise2Void, CARE) may need to be indicated in the figure or figure legend, in addition to the materials and methods section.
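When figure preparation is scripted, the linear display-range adjustment described above might look as follows; this sketch uses scikit-image, and the image and the chosen limits are placeholders. Crucially, the same limits are reused for every image that is compared:

```python
import numpy as np
from skimage import exposure

img = np.random.randint(0, 4096, size=(512, 512)).astype(np.uint16)  # placeholder 12-bit channel

# Displayed minimum/maximum, chosen after inspecting the histogram.
lo, hi = 100, 3000

# Linear rescaling of the chosen range to 8 bit for display or print;
# apply identical lo/hi to all images that are compared quantitatively.
img8 = exposure.rescale_intensity(img.astype(float), in_range=(lo, hi),
                                  out_range=(0, 255)).astype(np.uint8)
print(f"displayed range {lo}-{hi}, reported in the methods")
```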
Next, image colors must be interpretable and accessible to readers, and must not mislead 50. For full-color images (e.g., histology), the staining/preparation method should be annotated; for fluorescence microscope images, the channel-specific information (fluorophore/labeled biomolecule) should be annotated (Fig. 3E; also see next section). In fluorescence microscope images, the channels can be assigned a user-defined color scheme, often referred to as a lookup table (LUT), which should be chosen such that the imaged structures are easily distinguishable from the background and accessible to color-blind audiences 12. Grayscale color schemes are best for single channels because they are uniformly perceived, allowing unbiased interpretation of the intensity values in a given image. Inverting image LUTs to display intensities on a white instead of a black background may further enhance signal contrast, but be aware that different software packages handle this calculation differently.
A few steps can further improve the understandability of colors. For multi-colored fluorescence images, showing individual channels in separate grayscale panels provides the best visibility and highest contrast of detailed structures. Grayscale images are also accessible to all audiences, including colorblind persons (Fig. 3F). If several channels must be merged into one image, choose a color combination that is also visible to colorblind audiences 12,51. A separate, linearly adjusted grayscale version may also help when non-linear adjustments or pseudo-colored LUTs (e.g., ‘jet,’ ‘viridis,’ and ‘union-jack’), which map intensity values to a dual- or multi-color sequential scheme, were applied. Annotation of intensity values with an intensity scale bar (‘calibration bar’) helps to orient readers and is essential for pseudo-colored and non-linear color schemes (Fig. 3G). Calibration bars should indicate absolute intensity values to inform audiences about the displayed intensity range, and can be prepared with Imaris and ImageJ/FIJI (see ImageJ user guide).
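Outside of Imaris or ImageJ/FIJI, a calibration bar can also be generated with a short matplotlib script; in this sketch the image is a placeholder, and vmin/vmax are fixed so that the bar reports absolute intensity values:

```python
import numpy as np
import matplotlib.pyplot as plt

img = np.random.randint(0, 4096, size=(256, 256))  # placeholder intensity image

fig, ax = plt.subplots()
# Fixing vmin/vmax maps colors to absolute intensities, so the colorbar
# doubles as a calibration bar for the displayed range.
im = ax.imshow(img, cmap="viridis", vmin=0, vmax=4095)
fig.colorbar(im, ax=ax, label="intensity (counts)")
ax.set_axis_off()
fig.savefig("channel_with_calibration_bar.pdf")  # vector output keeps labels legible
```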
Image Annotation.
The image acquisition must be described in detail in the methods section. Additionally, to best communicate image data, some of this information is required, or at least beneficial, within the figure itself as a caption or annotation. Light microscopy images show objects from the submicron to the millimeter scale. As physical size is not obvious without context, annotating the scale is necessary for publication. Including a scale bar of a given size (in or next to the image) is needed to orient audiences (Fig. 4A). The corresponding size statement/dimension, e.g., “0.5 mm”, can be placed next to the scale bar (or, when this is not possible, in the figure legend). To avoid quality changes (pixelated/illegible text) when figures are adapted (resized, compressed) for publication, annotations should be added as vector graphics. Statements about the physical dimensions of the entire image are an acceptable alternative to scale bars. Magnification statements should be avoided, as pixel size is determined by a number of factors (e.g., sampling rate, binning) and does not depend on the objective magnification alone.
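For example, a scale bar can be drawn directly from the pixel size recorded in the acquisition metadata; in this hypothetical matplotlib sketch, both the image and the pixel size are placeholders, and exporting to PDF keeps the annotation as vector graphics:

```python
import numpy as np
import matplotlib.pyplot as plt

img = np.random.rand(512, 512)   # placeholder image
pixel_size_um = 0.1              # from the acquisition metadata (assumed value)

bar_um = 10                      # a meaningful length for the object shown
bar_px = bar_um / pixel_size_um  # 10 um corresponds to 100 pixels here

fig, ax = plt.subplots()
ax.imshow(img, cmap="gray")
ax.plot([20, 20 + bar_px], [490, 490], color="white", linewidth=3)  # the bar itself
ax.text(20, 475, f"{bar_um} \u00b5m", color="white")                # the size statement
ax.set_axis_off()
fig.savefig("image_with_scale_bar.pdf")  # vector annotations survive resizing
```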
Fig. 4. Image Annotation.
(A) Possible ways to provide scale information. (B) Features in images can be annotated with symbols, letters, or regions-of-interest. (C, D) For advanced image publication, information on anatomical view or intervals in time-lapses may be required. Image Availability. (E) Currently, image data is often shared ‘upon request’. (F) More images, along with the image metadata, should be available for download in public databases, and in the future (G) also archived in dedicated, added-value databases in which images are machine-searchable or curated.
Many images include further annotations such as symbols (arrows, asterisks), letter codes, or regions-of-interest (dashed shapes), which must be explained in the figure or figure legend (Fig. 4B). Symbols that resemble image data should be avoided, and note that symbols with a clear vertical/horizontal arrangement are easier to distinguish on busy backgrounds than randomly oriented symbols (for examples, see 12). At times, additional annotations help readers interpret the figure, for example when the anatomical section of a 3-dimensional object or the imaging frequency of a time-lapse is provided (Fig. 4C,D). All annotations, including scale bars and region-of-interest indications, must be legible, i.e., have sufficient size/point size and line width, and colors in high contrast to the background. In addition to being legible, scale bars should have a meaningful length with regard to the object shown. Annotations placed on top of images should not obscure key image data and should be legible to color-blind persons.
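Such annotations can likewise be added programmatically and exported as vector graphics; the following matplotlib sketch (placeholder image and coordinates, colors chosen for contrast) adds an arrow and a dashed region-of-interest:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

img = np.random.rand(512, 512)  # placeholder image

fig, ax = plt.subplots()
ax.imshow(img, cmap="gray")
# Arrow pointing at a feature; its meaning must be explained in the legend.
ax.annotate("", xy=(250, 250), xytext=(320, 320),
            arrowprops=dict(arrowstyle="->", color="yellow", linewidth=2))
# Dashed region-of-interest, e.g., marking the area shown in an inset.
ax.add_patch(Rectangle((100, 100), 120, 120, fill=False,
                       edgecolor="cyan", linestyle="--", linewidth=2))
ax.set_axis_off()
fig.savefig("annotated_image.pdf")  # vector output keeps annotations crisp
```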
Image Availability.
Image processing operations should not overwrite the original microscope image 42,43, and upon publication both the original image (or a losslessly compressed version) and the image file shown in the published figure should be available. The specific file type of the original image depends on the microscope type and the vendor. The definition of ‘original data’ or ‘raw data’, and whether its storage is feasible, depends on the specific microscopy technique. In data-heavy techniques (e.g., light-sheet microscopy) it may be acceptable if cropped, binned, or lossy-compressed images, which faithfully capture the key scientific content, are made available. To retain the metadata, conversion into standard or open formats such as OME-TIFF 52, which supports uncompressed, lossless, and also lossy (compressed) files, is compatible with broad applications and allows re-analysis of the image data. If only a compressed version can be kept (i.e., a file in which image channels and annotations are irretrievably merged), PNG files are superior to the JPEG format as they allow lossless compression 43.
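As a sketch of this practice in Python, assuming the tifffile and imageio packages (the OME metadata keys shown follow tifffile's conventions, and the pixel sizes are placeholders): the original data are archived as OME-TIFF, while the figure panel is exported as a losslessly compressed PNG:

```python
import numpy as np
import tifffile            # assumed available; writes OME-TIFF with metadata
import imageio.v3 as iio   # assumed available; writes PNG losslessly

stack = np.random.randint(0, 4096, size=(2, 512, 512)).astype(np.uint16)  # placeholder 2-channel image

# Archive copy: lossless and metadata-preserving.
tifffile.imwrite("original.ome.tif", stack, ome=True,
                 metadata={"axes": "CYX",
                           "PhysicalSizeX": 0.1, "PhysicalSizeY": 0.1})  # um per pixel (assumed)

# Figure version: PNG compresses losslessly, unlike JPEG.
panel8 = (stack[0] // 16).astype(np.uint8)   # crude 8-bit conversion for display only
iio.imwrite("figure_panel.png", panel8)
```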
As a minimal requirement, the image files shown in figures or used for quantification should be available. When possible (see limitations above), losslessly compressed files that allow replication of the analysis workflow should be shared or made available (Fig. 4E–G). We strongly discourage author statements that images “are available upon request”, since this has been shown to be inefficient 53,54; however, at present the infrastructure is not sufficiently in place to ban this option. A clear advancement is depositing both the published and the original images in a public repository with an open license that permits re-use in the scientific context (CC-BY, CC0). Zenodo, OSF, and figshare are currently essentially free options for image data as well; however, some of these have file size limitations (see Fig. 4E–G, Extended Data Fig. 1). OMERO servers (https://www.openmicroscopy.org/omero/institution/) enable institutions as well as individual labs to host public or private (access-controlled) image sharing databases (see Extended Data Fig. 1 for an overview of current repositories). In the long term (“Ideal”), uploading images with all experimental metadata to dedicated, specialized, or fully searchable image databases has the potential to unlock the full power of image data for automated image and/or metadata searches, and the possibility of image data re-use. Databases providing such functionalities and more include the BioImage Archive (a primary repository that accepts any image data in publications), the Image Data Resource (which publishes specific reference image datasets), and EMPIAR (a dedicated resource for electron microscopy datasets). At present, most of the available options are free of charge for 20–50 GB datasets; however, dedicated image databases have strict requirements regarding file type and metadata (Fig. 4E–G, Extended Data Fig. 1). To be inclusive, we do not enforce the use of online repositories but require, as a minimal measure, that scientists are prepared to share image data. Costly and expert-driven storage solutions are at present not accessible to all scientists in the global imaging community.
Checklists for publication of image analysis workflows
Image analysis workflows usually combine several processing steps carried out in a specific sequence to mathematically transform the input image data into a result (i.e., an image for visualization or data for a plot; Fig. 5) 39. As images are numerical data, image processing invariably changes these data and thus needs to be transparently documented 31,39,43. We developed separate checklists for scientists wishing to publish results originating from image processing and image analysis workflows (Fig. 6). To keep the checklists easy to implement, we propose three categories:
Established workflows or workflow templates: workflows available in the scientific literature or well established in the respective fields.
Novel workflows: established or new image analysis components (available in software platforms or libraries) are assembled by researchers into a novel workflow.
Machine learning (ML) workflows: ML uses an extended technical terminology, and ML workflows that utilize deep neural networks (‘deep learning’) face unique challenges with respect to reproducibility. Given the rapid advancements in this field, we created a separate ML checklist.
Fig. 5. Image analysis.
(A) An established workflow template is applied on new image data to produce a result (plot). (B) A new sequence of existing image analysis components is assembled into a novel workflow for a specific analysis (image segmentation). (C) Machine learning workflows learn specific tasks from data, and the resulting model is applied to obtain results.
Established workflows.
Examples of well-established workflows are published pipelines for CellProfiler (CellProfiler published pipelines, CellProfiler examples), workflows in KNIME 55, specialized plugins and easy-to-use scripts in ImageJ 56–58, and tools and plugins that solve generic image analysis problems such as tracking 59 or pixel classification 60,61. For these workflows, extensive expertise, documentation, and tutorials already exist that allow others (e.g., reviewers, readers) to reproduce the workflow and to judge the validity of the results. Scientists publishing images or image analysis results processed with established workflows can thus focus on documenting key parameters only.
Minimal.
The authors must cite the used workflow. The specific software platform or library needs to be cited if the workflow is not available as a stand-alone tool. Key processing parameters must be reported. To validate the performance of the workflow and its settings, example input and output data must be provided. Any manual interventions (e.g., ROIs) must be documented. To ensure proper reproduction of the workflow, the precise version numbers of the workflow and the software platform used are vital and should be documented in the methods. If the used software does not allow the researcher to easily define and retrieve a specific version number, the exact version used should be deposited as a usable executable or code.
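Recording exact versions can be as simple as writing them to a small file that is deposited together with the analysis; the package names in this Python sketch are merely examples:

```python
import json
import sys
import numpy
import skimage

# Capture the versions actually used in this analysis run.
versions = {
    "python": sys.version.split()[0],
    "numpy": numpy.__version__,
    "scikit-image": skimage.__version__,
}
with open("analysis_versions.json", "w") as f:
    json.dump(versions, f, indent=2)  # deposit alongside workflow and data
```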
Recommended.
Authors should state all settings in the methods or the supplements of the article. Providing data upon request is an ineffective method for data sharing 54. Thus, authors should provide the example input, output and any manual regions of interest via a public repository (see above).
Ideal.
Documenting software usage in the form of a screen recording or, in the case of command-line tools, by reporting all executed commands in detail greatly facilitates understanding of the workflow application and therefore its reproduction. To avoid variation arising from factors such as computer hardware or the operating system, authors could provide cloud-hosted solutions 62–64 (kiosk-imagej-plugin) or the workflow packaged in a software container (Docker, Singularity) 65.
Novel workflows.
Novel image analysis workflows assemble components into a new sequence in an original way, e.g., as a macro in Fiji, a pipeline in CellProfiler, or a workflow in KNIME. To ensure reproducibility of the analysis, it is essential to report the specific composition and sequence of such novel workflows.
Minimal.
The individual components utilized in the novel workflow must be cited, named, and/or described in detail in the methods section, along with the software platform used. It is essential that scientists specify or provide the exact software versions of the used components and software platform in the methods whenever possible. Authors must describe the sequence in which these components were applied. Key settings (e.g., settings that deviate from the defaults) must be documented in the methods section. Finally, the developed workflow must be shared as code (e.g., via code repositories such as https://github.com/) or as pipelines (e.g., a KNIME workflow or CellProfiler pipeline). Example input, output, and any manually generated inputs (i.e., ROIs) must be made available (see Image Availability). For novel workflows created in software that does not allow scripting, the workflow steps should be carefully described in text.
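As an illustration of a novel workflow shared as code, the following self-contained Python script assembles standard scikit-image components into a simple segmentation-and-measurement sequence; the parameters and the example input are hypothetical, but the key settings are documented in one place:

```python
import numpy as np
import pandas as pd
from skimage.filters import gaussian, threshold_otsu
from skimage.measure import label, regionprops_table

# Key settings, documented where reviewers and readers can find them.
SIGMA = 2.0     # Gaussian smoothing applied before thresholding
MIN_AREA = 50   # labeled objects smaller than this (in pixels) are discarded

def segment_and_measure(img: np.ndarray) -> pd.DataFrame:
    """Smooth -> Otsu threshold -> connected components -> per-object measurements."""
    smoothed = gaussian(img, sigma=SIGMA)
    mask = smoothed > threshold_otsu(smoothed)
    labels = label(mask)
    props = pd.DataFrame(regionprops_table(labels, properties=("label", "area", "centroid")))
    return props[props["area"] >= MIN_AREA]

if __name__ == "__main__":
    img = np.random.rand(256, 256)  # placeholder for the example input data
    segment_and_measure(img).to_csv("objects.csv", index=False)
```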
Recommended.
Disclose and describe all settings of the workflow to help others reproduce the analysis. Provide example input, output, and manual inputs (ROIs) via public repositories such as Zenodo (European Organization For Nuclear Research and OpenAIRE 2013). The developer should describe the rationale as well as the limitations of the workflow and the used components in more detail in the methods or supplements. Whenever possible, evidence of the adequacy and efficiency of the used algorithms on the published data, and potentially even comparisons to related established workflows, facilitates such documentation.
Ideal.
To further promote reproducibility, add documentation such as a screen recording or a text-based tutorial of the workflow application. To enable efficient reproduction of an analysis with a novel workflow, provide easy installation (e.g., update sites, packages) or easy software reproduction (e.g., via software containers), and easy-to-use interfaces (i.e., graphical user interfaces). Publish the novel workflow as an independent methods paper with extensive documentation and online resources 55–61. Taken together, extensive documentation and ease of installation and use will ultimately contribute to the novel workflow becoming well established and reproduced within the community (a future established and published workflow template) 66.
Machine learning Workflows.
Machine learning, and especially deep learning, has recently become capable of surpassing the quality of results of even the most sophisticated conventional algorithms and workflows, and it continues to advance 67. Deep learning procedures are quickly being adapted to microscopy image tasks, for example U-Net 68 for cell segmentation 69, Noise2Void for image reconstruction 47, StarDist 70,71 and Cellpose 62 for instance segmentation, DeepProfiler 72 for feature extraction, and Piximi (https://www.piximi.app/) for image classification.
In machine learning workflows (supervised, unsupervised, or self-supervised; shallow or deep learning), the input image data are transformed by one or multiple distinct mathematical operations into a scientific result. The instructions for this transformation are learned from provided data (e.g., labeled data for supervised learning, unlabeled data for unsupervised learning) to produce a machine learning model. However, the precise makeup of this model is not easily accessible to a user and depends strongly on the quality and nature of the supplied training data as well as on the specific training parameters. Biases in the training data or errors in the ground-truth labels for supervised machine learning will bias the resulting models 73–75. Reporting is thus even more critical for reproducibility and understandability when ML is applied to image analysis.
Three major approaches are widely used in ML-based image analysis today, and they require different levels of documentation: (1) pre-trained models are directly applied to new image data, in which case referring to existing references is sufficient; (2) pre-trained models are re-trained (transfer learning) with novel image data to improve the application, in which case more information must be provided; and (3) models are trained de novo, in which case extensive documentation is required for reproducibility.
Minimal (All models).
The precise machine learning method needs to be identifiable; thus, the original method must be cited. At the minimum, access to the model that was produced in the particular learning approach must be provided, as well as validation input and output data. If a pre-trained model has been used, it must be clearly identifiable. For both supervised and unsupervised machine learning applications, the provided example or validation data must not have been part of the training and testing data.
Recommended (Re-trained and novel models).
To facilitate the reproduction and validation of results from either models trained from scratch or pre-trained models that were re-trained, the full training and testing data and any training metadata (e.g., training time) should be made available. The code used for training the model should be provided. Code, as well as data, should be provided via public repositories (e.g., Zenodo, Github). The authors should discuss and ideally test how well the model has performed and show any limitations of the used machine learning approach on their data. The application of machine learning models will particularly benefit from being deployed in a cloud-hosted format or via software containers.
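A hedged sketch of such training metadata, written as a JSON file to be deposited together with the model, code, and data; the field names and values are illustrative rather than a standard schema:

```python
import hashlib
import json
import pathlib

def sha256(path: str) -> str:
    """Checksum that lets others verify they are using the exact training data."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

metadata = {
    "base_model": "example pre-trained model, re-trained",  # placeholder description
    "training_data": {"file": "train.zip", "sha256": sha256("train.zip")},  # assumes this archive exists
    "epochs": 100,
    "learning_rate": 1e-4,
    "random_seed": 42,
    "training_time_h": 3.5,
}
with open("training_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```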
Ideal (Novel models).
Further standardization promotes ease of reproduction and validation by the scientific community by making use of emerging online platforms. Thus, novel models could be created to conform to standardized formats (e.g., the BioImage Model Zoo) as these become more readily available in the future.
Discussion
Herein we have presented recommendations in the form of checklists to increase the understandability and reproducibility of published image figures and image analyses. While our checklists were initially intended for bioimages from light microscopes, we believe that many of their principles apply more widely. Our checklists include recommendations for image formatting, annotation, color display, and data availability, which at the minimal level can largely be achieved with commercial or open-source software (e.g., ‘include scale bar’). Likewise, the minimal suggestions for image analysis pipelines can be implemented readily with today’s options (e.g., code repositories). We believe that, once included in microscopy core facility training and microscopy courses, and introduced as guidelines by publishers, the recommendations will present no additional burden. On the contrary, transparent requirements for publishing images and progress-monitoring checklists will ease the path from starting a microscopy experiment to producing reproducible 76 and understandable image figures for all scientists.
Recommendations extending the “Minimal” level are introduced in the “Recommended” and “Ideal” reporting levels and at times go beyond what is easy to implement with standard tools today. They are meant to encourage continuous striving toward higher quality standards in image publishing. Before all of these advanced standards can become the new norm, technologies, software, and research infrastructure must still improve. At present, no image database is used widely enough to be a go-to solution, although dedicated resources exist and are slowly gaining traction, and publishers are experimenting with parallel solutions (e.g., EMBO SourceData). Also, while funding agencies increasingly require data to be deposited in repositories, few guidelines exist for publishing terabytes to petabytes of raw data. And while publishers may mandate data deposition or availability, they do not always review its implementation. Combined with a lack of recognition of the effort put into publishing original image data, scientists are often discouraged from making data openly available. Commercial solutions for data storage are becoming increasingly available; for instance, AWS Open Data has already been used to host image data (https://registry.opendata.aws/cellpainting-gallery/), and we believe that, ultimately, the images presented in most publications should be linked to a losslessly compressed version amenable to re-analysis.
The checklists and recommendations for image analyses will naturally be dynamic and require regular updates to reflect new developments in this active research domain. Moreover, it is possible that generating publication-quality images will itself become a standardized ‘workflow’. It was previously suggested that images should be processed through scripting, with every step, from microscope output to published figure, stored in a metadata file 39. Another challenge is the continuous availability of image analysis software and workflows, which requires software maintenance and updates to stay usable. Beyond technical developments, it is important to create inclusive standards for image publication that are achievable for our diverse global scientific community, which differs greatly in access to training and support, imaging infrastructure, and imaging software. Our explicit intention is that the minimal level, which we believe must be met, does not pose an additional monetary or skill burden on scientists and is achievable with attention to detail. Our further reporting levels, Recommended and Ideal, should encourage scientists to improve the accessibility and explanatory power of their images in publications.
We envision that the present checklists will be continuously updated by the scientific community and adapted to future requirements and unforeseen challenges. We are currently working on V1.0 of a web-based Jupyter Book as a home for the ongoing development of ideas, extensions, and also discussions on image publication. This consortium will continue future work in close alliance with similar initiatives such as NEUBIAS 66,77, BINA and German BioImaging, initiatives that members of the authorship are involved with alongside their participation in QUAREP-LiMi. Collectively we will work towards providing educational materials and tutorials based on the presented checklists and to continuously lobby to integrate its contents in general resources for better images 78. We ask that all readers consider how their work will be seen and used in the future and join us in building a stronger scientific foundation for everyone. The presented checklists, version 1.0, will already make images in publications more accessible, understandable and reproducible, providing a valuable resource that may be used to build a solid foundation within today’s research that will benefit future science and scientists.
Extended Data
Extended Data figure 1.
Overview of current repositories that accept image data. 82
Acknowledgements, Funding statements, Contributions
We acknowledge the support and discussions with all our colleagues and QUAREP-LiMi Working Group 12 members. We additionally thank Oliver Biehlmeier, Martin Bornhäuser, Oliver Burri, Caron Jacobs, Alex Laude, Kenneth Ho, and Rocco D’Antuono for further feedback and discussions on this manuscript and their endorsement of the checklists. Icons: designed by the authors and from Fontawesome.com; Images: ImageJ sample images 79, 80 and 81.
C.S. was supported by a grant from the Chan Zuckerberg Initiative napari Plugin Foundation #2021-24038.
C.B. was supported by grants ANID (PIA ACT192015; Fondecyt 1210872; Fondequip EMQ210101; Fondequip EQM210020) and PUENTE-2022-13 from Pontificia Universidad Católica de Chile.
B.A.C. was funded by NIH P41 GM135019 and grant 2020-225720 from the Chan Zuckerberg Initiative DAF, an advised fund of the Silicon Valley Community Foundation.
F.J. was supported by AI4Life (European Unions’s Horizon Europe research and innovation programme, #101057970) and by the Chan Zuckerberg Initiative napari Plugin Foundation #2021-240383 and #2021-239867.
M.H. and D.G. were supported by NSF award 1917206 and NIH award U01 CA200059.
R.N. was supported by grant NI 451/10-1 from the German Research Foundation and grant 03TN0047B “FluMiKal” from the German Federal Ministry for Economic Affairs and Climate Action.
A.P-D. was supported by EPSRC grant EP/W024063/1.
C.S.D.C was supported by grant #2019-198155 (5022) awarded by the Chan Zuckerberg Initiative DAF, an advised fund of Silicon Valley Community Foundation, as part of their Imaging Scientist Program. She was also funded by NIH grant #U01CA200059.
C.T. was supported by a grant from the Chan Zuckerberg Initiative DAF, an advised fund of Silicon Valley Community Foundation (grant number 2020-225265).
H.K.J. was supported by MSNZ funding of the Deutsche Krebshilfe.
C. S. Conceptualization, Methodology, Writing - Original Draft, Visualization, Supervision, Project administration.
M. S. N. Conceptualization, Methodology, Writing - Original Draft, Visualization, Writing - Review & Editing.
S. A. Endorsement & Reviewing.
G.-J. B. Endorsement & Reviewing.
C. B. Critical review and commentary in pre-writing stage, Review & Editing.
J. Bischof Endorsement & Reviewing.
U. B. Conceptualization, Methodology, Writing - Review & Editing, Endorsement.
J. Brocher Endorsement, Writing - Review & Editing, Visualization.
M. T. C. Conceptualization, Methodology, Endorsement, Writing - Review & Editing.
C. C. Endorsement & Reviewing.
J. C. Writing - Review & Editing.
B. A. C. Writing - Review & Editing, Endorsement.
E. C.-S. Endorsement & Reviewing.
M. E. Writing - Review & Editing.
R. E. Endorsement & Reviewing.
K. E. Endorsement & Reviewing.
J. F.-R. Endorsement & Reviewing.
N. G. Endorsement & Reviewing.
L. G. Endorsement & Reviewing.
D. G. Writing - Review & Editing.
T. G. Writing - Review & Editing.
N. H. Endorsement, Review & Editing.
M. Hammer Review & Editing.
M. Hartley Endorsement & Reviewing.
M. Held Endorsement & Reviewing.
F. J. Endorsement & Reviewing.
V. K. Writing - Review & Editing.
A. A. K. Endorsement, Reviewing & Editing.
J. L. Endorsement & Reviewing.
S. LeD. Endorsement & Reviewing.
S. LeG. Writing - Review & Editing.
P. L. Endorsement & Reviewing.
G. G. M. Conceptualization, Methodology, Writing - Review & Editing.
A. M. Reviewing & Editing, Endorsement.
K. M. Conceptualization, Methodology, Writing - Review & Editing.
P. M. L. Endorsement & Reviewing.
R. N. Conceptualization, Supervision, Project administration, Endorsement.
A. N. Endorsement & Reviewing.
A. C. P. Conceptualization, Methodology, Writing - Review & Editing.
A. P.-D. Writing - Review & Editing, Visualization.
L. P. Writing - Review & Editing.
R. A. Endorsement & Reviewing.
B. S.-D. Endorsement & Reviewing.
L. S. Endorsement & Reviewing.
R. T. S. Endorsement & Reviewing.
A. S. Review & Editing.
O. S. Endorsement, Writing - Review & Editing.
V. P. S. Endorsement & Reviewing.
M. S. Endorsement & Reviewing, Writing - Review & Editing.
S. S. Resources, Writing - Review & Editing.
C. S.-D.-C. Conceptualization, Endorsement, Validation.
D. J. T. Writing - Review & Editing.
C. T. Conceptualization, Methodology, Writing - Review & Editing.
H. K. J. Conceptualization, Methodology, Writing - Original Draft, Visualization, Project administration, Supervision.
Glossary
- Image
Used here for image data from a microscope experiment; the principles described may also apply to medical images and electron microscopy images.
- Original image
Output files/source image data of the microscope; depending on the microscope type and vendor, these may be essentially “raw”, i.e., what is visible through the ocular, or pre-processed.
- Workflow
A series of image processing and analysis steps used to generate a meaningful result for a specific application, without reusability in mind. The individual steps typically use existing image analysis components. A workflow usually exists as a script or plugin within a software platform, or as a stand-alone tool. See also: workflow template.
- Image analysis component
Computer vision methods and algorithms that are available as functions or classes in software platforms for image analysis.
- Software platform/library
Software that bundles many algorithms, tools, and workflows (e.g., Fiji, CellProfiler).
- Workflow template
A workflow that is engineered so that it can be reused for more applications and by different users. It is typically created with more flexibility and accessibility in mind and thus provides more options to modify for a different use case, exposing settings in an easy-to-use manner (e.g., a GUI).
- GUI
Graphical user interface
- Machine learning model
A program that makes a decision (classifier) or returns an output (regression) based on some input, with the ability to process previously unseen data.
- Channel adjustment
A change to the brightness, contrast, or gamma correction of an image channel.
- Contrast
The difference between the brightest and darkest pixels in an image.
- Supervised machine learning
Training a machine learning model with labeled data, for example the inputs for training have been previously classified by a human.
- Unsupervised machine learning
Training a machine learning model with unlabeled data, often to perform tasks such as clustering.
- Deep learning
Machine learning using deep neural networks.
- Ground truth
Labeled data. Although often described as ground truth, such labels frequently contain mistakes, especially in large data sets, and should not be assumed to be the actual truth.
- Software containers
A versioned, reproducible, and reusable computing system (such as an operating-system-level virtualizer like Docker (https://www.docker.com/) or Singularity (https://docs.sylabs.io/guides/3.5/user-guide/introduction.html), or an otherwise reusable virtual machine system) that allows arbitrary numbers of users to access one or more software tools in a controlled and defined environment.
Contributor Information
Christopher Schmied, Fondazione Human Technopole, Viale Rita Levi-Montalcini 1, 20157 Milano/Italy; Present address: Leibniz-Forschungsinstitut für Molekulare Pharmakologie (FMP), Robert-Rössle-Str. 10, 13125 Berlin.
Michael S. Nelson, Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, 53706, USA
Sergiy Avilov, Max Planck Institute of Immunobiology and Epigenetics, 79108 Freiburg, Germany.
Gert-Jan Bakker, Medical BioSciences department, Radboud University Medical Centre, Nijmegen, Netherlands.
Cristina Bertocchi, Laboratory for Molecular mechanics of cell adhesions, Pontificia Universidad Católica de Chile Santiago. Osaka University, Graduate School of Engineering Science, Japan.
Johanna Bischof, Euro-BioImaging ERIC, Bio-Hub, Meyerhofstr. 1, 69117 Heidelberg/Germany.
Ulrike Boehm, Carl Zeiss AG, Carl-Zeiss-Straße 22, 73447 Oberkochen, Germany.
Jan Brocher, BioVoxxel, Scientific Image Processing and Analysis, Eugen-Roth-Strasse 8, 67071 Ludwigshafen, Germany.
Mariana T. Carvalho, Nanophotonics and BioImaging Facility at INL, International Iberian Nanotechnology Laboratory, 4715-330 - Portugal
Catalin Chiritescu, Phi Optics, Inc., 1800 S. Oak St, Ste 106, Champaign, IL 61820, USA.
Jana Christopher, Biochemistry Center Heidelberg, Heidelberg University, Germany.
Beth A. Cimini, Imaging Platform, Broad Institute, Cambridge, MA 02142
Eduardo Conde-Sousa, i3S, Instituto de Investigação e Inovação Em Saúde and INEB, Instituto de Engenharia Biomédica, Universidade do Porto, Porto, Portugal.
Michael Ebner, Leibniz-Forschungsinstitut für Molekulare Pharmakologie (FMP), Robert-Rössle-Str. 10, 13125 Berlin.
Rupert Ecker, Translational Research Institute, Queensland University of Technology, 37 Kent Street, Woolloongabba, QLD 4102, Australia; School of Biomedical Sciences, Faculty of Health, Queensland University of Technology, Brisbane, QLD 4059, Australia; TissueGnostics GmbH, 1020 Vienna, Austria.
Kevin Eliceiri, Department of Medical Physics and Biomedical Engineering, University of Wisconsin at Madison, Madison WI 53706.
Julia Fernández-Rodríguez, Centre for Cellular Imaging Core Facility, Sahlgrenska Academy, University of Gothenburg, Sweden.
Nathalie Gaudreault, Allen Institute for Cell Science, Seattle, WA, USA.
Laurent Gelman, Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland.
David Grunwald, RNA Therapeutics Institute, University of Massachusetts Chan Medical School, Worcester, MA 01605, USA.
Tingting Gu, University of Oklahoma, Norman, OK, USA.
Nadia Halidi, Advanced Light Microscopy Unit, Centre for Genomic Regulation, Barcelona, Spain.
Mathias Hammer, RNA Therapeutics Institute, University of Massachusetts Chan Medical School, Worcester, MA 01605, USA.
Matthew Hartley, European Molecular Biology Laboratory, European Bioinformatics Institute (EMBL-EBI), Hinxton, UK.
Marie Held, Centre for Cell Imaging, The University of Liverpool, UK.
Florian Jug, Fondazione Human Technopole, Viale Rita Levi-Montalcini 1, 20157 Milano/Italy.
Varun Kapoor, Department of AI research, Kapoor Labs, Paris, 75005, France.
Ayse Aslihan Koksoy, MD Anderson Cancer Center, Houston, TX, USA.
Judith Lacoste, MIA Cellavie Inc., Montreal, QC Canada.
Sylvia E. Le Dévédec, Division of Drug Discovery and Safety, Cell Observatory, Leiden Academic Centre for Drug Research, Leiden University, 2333 CC Leiden, The Netherlands
Sylvie Le Guyader, Karolinska Institutet, Hälsovägen 7C, 14157, Huddinge, Sweden.
Penghuan Liu, Key Laboratory for Modern Measurement Technology and Instruments of Zhejiang Province, College of Optical and Electronic Technology, China Jiliang University, Hangzhou, China.
Gabriel G. Martins, Advanced Imaging Facility, Instituto Gulbenkian de Ciência, Oeiras 2780-156 - Portugal
Aastha Mathur, Euro-BioImaging ERIC, Bio-Hub, Meyerhofstr. 1, 69117 Heidelberg/Germany.
Kota Miura, Bioimage Analysis & Research, 69127 Heidelberg/Germany.
Paula Montero Llopis, MicRoN Core, Harvard Medical School, Boston, MA, USA.
Roland Nitschke, Life Imaging Center, Signalling Research Centres CIBSS and BIOSS, University of Freiburg, Germany.
Alison North, Bio-Imaging Resource Center, The Rockefeller University, New York, NY USA.
Adam C. Parslow, Baker Institute Microscopy Platform, Baker Heart and Diabetes Institute, Melbourne, VIC, 3004, Australia
Alex Payne-Dwyer, School of Physics, Engineering and Technology, University of York, Heslington, YO10 5DD, UK.
Laure Plantard, Friedrich Miescher Institute for Biomedical Research, 4058 Basel, Switzerland.
Rizwan Ali, King Abdullah International Medical Research Center (KAIMRC), Medical Research Core Facility and Platforms (MRCFP), King Saud bin Abdulaziz University for Health Sciences (KSAU-HS), Ministry of National Guard Health Affairs (MNGHA), Riyadh 11481, Saudi Arabia.
Britta Schroth-Dietz, Light Microscopy Facility, Max Planck Institute of Molecular Cell Biology and Genetics Dresden, Pfotenhauerstrasse 108, 01307 Dresden, Germany.
Lucas Schütz, ariadne.ai (Germany) GmbH, 69115 Heidelberg, Germany.
Ryan T. Scott, Space Biosciences Division, NASA Ames Research Center, Moffett Field, CA, 94035, USA.
Arne Seitz, BioImaging & Optics Platform (BIOP), Ecole Polytechnique Fédérale de Lausanne (EPFL), Faculty of Life sciences (SV), CH-1015 Lausanne.
Olaf Selchow, Microscopy & BioImaging Consulting, Image Processing & Large Data Handling, Tobias-Hoppe-Strassse 3, 07548 Gera, Germany.
Ved P. Sharma, Bio-Imaging Resource Center, The Rockefeller University, New York, NY, USA
Martin Spitaler, Max Planck Institute of Biochemistry, Am Klopferspitz 18, 82152 Martinsried, Germany.
Sathya Srinivasan, Imaging and Morphology Support Core, Oregon National Primate Research Center - (ONPRC - OHSU West Campus), Beaverton, Oregon 97006, USA.
Caterina Strambio-De-Castillia, Program in Molecular Medicine, University of Massachusetts Chan Medical School, Worcester, MA, 01605, USA.
Douglas J. Taatjes, Department of Pathology and Laboratory Medicine, Microscopy Imaging Center (RRID# SCR_018821), Center for Biomedical Shared Resources, University of Vermont, Burlington, VT 05405 USA
Christian Tischer, Centre for Bioimage Analysis, EMBL Heidelberg, Meyerhofstr. 1, 69117 Heidelberg/Germany.
Helena Klara Jambor, NCT-UCC, Medizinische Fakultät TU Dresden, Fetscherstrasse 105, 01307 Dresden/Germany.
Availability
The checklists can be downloaded as printable files here: https://doi.org/10.5281/zenodo.7642559
The companion Jupyter Book can be found here: https://quarep-limi.github.io/WG12_checklists_for_image_publishing/intro.html
References
- 1.North AJ Seeing is believing? A beginners’ guide to practical pitfalls in image acquisition. J. Cell Biol 172, 9–18 (2006). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Brown CM Fluorescence microscopy--avoiding the pitfalls. J. Cell Sci 120, 1703–1705 (2007). [DOI] [PubMed] [Google Scholar]
- 3.Senft RA et al. A biologist’s guide to the field of quantitative bioimaging. https://zenodo.org/record/7439284 (2022) doi: 10.5281/zenodo.7439284. [DOI] [Google Scholar]
- 4.Jonkman J Rigor and Reproducibility in Confocal Fluorescence Microscopy. Cytom. Part J. Int. Soc. Anal. Cytol 97, 113–115 (2020). [DOI] [PubMed] [Google Scholar]
- 5.Heddleston JM, Aaron JS, Khuon S & Chew T-L A guide to accurate reporting in digital image acquisition - can anyone replicate your microscopy data? J. Cell Sci 134, jcs254144 (2021). [DOI] [PubMed] [Google Scholar]; This paper provides an nicely detailed breakdown of both why complete reporting of methods in microscopy are important, who the stakeholders are, and where the changes and motivation needs to come from.
- 6.Montero Llopis P et al. Best practices and tools for reporting reproducible fluorescence microscopy methods. Nat. Methods 18, 1463–1476 (2021). [DOI] [PubMed] [Google Scholar]
- 7.Hammer M et al. Towards community-driven metadata standards for light microscopy: tiered specifications extending the OME model. Nat. Methods 18, 1427–1440 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Rigano A et al. Micro-Meta App: an interactive tool for collecting microscopy metadata based on community specifications. Nat. Methods 18, 1489–1495 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]; The Micro-Meta App demonstrates some of the actual steps being taken to provide real tools for scientists to use in order to improve microscopy methods reporting. It is not enough to simply scold scientists that something must change – rather it is important that the tools to make such change as quick and painless as possible be created and made freely available.
- 9.Laissue PP, Alghamdi RA, Tomancak P, Reynaud EG & Shroff H Assessing phototoxicity in live fluorescence imaging. Nat. Methods 14, 657–661 (2017). [DOI] [PubMed] [Google Scholar]
- 10.Kiepas A, Voorand E, Mubaid F, Siegel PM & Brown CM Optimizing live-cell fluorescence imaging conditions to minimize phototoxicity. J. Cell Sci 133, jcs242834 (2020). [DOI] [PubMed] [Google Scholar]
- 11.Sheen MR et al. Replication Study: Biomechanical remodeling of the microenvironment by stromal caveolin-1 favors tumor invasion and metastasis. eLife 8, e45120 (2019). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Jambor H et al. Creating clear and informative image-based figures for scientific publications. PLoS Biol 19, e3001161 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]; This study examined in high-impact biology publications how effectively images conveyed insights. It specifically focused on identifying the frequency of unclear images that lack crucial information like scale bars, annotation legends, or accessible colors and served as the catalyst for the current research project.
- 13.Schindelin J et al. Fiji: an open-source platform for biological-image analysis. Nat. Methods 9, 676–82 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Stirling DR et al. CellProfiler 4: improvements in speed, utility and usability. BMC Bioinformatics 22, 433 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Dietz C et al. Integration of the ImageJ Ecosystem in the KNIME Analytics Platform. Front. Comput. Sci 2, 8 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Perkel JM Python power-up: new image tool visualizes complex data. Nature 600, 347–348 (2021). [DOI] [PubMed] [Google Scholar]
- 17.Eliceiri KW et al. Biological imaging software tools. Nat. Methods 9, 697–710 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Haase R et al. A Hitchhiker’s guide through the bio-image analysis software universe. FEBS Lett. 596, 2472–2485 (2022). [DOI] [PubMed] [Google Scholar]
- 19.Aaron J & Chew T-L A guide to accurate reporting in digital image processing - can anyone reproduce your quantitative analysis? J. Cell Sci 134, jcs254151 (2021). [DOI] [PubMed] [Google Scholar]
- 20.Miura K & Tosi S Epilogue: A Framework for Bioimage Analysis. in (eds. Wheeler A & Henriques R.) 269–284 (John Wiley & Sons, Ltd, 2017). doi: 10.1002/9781119096948.ch11. [DOI] [Google Scholar]
- 21.Ellenberg J et al. A call for public archives for biological image data. Nat Methods 15, 849–854 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22.Hartley M et al. The BioImage Archive - Building a Home for Life-Sciences Microscopy Data. J. Mol. Biol 434, 167505 (2022). [DOI] [PubMed] [Google Scholar]
- 23.Williams E et al. The Image Data Resource: A Bioimage Data Integration and Publication Platform. Nat Methods 14, 775–781 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.Ouyang W et al. BioImage Model Zoo: A Community-Driven Resource for Accessible Deep Learning in BioImage Analysis. 2022.06.07.495102 Preprint at 10.1101/2022.06.07.495102 (2022). [DOI] [Google Scholar]
- 25.Boehm U et al. QUAREP-LiMi: a community endeavor to advance quality assessment and reproducibility in light microscopy. Nat. Methods 18, 1423–1426 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]; This paper describes the network QUAREP-LiMi, in which these publication’s authors are embedded and how their work is interconnected to the other QUAREP-LiMi working groups with related topics.
- 26.Nelson G et al. QUAREP-LiMi: A community-driven initiative to establish guidelines for quality assessment and reproducibility for instruments and images in light microscopy. J Microsc 284, 56–73 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Wilkinson MD et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci. Data 3, 160018 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Faklaris O et al. Quality assessment in light microscopy for routine use through simple tools and robust metrics. J. Cell Biol 221, e202107093 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.Bik EM, Casadevall A & Fang FC The Prevalence of Inappropriate Image Duplication in Biomedical Research Publications. mBio 7, (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]; This paper is a thorough quantitative and systematic analysis of image manipulations in publications. The paper has had a profound impact on the scientific communities and highlighted the need to improve image quality.
- 30.Bik EM, Fang FC, Kullas AL, Davis RJ & Casadevall A Analysis and Correction of Inappropriate Image Duplication: the Molecular and Cellular Biology Experience. Mol. Cell. Biol 38, e00309–18 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Cromey DW Digital images are data: and should be treated as such. Methods Mol Biol 931, 1–27 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.CSE . CSE’s White Paper on Promoting Integrity in Scientific Journal Publications http://www.councilscienceeditors.org/wp-content/uploads/entire_whitepaper.pdf (2012). [Google Scholar]
- 33.Rossner M & Yamada KM What’s in a picture? The temptation of image manipulation. J Cell Biol 166, 11–5 (2004). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.Van Noorden R Publishers launch joint effort to tackle altered images in research papers. Nature (2020) doi: 10.1038/d41586-020-01410-9. [DOI] [PubMed] [Google Scholar]
- 35.Koppers L, Wormer H & Ickstadt K Towards a Systematic Screening Tool for Quality Assurance and Semiautomatic Fraud Detection for Images in the Life Sciences. Sci. Eng. Ethics 23, 1113–1128 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36.Bucci EM Automatic detection of image manipulations in the biomedical literature. Cell Death Dis. 9, 400 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37.Van Noorden R Journals adopt AI to spot duplicated images in manuscripts. Nature 601, 14–15 (2022). [DOI] [PubMed] [Google Scholar]
- 38.Martin C & Blatt M Manipulation and misconduct in the handling of image data. Plant Cell 25, 3147–3148 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39.Miura K, Norrelykke SF Reproducible image handling and analysis. EMBO J 40, e105889 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]; This work demonstrates with many examples the importannce or proper image analysis to avoid misleading images. The authors also make a strong case for creating reproducible figures with (e.g. IJ-Macro) scripting.
- 40. Marques G, Pengo T & Sanders MA Imaging methods are vastly underreported in biomedical research. eLife 9, e55133 (2020).
- 41. Nature guidelines: guide to preparing final artwork. https://www.nature.com/documents/nprot-guide-to-preparing-final-artwork.pdf.
- 42. Schmied C & Jambor HK Effective image visualization for publications - a workflow using open access tools and concepts. F1000Res. 9, 1373 (2020).
- 43. Cromey DW Avoiding twisted pixels: ethical guidelines for the appropriate use and manipulation of scientific digital images. Sci. Eng. Ethics 16, 639–667 (2010). This article provides the first set of guidelines on how to properly treat digital images in scientific publications.
- 44. Russ JC The Image Processing Handbook. (CRC Press, 2006). doi: 10.1201/9780203881095.
- 45. Zuiderveld K Contrast Limited Adaptive Histogram Equalization. in Graphics Gems IV 474–485 (Elsevier, 1994). doi: 10.1016/B978-0-12-336156-1.50061-6.
- 46. Richardson WH Bayesian-Based Iterative Method of Image Restoration. J. Opt. Soc. Am. 62, 55–59 (1972).
- 47. Krull A, Buchholz T-O & Jug F Noise2Void - Learning Denoising From Single Noisy Images. in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2124–2132 (2019). doi: 10.1109/CVPR.2019.00223.
- 48. Weigert M et al. Content-aware image restoration: pushing the limits of fluorescence microscopy. Nat. Methods 15, 1090–1097 (2018).
- 49. Fish DA, Brinicombe AM, Pike ER & Walker JG Blind deconvolution by means of the Richardson–Lucy algorithm. JOSA A 12, 58–65 (1995).
- 50. Crameri F, Shephard GE & Heron PJ The misuse of colour in science communication. Nat. Commun. 11, 5444 (2020).
- 51. Keene DR A Review of Color Blindness for Microscopists: Guidelines and Tools for Accommodating and Coping with Color Vision Deficiency. Microsc. Microanal. 21, 279–289 (2015).
- 52. Linkert M et al. Metadata matters: access to image data in the real world. J. Cell Biol. 189, 777–782 (2010).
- 53. Tedersoo L et al. Data sharing practices and data availability upon request differ across scientific disciplines. Sci. Data 8, 192 (2021). Data sharing and availability are crucial for reproducibility. This paper clearly documents how current data sharing practices fall short and discusses ways to improve them.
- 54. Gabelica M, Bojčić R & Puljak L Many researchers were not compliant with their published data sharing statement: a mixed-methods study. J. Clin. Epidemiol. 150, 33–41 (2022).
- 55. Fisch DH et al. An Artificial Intelligence Workflow for Defining Host-Pathogen Interactions. Preprint at bioRxiv doi: 10.1101/408450 (2018).
- 56. Erguvan Ö, Louveaux M, Hamant O & Verger S ImageJ SurfCut: a user-friendly pipeline for high-throughput extraction of cell contours from 3D image stacks. BMC Biol. 17, 38 (2019).
- 57. Klickstein JA, Mukkavalli S & Raman M AggreCount: an unbiased image analysis tool for identifying and quantifying cellular aggregates in a spatially defined manner. J. Biol. Chem. 295, 17672–17683 (2020).
- 58. Schmied C, Soykan T, Bolz S, Haucke V & Lehmann M SynActJ: Easy-to-Use Automated Analysis of Synaptic Activity. Front. Comput. Sci. 3, 777837 (2021).
- 59. Tinevez J-Y et al. TrackMate: An open and extensible platform for single-particle tracking. Methods 115, 80–90 (2017).
- 60. Arganda-Carreras I et al. Trainable Weka Segmentation: a machine learning tool for microscopy pixel classification. Bioinformatics 33, 2424–2426 (2017).
- 61. Arzt M et al. LABKIT: Labeling and Segmentation Toolkit for Big Image Data. Front. Comput. Sci. 4 (2022).
- 62. Stringer C, Wang T, Michaelos M & Pachitariu M Cellpose: a generalist algorithm for cellular segmentation. Nat. Methods 18, 100–106 (2021).
- 63. Berginski ME & Gomez SM The Focal Adhesion Analysis Server: a web tool for analyzing focal adhesion dynamics. F1000Res. 2, 68 (2013).
- 64. Hollandi R et al. nucleAIzer: A Parameter-free Deep Learning Framework for Nucleus Segmentation Using Image Style Transfer. Cell Syst. 10, 453–458.e6 (2020).
- 65. da Veiga Leprevost F et al. BioContainers: an open-source and community-driven framework for software standardization. Bioinformatics 33, 2580–2582 (2017).
- 66. Cimini BA et al. The NEUBIAS Gateway: a hub for bioimage analysis methods and materials. F1000Res. 9, 613 (2020).
- 67. Laine RF, Arganda-Carreras I, Henriques R & Jacquemet G Avoiding a replication crisis in deep-learning-based bioimage analysis. Nat. Methods 18, 1136–1144 (2021).
- 68. Ronneberger O, Fischer P & Brox T U-Net: Convolutional Networks for Biomedical Image Segmentation. in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Lecture Notes in Computer Science vol. 9351, 234–241 (Springer, 2015).
- 69. Falk T et al. U-Net: deep learning for cell counting, detection, and morphometry. Nat. Methods 16, 67–70 (2019).
- 70. Schmidt U, Weigert M, Broaddus C & Myers G Cell Detection with Star-Convex Polygons. in Medical Image Computing and Computer Assisted Intervention – MICCAI 2018 (eds. Frangi AF, Schnabel JA, Davatzikos C, Alberola-López C & Fichtinger G) 265–273 (Springer International Publishing, 2018). doi: 10.1007/978-3-030-00934-2_30.
- 71. Weigert M, Schmidt U, Haase R, Sugawara K & Myers G Star-convex Polyhedra for 3D Object Detection and Segmentation in Microscopy. in 2020 IEEE Winter Conference on Applications of Computer Vision (WACV) 3655–3662 (2020). doi: 10.1109/WACV45572.2020.9093435.
- 72. Moshkov N et al. Learning representations for image-based profiling of perturbations. Preprint at bioRxiv doi: 10.1101/2022.08.12.503783 (2022).
- 73. Obermeyer Z, Powers B, Vogeli C & Mullainathan S Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 447–453 (2019).
- 74. Seyyed-Kalantari L, Liu G, McDermott M, Chen IY & Ghassemi M CheXclusion: Fairness gaps in deep chest X-ray classifiers. Preprint at arXiv doi: 10.48550/arXiv.2003.00827 (2020).
- 75. Larrazabal AJ, Nieto N, Peterson V, Milone DH & Ferrante E Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis. Proc. Natl. Acad. Sci. U. S. A. 117, 12592–12594 (2020).
- 76. Baker M 1,500 scientists lift the lid on reproducibility. Nature 533, 452–454 (2016). This news article summarizes the general problem that QUAREP-LiMi seeks to address within the field of microscopy. Although its scope is much broader, it provides an excellent overview of the impact of the reproducibility crisis, with clear graphics and at a length palatable to any scientist.
- 77. Martins G et al. Highlights from the 2016-2020 NEUBIAS training schools for Bioimage Analysts: a success story and key asset for analysts and life scientists. Preprint at F1000Research doi: 10.12688/f1000research.25485.1 (2021).
- 78. Collins S, Gemayel R & Chenette EJ Avoiding common pitfalls of manuscript and figure preparation. FEBS J. 284, 1262–1266 (2017).
- 79. Schneider CA, Rasband WS & Eliceiri KW NIH Image to ImageJ: 25 years of image analysis. Nat. Methods 9, 671–675 (2012).
- 80. Jambor H et al. Systematic imaging reveals features and changing localization of mRNAs in Drosophila development. eLife 4 (2015).
- 81. Sarov M et al. A genome-wide resource for the analysis of protein localisation in Drosophila. eLife 5, e12068 (2016).
- 82. Cimini B A comparison of repositories for deposition of light microscopy data. Zenodo (2023). doi: 10.5281/zenodo.7628604.
Data Availability Statement
The checklists can be downloaded as printable files here: https://doi.org/10.5281/zenodo.7642559
The companion Jupyter Book can be found here: https://quarep-limi.github.io/WG12_checklists_for_image_publishing/intro.html