DL4MicEverywhere: Deep learning for microscopy made flexible, shareable, and reproducible

Iván Hidalgo-Cenalmor 1, Joanna W Pylvänäinen 2, Mariana G Ferreira 1, Craig T Russell 3, Alon Saguy 4, Ignacio Arganda-Carreras 5,6,7,8, Yoav Shechtman 4; AI4Life Consortium *, Guillaume Jacquemet 2,9,10,11, Ricardo Henriques 1,12, Estibaliz Gómez-de-Mariscal 1

Deep learning enables the transformative analysis of large multidimensional microscopy datasets, but barriers remain to implementing these advanced techniques (1, 2). Many researchers lack access to annotated data, high-performance computing (HPC) resources, and the expertise to develop, train, and deploy deep learning models. In recent years, several approaches have been developed to democratise the use of deep learning in microscopy (2). Tools like the BioImage Model Zoo facilitate sharing and reusing pre-trained models, distributing them as one-click image processing solutions (3, 4). Yet deep learning models often need to be trained or fine-tuned on the end user’s dataset to perform well (2, 3, 5). We previously released ZeroCostDL4Mic (6), an online platform relying on Google Colab that helped democratise deep learning by providing a zero-code interface to train and evaluate models capable of performing various bioimage processing tasks, such as segmentation, object detection, denoising, super-resolution microscopy, and image-to-image translation. Here, we introduce DL4MicEverywhere, a major advancement of the ZeroCostDL4Mic (6) framework (Fig. 1).

Fig. 1. DL4MicEverywhere platform.

a) DL4MicEverywhere eases deep learning workflow sharing, deployment, and showcasing by providing a user-friendly interactive environment to train and use models. Cross-platform compatibility ensures reproducible deep-learning model training. DL4MicEverywhere contributes to deep learning standardisation in bioimage analysis by promoting transferable, FAIR, and transparent pipelines. The platform exports models compatible with the BioImage Model Zoo (3) and populates free and open-source (FOSS) container images in Docker Hub for developers to reuse. b) DL4MicEverywhere accepts three types of notebook contributions: ZeroCostDL4Mic (6) notebooks, bespoke notebooks inspired by ZeroCostDL4Mic (6), and notebooks hosted in external repositories that comply with our format. The requirements and format of these contributions are tested automatically. c) In the DL4MicEverywhere GUI, the user chooses a notebook, the data and output folders, and can opt to run on a GPU when available. d) DL4MicEverywhere automatically identifies the system architecture and requirements, checks whether the corresponding Docker image is available to download from Docker Hub, and builds it otherwise. This image is used to create a Docker container: a functional instance of the image that provides the software environment needed to run the chosen notebook. e) A JupyterLab session is launched inside the Docker container to train, evaluate, or use the chosen model within an interactive notebook, equivalent to the ZeroCostDL4Mic (6) notebooks. f-h) DL4MicEverywhere enables the use of the same notebooks for f) super-resolution, g) artificial-labelling, or h) segmentation pipelines, among many others, on different local or remote infrastructures such as workstations, the cloud, or high-performance computing clusters.
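
The pull-or-build logic summarised in panels d-e can be illustrated with a minimal sketch. The image tag, Dockerfile location, port, and mount points below are hypothetical placeholders; the actual DL4MicEverywhere launcher wraps equivalent Docker commands automatically.

```python
# Minimal sketch (not the actual DL4MicEverywhere launcher) of the
# pull-or-build-then-launch flow shown in Fig. 1d-e.
import platform
import subprocess

IMAGE = "henriqueslab/dl4miceverywhere:example-notebook"  # hypothetical tag

# Map Python's architecture names to Docker platform names.
DOCKER_ARCH = {"x86_64": "amd64", "AMD64": "amd64", "aarch64": "arm64", "arm64": "arm64"}


def image_on_docker_hub(image: str) -> bool:
    """Return True if the image manifest can be resolved remotely."""
    result = subprocess.run(["docker", "manifest", "inspect", image],
                            capture_output=True)
    return result.returncode == 0


def get_or_build_image(image: str) -> None:
    """Pull the prebuilt image if it exists on Docker Hub, otherwise build it locally."""
    if image_on_docker_hub(image):
        subprocess.run(["docker", "pull", image], check=True)
    else:
        arch = DOCKER_ARCH.get(platform.machine(), "amd64")
        subprocess.run(["docker", "build", "--platform", f"linux/{arch}",
                        "-t", image, "."], check=True)  # assumes a Dockerfile in the current folder


def launch_notebook(image: str, data_dir: str, output_dir: str) -> None:
    """Create a container from the image, mounting the data and output folders."""
    subprocess.run(["docker", "run", "--rm", "-p", "8888:8888",
                    "-v", f"{data_dir}:/home/data",
                    "-v", f"{output_dir}:/home/output",
                    image], check=True)


if __name__ == "__main__":
    get_or_build_image(IMAGE)
    launch_notebook(IMAGE, "/path/to/images", "/path/to/results")
```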

DL4MicEverywhere is a platform that lets users train and deploy their models in different computational environments, including Google Colab, personal resources such as a desktop or laptop, and HPC systems. This flexibility is achieved by encapsulating each deep learning technique in an interactive Jupyter notebook within a Docker container, enabling others to replicate analyses consistently across multiple platforms. Building upon ZeroCostDL4Mic, DL4MicEverywhere enables users to install and interact with a large collection of standardised, user-friendly deep learning workflows in a safe environment, free from the limitations of proprietary platforms such as Google Colab (https://github.com/HenriquesLab/DL4MicEverywhere). DL4MicEverywhere can be launched graphically, via X11 forwarding, or directly from the command line (headless mode), supporting HPC usage. This cross-platform containerisation technology boosts the platform’s long-term sustainability and reproducibility while enhancing user convenience (7).
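
As an illustration of headless use, the sketch below shows how a containerised notebook session might be started from the command line on a remote workstation or HPC node and reached through an SSH tunnel. The image name, mount path, and Jupyter options are assumptions for the example; DL4MicEverywhere’s own launcher performs the equivalent steps for the user.

```python
# Minimal sketch of a headless (command-line) launch on a remote machine
# where Docker (or a compatible runtime) is available.
import secrets
import subprocess

IMAGE = "henriqueslab/dl4miceverywhere:example-notebook"  # hypothetical tag
TOKEN = secrets.token_hex(16)  # random token protecting the Jupyter session

subprocess.run(
    [
        "docker", "run", "--rm",
        "-p", "127.0.0.1:8888:8888",              # port only reachable from the host itself
        "-v", "/scratch/my_dataset:/home/data",   # hypothetical data mount
        IMAGE,
        "jupyter", "lab", "--ip=0.0.0.0", "--port=8888", "--no-browser",
        f"--ServerApp.token={TOKEN}",
    ],
    check=True,
)

# From a local machine, the session can then be reached through an SSH tunnel,
#   ssh -L 8888:localhost:8888 user@remote-node
# and opened in a browser at http://localhost:8888/?token=<TOKEN>
```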

Importantly, DL4MicEverywhere features a zero-code interface that handles all behind-the-scenes complexities, so users no longer need to deal with Docker configuration and deployment through the terminal. The intuitive interface abstracts away these technical details while providing a standardised Docker encapsulation for executing advanced techniques reliably. Researchers can select a notebook, choose computing resources, and run the corresponding deep learning-powered analysis with just a few clicks (Fig. 1c-e). This allows users to train and apply models on computing resources they control, eliminating reliance on third-party platforms. Furthermore, researchers can launch a notebook on local or remote systems with GPU acceleration whenever available, without worrying about complex software dependencies, Docker container management, or losing access to deep-learning frameworks (Fig. 1f-h). Compared with ZeroCostDL4Mic, DL4MicEverywhere doubles the number of deep learning approaches and provides new bioimage analysis tasks, such as semantic segmentation, interactive instance segmentation, image registration, 3D single-molecule localisation microscopy, temporal and spatial upsampling, and image generation. The platform is designed to encourage the sharing and reuse of deep learning workflows provided as Jupyter notebooks, which are then integrated into the BioImage Model Zoo. DL4MicEverywhere is strengthened by automated build pipelines (8) that enable tracked versioning of ZeroCostDL4Mic notebooks and the seamless integration of new trainable models contributed by the community as user-friendly notebooks, independently of the original ZeroCostDL4Mic framework (Fig. 1b). DL4MicEverywhere handles the corresponding testing and building of fully documented, open-source containers, making it easy for researchers to share not just the latest method, but the full software environment required to run it reliably.
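
The kind of automated check that a contribution pipeline can run on a newly submitted notebook is sketched below. The folder layout and configuration field names are hypothetical and do not reflect the exact DL4MicEverywhere contribution schema; the sketch only illustrates the principle of validating contributions before containers are built.

```python
# Minimal sketch of an automated contribution check, assuming each contributed
# notebook lives in its own folder with a configuration.yaml file (hypothetical layout).
from pathlib import Path

import yaml  # requires PyYAML

REQUIRED_FIELDS = {"notebook_url", "requirements_url", "notebook_version"}  # hypothetical fields


def check_contribution(folder: Path) -> list[str]:
    """Return a list of problems found in a contributed notebook folder."""
    problems = []
    config_file = folder / "configuration.yaml"
    if not config_file.exists():
        return [f"missing {config_file.name}"]
    config = yaml.safe_load(config_file.read_text()) or {}
    missing = REQUIRED_FIELDS - set(config)
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    return problems


if __name__ == "__main__":
    # Report the status of every contributed notebook folder.
    for folder in sorted(Path("notebooks").iterdir()):
        if folder.is_dir():
            issues = check_contribution(folder)
            print(f"{folder.name}: {'OK' if not issues else '; '.join(issues)}")
```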

DL4MicEverywhere is an open-source initiative that aims to make deep learning accessible to everyone by providing a flexible, community-driven platform. Encapsulating software in Docker containers makes it possible to integrate new methods without worrying about complex installation procedures and to enrich the microscopy community through participatory innovation. Users can rely on shared techniques while customising models across diverse hardware, retaining control over data and analysis. The platform sets a baseline for developing and using cutting-edge foundation models (9): by bundling these sophisticated models into shareable containers, researchers can easily exploit them in their microscopy applications. It is noteworthy that containerisation approaches can increase local storage usage. Compared with proprietary platforms, which can create technological and cultural obstacles, DL4MicEverywhere simplifies complex deep learning workflows through open, easy-to-use GUIs and automated pipelines. It leverages local computational resources, HPC, and cloud-based solutions, which provides valuable flexibility for sensitive biomedical data, where privacy risks may limit reliance on public cloud platforms. It also helps with continuously scaling data, such as high-throughput, high-content imaging data, whose storage, dissemination, and access often rely on institutional infrastructures with specific data-sharing protocols. Containerising notebooks is secure, as Jupyter notebook ports are virtualised, private, and protected with tokens. DL4MicEverywhere also adheres to the FAIR principles, enhancing data-driven scientific discoverability (10). We expect DL4MicEverywhere to represent an important step towards reliable, transparent, and participatory artificial intelligence in microscopy.
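
Because container images accumulate on disk, users may occasionally want to inspect and reclaim local storage. The following sketch uses standard Docker commands with a hypothetical image tag and is not part of DL4MicEverywhere itself.

```python
# Minimal sketch of checking and reclaiming local Docker storage.
import subprocess

# Summarise how much space images, containers, and volumes currently use.
subprocess.run(["docker", "system", "df"], check=True)

# Remove dangling (untagged) images left behind by repeated builds.
subprocess.run(["docker", "image", "prune", "--force"], check=True)

# Remove a specific image that is no longer needed (hypothetical tag shown).
subprocess.run(["docker", "rmi", "henriqueslab/dl4miceverywhere:example-notebook"],
               check=True)
```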

Acknowledgements

I.H.C., M.G.F., C.T.R., R.H., and E.G.M. received funding from the European Union through the Horizon Europe program (AI4LIFE project with grant agreement 101057970-AI4LIFE, and RT-SuperES project with grant agreement 101099654-RTSuperES to R.H.). I.H.C., M.G.F., E.G.M. and R.H. also acknowledge the support of the Gulbenkian Foundation (Fundação Calouste Gulbenkian) and the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 101001332 to R.H.). Funded by the European Union. Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them. This work was also supported by the European Molecular Biology Organization (EMBO) Installation Grant (EMBO-2020-IG-4734 to R.H.), the EMBO Postdoctoral Fellowship (EMBO ALTF 174-2022 to E.G.M.), the Chan Zuckerberg Initiative Visual Proteomics Grant (vpi-0000000044 with DOI:10.37921/743590vtudfp to R.H.). R.H. also acknowledges the support of LS4FUTURE Associated Laboratory (LA/P/0087/2020). This work is partially supported by grant GIU19/027 (to I.A.C.) funded by the University of the Basque Country (UPV/EHU), grant PID2021-126701OB-I00 (to I.A.C.) funded by the Ministerio de Ciencia, Innovación y Universidades, AEI, MCIN/AEI/10.13039/501100011033, and by "ERDF A way of making Europe" (to I.A.C.). This study was also supported by the Academy of Finland (338537 to G.J.), the Sigrid Juselius Foundation (to G.J.), the Cancer Society of Finland (Syöpäjärjestöt; to G.J.), and the Solutions for Health strategic funding to Åbo Akademi University (to G.J.). This research was supported by InFLAMES Flagship Programme of the Academy of Finland (decision number: 337531). We would like to thank Amin Rezaei, Ainhoa Serrano, Pablo Alonso, Urtzi Beorlegui, Andoni Rodriguez, Erlantz Calvo, Soham Mandal, and Virginie Uhlmann for their contributions to the ZeroCostDL4Mic notebook collection.

Footnotes

Author Contributions

I.H.C., G.J., R.H. and E.G.M. conceived, designed and wrote the source code of the project with contributions from all co-authors; I.H.C., J.W.P., M.G.F., C.T.R., A.S., Y.S., G.J., R.H., and E.G.M. tested the platform; I.H.C., J.W.P., M.G.F., G.J., R.H., and E.G.M. wrote the user documentation; I.H.C., G.J., R.H. and E.G.M. wrote the paper with input from all co-authors.

Contributor Information

AI4Life Consortium:

Arrate Muñoz-Barrutia, Beatriz Serrano-Solano, Caterina Fuster Barcelo, Constantin Pape, Craig T Russell, Emma Lundberg, Estibaliz Gómez-de-Mariscal, Florian Jug, Joran Deschamps, Iván Hidalgo-Cenalmor, Mariana G Ferreira, Matthew Hartley, Mehdi Seifi, Ricardo Henriques, Teresa Zulueta-Coarasa, Vera Galinova, and Wei Ouyang

Code availability

The source code, documentation, and tutorials for DL4MicEverywhere are available at https://github.com/HenriquesLab/DL4MicEverywhere under the Creative Commons CC-BY-4.0 license.

Bibliography

1. Moen E, Bannon D, Kudo T, Graf W, Covert M, Van Valen D. Deep learning for cellular image analysis. Nature Methods. 2019;16(12):1233–1246. doi: 10.1038/s41592-019-0403-1.
2. Pylvänäinen JW, Gómez-de-Mariscal E, Henriques R, Jacquemet G. Live-cell imaging in the deep learning era. Current Opinion in Cell Biology. 2023;85:102271. doi: 10.1016/j.ceb.2023.102271.
3. Ouyang W, Beuttenmueller F, Gómez-de-Mariscal E, Pape C, Burke T, García-López-de-Haro C, Russell C, Moya-Sans L, de-la-Torre-Gutiérrez C, Schmidt D, Kutra D, et al. BioImage Model Zoo: a community-driven resource for accessible deep learning in bioimage analysis. bioRxiv. 2022;2022.06.07.495102.
4. Gómez-de-Mariscal E, García-López-de-Haro C, Ouyang W, Donati L, Lundberg E, Unser M, Muñoz-Barrutia A, Sage D. DeepImageJ: a user-friendly environment to run deep learning models in ImageJ. Nature Methods. 2021;18(10):1192–1195.
5. Laine RF, Arganda-Carreras I, Henriques R, Jacquemet G. Avoiding a replication crisis in deep-learning-based bioimage analysis. Nature Methods. 2021;18(10):1136–1144.
6. von Chamier L, Laine RF, Jukkala J, Spahn C, Krentzel D, Nehme E, Lerche M, Hernández-Pérez S, Mattila PK, Karinou E, Holden S, et al. Democratising deep learning for microscopy with ZeroCostDL4Mic. Nature Communications. 2021;12(1):2276. doi: 10.1038/s41467-021-22518-0.
7. Moreau D, Wiebels K, Boettiger C. Containers for computational reproducibility. Nature Reviews Methods Primers. 2023;3(1):1–16.
8. Beaulieu-Jones BK, Greene CS. Reproducibility of computational workflows is automated using continuous analysis. Nature Biotechnology. 2017;35(4):342–346.
9. Bommasani R, Hudson DA, Adeli E, Altman R, Arora S, von Arx S, Bernstein MS, Bohg J, Bosselut A, Brunskill E, Brynjolfsson E, et al. On the opportunities and risks of foundation models. arXiv:2108.07258 [cs]. 2022.
10. Wang H, Fu T, Du Y, Gao W, Huang K, Liu Z, Chandak P, Liu S, Van Katwyk P, Deac A, Anandkumar A, et al. Scientific discovery in the age of artificial intelligence. Nature. 2023;620(7972):47–60.


Data Availability Statement

The source code, documentation, and tutorials for DL4MicEverywhere are available at https://github.com/HenriquesLab/DL4MicEverywhere under the Creative Commons CC-BY-4.0 license.
