2024 Jul 18;5(8):101024. doi: 10.1016/j.patter.2024.101024

Figure 4.

Comparison of computation times on Slurm with different compute configurations

We show the computation time (in minutes) for the CellPose workflow (https://github.com/TorecLuik/W_NucleiSegmentation-Cellpose) on dataset S-BSST265 (duplicated 12 times for a total of 948 images) under different Slurm configurations: local 1 CPU, remote 1 GPU, and two 4-batch variants, remote 4 GPU 4 batches and remote 4 GPU 4 batches priority. Local 1 CPU is executed on our workstation with local Slurm containers (https://github.com/Cellular-Imaging-Amsterdam-UMC/NL-BIOMERO-Local-Slurm; no GPU support) and mimics running CellPose locally after downloading data from OMERO; note that this is not just 1 CPU but 1 CPU node (with 4 CPU cores and 5 GB of memory). Remote 1 GPU is executed on our remote HPC cluster with the default BIOMERO settings for CellPose (1 job on 1 GPU node). Remote 4 GPU 4 batches uses the batch BIOMERO script to automatically run 4 jobs in parallel on our remote HPC cluster, each processing 1/4 of the data (a manually chosen split), including practical delays such as a batch waiting in the Slurm queue behind other people's jobs. Finally, remote 4 GPU 4 batches priority is a theoretical extrapolation in which the job's duration is taken as the maximum duration of any single batch, a scenario that would be possible on an HPC cluster with 4 GPUs immediately available. We limited the run to 4 batches because our HPC account allows at most 4 GPUs in parallel; in other HPC scenarios, computation could be sped up further with smaller batches. Overall, we show a steady decrease in workflow computation time by leveraging all facets of BIOMERO and Slurm together. The raw experimental data behind the boxplots are shown in Table S1.
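The batching scheme described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual BIOMERO batch script: `split_batches` is an assumed helper name, and the batch count of 4 mirrors the GPU limit of our HPC account. With 948 images and 4 batches, each batch holds 237 images; in the priority scenario, the effective duration is the maximum batch time rather than the sum.

```python
def split_batches(items, n_batches):
    """Split a list of items into n_batches roughly equal chunks."""
    base, remainder = divmod(len(items), n_batches)
    batches, start = [], 0
    for i in range(n_batches):
        size = base + (1 if i < remainder else 0)  # spread the remainder over the first batches
        batches.append(items[start:start + size])
        start += size
    return batches

# Hypothetical image identifiers standing in for the 948-image dataset
images = [f"image_{i:04d}" for i in range(948)]
batches = split_batches(images, 4)
print([len(b) for b in batches])  # -> [237, 237, 237, 237]

# Priority scenario: with 4 GPUs immediately available, the batches run
# concurrently, so the wall-clock time is the slowest batch, not the total.
batch_minutes = [30.0, 32.5, 31.0, 29.5]  # illustrative per-batch durations
sequential_time = sum(batch_minutes)
priority_time = max(batch_minutes)
print(priority_time)  # -> 32.5
```

The same splitting logic generalizes to more batches on clusters without a 4-GPU cap, which is why smaller batches could speed up the computation further.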