2024 Feb 20;80(Pt 3):174–180. doi: 10.1107/S2059798324000986

Table 1. Processing components deployed for the cryoET pipeline at eBIC along with approximate resource requirements and limits.

These requirements are designed to accommodate live data analysis from four Titan Krios microscopes. Note that Kubernetes resource requests can be specified in fractions of a CPU core, in which case CPU time is shared between applications. A100, V100 or P100 GPUs are used as the GPU resource, but GPUs with lower CUDA core counts would be adequate. Some motion-correction services submit jobs to a separate HPC cluster running a Slurm scheduler. More motion-correction service instances are permitted than are strictly necessary so that the pipeline can clear backlogs; typical usage sees only approximately four instances active.
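As the note above mentions, Kubernetes accepts fractional CPU requests, expressed in millicores. A minimal sketch of how one of these services might declare the figures in Table 1 follows; the pod name, container name and image are placeholders, not the actual eBIC manifests, and only the resource values come from the table:

```yaml
# Illustrative resource stanza for a pod running the CTF-estimation service.
apiVersion: v1
kind: Pod
metadata:
  name: ctf-estimation                    # hypothetical name
spec:
  containers:
    - name: ctffind4
      image: example.org/ctffind4:latest  # placeholder image
      resources:
        requests:
          cpu: 250m        # 0.25 CPU cores, as in Table 1
        limits:
          cpu: "1"         # maximum of 1 core per instance
          # GPU-backed services (e.g. motion correction) would
          # additionally request a device plugin resource such as:
          # nvidia.com/gpu: 1
```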

| Service component | Maximum No. of instances | Minimum requested CPU cores (per instance) | Maximum CPU cores (per instance) | GPU resources (per instance) |
|---|---|---|---|---|
| Motion-correction service (MotionCor2) | 8 | 0.5 | 1 | 1 GPU, V100 or P100 |
| CTF-estimation service (CTFFind4) | 4 | 0.25 | 1 | None |
| Reconstruction service (AreTomo) | 4 | 0.5 | 1 | 1 GPU, V100 or A100 |
| Tomogram denoising (Topaz) | 2 | 0.5 | 1 | 1 GPU, V100 or A100 |
| ISPyB connector service | 4 | 0.25 | 1 | None |
| Images service (thumbnail creation and data-format conversion) | 4 | 0.25 | 1 | None |
| Dispatcher (workflow-triggering service) | 2 | 0.25 | 1 | None |
| RabbitMQ server | 1 | 0.5 | 1 | None |
| PostgreSQL database servers (used by Murfey) | 3 | 0.5 | 2 | None |
| PgPool PostgreSQL middleware | 2 | 1 | 2 | None |

The PostgreSQL and PgPool components are part of a standard high-availability deployment. The relevant Helm chart is available at https://artifacthub.io/packages/helm/bitnami/postgresql-ha.
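The high-availability PostgreSQL deployment referenced above can be installed from the Bitnami chart. A minimal sketch, assuming the standard Bitnami chart repository and a placeholder release name ("murfey-db"); the replica counts shown match Table 1 (three PostgreSQL servers, two PgPool instances):

```shell
# Add the Bitnami chart repository and install the postgresql-ha chart.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install murfey-db bitnami/postgresql-ha \
  --set postgresql.replicaCount=3 \
  --set pgpool.replicaCount=2
```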