Approximate scaling behavior of the CHARMM atom decomposition (AD) model. The
table lists the percent parallel efficiency ranges of the AD model for
various numbers of processors carrying out MD simulations of proteins in an
explicit water environment (50,000–400,000 atoms total) on a) a
massively parallel supercomputer (Cray XT4, 2.6 GHz dual-core AMD Opteron nodes)
and b) a distributed-memory cluster (dual-core 2.8 GHz AMD Opteron nodes
with 8 Gb/s InfiniBand interconnects). The simulations were carried out with
periodic boundary conditions, PME for long-range electrostatics, an update
frequency of 25 steps, an image update frequency of 50 steps, and the BYCB
listbuilder. The “COLFFT” columns give the results
with the recently introduced COLFFT code for faster PME calculations on
large numbers of CPUs. On the larger systems and for smaller numbers of CPUs
(1–4), the default code gives 2–10% faster absolute times (not shown).
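Percent parallel efficiency is conventionally defined as the measured speedup over N processors divided by the ideal (linear) speedup N, expressed as a percentage. A minimal sketch of that calculation, with hypothetical timings (the function name and the example numbers are illustrative, not from the benchmarks above):

```python
def parallel_efficiency(t_serial, t_parallel, n_procs):
    """Percent parallel efficiency: observed speedup relative to
    ideal linear scaling on n_procs processors."""
    speedup = t_serial / t_parallel
    return 100.0 * speedup / n_procs

# Hypothetical example: a run taking 1000 s on 1 CPU and 80 s on 16 CPUs
# achieves a speedup of 12.5x, i.e. 78.125% parallel efficiency.
print(parallel_efficiency(1000.0, 80.0, 16))  # 78.125
```

An efficiency near 100% indicates the code scales almost linearly; the ranges reported in the table reflect how this figure degrades as communication costs (e.g. the PME FFTs) grow with processor count.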