Author manuscript; available in PMC: 2010 Jul 30.
Published in final edited form as: J Comput Chem. 2009 Jul 30;30(10):1545–1614. doi: 10.1002/jcc.21287

Table 2.

Approximate scaling behavior of the CHARMM atom decomposition (AD) model. The table lists the percent parallel efficiency ranges of the AD model for various numbers of processors carrying out MD simulations of proteins in an explicit water environment (50,000–400,000 atoms total) on a) a shared-memory supercomputer (Cray XT4, 2.6 GHz dual-core AMD Opteron nodes) and b) a distributed-memory cluster (dual-core 2.8 GHz AMD Opteron nodes, with 8 Gb/s InfiniBand interconnects). The simulations were carried out with periodic boundary conditions, PME for long-range electrostatics, an update frequency of 25 steps, an image update frequency of 50 steps, and the BYCB listbuilder. The “COLFFT” columns give the results with the recently introduced COLFFT code for faster PME calculations on large numbers of CPUs. On the larger systems and for smaller numbers of CPUs (1–4), the default code has faster (2–10%) absolute times (not shown).

a) Shared-memory supercomputer (Cray XT4)

CPUs   COLFFT   DEFAULT
1      100      100
2      91–95    90–95
4      87–91    78–90
8      82–95    78–83
16     71–79    66–74
32     56–63    50–60
64     39–45    28–38
128    20–28    12–21
b) Distributed-memory cluster (dual-core Opteron, InfiniBand)

CPUs   COLFFT   DEFAULT
1      100      100
2      94–99    93–97
4      91–96    88–94
8      86–89    82–86
16     73–80    69–75
32     61–68    56–65
64     17–53    24–47
128    27–40    22–25