Genetics. 2017 May 16;206(3):1285–1295. doi: 10.1534/genetics.116.197491

Table 5. Comparison of the proposed approach implemented in the MultiPoint-UDM (MUDM) software with MST on a simulated doubled haploid population (size 200, with 2000 markers per chromosome).

                   ms = 0%                    ms = 10%
          pe:   0.001   0.005   0.01      0.001   0.005   0.01
MST   LcM         180     294    750        186     408    691
      Bins        146     291    708        177     513    862
MUDM  LcM         134     134    137        131     135    132
      Nsk          81      75     58         75      86     64
      nr            1       3      1          6      20     14
      lf (%)      4.8    14.3   32.1       17.9    21.4   40.5

pe, level of genotyping errors per marker; ms, simulated rate of missing data per marker; LcM, map length (in centimorgans) of a chromosome or LG; lf (%), loss factor, the percentage of lost (noncharacterized) unique map positions in the constructed skeletal map compared to the simulated map, calculated as lf = 100 [Nsk_ef − (Nsk − nr)]/Nsk_ef = 100 (Nsk_ef − Nsk + nr)/Nsk_ef, where Nsk_ef is the number of intervals in the map built for the simulated error-free data, while Nsk and (Nsk − nr) are the numbers of markers in the skeletal map built for the noisy data, noncorrected and corrected for the number of repeats, respectively; nr, the number of “repeats” resulting from fission of the initial groups of cosegregating markers into subgroups due to genotyping errors and missing marker scores; such repeats appear in the constructed map at separate (usually, but not necessarily, adjacent) positions.
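
To make the loss-factor definition concrete, the minimal Python sketch below recomputes lf for the six MUDM columns of the table. The value Nsk_ef is not reported in this table; the sketch assumes Nsk_ef = 84, which is inferred here only because it reproduces the tabulated lf percentages, and should not be read as a value stated by the authors.

    # Illustrative check of the loss-factor formula lf = 100*(Nsk_ef - Nsk + nr)/Nsk_ef.
    # NSK_EF = 84 is an assumed (inferred) number of intervals in the error-free
    # skeletal map; it is not given explicitly in the table.

    def loss_factor(nsk_ef: int, nsk: int, nr: int) -> float:
        """Percentage of unique skeletal-map positions lost relative to the error-free map."""
        return 100.0 * (nsk_ef - nsk + nr) / nsk_ef

    NSK_EF = 84  # assumption, see note above

    # (pe, ms%, Nsk, nr, tabulated lf%) for the MUDM rows of Table 5
    mudm_columns = [
        (0.001,  0, 81,  1,  4.8),
        (0.005,  0, 75,  3, 14.3),
        (0.010,  0, 58,  1, 32.1),
        (0.001, 10, 75,  6, 17.9),
        (0.005, 10, 86, 20, 21.4),
        (0.010, 10, 64, 14, 40.5),
    ]

    for pe, ms, nsk, nr, lf_table in mudm_columns:
        lf = loss_factor(NSK_EF, nsk, nr)
        print(f"pe={pe:<5} ms={ms:>2}%  lf={lf:5.1f}%  (table: {lf_table}%)")

Note that repeats (nr) are added back because they occupy map positions that duplicate, rather than replace, positions already counted in Nsk.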