BMC Bioinformatics. 2022 Apr 19;22(Suppl 15):625. doi: 10.1186/s12859-022-04668-0

Table 3.

Results on NCBI catalogs

Pipeline  minlin  maxlin  MALVIRUS           BCFtools           LoFreq
                          Precision  Recall  Precision  Recall  Precision  Recall
A         20      50      .951       .919    .909       .932    .778       .856
B         20      50      .972       .955    .921       .909    .788       .823
A         50      100     .951       .924    .907       .931    .757       .856
B         50      100     .967       .959    .916       .909    .763       .821
A         100     100     .952       .924    .908       .932    .738       .857
B         100     100     .968       .962    .916       .909    .744       .822

For each catalog, we report the precision and recall achieved by MALVIRUS, BCFtools, and LoFreq in calling the variations available in the catalog. Results are averaged over all 41 considered samples. The best results are highlighted in bold. We considered six different catalogs built using pipeline A or B on the set of assemblies retrieved from NCBI, prefiltered using τN = 1% and then subsampled using different combinations of the parameters minlin and maxlin.
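For reference, the averaged precision and recall reported in the table can be computed as sketched below. This is a minimal illustration, not the authors' evaluation code: it assumes each sample's variant calls and the catalog's ground-truth variants are available as sets of (position, ref, alt) tuples, and the function and variable names are hypothetical.

from statistics import mean

def precision_recall(called, truth):
    """Precision and recall for one sample.

    `called` and `truth` are sets of variant keys, e.g. (position, ref, alt).
    """
    tp = len(called & truth)   # calls that match a catalog variant
    fp = len(called - truth)   # calls with no match in the catalog
    fn = len(truth - called)   # catalog variants that were missed
    precision = tp / (tp + fp) if called else 0.0
    recall = tp / (tp + fn) if truth else 0.0
    return precision, recall

def averaged_metrics(samples):
    """Average precision and recall over samples (41 in Table 3).

    `samples` is a list of (called_set, truth_set) pairs, one per sample.
    """
    pairs = [precision_recall(called, truth) for called, truth in samples]
    return mean(p for p, _ in pairs), mean(r for _, r in pairs)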