Abstract
Background
We study a preprocessing routine relevant in pan-genomic analyses: consider a set of aligned haplotype sequences of complete human chromosomes. Due to the enormous size of such data, one would like to represent this input set with a few founder sequences that retain as well as possible the contiguities of the original sequences. Such a smaller set gives a scalable way to exploit pan-genomic information in further analyses (e.g. read alignment and variant calling). Optimizing the founder set is an NP-hard problem, but there is a segmentation formulation that can be solved in polynomial time, defined as follows. Given a threshold L and a set R = {R_1, …, R_m} of m strings (haplotype sequences), each having length n, the minimum segmentation problem for founder reconstruction is to partition [1, n] into a set P of disjoint segments such that each segment [a, b] ∈ P has length at least L and the number of distinct substrings at segment [a, b], namely |{R_i[a, b] : 1 ≤ i ≤ m}|, is minimized over [a, b] ∈ P. The distinct substrings in the segments represent founder blocks that can be concatenated to form founder sequences representing the original R such that crossovers happen only at segment boundaries.
Results
We give an O(mn) time (i.e. linear time in the input size) algorithm to solve the minimum segmentation problem for founder reconstruction, improving over an earlier O(mn²) solution.
Conclusions
Our improvement enables applying the formulation to inputs of thousands of complete human chromosomes. We implemented the new algorithm and give experimental evidence on its practicality. The implementation is available at https://github.com/tsnorri/founder-sequences.
Keywords: Pan-genome indexing, Founder reconstruction, Dynamic programming, Positional Burrows–Wheeler transform, Range minimum query
Background
A key problem in pan-genomics is to develop a sufficiently small, efficiently queryable, but still descriptive representation of the variation common to the subject under study [1]. For example, when studying the human population, one would like to take all publicly available variation datasets (e.g. [2–4]) into account. Many approaches encode the variation as a graph [5–10], and then one can encode the different haplotypes as paths in this graph [11]. An alternative has been proposed [12], based on a compressed indexing scheme for a multiple alignment of all the haplotypes [13–17]. In either approach, scalability is hampered by the encoding of all the haplotypes.
We suggest looking for a smaller set of representative haplotype sequences to make the above pan-genomic representations scalable.
Finding such a set of representative haplotype sequences that retain the original contiguities as well as possible is known as the founder sequence reconstruction problem [18]. In this problem, one seeks a set of d founders such that the original m haplotypes can be mapped with a minimum number of crossovers to the founders. Here a crossover means a position where one needs to jump from one founder to another to continue matching the content of the haplotype in question. Unfortunately, this problem is NP-hard even to approximate within a constant factor [19].
For founder reconstruction to be scalable to the pan-genomic setting, one would need an algorithm that is nearly linear in the input size. With this in mind, we study a relaxation of founder reconstruction that is known to be polynomial time solvable: namely, when limiting all the crossovers to happen at the same locations, one obtains a minimum segmentation problem specific to founder reconstruction [18]. A dynamic programming algorithm solves this problem in O(mn²) time [18], where m is the number of haplotypes and n is the length of each of them.
In this paper, we improve the running time of solving the minimum segmentation problem of founder reconstruction to O(mn) (linear in the input size).
We also implement the new algorithm, as well as a further heuristic that aims to minimize crossovers over the segment boundaries (yielded by the optimal solution to the minimum segmentation problem). In our experiments, we show that the approach is practical in a human genome scale setting. Namely, we apply the implementation on a multiple alignment representing 5009 haplotypes of human chromosome 6, and the result is 130 founder sequences with the average distance between two crossovers being 9624 bases. Preserving such long contiguities in just 2.5% of the original input space is promising for the accuracy and scalability of the short read alignment and variant calling motivating our study.
The main technique behind the improvement is the use of the positional Burrows–Wheeler transform (pBWT) [20], and more specifically its extension to larger alphabets [21]. While the original dynamic programming solution uses O(nm) time to look for the best preceding segment boundary for each column of the input, we observe that at most m values in the pBWT determine segment boundaries where the number of distinct founder substrings changes. Minima over the already computed dynamic programming values between each pair of such interesting consecutive segment boundaries give the requested result. However, it turns out that we can maintain the minima directly in the pBWT internal structures (with some modifications) and have to store only the last L computed dynamic programming values, thus spending only O(m + L) additional space, where L is the input threshold on the length of each segment. The segmentation is then reconstructed by a standard backtracking approach in O(n) time using an array of length n.
A preliminary version of this work appeared in WABI 2018 [22].
Methods
Notation and problem statement
For a string s = s_1 s_2 ⋯ s_n, denote by |s| its length n. We write s[i] for the letter s_i of s and s[i, j] for the substring s_i s_{i+1} ⋯ s_j. An analogous notation is used for arrays. For any numbers i and j, the set of integers {i, i + 1, …, j} (possibly empty) is denoted by [i, j].
The input for our problem is the set R = {R_1, …, R_m} of strings of length n, called recombinants. A set F = {F_1, …, F_d} of strings of length n is called a founder set of R if for each string R_i ∈ R, there exists a partition P_i of the segment [1, n] into disjoint subsegments such that, for each [a, b] ∈ P_i, the string R_i[a, b] is equal to F_j[a, b] for some j ∈ [1, d]. The partition P_i together with the mapping of the segments [a, b] to substrings F_j[a, b] is called a parse of R_i in terms of F, and a set of parses for all R_i ∈ R is called a parse of R in terms of F. The integers a and b + 1, for [a, b] ∈ P_i, are called crossover points; thus, in particular, 1 and n + 1 are always crossover points.
It follows from the definition that, in practice, it makes sense to consider founder sets only for pre-aligned recombinants. Throughout the paper we implicitly assume that this is the case; although all our algorithms clearly work in the unaligned setting too, the produced results may hardly make any sense.
We consider the problem of finding a “good” founder set F and a “good” corresponding parse of R according to a reasonable measure of goodness. Ukkonen [18] pointed out that such measures may contradict each other: for instance, a minimum founder set obviously has size max{|{R_1[j], …, R_m[j]}| : j ∈ [1, n]}, but parses corresponding to such a set may have unnaturally many crossover points; conversely, R is a founder set of itself and the only crossover points of its trivial parse are 1 and n + 1, but the size m of this founder set is in most cases unacceptably large. Following Ukkonen’s approach, we consider compromise parameterized solutions. The minimum founder set problem is, given a bound L and a set of recombinants R, to find a smallest founder set F of R such that there exists a parse of R in terms of F in which the distance between any two crossover points is at least L (the crossover points may belong to parses of different recombinants, i.e., for crossover points a of P_i and b of P_j, where P_i and P_j are parses of R_i and R_j, we have either |a − b| ≥ L or a = b).
It is convenient to reformulate the problem in terms of segmentations of R. A segment of R is a set R[j, k] = {R_i[j, k] : R_i ∈ R}. A segmentation of R is a collection S of disjoint segments that covers the whole R, i.e., for any distinct R[j, k] and R[j′, k′] from S, [j, k] and [j′, k′] do not intersect and, for each x ∈ [1, n], there is R[j, k] from S such that x ∈ [j, k]. The minimum segmentation problem [18] is, given a bound L and a set of recombinants R, to find a segmentation S of R such that max{|R[j, k]| : R[j, k] ∈ S} is minimized and the length of each segment from S is at least L; in other words, the problem is to compute
min_{S ∈ S_L} max{ |R[j, k]| : R[j, k] ∈ S },    (1)
where S_L is the set of all segmentations of R in which all segments have length at least L.
The minimum founder set problem and the minimum segmentation problem are connected: any segmentation S with segments of length at least L induces in an obvious way a founder set of size max{|R[j, k]| : R[j, k] ∈ S} and a parse in which all crossover points are located at segment boundaries (and, hence, at distance at least L from each other); conversely, if F is a founder set of R and {j_1, …, j_p} is the sorted set of all crossover points in a parse of R such that j_q − j_{q−1} ≥ L for q ∈ [2, p], then S = {R[j_q, j_{q+1} − 1] : q ∈ [1, p − 1]} is a segmentation of R with segments of length at least L and max{|R[j, k]| : R[j, k] ∈ S} ≤ |F|.
Our main result is an algorithm that solves the minimum segmentation problem in O(mn) time (linear in the input size). The solution normally does not uniquely define a founder set of R: for instance, if the built segmentation of R = {aac, aat, gat} is S = {R[1, 2], R[3, 3]}, then the possible founder sets induced by S are {aac, gat} and {aat, gac}. In other words, to construct a founder set, one concatenates fragments of recombinants corresponding to the found segments in a certain order. We return to this ordering problem in the section describing experiments and now focus on the details of the segmentation problem.
Hereafter, we assume that the input alphabet Σ is the set [0, |Σ| − 1] of size O(m), which is a natural assumption considering that the typical alphabet size is 4 in our problem. It is sometimes convenient to view the set R as a matrix with m rows and n columns. We say that an algorithm processing the recombinants R is streaming if it reads the input from left to right “columnwise”, for each k from 1 to n, and outputs an answer for each set of recombinants {R_1[1, k], …, R_m[1, k]} immediately after reading the “column” R_1[k], …, R_m[k]. The main result of the paper is the following theorem.
Theorem 1
Given a bound L and recombinants R = {R_1, …, R_m}, each having length n, there is an algorithm that computes (1) in a streaming fashion in O(mn) time and O(m + L) space. Using an additional array of length n, one can also find in O(n) time a segmentation on which (1) is attained, thus solving the minimum segmentation problem.
Minimum segmentation problem
Given a bound L and a set of recombinants R each of which has length n, Ukkonen [18] proposed a dynamic programming algorithm that solves the minimum segmentation problem in O(mn²) time based on the following recurrence relation:
M(k) = min_{0 ≤ j ≤ k−L} max{ M(j), |R[j + 1, k]| },    (2)

where M(0) = 0 and, by convention, M(k) = +∞ for 0 < k < L.
It is obvious that M(n) is equal to the solution (1); the segmentation itself can be reconstructed by “backtracking” in a standard way [18]. We build on the same approach.
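Before optimizing the recurrence, the following sketch evaluates (2) directly with Python sets; the function name and the 0-indexed slicing convention (r[j:k] is the 1-indexed segment [j + 1, k]) are ours, and the sketch deliberately mirrors the quadratic dynamic programming rather than the linear-time algorithm developed below.

```python
import math

def minimum_segmentation_dp(recombinants, L):
    """Direct evaluation of recurrence (2): M[k] is the optimal value for the
    prefix [1, k]; M[k] == inf encodes 'no valid segmentation of this prefix'.
    Illustrative baseline only; far slower than the streaming algorithm."""
    n = len(recombinants[0])
    M = [math.inf] * (n + 1)
    M[0] = 0
    for k in range(L, n + 1):
        for j in range(0, k - L + 1):
            # |R[j+1, k]|: number of distinct substrings of the candidate segment
            distinct = len({r[j:k] for r in recombinants})
            M[k] = min(M[k], max(M[j], distinct))
    return M  # M[n] equals the optimum (1)
```

Note that the inner set construction alone costs O(m(k − j)) time; this repeated work is exactly what the pBWT machinery below eliminates.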
For a given k ∈ [1, n], denote by j_{k,1}, …, j_{k,r_k} the sequence of all positions j ∈ [1, k − 1] in which the value of |R[j, k]| changes, i.e., j_{k,1} < ⋯ < j_{k,r_k} and |R[j_{k,h}, k]| ≠ |R[j_{k,h} + 1, k]| for h ∈ [1, r_k]. We complement this sequence with j_{k,0} = 0 and j_{k,r_k+1} = k, so that j_{k,0}, …, j_{k,r_k+1} can be interpreted as a splitting of the range [0, k − 1] into segments in which the value |R[j + 1, k]| stays the same: namely, for h ∈ [0, r_k], one has |R[j + 1, k]| = |R[j_{k,h+1}, k]| provided j_{k,h} ≤ j < j_{k,h+1}. Hence, min{max{M(j), |R[j + 1, k]|} : j_{k,h} ≤ j < j_{k,h+1}} = max{|R[j_{k,h+1}, k]|, min{M(j) : j_{k,h} ≤ j < j_{k,h+1}}} and, therefore, (2) can be rewritten as follows:
M(k) = min{ max{ |R[j_{k,h+1}, k]|, min{M(j) : j_{k,h} ≤ j ≤ min{k − L, j_{k,h+1} − 1}} } : 0 ≤ h ≤ r_k and j_{k,h} ≤ k − L }.    (3)
Our crucial observation is that, for j ∈ [1, k − 1], one has |R[j + 1, k]| ≤ |R[j, k]| ≤ m. Therefore, the value |R[j, k]| strictly decreases at each of the positions j_{k,1}, …, j_{k,r_k} while staying within [1, m], so that r_k < m. Hence, M(k) can be computed in O(m) time using (3), provided one has the following components:
(i) the sorted sequence j_{k,1}, …, j_{k,r_k};

(ii) the numbers |R[j_{k,h+1}, k]|, for h ∈ [0, r_k];

(iii) the values min{M(j) : j_{k,h} ≤ j ≤ min{k − L, j_{k,h+1} − 1}}, for h ∈ [0, r_k].
In the remaining part of the section, we describe a streaming algorithm that reads the strings R_1, …, R_m “columnwise” from left to right and computes the components (i), (ii), and (iii) immediately after reading each “column” R_1[k], …, R_m[k], for k from 1 to n, all in O(mn) total time and O(m + L) space.
To reconstruct a segmentation corresponding to the found solution M(n), we build along with the values M(k) an array of size n whose kth element, for each k ∈ [1, n], stores 0 if M(k) = |R[1, k]|, and stores a number j ∈ [1, k − L] such that M(k) = max{M(j), |R[j + 1, k]|} otherwise; then, the segmentation can be reconstructed from the array in an obvious way in O(n) time. In order to maintain the array, our algorithm computes, for each k ∈ [1, n], along with the values min{M(j) : j_{k,h} ≤ j ≤ min{k − L, j_{k,h+1} − 1}}, for h ∈ [0, r_k], positions j on which these minima are attained (see below). Further details are straightforward and, hence, omitted.
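Assuming such a predecessor array (called back below; the name is ours) has been filled during the forward pass, the backtracking step is as simple as:

```python
def backtrack_segmentation(back, n):
    """Recover an optimal segmentation from the predecessor array:
    back[k] is a position j on which the minimum in (2) is attained for M(k),
    with back[k] == 0 when the single segment [1, k] is best."""
    segments = []
    k = n
    while k > 0:
        j = back[k]
        segments.append((j + 1, k))  # 1-indexed segment [j+1, k]
        k = j
    segments.reverse()
    return segments
```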
Positional Burrows–Wheeler transform
Let us fix k ∈ [1, n]. Throughout this subsection, the string R_i[k] R_i[k − 1] ⋯ R_i[1], which is the reversal of R_i[1, k], is denoted by R′_{i,k}, for i ∈ [1, m]. Given a set of recombinants R = {R_1, …, R_m}, each of which has length n, a positional Burrows–Wheeler transform (pBWT), as defined by Durbin [20], is a pair of integer arrays a_k[1, m] and d_k[1, m] such that:
a_k is a permutation of [1, m] such that R′_{a_k[1],k} ≤ ⋯ ≤ R′_{a_k[m],k} lexicographically;
d_k[i], for i ∈ [1, m], is an integer such that R_{a_k[i]}[d_k[i], k] is the longest common suffix of R_{a_k[i]}[1, k] and R_{a_k[i−1]}[1, k], and d_k[i] = k + 1 if either this suffix is empty or i = 1.
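The following sketch computes a_k and d_k directly from these definitions, in 0-indexed form (so the “empty suffix” value is k instead of k + 1); it is meant as a definitional reference for testing, not as the efficient construction discussed next.

```python
def pbwt_arrays(recombinants, k):
    """a: row indices sorted by the reversed length-k prefixes;
    d[i]: start position of the longest common suffix of the length-k
    prefixes of rows a[i-1] and a[i] (d[i] == k means an empty suffix)."""
    m = len(recombinants)
    a = sorted(range(m), key=lambda i: recombinants[i][:k][::-1])
    d = [k] * m  # d[0] is k by convention
    for i in range(1, m):
        x, y = recombinants[a[i - 1]][:k], recombinants[a[i]][:k]
        j = k
        while j > 0 and x[j - 1] == y[j - 1]:
            j -= 1
        d[i] = j  # the common suffix is x[j:k]
    return a, d
```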
Example 1
Consider the example depicted in Fig. 1. It is easy to see that the pBWT implicitly encodes the trie depicted in the right part of Fig. 1, and such an interpretation drives the intuition behind this structure: the trie represents the reversed sequences R_1[1, k], …, R_m[1, k] (i.e., read from right to left) in lexicographic order. Leaves (values a_k[i]) store the corresponding input indices. The branches correspond to values d_k[i] (the distance from the root subtracted from k + 1). Our main algorithm in this paper implicitly performs a sweep-line over the trie, stopping at the branching positions.
Durbin [20] showed that a_{k+1} and d_{k+1} can be computed from a_k and d_k in O(m) time on the binary alphabet. Mäkinen and Norri [21] further generalized the construction for integer alphabets of size O(m), as in our case. For the sake of completeness, we describe in this subsection the generalized solution [21] (see Algorithm 1), which serves then as a basis for our main algorithm. We also present a modification of this solution (see Algorithm 2), which, albeit seemingly slightly inferior in theory (we could prove only an O(m log |Σ|) time upper bound for it), showed better performance in practice and thus, as we believe, is interesting in itself.
Lemma 2
The arrays a_{k+1} and d_{k+1} can be computed from a_k and d_k in O(m) time, assuming the input alphabet is [0, |Σ| − 1] with |Σ| = O(m).
Proof
Given a_k and d_k, we are to show that Algorithm 1 correctly computes a_{k+1} and d_{k+1}. Since, for any i, j ∈ [1, m], we have R′_{i,k+1} ≤ R′_{j,k+1} iff either R_i[k + 1] < R_j[k + 1], or R_i[k + 1] = R_j[k + 1] and R′_{i,k} ≤ R′_{j,k} lexicographically, it is easy to see that the array a_{k+1} can be deduced from a_k by radix sorting the sequence of pairs (R_{a_k[i]}[k + 1], R′_{a_k[i],k}), for i ∈ [1, m]. Further, since, by definition of a_k, the second components of the pairs are already in a sorted order, it remains to sort the first components by the counting sort. Accordingly, in Algorithm 1, the first loop counts occurrences of letters in the sequence R_1[k + 1], …, R_m[k + 1] using an auxiliary array C; as is standard in the counting sort, the second loop modifies the array C so that, for each letter b ∈ [0, |Σ| − 1], C[b] points to the beginning of the “bucket” that will contain all a_k[i] such that R_{a_k[i]}[k + 1] = b; finally, the third loop fills the buckets incrementing the indices C[b], for b = R_{a_k[i]}[k + 1], and performing the assignments a_{k+1}[C[b]] ← a_k[i], for i = 1, …, m. Thus, the array a_{k+1} is computed correctly. All is done in O(m + |Σ|) time, which is O(m) since the input alphabet is [0, |Σ| − 1] and |Σ| = O(m).
The last three lines of the algorithm are responsible for computing d_{k+1}. Denote the length of the longest common prefix of any strings s_1 and s_2 by LCP(s_1, s_2). The computation of d_{k+1} relies on the following well-known fact: given a sequence of strings s_1 ≤ ⋯ ≤ s_r sorted lexicographically, one has LCP(s_1, s_r) = min{LCP(s_{i−1}, s_i) : 1 < i ≤ r}. Suppose that the last loop of the algorithm, which iterates through all i from 1 to m, assigns a_{k+1}[i′] ← a_k[i], for a given i ∈ [1, m] and some i′. Let j be the maximum integer such that j < i and R_{a_k[j]}[k + 1] = R_{a_k[i]}[k + 1] (if any). The definition of a_{k+1} implies that a_{k+1}[i′ − 1] = a_k[j] if such j exists. Hence, LCP(R′_{a_{k+1}[i′−1],k+1}, R′_{a_{k+1}[i′],k+1}) = 1 + min{k + 1 − d_k[t] : j < t ≤ i} if such a number j does exist, and this LCP is 0 otherwise. Therefore, since d_{k+1}[i′] equals k + 2 − LCP(R′_{a_{k+1}[i′−1],k+1}, R′_{a_{k+1}[i′],k+1}), we have either d_{k+1}[i′] = max{d_k[t] : j < t ≤ i} or d_{k+1}[i′] = k + 2 according to whether the required j exists. To find j, we simply maintain an auxiliary array P[0, |Σ| − 1] such that, on the ith loop iteration, for any letter b ∈ [0, |Σ| − 1], P[b] stores the position of the last seen b in the sequence R_{a_k[1]}[k + 1], …, R_{a_k[i−1]}[k + 1], or P[b] = 0 if b occurs for the first time. Thus, d_{k+1} is computed correctly.
In order to calculate the maximums max{d_k[t] : j < t ≤ i} in O(1) time, we build a range maximum query (RMQ) data structure on the array d_k in O(m) time [23]. Therefore, the running time of Algorithm 1 is O(m).
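In 0-indexed form, one streaming step of this construction can be sketched as follows; the naive max over the gap back to the previous occurrence of the same symbol stands in for the RMQ of Algorithm 1 (and for the jump tables of Algorithm 2 below), so only the asymptotics differ, not the result. The function name and the representation of recombinants as lists of small integers are our assumptions.

```python
def pbwt_step(recombinants, a, d, k, sigma):
    """(a_k, d_k) -> (a_{k+1}, d_{k+1}) for an integer alphabet [0, sigma).
    Stable bucketing by column k realizes the radix-sort step; the new
    d-value is the maximum of d over the rows since the previous row with
    the same symbol, or k + 1 (empty suffix) if there is no such row."""
    m = len(a)
    buckets = [[] for _ in range(sigma)]
    last = [None] * sigma                  # last position in a_k with symbol b
    for i in range(m):
        b = recombinants[a[i]][k]          # symbol entering at column k (0-indexed)
        if last[b] is None:
            new_d = k + 1                  # empty common suffix
        else:
            new_d = max(d[last[b] + 1 : i + 1])  # plain scan; RMQ in the paper
        buckets[b].append((a[i], new_d))
        last[b] = i
    a_next, d_next = [], []
    for bucket in buckets:                 # concatenating buckets = counting sort
        for row, nd in bucket:
            a_next.append(row)
            d_next.append(nd)
    return a_next, d_next
```

Starting from a = list(range(m)) and d = [0] * m and iterating k = 0, 1, …, n − 1 yields the same arrays as the definitional pbwt_arrays above.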
In practice the bottleneck of the algorithm is the RMQ data structure, which, although it answers queries in O(1) time, has a sensible constant factor under the big-O in the construction time. We could naively compute the maximums by scanning the ranges of d_k from left to right, but such an algorithm works in quadratic time since the same ranges of d_k might be processed many times in the worst case. Our key idea is to store the work done by a simple scanning algorithm so as to reuse it in future queries. We store this information right in the arrays a_k and d_k, rewriting them; in particular, since a_k is accessed sequentially from left to right in the last loop, the range a_k[1, i] is free to use after the ith iteration.
More precisely, after the ith iteration of the last loop, the subarrays a_k[1, i] and d_k[1, i] are modified so that the following invariant holds: for any j ∈ [1, i], j < a_k[j] ≤ i + 1 and d_k[j] = max{d′_k[t] : j ≤ t < a_k[j]}, where d′_k denotes the original array before modifications; note that the invariant holds if one simply puts a_k[j] = j + 1 without altering d_k[j]. Then, to compute max{d′_k[t] : j ≤ t ≤ i}, we do not have to scan all elements but can “jump” through the chain j, a_k[j], a_k[a_k[j]], … and use the maximums precomputed in d_k[j], d_k[a_k[j]], d_k[a_k[a_k[j]]], …; after this, we redirect the “jump pointers” in a_k to i + 1 and update the maximums in d_k accordingly. This idea is implemented in Algorithm 2. Notice the new line a_k[i] ← i + 1 in the main loop (it is commented), which erases a_k[i] and makes it a part of the “jump table”. The correctness of the algorithm is clear. But it is not immediate even that the algorithm works in O(m log m) time. The next lemma states that the bound is actually even better, O(m log |Σ|).
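A minimal sketch of this memoization idea, kept separate from a_k and d_k for clarity (the paper’s Algorithm 2 stores the jump pointers and maxima in place of the consumed prefix of those arrays):

```python
def make_range_max():
    """Left-to-right range maxima with path compression: after answering
    max(d[j..i]), position j remembers a pointer to i + 1 together with
    that maximum, so any later query ending at i' >= i can skip the block."""
    jump, best = {}, {}
    def range_max(d, j, i):
        res = float("-inf")
        start = j
        while j <= i:
            if j in jump and jump[j] <= i + 1:
                res = max(res, best[j])   # reuse a precomputed block maximum
                j = jump[j]
            else:
                res = max(res, d[j])      # plain scan step
                j += 1
        jump[start], best[start] = i + 1, res  # redirect the pointer
        return res
    return range_max
```

In the pBWT construction the query right ends i are non-decreasing, which is what makes the stored blocks reusable; the amortized analysis of this behaviour is the subject of Lemma 3.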
Lemma 3
Algorithm 2 computes the arrays a_{k+1} and d_{k+1} from a_k and d_k in O(m log |Σ|) time, assuming the input alphabet is [0, |Σ| − 1] with |Σ| = O(m).
Proof
Fix i ∈ [1, m]. The ith iteration of the last loop in the algorithm computes the maximum in a range d′_k[j, i], where d′_k is the original array before modifications and j = P[b] + 1 for some b and P. Let ℓ_i = i − j + 1. Denote ℓ̄ = (1/m) Σ_{i=1}^{m} ℓ_i, the “average query length”. We are to prove that the running time of the algorithm is O(m log ℓ̄ + m), which implies the result since m ℓ̄ = Σ_{i=1}^{m} ℓ_i and Σ_{i=1}^{m} ℓ_i ≤ |Σ| m, so that ℓ̄ ≤ |Σ|. The latter inequality follows from the fact that the query ranges corresponding to the same symbol are non-overlapping.
We say that a position j is touched if the function is called with its first argument equal to j. Since for each i the first call to the function is made with a different j, it suffices to prove that the total number of touches is O(m log ℓ̄ + m). While processing the query max{d′_k[t] : P[b] < t ≤ i}, we may have touched many positions. Denote the sequence of all such positions, for the given i, by j_1 < j_2 < ⋯ < j_s; in other words, at the time of the query, we have j_1 = P[b] + 1 and j_{t+1} = a_k[j_t] for t ∈ [1, s − 1]. We say that, for t ∈ [1, s − 1], the touch of j_t in the query is scaling if there exists an integer q such that i − j_t > 2^q ≥ i − j_{t+1} (see Fig. 2); the last touch of each query we account separately (there are at most m of them in total). We count separately the total numbers of scaling and non-scaling touches over all i.
For a position j, denote by p(j) the number of non-scaling touches of j. We are to prove that P = Σ_j p(j) = O(m log ℓ̄ + m). Let ℓ_h(j) denote the value of i − j in the hth non-scaling touch of j, for h ∈ [1, p(j)]. Suppose that this hth touch happens during the processing of a query with right end i, and let j′ = a_k[j] be the position following j in the sequence of touched positions. Since the touch is non-scaling, there is no power of two in the range [i − j′, i − j), which yields i − j′ > (i − j)/2 and, hence, j′ − j < ℓ_h(j)/2. Since the function then assigns a_k[j] ← i + 1, and jump pointers only grow, at the (h + 1)th non-scaling touch of j the following position j″ = a_k[j] satisfies j″ − j ≥ ℓ_h(j) + 1 and, by the above, ℓ_{h+1}(j) > 2(j″ − j) > 2 ℓ_h(j). This immediately implies that ℓ_h(j) ≥ 2^{h−1}, for h ∈ [1, p(j)], and, therefore, since ℓ_h(j) < m, every position can be touched in the non-scaling way at most O(log m) times. But we can deduce a stronger bound. In a query with right end i, the gaps between consecutive touched positions telescope to at most ℓ_i, so the total sum of all gaps over all queries is at most Σ_{i=1}^{m} ℓ_i = m ℓ̄; on the other hand, the gap to the following position at the (h + 1)th non-scaling touch of j is at least ℓ_h(j) + 1 > 2^{h−1}, so the gaps charged to j sum to more than 2^{p(j)−1} − 1. Hence, Σ_j 2^{p(j)} = O(m ℓ̄ + m). By the well-known convexity property of the exponent, for a fixed P, the sum Σ_j 2^{p(j)} is minimized whenever all p(j) are equal, so that m 2^{P/m} ≤ Σ_j 2^{p(j)} = O(m ℓ̄ + m), which gives P ≤ m log ℓ̄ + O(m). Thus, P = O(m log ℓ̄ + m).
It remains to consider scaling touches. The definition implies that each query performs at most log ℓ_i + O(1) scaling touches. Thus, it suffices to upper-bound Σ_{i=1}^{m} log ℓ_i. Since the function log is concave, this sum is maximized whenever all ℓ_i are equal, i.e., Σ_{i=1}^{m} log ℓ_i ≤ m log((1/m) Σ_{i=1}^{m} ℓ_i) = m log ℓ̄, and hence the result follows.
Modification of the pBWT
We are to modify the basic pBWT construction algorithm in order to compute, for each k ∈ [1, n], the sequence of all positions j_{k,1}, …, j_{k,r_k} in which |R[j, k]| ≠ |R[j + 1, k]|, and to calculate the numbers |R[j_{k,h+1}, k]| and min{M(j) : j_{k,h} ≤ j ≤ min{k − L, j_{k,h+1} − 1}}, for h ∈ [0, r_k] (assuming j_{k,0} = 0 and j_{k,r_k+1} = k); see the beginning of the section. As it follows from (3), these numbers are sufficient to calculate M(k), as defined in (2) and (3), in O(m) time. The following lemma reveals relations between the sequence j_{k,1}, …, j_{k,r_k} and the array d_k.
Lemma 4
Consider recombinants R = {R_1, …, R_m}, each having length n. For k ∈ [1, n] and j ∈ [1, k − 1], one has |R[j, k]| ≠ |R[j + 1, k]| iff d_k[i] = j + 1 for some i ∈ [1, m].
Proof
Suppose that |R[j, k]| ≠ |R[j + 1, k]|. It is easy to see that |R[j, k]| > |R[j + 1, k]|, which implies that there are two indices h and h′ such that R_h[j + 1, k] = R_{h′}[j + 1, k] and R_h[j] ≠ R_{h′}[j]. Denote by a_k^{−1}[h] the number x such that a_k[x] = h. Without loss of generality, assume that a_k^{−1}[h] < a_k^{−1}[h′]. Then, there exists i ∈ [a_k^{−1}[h] + 1, a_k^{−1}[h′]] such that R_{a_k[i−1]}[j + 1, k] = R_{a_k[i]}[j + 1, k] and R_{a_k[i−1]}[j] ≠ R_{a_k[i]}[j]. Hence, d_k[i] = j + 1.
Suppose now that d_k[i] = j + 1 for some i ∈ [1, m]. Since j + 1 ≤ k < k + 1 and d_k[1] = k + 1, we have i > 1. Then, by the definition of d_k, R_{a_k[i−1]}[j + 1, k] = R_{a_k[i]}[j + 1, k] and R_{a_k[i−1]}[j] ≠ R_{a_k[i]}[j], i.e., R_{a_k[i]}[j + 1, k] can be “extended” to the left in two different ways, thus producing two distinct strings in the set R[j, k]. Therefore, |R[j, k]| > |R[j + 1, k]|.
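In code, the reasoning behind Lemma 4 also says that segment contents can be counted by a single scan of d_k; with the 0-indexed arrays of pbwt_arrays above, the count of distinct substrings in the (1-indexed) segment [j + 1, k] is:

```python
def distinct_in_segment(d, j):
    """Number of distinct substrings R_i[j+1, k], given the 0-indexed d_k:
    each row whose common suffix with its predecessor does not reach back
    to position j starts a new distinct string."""
    return sum(1 for x in d if x > j)
```

With (a, d) = pbwt_arrays(recombinants, k), distinct_in_segment(d, j) always equals len({r[j:k] for r in recombinants}).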
Denote by r the number of distinct integers in the array d_k. Clearly, r may vary from 1 to m. For an integer ℓ, define M′(ℓ) = M(ℓ) if 0 ≤ ℓ ≤ k − L, and M′(ℓ) = +∞ otherwise (M′ is introduced for purely technical reasons). Our modified algorithm does not store d_k but stores the following four arrays (but we still often refer to d_k for the sake of analysis):
i_k[1, r] contains all distinct elements from d_k[1, m] in increasing sorted order;
u_k[1, m]: for j ∈ [1, m], u_k[j] is equal to the unique index h such that i_k[h] = d_k[j];
t_k[1, r]: for h ∈ [1, r], t_k[h] is equal to the number of times i_k[h] occurs in d_k[1, m];
s_k[1, r]: for h ∈ [1, r], s_k[h] = min{M′(j) : i_k[h − 1] − 1 ≤ j ≤ i_k[h] − 2}, assuming i_k[0] = 1 (so that the minimum over an empty range is +∞).
The arrays i_k and u_k together emulate d_k. The array s_k will be used to calculate some minima required to compute M(k).
Example 2
Continuing Example 1 (see Fig. 1), the array i_k lists, in increasing order, the distinct values of d_k, which mark the positions of the branching nodes of the trie (together with the sentinel value k + 1); u_k maps each element of d_k to its rank in i_k, and t_k records how many times each value of i_k occurs in d_k. The use of s_k is discussed in the sequel.
For convenience, let us recall Eq. (3) derived at the beginning of this section:
M(k) = min{ max{ |R[j_{k,h+1}, k]|, min{M(j) : j_{k,h} ≤ j ≤ min{k − L, j_{k,h+1} − 1}} } : 0 ≤ h ≤ r_k and j_{k,h} ≤ k − L },    (3 revisited)
where j_{k,0} = 0, j_{k,r_k+1} = k, and j_{k,1} < ⋯ < j_{k,r_k} is the increasing sequence of all positions j in which |R[j, k]| ≠ |R[j + 1, k]|. In order to compute M(k), one has to find the minima min{M(j) : j_{k,h} ≤ j ≤ min{k − L, j_{k,h+1} − 1}} and calculate the numbers |R[j_{k,h+1}, k]|. As it follows from Lemma 4 and the definition of i_k, all positions j in which |R[j, k]| ≠ |R[j + 1, k]| are represented by the numbers i_k[h] − 1 such that 1 < i_k[h] ≤ k (in increasing order); hence, the splitting j_{k,0}, …, j_{k,r_k+1} of the range [0, k − 1] corresponds to the system of ranges [i_k[h − 1] − 1, i_k[h] − 2], for h ∈ [1, r], assuming i_k[0] = 1. Then, the minima over these ranges are exactly the values stored in s_k: s_k[h] = min{M′(j) : i_k[h − 1] − 1 ≤ j ≤ i_k[h] − 2}. It is clear that s_k[h] ≠ +∞ only if the range [i_k[h − 1] − 1, i_k[h] − 2] intersects [0, k − L] and, thus, corresponds to a feasible choice of h in (3). Therefore, since M′(j) = +∞ for j > k − L and, thus, such values do not affect, in a sense, the minima stored in s_k, one can rewrite (3) as follows:
M(k) = min{ max{ |R[i_k[h] − 1, k]|, s_k[h] } : 1 ≤ h ≤ r }.    (4)
It remains to compute the numbers |R[i_k[h] − 1, k]|, for h ∈ [1, r].
Lemma 5
Consider a set of recombinants R = {R_1, …, R_m}, each of which has length n. For k ∈ [1, n] and h ∈ [1, r] such that i_k[h] ≥ 2, one has |R[i_k[h] − 1, k]| = t_k[h] + t_k[h + 1] + ⋯ + t_k[r].
Proof
Denote w = i_k[h] − 1, so that the segment under consideration is [w, k]. Suppose first that h = r; then i_k[r] = k + 1 (recall that d_k[1] = k + 1, so the value k + 1 always occurs in d_k) and w = k. Note that d_k[i] = k + 1 iff either i = 1 or R_{a_k[i−1]}[k] ≠ R_{a_k[i]}[k]. It is easy to see that |R[k, k]|, the number of distinct letters R_1[k], …, R_m[k], is equal to the number of times k + 1 occurs in d_k, i.e., t_k[r].
Suppose now that h < r. It suffices to show that |R[w, k]| equals the number of indices i ∈ [1, m] such that d_k[i] ≥ i_k[h], which is exactly t_k[h] + ⋯ + t_k[r] by the definitions of i_k and t_k. Since the strings R′_{a_k[1],k}, …, R′_{a_k[m],k} are sorted lexicographically, the indices of the strings sharing any fixed suffix R_{a_k[i]}[w, k] occupy a contiguous range of positions in a_k; by the definition of d_k, an index i is the smallest position of its range iff d_k[i] > w, i.e., iff d_k[i] ≥ w + 1 = i_k[h] (for i = 1 this holds since d_k[1] = k + 1). Therefore, each distinct string of R[w, k] contributes exactly one index i with d_k[i] ≥ i_k[h], which implies |R[w, k]| = t_k[h] + ⋯ + t_k[r].
By (4) and Lemma 5, one can calculate M(k) in O(m) time using the arrays i_k, t_k, and s_k (the suffix sums of t_k are easily accumulated in one pass).
It remains to describe how we maintain the arrays a_k, i_k, u_k, t_k, s_k.
Lemma 6
Algorithm 3 computes the arrays a_{k+1}, i_{k+1}, u_{k+1}, t_{k+1}, s_{k+1} from the numbers M(k − L) and M(k − L + 1), and from the arrays a_k, i_k, u_k, t_k, s_k, in O(m) time, assuming the input alphabet is [0, |Σ| − 1] with |Σ| = O(m).
Proof
Let us analyze Algorithm 3, which computes a_{k+1}, i_{k+1}, u_{k+1}, t_{k+1}, s_{k+1}. By definition, d_k[j] = i_k[u_k[j]] for j ∈ [1, m]. The first line of the algorithm initializes i_{k+1} so that i_{k+1}[h] = i_k[h], for h ∈ [1, r], and i_{k+1}[r + 1] = k + 2; the value k + 2 is the candidate for the new “empty suffix” values of d_{k+1}. Since after this initialization i_{k+1}, obviously, is in sorted order, one has, for j, j′ ∈ [1, m], u_k[j] ≤ u_k[j′] iff d_k[j] ≤ d_k[j′] and, therefore, for any range, one has max{d_k[ℓ], …, d_k[t]} = i_{k+1}[max{u_k[ℓ], …, u_k[t]}]. Based on this observation, we fill the array u_{k+1} in lines 3–12 so that i_{k+1}[u_{k+1}[j]] = d_{k+1}[j], for j ∈ [1, m], using exactly the same approach as in Algorithm 1, where d_{k+1} is computed, but instead of the assignment d_{k+1}[C[b]] ← max{d_k[ℓ] : P[b] < ℓ ≤ i}, we have u_{k+1}[C[b]] ← max{u_k[ℓ] : P[b] < ℓ ≤ i} (or u_{k+1}[C[b]] ← r + 1 when the letter b is seen for the first time). Here we also compute a_{k+1} in the same way as in Algorithm 1.
The loop in line 13 fills t_{k+1} so that, for h ∈ [1, r + 1], t_{k+1}[h] is the number of occurrences of the integer h in u_{k+1} (t_{k+1} was zero initialized in line 3). Since, for j ∈ [1, m], we have i_{k+1}[u_{k+1}[j]] = d_{k+1}[j] at this point, t_{k+1}[h] is also the number of occurrences of i_{k+1}[h] in d_{k+1}[1, m].
By definition, i_{k+1} must contain only elements from d_{k+1}, but this is not necessarily the case in line 14. In order to fix i_{k+1} and u_{k+1}, we simply have to remove all elements i_{k+1}[h] for which t_{k+1}[h] = 0, moving all remaining elements of i_{k+1} and the non-zero elements of t_{k+1} to the left accordingly. Suppose that, for some h and j, we have t_{k+1}[h] ≠ 0 and the number i_{k+1}[h] is moved to i_{k+1}[j], for some j < h, as we fix i_{k+1}. Then, all elements of u_{k+1} equal to h must become j. We utilize an additional temporary array tmp[1, r + 1] to fix u_{k+1}. The loop in lines 16–23 fixes i_{k+1} and t_{k+1} in an obvious way; once i_{k+1}[h] is moved to i_{k+1}[j] during this process, we assign tmp[h] = j. Then, i_{k+1}, t_{k+1}, s_{k+1} (s_{k+1} is discussed below) are resized in line 24, and the loop in line 25 fixes u_{k+1} using tmp.
Recall that the ranges [i_k[h − 1] − 1, i_k[h] − 2], for h ∈ [1, r], constitute a system of disjoint segments covering [0, k − 1] (assuming i_k[0] = 1). It is now easy to see that the corresponding system for k + 1 is obtained from this one by adding the new segment containing the position k and joining some segments together (namely, those whose delimiting values disappeared from d_{k+1}). The second line of the algorithm copies s_k into s_{k+1} and adds a new element to the end of s_{k+1}, so that, for h ∈ [1, r], s_{k+1}[h] is equal to the minimum of M′(j) for all j from the segment [i_k[h − 1] − 1, i_k[h] − 2], and s_{k+1}[r + 1] is the minimum in the new segment. (This is not completely correct since M′ has changed as k was increased; namely, M′(k − L + 1) was equal to +∞ but now is equal to M(k − L + 1).) As we join segments, removing some elements from i_{k+1} in the loop 16–23, the array s_{k+1} must be fixed accordingly: if a segment is obtained by joining several consecutive segments, its s-value is the minimum of their s-values. We perform such fixes in line 17, accumulating the latter minimum. We start accumulating a new minimum in line 20, reinitializing the accumulator. If at this point the ready minimum accumulated corresponds to a segment containing the position k − L + 1, we have to fix it taking into account the new value M(k − L + 1); we do this in line 21. To avoid accessing out of range elements in i_{k+1} and s_{k+1} in line 20, we add a “dummy” element in, respectively, i_{k+1} and s_{k+1} in line 15.
Besides all the arrays of length m, Algorithm 3 also requires access to M(k − L) and, possibly, to M(k − L + 1). During the computation of M(k) for k ∈ [1, n], we maintain the last L calculated numbers M(k − L), …, M(k − 1) in a circular array, so that the overall required space is O(m + L); when k is incremented, the array is modified in O(1) time in an obvious way. Thus, Lemma 6 implies Theorem 1.
If, as in our case, one does not need s_k, t_k, u_k for all k, the arrays s_k, t_k, u_k can be modified in-place, i.e., s_k, t_k, u_k can be considered as aliases for s_{k+1}, t_{k+1}, u_{k+1}, and yet the algorithm remains correct. Thus, we really need only 7 arrays in total: a_k, a_{k+1}, i_k, i_{k+1}, s, t, u, where s, t, u serve as s_k, t_k, u_k, and the array tmp can be organized in place of a_k or i_k. It is easy to maintain along with each value s_k[h] a corresponding position j on which this minimum is attained; these positions can then be used to restore the found segmentation of R using backtracking (see the beginning of the section). To compute the maximums max{u_k[ℓ] : P[b] < ℓ ≤ i}, instead of using an RMQ data structure, one can adapt Algorithm 2 in an obvious way, rewriting the arrays a_k and u_k during the computation, which is faster in practice but theoretically takes O(m log |Σ|) time by Lemma 3. We do not discuss further details as they are straightforward.
From segmentation to founder set
Now we are given a segmentation S of R and we wish to produce a founder set that obeys the segment boundaries. Recall that such a founder set corresponds to a parse of R with respect to the segmentation S. We conjecture that finding an optimal parse/founder set that minimizes the number of crossovers at segment boundaries is an NP-hard problem, but unfortunately we have not been able to prove this claim. Therefore, we continue by proposing three natural strategies, of which the two latter have interesting theoretical properties. The first of the strategies is a naive baseline, the second is a greedy strategy, and the third one is based on maximum weight perfect matching in a bipartite graph, analogous to the one used by Ukkonen [18]. This latter strategy provides an optimal solution for a special case, and the greedy strategy gives a 2-approximation for the same special case. We will present all three strategies first for the special case and then describe how to turn the general case into this special case (however, losing all optimality guarantees while doing so). We compare the naive baseline with the perfect matching in our experiments.
Assume (for our special case) that each segment in S induces exactly M(n) distinct substrings in R. Then the naive baseline strategy to produce a founder set is to concatenate the distinct substrings of segment 1 with the distinct substrings of segment 2 in random order, and to continue this process from left to right until M(n) founder sequences of length n are produced. For the latter two strategies, the idea is that, instead of a random permutation, we aim to find a permutation that gives a concatenation order minimizing the number of crossovers at each segment boundary. For this purpose, it is sufficient to consider two consecutive segments [a, b] and [b + 1, c] as two partitions of the rows of R. Namely, consider a distinct substring X of segment [a, b] and the induced set A ⊆ [1, m] such that R_i[a, b] = X for all i ∈ A. Analogously, consider a distinct substring Y of segment [b + 1, c] and the induced set B ⊆ [1, m] such that R_i[b + 1, c] = Y for all i ∈ B. If the concatenation XY forms the content F[a, c] of some founder F, then the recombinants in A ∩ B can follow F through the boundary without a crossover, while the recombinants in A and B outside the intersection incur crossovers there. Hence, to minimize crossovers, one seeks to maximize the intersections between the two partitions, studied next.
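Before turning to the matching formulation, here is the naive baseline in code form; it assumes the special case (exactly K = M(n) distinct substrings per segment) and uses 0-indexed half-open segments, both of which are our conventions:

```python
import random

def naive_founders(recombinants, segments, K):
    """Concatenate the distinct substrings of consecutive segments in
    random order; in the special case every segment has exactly K of
    them, so K founders of full length are produced."""
    founders = [""] * K
    for a, b in segments:                  # 0-indexed half-open interval [a, b)
        blocks = list({r[a:b] for r in recombinants})
        assert len(blocks) == K, "special case assumed"
        random.shuffle(blocks)
        for f in range(K):
            founders[f] += blocks[f]
    return founders
```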
Problem of maximum intersection between two partitions. Let a be an integer. Given two partitions E = {E_1, …, E_b} and F = {F_1, …, F_b} of [1, a] with |E| = |F| = b, the problem of Maximum Intersection Between two Partitions (MIBP) is to find a bijection f from E to F which maximizes Σ_{x ∈ E} |x ∩ f(x)|.
By using the bipartite graph defined between the elements of E and the elements of F such that, for x ∈ E and y ∈ F, the weight of the edge (x, y) is |x ∩ y|, a maximum weight perfect matching of this graph gives an optimal solution of MIBP, and hence this problem can be solved in polynomial time.
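Under the assumption that an off-the-shelf assignment solver is acceptable, the matching strategy can be sketched as follows (SciPy’s assignment solver is used here for brevity; any maximum weight perfect matching routine would do):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mibp_matching(E, F):
    """Exact MIBP: E and F are equal-length lists of disjoint sets
    (the two partitions); the returned dict maps an index in E to the
    index in F it is paired with, maximizing the total intersection."""
    w = np.array([[len(x & y) for y in F] for x in E])
    row, col = linear_sum_assignment(w, maximize=True)
    return dict(zip(row.tolist(), col.tolist()))
```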
We can define the greedy algorithm for MIBP as the greedy algorithm for the problem of maximum weight perfect matching in the above bipartite graph. As the greedy algorithm for maximum weight perfect matching is a 1/2-approximation [24], we have the same approximation ratio for the greedy algorithm for MIBP.
Lemma 7
Let E and F be two partitions of [1, a] with |E| = |F| = b. We can compute the greedy algorithm for MIBP of E and F in O(a) time.
Proof
For each i ∈ [1, a], denote by P_E(i) and P_F(i) the blocks of E and F, respectively, containing i. Since Σ_{x ∈ E, y ∈ F} |x ∩ y| = a, the number of edges of non-zero weight is at most a. By using radix sort, we can compute in O(a) time the multiset of pairs (P_E(i), P_F(i)), for i ∈ [1, a], in lexicographic order and hence, by counting the runs of equal pairs, the list of all edges (x, y) together with their non-zero weights |x ∩ y|. Since the weights are integers in [1, a], a counting sort then gives in O(a) time this list of edges in the order of decreasing weight. By scanning the list and picking each edge both of whose endpoints are still unmatched (matching the remaining blocks arbitrarily at the end), we can compute in O(a) time two arrays f and f^{−1} of size b that represent the solution of the greedy algorithm for MIBP.
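A compact version of this greedy procedure (with a library sort standing in for the radix/counting sorts that give the O(a) bound) might look as follows; label1[x] and label2[x] are the block identifiers of element x in the two partitions:

```python
from collections import Counter

def mibp_greedy(label1, label2):
    """Greedy matching for MIBP: only pairs of blocks that share at least
    one element have non-zero weight, and there are at most a of them.
    Edges are taken in decreasing weight if both endpoints are still free;
    unmatched blocks can be paired arbitrarily afterwards."""
    weight = Counter(zip(label1, label2))   # (block1, block2) -> |x ∩ y|
    f, used = {}, set()
    for (p, q), _ in weight.most_common():  # decreasing weight order
        if p not in f and q not in used:
            f[p] = q
            used.add(q)
    return f
```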
Optimal founder set for the special case. Now we can solve the MIBP problem independently for each pair of consecutive segments, resulting in the following theorems, the first of which follows directly also from earlier constructions [18], and the latter from Lemma 7.
Theorem 8
([18]) Given a segmentation S of R such that each segment induces exactly K distinct substrings in R, we can construct an optimal parse of R (and hence the corresponding set of founders) in polynomial time.
Theorem 9
Given a segmentation S of R such that each segment induces exactly K distinct substrings in R, we can construct a greedy parse of R (and hence the corresponding set of founders) that has at most twice as many crossovers as the optimal parse, in O(mn) time and space.
In the general case, there are segments inducing fewer than M(n) distinct substrings. We turn such segments into the special case by duplicating some of the substrings. The choices made have dependencies between segments, and this is the reason we believe this general case is NP-hard to solve optimally. Hence, we aim just to locally optimize the chances of minimizing crossovers by duplicating distinct substrings in proportion to the number of rows of R they cover. That is, consider a segment inducing fewer than M(n) distinct substrings and the corresponding partitioning E of [1, m]. Consider the largest set x of E. We make ⌈(|x| / m) M(n)⌉ copies of the corresponding distinct substring. We continue by decreasing cardinality, stop when the sum of the duplication counts is greater than or equal to M(n), and update the last count so that the sum equals M(n) exactly (see Fig. 3). The fact that the corresponding partitioning is now a multi-partitioning (containing the same set multiple times) does not affect the functioning of the greedy or perfect matching algorithms for the MIBP problem.
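The duplication step sketched below follows this description; the clamping that guarantees every substring keeps at least one copy (and that the counts sum to exactly K = M(n)) is our reading of the prose, since the text leaves these corner details implicit:

```python
import math

def duplication_counts(block_sizes, m, K):
    """block_sizes[i] = number of the m recombinants covered by the i-th
    distinct substring of a segment with fewer than K distinct substrings;
    returns per-substring copy counts, proportional to coverage, summing
    to exactly K (assumes len(block_sizes) <= K)."""
    assert len(block_sizes) <= K
    order = sorted(range(len(block_sizes)), key=lambda i: -block_sizes[i])
    counts = [1] * len(block_sizes)            # at least one copy each
    remaining = K - len(block_sizes)
    for i in order:                            # by decreasing cardinality
        extra = min(math.ceil(block_sizes[i] / m * K) - 1, remaining)
        counts[i] += extra
        remaining -= extra
    counts[order[0]] += remaining              # give any leftovers to the largest
    return counts
```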
Results
We implemented the segmentation algorithm using Algorithm 2 to build the pBWT arrays and computed the minimum number of founders with the given value of L using the recursion in Eq. (3). This part of the implementation corresponds to Lemma 3, and thus the overall time complexity of the implemented approach is O(mn log |Σ|). After computing the minimum number of founders, we use backtracking to determine the optimal segmentation. Since we use the pBWT arrays to determine the distinct substrings in each segment, as part of the first phase of building the arrays we also store samples of them and later update the samples to the segment boundary positions in parallel. We proceed to join adjacent segments from left to right until the number of distinct substrings in one segment would exceed the minimum number of founders, and finally we concatenate the substrings to generate founder sequences. The implementation outputs for each segment the distinct founder sequence fragments, and associates to each fragment the set of haplotypes containing that fragment as a substring at that location (these are easily deduced given the segmentation and the positional BWT structures). Our implementation uses integer vectors from the SDSL library [25].
As our goal is to produce reference sequences for aligning short reads, we wanted to find a good value of L to generate a segmentation suitable for this purpose. In particular, we wanted to have the length of most segments clearly above a typical read length, such that most reads could be aligned without hitting a recombination site.
We used the chromosome 6 variants from the phase 3 data of the 1000 Genomes Project [2] as the starting point. We converted the variant data to a multiple sequence alignment with vcf2multialign, which resulted in 5009 haplotype sequences of equal length (including the reference sequence) of approximately 171 million characters. In order to reduce the running time of our tool, we discarded columns of identical characters, as they would not affect the number of recombination sites. This reduced each sequence to approximately 5.38 million characters.
We used an increasing number of the generated sequences as input to our tool, with the value of L fixed to 10, to verify the usability of the tool in terms of running time and memory consumption. The tests were run on an Ubuntu Linux 16.04 server. The server had 96 Intel Xeon E7-4830 v3 CPUs running at 2.10 GHz and 1.4 TB of memory. In addition to our own RMQ data structure, we tested with a general-purpose RMQ from the SDSL library. As seen in Fig. 4, our special-purpose RMQ data structure performed somewhat better in terms of speed than the general-purpose library implementation. From this experiment it is conceivable that processing thousands of complete human genomes would take only a few CPU days. As we did not optimize the memory usage of our tool, the maximum resident set size with 5009 inputs was around 257 GB, which corresponds to approximately 10.25 bytes per input character. We expect that the memory consumption may be reduced without much affecting the performance.
Our second experiment was to see the effect of the minimum length L on the number of founders as well as on the length of the segments. The results are summarized in Table 1. We tested a number of values of L ranging from 10 to 80. After generating the founders, we mapped the segment co-ordinates back to the original sequences to determine the segment lengths. The results are shown in Figs. 5 and 6. We note that while the average segment length of 2395 bases with L = 10 fits our purpose, there is a peak of short segments of approximately 250 bases. The peak is magnified in Fig. 7. We also tested smaller values of L and concluded that decreasing L further quickly makes the situation more difficult. On the other hand, setting L = 10 resulted in only 130 founders, which makes aligning reads much faster than using all of the haplotypes for indexing.
Table 1.
L | Number of founders | Average segment length (bases) | Median number of recombinations | Average distance between recombinations (bases)
---|---|---|---|---
10 | 130 | 2395 | 15,794 | 9624 |
12 | 246 | 4910 | 11,716 | 14,025 |
14 | 331 | 6467 | 9759 | 17,126 |
16 | 462 | 9312 | 7801 | 21,860 |
18 | 766 | 14,383 | 5593 | 30,571 |
20 | 1057 | 20,151 | 4411 | 39,090 |
40 | 1513 | 30,228 | 3228 | 54,386 |
80 | 3093 | 67,994 | 1176 | 146,655 |
We measured the average segment length from the segmentation, the median number of recombinations from mapping the input sequences to the founder sequences, and the average distance between recombinations by dividing the length of the original sequences by the average number of recombinations. The last three columns report the results for the perfect matching approach.
We proceeded with two tests in which we measured the number of recombinations needed to express each of the original sequences with the generated founder sequences, depending on the method of concatenating the fragments into the set of founder sequences. Using the method given earlier, we began by duplicating some fragments so that each segment had exactly the same number of fragments. For these tests, we implemented the three concatenation strategies: a Random matching, which corresponds to concatenating the consecutive fragments in random order; a Perfect matching, which takes an optimal solution of the maximum weight perfect matching problem as the order for the concatenation of the fragments; and a Greedy matching, which solves the matching problem greedily. For evaluating the different concatenation strategies, we mapped each one of the original sequences to the founders, using a simple greedy algorithm that is also optimal [19]. In the first test, we fixed the value of L to 10 and mapped an increasing number of input sequences to a set of founder sequences generated with the same input sequences. In the second one, we used all of the 5009 input sequences and varied the value of L. The results are shown in Figs. 8 and 9. Considering the 17768 and 43333 recombinations achieved with perfect and random matching, respectively, given 5009 input sequences and L = 10 (see Table 1), we conclude that the heuristic part of optimizing the concatenation of founder blocks yields an improvement of around a factor of 2.44 compared to a random concatenation of segments with duplications. The greedy approach works even slightly better than perfect matching in our experiments: the number of recombinations in the same setting is 17268. As the numbers are very close, we refer to the perfect matching numbers in the sequel.
The results look promising, as using 130 founders instead of 5009 haplotypes as the input to our pan-genome indexing approach [12] will result in a significant saving of resources; this solves the space bottleneck, and the preprocessing by founder reconstruction also saves time in the heavy indexing steps.
Our intention was to compare our tool to an implementation of Ukkonen’s algorithm [19]. However, initial testing with four input sequences showed that the latter implementation is not practical with a data set of this size.
Conclusions
As our experiments indicate that one can reduce 5009 haplotypes down to 130 founders with the average distance between two crossovers being 9624 bases, one can expect short read alignment and variant calling to become practical in such a pan-genomic setting. We are investigating this with our tool PanVC [12], where one can simply replace its input multiple alignment with the one made of the founder sequences. With graph-based approaches, slightly more effort is required: input variations are encoded with respect to the reference, so one first needs to convert variants into a multiple alignment, apply the founder reconstruction algorithm, and finally convert the multiple alignment of founder sequences into a directed acyclic graph. The PanVC toolbox provides the required conversions. Alternatively, one can construct the pan-genome graph using other methods and map the founder sequences afterwards to the paths of the graph: if the original haplotype sequences are already spelled as paths, each founder sequence is a concatenation of existing subpaths and can hence be mapped to a continuous path without alignment (possibly requiring the addition of a few missing edges).
Finally, it will be interesting to see how much the contiguity of the founder sequences can still be improved with different formulations of the segmentation problem. We are investigating a variant with the number of founder sequences fixed.
Acknowledgements
We wish to thank the anonymous reviewers for useful suggestions that helped us to improve the presentation.
Abbreviations
- pBWT
positional Burrows–Wheeler transform
- LCP
longest common prefix
- RMQ
range maximum query
- MIBP
maximum intersection between two partitions
Authors' contributions
TN and VM initiated the study and observed that pBWT can be exploited to improve the earlier solution. BC and DK developed the details of the linear time algorithm and wrote major parts of the manuscript. TN implemented the algorithm and conducted the experiments. All authors read and approved the final manuscript.
Funding
This work was partially supported by the Academy of Finland (Grant 309048).
Availability of data and materials
Our implementation is open source and available at the URL https://github.com/tsnorri/founder-sequences.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Contributor Information
Tuukka Norri, Email: tuukka.norri@helsinki.fi.
Bastien Cazaux, Email: bastien.cazaux@helsinki.fi.
Dmitry Kosolobov, Email: dkosolobov@mail.ru.
Veli Mäkinen, Email: veli.makinen@helsinki.fi.
References
- 1. Computational Pan-Genomics Consortium. Computational pan-genomics: status, promises and challenges. Brief Bioinform. 2018;19(1):118–135. doi: 10.1093/bib/bbw089.
- 2. The 1000 Genomes Project Consortium. A global reference for human genetic variation. Nature. 2015;526(7571):68–74. doi: 10.1038/nature15393.
- 3. Exome Aggregation Consortium. Analysis of protein-coding genetic variation in 60,706 humans. Nature. 2016;536(7616):285–91. doi: 10.1038/nature19057.
- 4. The UK10K Consortium. The UK10K project identifies rare variants in health and disease. Nature. 2015;526(7571):82–90. doi: 10.1038/nature14962.
- 5. Schneeberger K, Hagmann J, Ossowski S, Warthmann N, Gesing S, Kohlbacher O, Weigel D. Simultaneous alignment of short reads against multiple genomes. Genome Biol. 2009;10:R98. doi: 10.1186/gb-2009-10-9-r98.
- 6. Huang L, Popic V, Batzoglou S. Short read alignment with populations of genomes. Bioinformatics. 2013;29(13):361–370. doi: 10.1093/bioinformatics/btt215.
- 7. Sirén J, Välimäki N, Mäkinen V. Indexing graphs for path queries with applications in genome research. IEEE/ACM Trans Comput Biol Bioinform. 2014;11(2):375–388. doi: 10.1109/TCBB.2013.2297101.
- 8. Dilthey A, Cox C, Iqbal Z, Nelson MR, McVean G. Improved genome inference in the MHC using a population reference graph. Nat Genet. 2015;47:682–688. doi: 10.1038/ng.3257.
- 9. Maciuca S, del Ojo Elias C, McVean G, Iqbal Z. A natural encoding of genetic variation in a Burrows–Wheeler transform to enable mapping and genome inference. In: Proceedings of the 16th international workshop on algorithms in bioinformatics, WABI 2016, Aarhus, Denmark, August 22–24, 2016. Lecture Notes in Computer Science, vol. 9838; 2016. p. 222–33.
- 10. Garrison E, Sirén J, Novak AM, Hickey G, Eizenga JM, Dawson ET, Jones W, Garg S, Markello C, Lin MF, Paten B, Durbin R. Variation graph toolkit improves read mapping by representing genetic variation in the reference. Nat Biotechnol. 2018;36:875. doi: 10.1038/nbt.4227.
- 11. Sirén J, Garrison E, Novak AM, Paten B, Durbin R. Haplotype-aware graph indexes. In: 18th international workshop on algorithms in bioinformatics, WABI 2018, August 20–22, 2018, Helsinki, Finland. LIPIcs, vol. 113. Wadern: Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik; 2018. p. 4:1–4:13.
- 12. Valenzuela D, Norri T, Välimäki N, Pitkänen E, Mäkinen V. Towards pan-genome read alignment to improve variation calling. BMC Genom. 2018;19(Suppl 2):87. doi: 10.1186/s12864-018-4465-8.
- 13. Mäkinen V, Navarro G, Sirén J, Välimäki N. Storage and retrieval of highly repetitive sequence collections. J Comput Biol. 2010;17(3):281–308. doi: 10.1089/cmb.2009.0169.
- 14. Navarro G. Indexing highly repetitive collections. In: Proceedings of the 23rd international workshop on combinatorial algorithms (IWOCA). LNCS, vol. 7643; 2012. p. 274–9.
- 15. Wandelt S, Starlinger J, Bux M, Leser U. RCSI: scalable similarity search in thousand(s) of genomes. PVLDB. 2013;6(13):1534–1545.
- 16. Ferrada H, Gagie T, Hirvola T, Puglisi SJ. Hybrid indexes for repetitive datasets. Philos Trans R Soc A. 2014;372(2016):20130137. doi: 10.1098/rsta.2013.0137.
- 17. Gagie T, Puglisi SJ. Searching and indexing genomic databases via kernelization. Front Bioeng Biotechnol. 2015;3:12. doi: 10.3389/fbioe.2015.00012.
- 18. Ukkonen E. Finding founder sequences from a set of recombinants. In: Lecture Notes in Computer Science. Berlin, Heidelberg: Springer; 2002. p. 277–286.
- 19. Rastas P, Ukkonen E. Haplotype inference via hierarchical genotype parsing. In: Proceedings of the 7th international workshop on algorithms in bioinformatics, WABI 2007, Philadelphia, PA, USA, September 8–9, 2007; 2007. p. 85–97.
- 20. Durbin R. Efficient haplotype matching and storage using the positional Burrows–Wheeler transform (PBWT). Bioinformatics. 2014;30(9):1266–1272. doi: 10.1093/bioinformatics/btu014.
- 21. Mäkinen V, Norri T. Applying the positional Burrows–Wheeler transform to all-pairs Hamming distance. Inf Process Lett. 2019;146:17–19. doi: 10.1016/j.ipl.2019.02.003.
- 22. Norri T, Cazaux B, Kosolobov D, Mäkinen V. Minimum segmentation for pan-genomic founder reconstruction in linear time. In: 18th international workshop on algorithms in bioinformatics, WABI 2018, August 20–22, 2018, Helsinki, Finland. LIPIcs, vol. 113. Wadern: Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik; 2018. p. 15:1–15:15.
- 23. Fischer J, Heun V. Space-efficient preprocessing schemes for range minimum queries on static arrays. SIAM J Comput. 2011;40(2):465–492. doi: 10.1137/090779759.
- 24. Karp RM, Vazirani UV, Vazirani VV. An optimal algorithm for on-line bipartite matching. In: Proceedings of the twenty-second annual ACM symposium on theory of computing, STOC. ACM; 1990. p. 352–8.
- 25. Gog S, Beller T, Moffat A, Petri M. From theory to practice: plug and play with succinct data structures. In: Experimental algorithms. Cham: Springer International Publishing; 2014. p. 326–337.
Associated Data
This section collects any data citations, data availability statements, or supplementary materials included in this article.
Data Availability Statement
Our implementation is open source and available at the URL https://github.com/tsnorri/founder-sequences.