Abstract
New methods for computing eigenvectors of symmetric block tridiagonal matrices based on twisted block factorizations are explored. The relation between the block where two twisted factorizations meet and an eigenvector of the block tridiagonal matrix is reviewed. Based on this, several new algorithmic strategies for computing the eigenvector efficiently are motivated and designed. The underlying idea is to determine a good starting vector for an inverse iteration process from the twisted block factorizations such that a good eigenvector approximation can be computed with a single step of inverse iteration.
An implementation of the new algorithms is presented and experimental data for runtime behaviour and numerical accuracy based on a wide range of test cases are summarized. Compared with competing state-of-the-art tridiagonalization-based methods, the algorithms proposed here show strong reductions in runtime, especially for very large matrices and/or small bandwidths. The residuals of the computed eigenvectors are in general comparable with state-of-the-art methods. In some cases, especially for strongly clustered eigenvalues, a loss in orthogonality of some eigenvectors is observed. This is not surprising, and future work will focus on investigating ways for improving these cases.
Keywords: Block tridiagonal matrix, Eigenvector computation, Twisted factorization, Twisted block factorization, Inverse iteration
1. Introduction
Block tridiagonal and banded matrices arise in many situations, for example, in the solution of differential equations via finite difference methods or in reduction processes in the context of eigenvalue computations. In the latter case, block tridiagonal matrices can be the intermediate result of a preprocessing step for computing spectral information of general dense matrices, resulting, for example, from a block tridiagonalization process [1] or from a bandwidth reduction process [2,3]. Most existing algorithms for computing spectral information of a band matrix first tridiagonalize the matrix, since many methods are known for efficiently computing eigenvalues and eigenvectors of a tridiagonal matrix. However, the tridiagonalization process tends to dominate the computational cost and has important disadvantages in terms of data locality which make it memory-bound [4]. This motivates our attempt to compute the eigenvectors of a band or block tridiagonal matrix directly (without tridiagonalization). One approach for doing this is the block tridiagonal divide-and-conquer (BD&C) method [5,6], which efficiently approximates eigenvalues and eigenvectors of a symmetric block tridiagonal matrix without tridiagonalizing it. However, the eigenvector accumulation in the divide-and-conquer process can become the main performance-limiting factor of the BD&C method, in particular, in cases where reduced accuracy approximations (with respect to the highest possible accuracy determined by the problem instance and its condition as well as by the given floating-point arithmetic) are not sufficient [6]. This motivates efforts to investigate efficient alternatives for directly computing eigenvectors of a symmetric block tridiagonal matrix (without reduction to tridiagonal form), given approximations of the corresponding eigenvalues.
We represent a generic block tridiagonal matrix as

$$M_p = \begin{pmatrix} B_1 & C_1 & & & \\ A_1 & B_2 & C_2 & & \\ & \ddots & \ddots & \ddots & \\ & & A_{p-2} & B_{p-1} & C_{p-1} \\ & & & A_{p-1} & B_p \end{pmatrix} \qquad (1)$$

where $B_i \in \mathbb{R}^{b_i \times b_i}$ for $i = 1, 2, \ldots, p$. The block sizes $b_i$ determine size and shape of the subdiagonal blocks $A_i \in \mathbb{R}^{b_{i+1} \times b_i}$ and of the superdiagonal blocks $C_i \in \mathbb{R}^{b_i \times b_{i+1}}$. According to this definition, $M_p$ is in general unsymmetric, but it has identical lower and upper bandwidths. Note that band structure can be considered a special case of block tridiagonal structure, since any band matrix with identical lower and upper bandwidths has the general form (1) with upper triangular $A_i$ and lower triangular $C_i$.
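For concreteness, the following sketch (in Python/NumPy; the helper name is ours, not from the paper) assembles a dense matrix of the form (1) from given lists of diagonal, subdiagonal, and superdiagonal blocks:

```python
import numpy as np

def assemble_block_tridiagonal(B, A, C):
    """Assemble M_p of form (1) from diagonal blocks B[0..p-1],
    subdiagonal blocks A[0..p-2], and superdiagonal blocks C[0..p-2]."""
    sizes = [Bi.shape[0] for Bi in B]
    offsets = np.concatenate(([0], np.cumsum(sizes)))
    M = np.zeros((offsets[-1], offsets[-1]))
    for i, Bi in enumerate(B):
        r = slice(offsets[i], offsets[i + 1])
        M[r, r] = Bi
    for i in range(len(B) - 1):
        r = slice(offsets[i], offsets[i + 1])      # block row/column i
        s = slice(offsets[i + 1], offsets[i + 2])  # block row/column i+1
        M[s, r] = A[i]   # subdiagonal block A_i of size b_{i+1} x b_i
        M[r, s] = C[i]   # superdiagonal block C_i of size b_i x b_{i+1}
    return M
```

For the symmetric case considered below, one would pass symmetric blocks B[i] and C[i] = A[i].T.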
In this paper, we consider the task of computing eigenvectors of matrices of the form (1). We focus on symmetric $M_p$, where $B_i = B_i^\top$ for $i = 1, \ldots, p$, and $C_i = A_i^\top$ for $i = 1, \ldots, p-1$. The approach pursued is based on utilizing twisted block factorizations, blocked generalizations of twisted factorizations of tridiagonal matrices (see, for example, [7]), in order to represent the shifted matrix $M_p - \bar\lambda I$ as a product of three matrices (a block diagonal permutation matrix and two block tridiagonal factors, cf. Eq. (2)).
First, we briefly review an algorithm for efficiently computing twisted block factorizations of $M_p - \bar\lambda I$, which we have proposed earlier [8]. Based on these factorizations, we present and experimentally evaluate an algorithm for computing an eigenvector of $M_p$, given an approximation $\bar\lambda$ of the corresponding eigenvalue. The underlying idea is motivated by central components of the MRRR algorithm for computing eigenvectors of a symmetric tridiagonal matrix summarized in [9]. It may not be possible to directly generalize all aspects from the tridiagonal to the block tridiagonal case, but the insights summarized in this paper illustrate that it is worthwhile to pursue an analogous approach for the block tridiagonal case.
The central questions addressed in this paper are (i) how to select a single twisted block factorization of the shifted matrix $M_p - \bar\lambda I$ among all possible ones as the basis for an inverse iteration process, (ii) how to determine a good starting vector for this process, (iii) how the computational efficiency of this approach depends on central problem parameters (such as block sizes, etc.), and (iv) how competitive it is compared to existing approaches. We analytically motivate and compare several algorithmic variants and experimentally study their numerical accuracy as well as their computational performance.
1.1. Related work
Most of the relevant existing work focussed on the computation of eigenvectors of tridiagonal matrices. The highly accurate computation of the eigenvalues of a symmetric definite tridiagonal matrix [10,11] is an important building block for the development of very efficient methods for the calculation of eigenvectors of such matrices. Parlett and Dhillon [7] suggested the use of twisted factorizations of tridiagonal matrices for determining a good starting vector for inverse iteration. The underlying idea is that the position of the largest component of the eigenvector sought is associated with the minimal diagonal element of the twisted factorizations. The proper choice of the starting vector based on twisted factorizations leads to a stable and rapidly converging inverse iteration process. A single step of inverse iteration can be sufficient without requiring explicit reorthogonalization of the computed eigenvector [9,11,12,7,13–15].
So far, relatively little is known about how well such strategies generalize to banded or block tridiagonal matrices. Although Parlett and Dhillon [7] briefly mentioned a blocked extension of the tridiagonal case and also suggested a starting vector for the resulting inverse iteration process, they neither investigated algorithmic details nor evaluated this approach quantitatively. More recently, Vömel and Slemons [16] theoretically discussed twisted factorizations of banded or block-tridiagonal matrices. They gave a proof of the existence of two twisted factorizations of banded matrices by using a double factorization of the twisted block. They also summarized the connections to the inverse of the matrix and mentioned the potential use of their twisted factorizations for an inverse iteration process on band matrices—however, again without specifying or evaluating a concrete algorithm.
Vömel and Slemons focussed on non-blocked twisted factorizations of a band matrix. When pivoting is introduced for enhancing numerical stability, their approach in general does not preserve block tridiagonal or banded structure due to fill-in. In order to address both aspects, numerical stability and preservation of block tridiagonal structure, we utilize twisted block factorizations of $M_p - \bar\lambda I$ as presented in [8]. Our approach is related to the twisted block factorizations indicated in [7], but beyond that we integrate localized pivoting within blocks in the factorization process without causing fill-in.
1.2. Contributions
The approach investigated in this paper comprises three algorithmic components: (i) efficient computation of twisted block factorizations of $M_p - \bar\lambda I$, (ii) identification of a good starting vector for the iterative computation of the desired eigenvector, and (iii) an efficient inverse iteration process with this starting vector. We have already discussed the first component earlier [8]. In this paper, we work on new aspects in the second and the third component.
More specifically, in this paper we investigate the following aspects beyond [8]. We motivate and investigate two new strategies (minsvd0 and minsvd2) for determining a good starting vector for the inverse iteration process. We discuss how the inverse iteration process can be performed efficiently for the specific matrix structures arising. We compare previously mentioned and newly developed starting vector selection strategies in terms of numerical properties and in terms of computational performance. Last, but not least, so far no experimental data about the applicability and competitiveness of eigenvector computations for block tridiagonal matrices based on twisted block factorizations can be found in the literature. We fully specify, implement and evaluate a complete algorithm for this task. In addition to our theoretical and analytical investigations we also summarize the results of comprehensive experimental evaluations of different algorithmic variants based on our implementation. Considering a wide range of test matrices, we clearly illustrate for which problem settings our new methods are competitive compared to existing standard approaches.
Synopsis. In Section 2, the process of efficiently computing twisted block factorizations of is briefly reviewed, since it is one of three main components of the algorithms investigated. The identification of a well suited starting vector for the eigenvector computation is discussed in Section 3. An efficient inverse iteration process using this starting vector is the topic of Section 4. Comprehensive experimental performance evaluations based on an efficient implementation of these concepts are summarized in Section 5. Finally, conclusions and suggestions for future work are given in Section 6.
2. Component I: twisted block factorizations
In analogy to the approach pursued in the MRRR method for tridiagonal matrices [9], the first step of our approach is based on a factorization of the block tridiagonal matrix $M_p - \bar\lambda I$ into the product

$$M_p - \bar\lambda I = P\,L\,U \qquad (2)$$

with a permutation matrix $P$ and block tridiagonal factors $L$ and $U$. We have presented a method for computing various decompositions of this form based on twisted block factorizations with local pivoting in [8]. We first briefly summarize this method before we move on to the subsequent components of our approach in the next sections.
The twisted block factorizations of $M_p - \bar\lambda I$ presented in [8] combine forward with backward block elimination steps. Assuming that all factorizations exist, we use the notation $T_k$ for a twisted block factorization with $k-1$ forward and $p-k$ backward elimination steps. We denote the diagonal block at position $k$, where forward and backward elimination steps meet, as "twisted block". As shown in [8], the resulting factors $L$ and $U$ are both block tridiagonal, but in $L$ nonzero blocks appear below the block diagonal only in the forward part and above the block diagonal only in the backward part; in $U$, it is the other way round. For example, $T_3$ of $M_5 - \bar\lambda I$ produces

$$L = \begin{pmatrix} I & & & & \\ L_1^+ & I & & & \\ & L_2^+ & I & L_3^- & \\ & & & I & L_4^- \\ & & & & I \end{pmatrix}, \qquad U = \begin{pmatrix} U_1^+ & C_1 & & & \\ & U_2^+ & C_2 & & \\ & & U_3 & & \\ & & A_3 & U_4^- & \\ & & & A_4 & U_5^- \end{pmatrix} \qquad (3)$$
Superscripts are used to distinguish blocks computed in the forward direction ("+") from blocks constructed in the backward direction ("−"); the twisted block $U_3$, where the two elimination directions meet, carries no superscript.
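The following sketch illustrates these elimination steps for the factors of $T_k$; it is a simplification that assumes constant block size and omits the local pivoting of [8] (i.e., $P = I$), and the function name and data layout are ours:

```python
import numpy as np

def twisted_block_factors(B, A, C, shift, k):
    """Twisted block factorization T_k of the shifted block tridiagonal
    matrix (unpivoted sketch): k-1 forward and p-k backward elimination
    steps meeting at the twisted block U_k. Blocks are indexed 1-based."""
    p, b = len(B), B[0].shape[0]
    Bs = [Bi - shift * np.eye(b) for Bi in B]   # shifted diagonal blocks
    Lp, Up, Lm, Um = {}, {}, {}, {}             # "+" and "-" blocks
    for i in range(1, k):                       # forward elimination
        Up[i] = Bs[i - 1] if i == 1 else Bs[i - 1] - Lp[i - 1] @ C[i - 2]
        Lp[i] = np.linalg.solve(Up[i].T, A[i - 1].T).T    # L_i^+ = A_i (U_i^+)^{-1}
    for i in range(p, k, -1):                   # backward elimination
        Um[i] = Bs[i - 1] if i == p else Bs[i - 1] - Lm[i] @ A[i - 1]
        Lm[i - 1] = np.linalg.solve(Um[i].T, C[i - 2].T).T  # L_{i-1}^- = C_{i-1} (U_i^-)^{-1}
    Uk = Bs[k - 1]                              # twisted block (cf. Eq. (10))
    if k > 1:
        Uk = Uk - Lp[k - 1] @ C[k - 2]
    if k < p:
        Uk = Uk - Lm[k] @ A[k - 1]
    return Lp, Up, Lm, Um, Uk
```

Since the forward blocks and the backward blocks can be shared across different twist positions, all $p$ factorizations $T_1, \ldots, T_p$ together can be obtained at roughly the cost of two complete factorizations [8].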
3. Component II: identification of a starting vector
Assuming that we are given an eigenvalue $\lambda$ of $M_p$ (or an approximation $\bar\lambda$ thereof, which will be called "shift" in the following), the twisted block factorizations of $M_p - \bar\lambda I$ can be computed as reviewed in Section 2. Based on these, the next task is to determine a proper starting vector $v_0$ for an inverse iteration process (cf. Fig. 1).
Fig. 1. Inverse iteration process.
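The content of Fig. 1 is the standard inverse iteration loop; a minimal sketch (assuming a solver for the shifted system, e.g., based on the twisted block factorization discussed in Section 4) looks as follows:

```python
import numpy as np

def inverse_iteration(solve_shifted, v0, max_iter=3):
    """Inverse iteration: solve_shifted(w) must return the solution x of
    (M_p - shift*I) x = w. With a well-chosen starting vector v0, a single
    iteration is often sufficient (see Section 5)."""
    v = v0 / np.linalg.norm(v0)
    for _ in range(max_iter):
        x = solve_shifted(v)           # the linear solve in line 3 of Fig. 1
        v = x / np.linalg.norm(x)      # normalize the new iterate
    return v
```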
So far, we have not specified which one of the $p$ possible block twisted factorizations to use in the inverse iteration process. The choice of one of these factorizations also determines the starting vector $v_0$. In fact, we will utilize the information provided by all the block twisted factorizations of $M_p - \bar\lambda I$ for determining a suitable starting vector $v_0$. The idea which motivates this procedure is the connection between the twisted factorizations and the inverse of a matrix. We review this connection in the following. For a properly chosen starting vector $v_0$, a few steps of the inverse iteration process should suffice for determining a good approximation of the eigenvector $v$.
3.1. Analytical motivation for eigenvector approximation
In the following, we first review basic ideas given in [7]. Based on this, we then formulate various concrete algorithmic strategies for determining an eigenvector of $M_p$ in Section 3.2. In order to keep the notation simple, we assume in the following that all block sizes are equal, i.e., $b_i = b$ for all $i$.
For each possible blocked twisted factorization $T_k$ ($k = 1, \ldots, p$), define $E_k := (0, \ldots, 0, I_b, 0, \ldots, 0)^\top \in \mathbb{R}^{n \times b}$, the $k$th block column of the identity matrix, and $N_k \in \mathbb{R}^{n \times b}$

with block rows $y_1, \ldots, y_p \in \mathbb{R}^{b \times b}$, $y_k = I_b$, such that

$$(M_p - \bar\lambda I)\, N_k = E_k\, G_k \qquad (4)$$

for some $G_k \in \mathbb{R}^{b \times b}$.
Intuitively, if $\|G_k\|$ is small, $N_k$ contains good approximations to eigenvectors corresponding to $\bar\lambda$. By omitting the $k$th block row in Eq. (4), two independent homogeneous systems remain (note that the local pivoting matrix $P$ is block diagonal and therefore drops out of these block rows). Denoting with the arguments $(i{:}j, l{:}m)$ the respective submatrices of $L$ and $U$ in Eq. (2) which contain block rows $i$ to $j$ and block columns $l$ to $m$, and introducing the variables $N_k(1{:}k{-}1)$ and $N_k(k{+}1{:}p)$ for the respective parts of $N_k$, these two homogeneous systems can be written as

$$L(1{:}k{-}1, 1{:}k{-}1)\; U(1{:}k{-}1, 1{:}k)\; N_k(1{:}k) = 0 \qquad (5)$$

$$L(k{+}1{:}p, k{+}1{:}p)\; U(k{+}1{:}p, k{:}p)\; N_k(k{:}p) = 0 \qquad (6)$$

Assuming that the factorization exists, the matrices $L(1{:}k{-}1, 1{:}k{-}1)$ and $U(1{:}k{-}1, 1{:}k{-}1)$ as well as the matrices $L(k{+}1{:}p, k{+}1{:}p)$ and $U(k{+}1{:}p, k{+}1{:}p)$ must be invertible, leaving us with two equations with the system matrices $U(1{:}k{-}1, 1{:}k)$ and $U(k{+}1{:}p, k{:}p)$. The special structures of $U(1{:}k{-}1, 1{:}k)$ and $U(k{+}1{:}p, k{:}p)$ allow for computing $N_k(1{:}k{-}1)$ using a blockwise backward substitution process and $N_k(k{+}1{:}p)$ using a blockwise forward substitution process.
Using Eq. (3), we again illustrate this for the example $p = 5$ and $k = 3$: Eq. (5) translates into

$$\begin{pmatrix} U_1^+ & C_1 & 0 \\ 0 & U_2^+ & C_2 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \\ I \end{pmatrix} = 0 \qquad (7)$$

from which $y_2$ and $y_1$ can be determined using blockwise backward substitution, and Eq. (6) translates into

$$\begin{pmatrix} A_3 & U_4^- & 0 \\ 0 & A_4 & U_5^- \end{pmatrix} \begin{pmatrix} I \\ y_4 \\ y_5 \end{pmatrix} = 0 \qquad (8)$$

from which $y_4$ and $y_5$ can be determined.
Based on Eq. (1), the omitted third block row in Eq. (4) yields the following equation

$$A_2\, y_2 + (B_3 - \bar\lambda I)\, y_3 + C_3\, y_4 = G_3, \qquad y_3 = I.$$

We can now substitute $y_2 = -(U_2^+)^{-1} C_2$ and $y_4 = -(U_4^-)^{-1} A_3$ computed from Eqs. (5) and (6) (compare Eqs. (7) and (8)), yielding

$$G_3 = B_3 - \bar\lambda I - A_2 (U_2^+)^{-1} C_2 - C_3 (U_4^-)^{-1} A_3. \qquad (9)$$

Recalling from Eq. (3) that $L_2^+ = A_2 (U_2^+)^{-1}$ and that $L_3^- = C_3 (U_4^-)^{-1}$, we obtain

$$G_3 = B_3 - \bar\lambda I - L_2^+ C_2 - L_3^- A_3. \qquad (10)$$

According to Eq. (3) this means that

$$G_3 = U_3, \qquad (11)$$

i.e., $G_k$ is exactly the twisted block of the factorization $T_k$.
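To make the substitution processes concrete, the following sketch (reusing the hypothetical factor layout of the sketch in Section 2) computes the block rows of $N_3$ for the example $p = 5$, $k = 3$; by Eq. (11), the corresponding $G_3$ is simply the twisted block $U_3$ returned by the factorization:

```python
import numpy as np

def n3_block_rows(Up, Um, A, C, b):
    """Block rows y_1, y_2, y_4, y_5 of N_3 (with y_3 = I) for p = 5, k = 3,
    via Eqs. (7) and (8)."""
    # Eq. (7): blockwise backward substitution in the forward part
    y2 = np.linalg.solve(Up[2], -C[1])            # U_2^+ y_2 + C_2 = 0
    y1 = np.linalg.solve(Up[1], -C[0] @ y2)       # U_1^+ y_1 + C_1 y_2 = 0
    # Eq. (8): blockwise forward substitution in the backward part
    y4 = np.linalg.solve(Um[4], -A[2])            # A_3 + U_4^- y_4 = 0
    y5 = np.linalg.solve(Um[5], -A[3] @ y4)       # A_4 y_4 + U_5^- y_5 = 0
    return y1, y2, np.eye(b), y4, y5
```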
3.2. Strategies for starting vector selection
The relationships reviewed in Section 3.1 motivate various new strategies for determining the starting vector $v_0$ for the inverse iteration process based on the twisted block factorizations of $M_p - \bar\lambda I$. In this section, we present them, and in Section 5 they are evaluated numerically.
As outlined in [7], if $G_{\hat k}$ is the $G_k$ with the minimal singular value over all singular values for all possible $k$ ($k = 1, \ldots, p$) and $(\sigma_{\min}, u_{\min}, v_{\min})$ is the corresponding minimal singular triplet, then according to Eq. (4)

$$\|(M_p - \bar\lambda I)\, N_{\hat k}\, v_{\min}\|_2 = \|E_{\hat k}\, G_{\hat k}\, v_{\min}\|_2 = \sigma_{\min}. \qquad (12)$$

Consequently, if $\sigma_{\min}$ is small enough, then $N_{\hat k}\, v_{\min}$ is a good approximation to an eigenvector of $M_p$ corresponding to the shift $\bar\lambda$.
Strategies minsvd0, minsvd1, and minsvd2. In these strategies, which are motivated by Eq. (12), the selection of the starting vector is based on the singular values of the subblocks $G_k$ of all twisted factorizations of $M_p - \bar\lambda I$. For strategy minsvd0, the singular vector $v_{\min}$ corresponding to the minimal singular value $\sigma_{\min}$ of all matrices $G_k$ ($k = 1, \ldots, p$, cf. Eq. (11)) has to be computed. The elements of the starting vector $v_0$ which are in the rows of block row $\hat k$ are set to the entries of $v_{\min}$, all the others to zero. Strategy minsvd1 (which is the only SVD-based strategy mentioned in [8]) is a computationally cheaper approximation of minsvd0, because it does not require computing any singular vectors: motivated by the localized pivoting done in each block in the twisted block factorization process, the position of the last row of the block $G_{\hat k}$ defines the position in which $v_0$ is set to one; at all others $v_0$ is set to zero. In both strategies minsvd0 and minsvd1, after determining $v_0$ one step of inverse iteration is performed for computing the eigenvector approximation as summarized in Section 4. Strategy minsvd2 is directly based on Eq. (12): it determines the matrix $N_{\hat k}$ in Eq. (4) using the blockwise back- and forward substitution processes sketched in Section 3.1, then computes the right singular vector $v_{\min}$ of $G_{\hat k}$, and finally computes the eigenvector approximation $N_{\hat k}\, v_{\min}$.
In summary: for strategy minsvd1 we need to know $\sigma_{\min}$ and its position, and we need to perform one step of inverse iteration. For strategy minsvd0, we need to know $\sigma_{\min}$, its position and the corresponding singular vector $v_{\min}$, and we also need to perform one step of inverse iteration. For strategy minsvd2, we need to know the number $\hat k$ of the block row of $G_{\hat k}$ and the singular vector $v_{\min}$ corresponding to $\sigma_{\min}$. Then we need to compute the matrix $N_{\hat k}$ and multiply it with $v_{\min}$.
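A minimal sketch of the selection step shared by the three minsvdx strategies, assuming the twisted blocks $G_k = U_k$ of all $p$ factorizations are available (function name ours):

```python
import numpy as np

def min_singular_triplet(twisted_blocks):
    """Return (k_hat, sigma_min, v_min): 1-based index of the twisted block
    with the smallest singular value, that value, and the corresponding
    right singular vector (cf. Eq. (12))."""
    best = None
    for k, Gk in enumerate(twisted_blocks, start=1):
        _, s, vt = np.linalg.svd(Gk)     # singular values sorted descending
        if best is None or s[-1] < best[1]:
            best = (k, s[-1], vt[-1])
    return best
```

Strategy minsvd0 then embeds $v_{\min}$ into block row $\hat k$ of $v_0$, minsvd1 only uses the position of the last row of block $\hat k$, and minsvd2 forms $N_{\hat k}\, v_{\min}$ directly without an inverse iteration step.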
Strategy minsca. This strategy is a very coarse approximation, but significantly reduces the computational cost compared to the minsvdx strategies, as it does not require the computation of any SVDs. Based on all twisted factorizations $T_k$ ($k = 1, \ldots, p$), the position of the row of the minimum diagonal entry (in absolute value) over all twisted blocks defines the position of the starting vector $v_0$ which is set to one. The factorization which contains the minimum diagonal element is used for computing this eigenvector approximation as summarized in Section 4.
Strategy random. As a reference strategy, a starting vector with random entries uniformly distributed in [0, 1] has been used.
4. Component III: efficient inverse iteration
In this section, we investigate an inverse iteration process for approximating the eigenvector $v$ of $M_p$ corresponding to $\lambda$ based on the starting vector $v_0$ which has been determined according to one of the strategies discussed in Section 3.
If $|\bar\lambda - \lambda|$ is sufficiently smaller than $|\bar\lambda - \mu|$ for all other eigenvalues $\mu$ of $M_p$, and if the starting vector $v_0$ contains a nonzero component in the direction of the desired eigenvector $v$, the inverse iteration process depicted in Fig. 1 will produce an approximation for the desired eigenvector $v$.
In general, a random starting vector is considered appropriate [17]. However, as indicated in [7] and discussed in detail in Section 3, it is possible to determine a better starting vector by using the twisted factorizations of the shifted matrix $M_p - \bar\lambda I$. Next, we discuss how to exploit the special block tridiagonal structure of the factors in the twisted block factorizations of $M_p - \bar\lambda I$ for efficiently solving the linear systems arising in line 3 of Fig. 1.
Solution of a block tridiagonal linear system.
Given a twisted block factorization (2) of $M_p - \bar\lambda I$, three steps are required for solving a linear system $(M_p - \bar\lambda I)\, x = c$:

a. Apply the inverse of the pivoting matrix $P$ to the right-hand side: $w := P^{-1} c$.

b. Solve $L y = w$ for $y$ via a combined forward and back substitution.

c. Solve $U x = y$ for $x$ via a combined back and forward substitution.
In the following, the combined substitution processes are derived. Without loss of generality, as in Section 2 we use the special case $p = 5$ and $k = 3$ for illustrating the concept. All vectors involved are partitioned into subvectors of length $b$ corresponding to the blocks of the matrix, and their indices correspond to the respective row indices of the matrix blocks.
Combined forward/back substitution on $L$. In the special case considered, Step b. in the solution process above has the following form:

$$\begin{pmatrix} I & & & & \\ L_1^+ & I & & & \\ & L_2^+ & I & L_3^- & \\ & & & I & L_4^- \\ & & & & I \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \end{pmatrix} = \begin{pmatrix} w_1 \\ w_2 \\ w_3 \\ w_4 \\ w_5 \end{pmatrix}$$

Since both $y_2$ and $y_4$ have to be known before we can solve for $y_3$, it is necessary to start substituting at both ends, gradually solving the equations towards the twisted block. Forward substitution is performed on the forward factorization part marked with the superscripts "+" by first solving for $y_1$ and then for $y_2$. The next block is already the twisted block where the forward and backward factorizations meet, thus $y_4$ is required before we can proceed. In a back substitution step on the backward factorization part, $y_5$ and then $y_4$ are solved for. Note that within a block this may actually involve a forward substitution process when the corresponding diagonal block is lower triangular. Finally, in the block row of the twisted block we can solve for $y_3$.
Combined back/forward substitution on $U$. An analogous procedure can be applied to the matrix $U$ for computing $x$. By introducing block subvectors $x_1, \ldots, x_5$ of the solution vector $x$ in order to simplify notation and by partitioning $y$ appropriately, Step c. in the solution process above translates into solving the linear system

$$\begin{pmatrix} U_1^+ & C_1 & & & \\ & U_2^+ & C_2 & & \\ & & U_3 & & \\ & & A_3 & U_4^- & \\ & & & A_4 & U_5^- \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix} = \begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \end{pmatrix}$$

In contrast to the combined forward/back substitution discussed before, this time the substitution process has to start at the twisted block (in our example, block number three) and proceeds towards the first and last block row of $U$, since $x_3$ has to be known before the equations in block rows two and four can be solved. In our example of $T_3$, the combined back/forward substitution takes the following form. First, we solve $U_3 x_3 = y_3$ for $x_3$ (note that this involves a back substitution process within the block). Then, $x_2$ can be computed from the second block equation $U_2^+ x_2 + C_2 x_3 = y_2$ and $x_4$ from the fourth block equation $A_3 x_3 + U_4^- x_4 = y_4$. Finally, $x_1$ can be computed from the first block equation $U_1^+ x_1 + C_1 x_2 = y_1$ and $x_5$ from the last block equation $A_4 x_4 + U_5^- x_5 = y_5$.
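A compact sketch of steps b. and c. for this example (unpivoted, reusing the hypothetical factor layout of the sketches above; the right-hand side w is given as a list of five subvectors):

```python
import numpy as np

def solve_twisted_k3_p5(Lp, Lm, Up, Um, U3, A, C, w):
    """Solve (M_5 - shift*I) x = w for p = 5, k = 3 via L y = w (combined
    forward/back substitution) and U x = y (combined back/forward)."""
    # Step b: in this unpivoted sketch L has identity diagonal blocks
    y1 = w[0]
    y2 = w[1] - Lp[1] @ y1                        # forward part
    y5 = w[4]
    y4 = w[3] - Lm[4] @ y5                        # backward part
    y3 = w[2] - Lp[2] @ y2 - Lm[3] @ y4           # twisted block row last
    # Step c: start at the twisted block and proceed outwards
    x3 = np.linalg.solve(U3, y3)
    x2 = np.linalg.solve(Up[2], y2 - C[1] @ x3)   # second block equation
    x1 = np.linalg.solve(Up[1], y1 - C[0] @ x2)   # first block equation
    x4 = np.linalg.solve(Um[4], y4 - A[2] @ x3)   # fourth block equation
    x5 = np.linalg.solve(Um[5], y5 - A[3] @ x4)   # last block equation
    return np.concatenate([x1, x2, x3, x4, x5])
```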
5. Experimental evaluation
In this section, we summarize extensive experimental evaluations of the five different strategies presented in Section 3.2 (minsvd0, minsvd1, minsvd2, minsca, random). The resulting algorithms for computing an eigenvector of $M_p$ are compared in terms of runtime performance as well as in terms of the resulting quality of the eigenvector approximation. For this purpose, we have implemented the methods discussed in this paper in Lapack-style Blas-based Fortran routines. In all cases, only one step of inverse iteration has been performed.
Test data. Seven different types of symmetric banded test matrices with constant block sizes $b_i = b$ for $i = 1, \ldots, p$ were used in the experiments (this corresponds to upper triangular blocks $A_i$ in Eq. (1)). Matrices of Type 0 have uniformly distributed random entries, Type 1 matrices have eigenvalues clustered around the machine epsilon $\varepsilon$, Type 2 matrices have eigenvalues clustered around ±1, Type 3 matrices have geometrically distributed eigenvalues, Type 4 matrices have arithmetically distributed eigenvalues, Type 5 matrices have eigenvalues whose logarithms are uniformly distributed, and Type 6 matrices have uniformly distributed random eigenvalues. Matrix types 1–6 were generated using software written by Yihua Bai.
5.1. Runtime performance
We evaluated the runtime performance of the different strategies on an Intel i7-860 CPU. Comparisons are provided with the most competitive state-of-the-art tridiagonalization-based routines from Lapack [18]: the routine dsbevd reduces $M_p$ to tridiagonal form, then applies the tridiagonal divide-and-conquer method for computing eigenvalues and eigenvectors, and finally transforms back the eigenvectors. The routine dsbevr also reduces $M_p$ to tridiagonal form, then computes eigenvalues and eigenvectors based on relatively robust representations using the routine dstemr, and finally transforms back the eigenvectors. The routines dsyevd and dsyevr operate analogously, but they treat $M_p$ as a full matrix, thus not exploiting the band structure.
In general, the runtime for the different strategies compared depends on the type of the test matrix. An exception is the strategy minsca, where the runtime is independent of the matrix type. Our experiments showed that the Lapack routines were fastest for matrices of Type 2. Consequently, our runtime comparisons focus on this matrix type, which is in this sense the “most difficult” case in terms of runtime performance for our new approaches.
Fig. 2 shows the runtimes for computing all eigenvectors for various matrix sizes $n$ with a fixed block size $b$. The eigenvalues required in the approaches based on twisted block factorizations were computed using the routine LAPACK/dsbevd in the "eigenvalues only" mode, and the sum of the times is shown in Fig. 2 (denoted as "minxxxx + dsbevd" in the legend). Fig. 2 clearly illustrates that (i) exploiting the band structure is crucial for good performance, (ii) all methods based on twisted block factorizations are asymptotically competitive with the state-of-the-art tridiagonalization-based methods, and (iii) the strategy minsca is the clear winner with high speedups especially for large problems, followed by the strategy minsvd1.
Fig. 2. Runtime comparison for fixed block size $b$.
Fig. 3 compares the same methods for fixed problem size $n$ and varying block sizes $b$. As expected, the performance benefits of the methods based on twisted block factorizations diminish for increasing block sizes. Nevertheless, in particular the strategy minsca and to some extent also the strategy minsvd1 remain very competitive even for larger bandwidths. For small $b$, all methods based on twisted block factorizations outperform the classical tridiagonalization-based approaches.
Fig. 3. Runtime comparison for increasing block size $b$.
5.2. Numerical accuracy
Table 1 summarizes experimental data about the relative residuals

$$r_i := \frac{\|M_p \hat v_i - \hat\lambda_i \hat v_i\|_2}{\|M_p\|_2}$$

and about the eigenvector orthogonality

$$o_i := \max_{j \neq i} |\hat v_i^\top \hat v_j|$$

resulting from the five different algorithmic strategies after one step of inverse iteration, as percentages of computed eigenpairs for which $r_i$ and $o_i$, respectively, do not exceed a fixed accuracy threshold. We can see that all four strategies based on twisted block factorizations yield mostly very good residuals and perform clearly better than the random strategy. As expected, producing orthogonal eigenvectors within a single step of inverse iteration is a very difficult task, in particular when eigenvalues are strongly clustered as is the case for many of the test matrices, particularly strongly in matrix types 1 and 2. Nevertheless, also in terms of eigenvector orthogonality, the strategies based on twisted block factorizations clearly outperform the random strategy. We also would like to emphasize that the minsca strategy, which was by far the fastest, is also among the winners in terms of numerical accuracy in most cases.
Table 1.
Percentage of computed eigenpairs with a relative residual $r_i$ and a worst eigenvector orthogonality $o_i$, respectively, not exceeding a fixed accuracy threshold for the different matrix types (fixed matrix size $n$ and block size $b$). "mx" stands for the strategy minsvdx, "ms" for the strategy minsca, and "r" for the random strategy.

| Matrix type | $r_i$ below threshold (%) ||||| $o_i$ below threshold (%) |||||
|---|---|---|---|---|---|---|---|---|---|---|
| | m0 | m1 | m2 | ms | r | m0 | m1 | m2 | ms | r |
| 0 | 100.0 | 100.0 | 99.9 | 100.0 | 88.7 | 45.4 | 39.3 | 7.0 | 47.7 | 0.6 |
| 1 | 100.0 | 100.0 | 97.7 | 100.0 | 99.9 | 0.2 | 0.2 | 0.0 | 0.1 | 0.2 |
| 2 | 100.0 | 100.0 | 16.8 | 100.0 | 1.6 | 1.6 | 1.6 | 0.0 | 0.1 | 1.6 |
| 3 | 99.6 | 99.7 | 99.8 | 99.9 | 98.2 | 18.3 | 21.8 | 29.4 | 12.6 | 0.1 |
| 4 | 99.7 | 99.7 | 99.9 | 100.0 | 95.9 | 67.9 | 62.9 | 72.9 | 73.9 | 0.5 |
| 5 | 92.0 | 92.0 | 92.1 | 84.5 | 90.8 | 38.1 | 35.8 | 50.2 | 43.4 | 1.5 |
| 6 | 99.2 | 99.4 | 99.6 | 99.9 | 72.5 | 85.3 | 85.7 | 91.4 | 92.6 | 0.2 |
| Avg | 98.6 | 98.7 | 86.5 | 97.8 | 78.2 | 36.7 | 35.3 | 35.8 | 38.6 | 0.7 |
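For reference, the following sketch computes the two percentages reported in Table 1 from a set of computed eigenpairs; the thresholds and the spectral-norm scaling of the residual are parameters/assumptions here, matching the definitions given above:

```python
import numpy as np

def accuracy_percentages(M, lams, V, tol_res, tol_orth):
    """Percentage of eigenpairs (columns of V, eigenvalues lams) with
    relative residual r_i <= tol_res and worst non-orthogonality
    o_i <= tol_orth."""
    res = np.linalg.norm(M @ V - V * lams, axis=0) / np.linalg.norm(M, 2)
    gram = np.abs(V.T @ V - np.eye(V.shape[1]))   # |v_i^T v_j| for i != j
    worst = gram.max(axis=1)
    return 100.0 * (res <= tol_res).mean(), 100.0 * (worst <= tol_orth).mean()
```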
6. Conclusions and future work
Several new algorithmic variants for computing eigenvectors of symmetric block tridiagonal matrices based on twisted block factorizations have been analytically motivated, designed, implemented and evaluated experimentally. It has been shown that for very large problems and/or for small bandwidths, the methods proposed in this paper clearly outperform state-of-the-art tridiagonalization-based methods in terms of runtime. In terms of numerical accuracy, excellent residuals can be achieved within a single step of inverse iteration, but especially for test cases with a tightly clustered spectrum a certain loss of orthogonality in the computed eigenvectors has been observed. The computationally most efficient approximative strategy minsca is also among the winners in terms of numerical accuracy.
Due to its high performance potential, the twisted block factorization-based approach is an important and promising building block for alternatives to classical dense tridiagonalization-based eigensolvers. Ways for better handling the cases where a loss of orthogonality has been observed will be investigated in the future.
Acknowledgements
This work was partly supported by the Austrian Science Fund (FWF) under contract S10608-N13 (NFN SISE). We are grateful to Y. Bai for her tool for generating the test matrices.
Contributor Information
Gerhard König, Email: gerhard@mdy.univie.ac.at.
Michael Moldaschl, Email: a0607892@unet.univie.ac.at.
Wilfried N. Gansterer, Email: wilfried.gansterer@univie.ac.at.
References
1. Bai Y., Gansterer W.N., Ward R.C. Block tridiagonalization of effectively sparse symmetric matrices. ACM Trans. Math. Software. 2004;30:326–352.
2. Bischof C.H., Lang B., Sun X. Parallel tridiagonalization through two-step band reduction. In: Proceedings of the 1994 Scalable High-Performance Computing Conference, Washington, D.C.; 1994. pp. 23–27.
3. Bischof C.H., Lang B., Sun X. A framework for symmetric band reduction. ACM Trans. Math. Software. 2000;26:581–601.
4. Luszczek P., Ltaief H., Dongarra J. Two-stage tridiagonal reduction for dense symmetric matrices using tile algorithms on multicore architectures. Technical Report 244, LAPACK Working Note; 2011.
5. Gansterer W.N., Ward R.C., Muller R.P. An extension of the divide-and-conquer method for a class of symmetric block-tridiagonal eigenproblems. ACM Trans. Math. Software. 2002;28:45–58.
6. Gansterer W.N., Ward R.C., Muller R.P., Goddard W.A. III. Computing approximate eigenpairs of symmetric block tridiagonal matrices. SIAM J. Sci. Comput. 2003;25:65–85.
7. Parlett B.N., Dhillon I.S. Fernando's solution to Wilkinson's problem: an application of double factorization. Linear Algebra Appl. 1997;267:247–279.
8. Gansterer W.N., König G. On twisted factorizations of block tridiagonal matrices. In: Proceedings of the 10th International Conference on Computational Science 2010, Procedia Computer Science. 2010;1:279–287.
9. Dhillon I.S., Parlett B.N., Vömel C. The design and implementation of the MRRR algorithm. ACM Trans. Math. Software. 2006;32:533–560.
10. Demmel J.W., Kahan W. Accurate singular values of bidiagonal matrices. SIAM J. Sci. Stat. Comput. 1990;11:873–912.
11. Dhillon I.S., Parlett B.N. Multiple representations to compute orthogonal eigenvectors of symmetric tridiagonal matrices. Linear Algebra Appl. 2004;387:1–28.
12. Fernando K.V. On computing an eigenvector of a tridiagonal matrix. Part I: basic results. SIAM J. Matrix Anal. Appl. 1997;18:1013–1034.
13. Parlett B.N. For tridiagonals T replace T with LDLt. J. Comput. Appl. Math. 2000;123:117–130.
14. Parlett B.N., Dhillon I.S. Relatively robust representations of symmetric tridiagonals. Linear Algebra Appl. 2000;309:121–151.
15. Parlett B., Marques O. An implementation of the dqds algorithm (positive case). Linear Algebra Appl. 2000;309:217–259.
16. Vömel C., Slemons J. Twisted factorization of a banded matrix. BIT. 2009;49:433–447.
17. Ipsen I.C.F. Computing an eigenvector with inverse iteration. SIAM Rev. 1997;39:254–291.
18. Anderson E., Bai Z., Bischof C.H., Blackford S., Demmel J.W., Dongarra J.J., Du Croz J., Greenbaum A., Hammarling S., McKenney A., Sorensen D.C. Lapack Users' Guide. 3rd ed. Philadelphia, PA: SIAM Press; 1999.