Abstract
A common way of finding the poles of a meromorphic function f in a domain $\Omega$, where an explicit expression of f is unknown but f can be evaluated at any given z, is to interpolate f by a rational function $p/q$ such that $(p/q)(\gamma_i)=f(\gamma_i)$ at prescribed sample points $\{\gamma_i\}_{i=0}^{L-1}$, and then find the roots of q. This is a two-step process, and the type of the rational interpolant needs to be specified by the user. Many other algorithms for polefinding and rational interpolation (or least-squares fitting) have been proposed, but their numerical stability has remained largely unexplored. In this work we describe an algorithm with the following three features: (1) it automatically finds an appropriate type for a rational approximant, thereby allowing the user to input just the function f, (2) it finds the poles via a generalized eigenvalue problem of matrices constructed directly from the sampled values in a one-step fashion, and (3) it computes rational approximants in a numerically stable manner, in that $(p+\Delta p)(\gamma_i)=f(\gamma_i)(q+\Delta q)(\gamma_i)$ with small $\Delta p,\Delta q$ at the sample points, making it the first rational interpolation (or approximation) algorithm with guaranteed numerical stability. Our algorithm executes an implicit change of polynomial basis by the QR factorization, and allows for oversampling combined with least-squares fitting. Through experiments we illustrate the resulting accuracy and stability, which can significantly outperform existing algorithms.
Mathematics Subject Classification: 65D05 Numerical analysis, Interpolation; 65D15 Numerical analysis, Algorithms for functional approximation
Introduction
Let f be a meromorphic function in a domain $\Omega$, whose explicit expression is unknown but which can be evaluated at any set of sample points $\{\gamma_i\}_{i=0}^{L-1}\subset\Omega$. This paper investigates numerical algorithms for finding the poles of f, along with the associated problem of finding a rational approximant $p/q\approx f$ in $\Omega$. Finding the poles of a meromorphic or rational function f is required in many situations, including resolvent-based eigensolvers [4, 36, 40] and the analysis of transfer functions [30, 38].
One natural way of finding the poles is to first approximate f in $\Omega$ by a rational function $r=p/q$, then find the poles of r, i.e., the roots of q. A common approach to obtain $r\in\mathcal{R}_{m,n}$ (a rational function of type $(m,n)$, i.e., $r=p/q$ for polynomials p, q of degree at most m, n respectively) is to interpolate f at $m+n+1$ points in $\Omega$ (such as the unit disk), a code for which is available in the Chebfun command ratinterp [21, 35]. However, this is a two-step process; when the poles are of primary interest, explicitly forming r is unnecessary and can be a cause of numerical errors. Moreover, the type of the rational function is usually required as input.
In this paper we develop a polefinding algorithm ratfun that essentially involves just solving one generalized eigenvalue problem, thereby bypassing the need to form r. ratfun starts by finding an appropriate type $(m,n)$ for the rational approximant from the function values: roughly, it finds the pair $(m,n)$ with the smallest possible n (without taking m excessively large) such that $p(\gamma_i)\approx f(\gamma_i)q(\gamma_i)$ holds to working accuracy with $p\in\mathcal{P}_m$, $q\in\mathcal{P}_n$; in Sect. 3 we make this more precise. This allows the user to input just the function f to obtain the poles. The rational approximant $p/q$ can also be obtained if necessary.
Since polefinding for boils down to rootfinding for q, it is inevitable that the algorithm involves an iterative process (as opposed to processes requiring finitely many operations in exact arithmetic such as a linear system), and hence it is perhaps unsurprising that we arrive at an eigenvalue problem. Our algorithm has runtime that scales cubically with the type of the rational approximant, which is comparable to some of the state-of-the-art algorithms.
A key property of ratfun is its numerical stability. To our knowledge, no previous polefinding algorithm has been proven to be numerically stable. Numerical stability here means backward stability in the sense that $(\hat p+\Delta p)(\gamma_i)=f(\gamma_i)(\hat q+\Delta q)(\gamma_i)$ holds exactly at the sample points, where $\hat p/\hat q$ is the computed rational approximant and $\|\Delta p\|/\|\hat p\|,\ \|\Delta q\|/\|\hat q\|$ are O(u), where u is the unit roundoff (throughout we write $\epsilon=O(u)$ to mean $\epsilon\leq cu$ for a moderate constant c), and $\|\cdot\|$ is the vector norm of function values at the sample points $\{\gamma_i\}$; see Notation below. Classical algorithms such as Cauchy's [13], Jacobi's [26], Thiele's continued fractions [44, Sect. 2.2.2] and Neville-type algorithms [44, Sect. 2.2.3] are known to be of little practical use due to instability [47, Ch. 26]. The more recent Chebfun ratinterp is based on the SVD, and combined with a degree reduction technique, ratinterp is reliable in many situations. However, as we shall see, the algorithm is still unstable when a sample point lies near a pole. Once the numerical degree is determined, the way our algorithm ratfun finds the poles is mathematically equivalent to ratinterp (and some other algorithms), but it overcomes this instability by avoiding the use of the FFT and employing a diagonal scaling to attenuate the effect of an excessively large sample value $|f(\gamma_i)|$.
Another practical method is a naive SVD-based interpolation algorithm (described in Sect. 2.1), and despite its simplicity and straightforward derivation, it works surprisingly well; indeed we prove stability in the above sense for obtaining r when an appropriate diagonal scaling is employed. Nonetheless, it is still based on a two-step approach, and the detour of forming the coefficients of p, q before computing the poles incurs unnecessary inaccuracy. As is well known, in rootfinding problems the choice of the polynomial basis is critical for accurate computation [47, App. 6], as Wilkinson famously illustrated in [52]. ratfun, by contrast, bypasses the coefficients and implicitly performs an appropriate change of polynomial basis.
Also worth mentioning are polefinding algorithms based on a Hankel generalized eigenvalue problem constructed by evaluating discretized contour integrals of the form $\frac{1}{2\pi\mathrm{i}}\oint z^k f(z)\,dz$ [29, 40]. This algorithm still has a two-step flavor (computing integrals, then solving an eigenproblem), and it was recently shown [3] to be mathematically equivalent to rational interpolation followed by polefinding, as in Chebfun's ratinterp. We shall see that this algorithm is also unstable.
We shall see that ratfun is also equivalent mathematically to these two algorithms, in that our eigenproblem can be reduced to the Hankel eigenproblem by a left equivalence transformation. However, numerically they are very different, and we explain why ratfun is stable while others are not.
The contributions of this paper can be summarized as follows.
Polefinding (and rootfinding if necessary) by a one-step eigenvalue problem.
Automatic determination of a type for the rational approximant. This allows the user to obtain p, q from the input f alone. In previous algorithms the type has been a required input.
Stability analysis. We introduce a natural measure of numerical stability for rational interpolation, and establish that our algorithm ratfun is numerically stable.
Table 1 compares algorithms for polefinding and indicates the stability and complexity of each method, along with the dominant computational operation. Here, RKFIT refers to the recent algorithm by Berljafa and Güttel [8], Hankel is the algorithm based on contour integration, resulting in a generalized eigenvalue problem involving Hankel matrices (summarized in Sect. 4.8), and naive is the naive method presented in Sect. 2.1. By “avoid roots(q)” we mean the algorithm can compute the poles without forming the polynomial q and then finding its roots.
Table 1.
Comparison between polefinding algorithms
| ratinterp | RKFIT | Hankel | Naive | ratfun | |
|---|---|---|---|---|---|
| p, q stability | − | ||||
| Avoid roots(q) | |||||
| Complexity | |||||
| Main computation | SVD etc | Krylov | GEP | SVD | Rectangular GEP |
GEP generalized eigenvalue problem
This paper is organized as follows. In Sect. 2.1 we review some previous algorithms, which also leads naturally to our proposed algorithm. In Sect. 3 we discuss the process of finding an appropriate type of the rational approximation. Section 4 is the main part where our eigenvalue-based algorithm is derived, and we prove its numerical stability in Sect. 5. We present numerical experiments in Sect. 6.
Notation $\mathcal{P}_m$ is the set of polynomials of degree at most m, and $\mathcal{R}_{m,n}$ is the set of rational functions of type at most $(m,n)$. Unless mentioned otherwise, f is assumed to be meromorphic in a region $\Omega$ in the complex plane, and $(m,n)$ denotes the type of the rational approximant $p/q$ that our algorithm finds: $p/q\in\mathcal{R}_{m,n}$, that is, $\deg p\leq m$ and $\deg q\leq n$. When necessary, when f is rational, we denote by $(m^*,n^*)$ its exact type, that is, $f=p/q$ where p, q are coprime polynomials of degree $m^*,n^*$, respectively. We define $a=(a_0,\dots,a_m)^T$ and $b=(b_0,\dots,b_n)^T$ to be the vectors of their coefficients such that $p(z)=\sum_{j=0}^{m}a_j\phi_j(z)$ and $q(z)=\sum_{j=0}^{n}b_j\phi_j(z)$, in which $\{\phi_j\}$ is a polynomial basis, which we take to be the monomials $\phi_j(z)=z^j$ unless otherwise mentioned. When other bases are taken we state the choice explicitly. L is the number of sample points, denoted by $\{\gamma_i\}_{i=0}^{L-1}$, which we assume to be distinct points in $\Omega$. $F=\operatorname{diag}(f(\gamma_0),\dots,f(\gamma_{L-1}))$ is the diagonal matrix of function values at the sample points. We also let $\Gamma=\operatorname{diag}(\gamma_0,\dots,\gamma_{L-1})$. $\|g\|$ denotes a norm of a function g, defined via the vector of function values at the sample points $\{\gamma_i\}$. Computed approximants wear a hat, so for example $\hat\lambda$ is a computed pole. V is the Vandermonde matrix generated from the sample points, with (i, j) element $\phi_{j-1}(\gamma_{i-1})=\gamma_{i-1}^{\,j-1}$:
$$V=\begin{bmatrix}1&\gamma_0&\cdots&\gamma_0^{L-1}\\ 1&\gamma_1&\cdots&\gamma_1^{L-1}\\ \vdots&\vdots&&\vdots\\ 1&\gamma_{L-1}&\cdots&\gamma_{L-1}^{L-1}\end{bmatrix}.\tag{1.1}$$
The Vandermonde matrix and its inverse play the important role of mapping between coefficient space and value space. When a non-monomial basis is used, $V_{ij}=\phi_{j-1}(\gamma_{i-1})$. We denote by $V_i$ the matrix of the first i columns of V. u denotes the unit roundoff, $u\approx1.1\times10^{-16}$ in IEEE double precision arithmetic. We write $x=O(u)$ to mean $|x|\leq cu$ for a moderate constant c.
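For concreteness, here is a small NumPy sketch of the Vandermonde matrix (1.1) in the monomial basis; the helper name and the roots-of-unity example are ours, used only for illustration in the sketches that follow.

```python
import numpy as np

# Hypothetical helper (name ours): the L x k Vandermonde block V_k of (1.1)
# in the monomial basis, with (i, j) entry gamma_i ** j.
def vandermonde(gam, k):
    return np.vander(np.asarray(gam, dtype=complex), N=k, increasing=True)

gam = np.exp(2j * np.pi * np.arange(8) / 8)         # 8th roots of unity
V = vandermonde(gam, 8)
print(np.allclose(V.conj().T @ V, 8 * np.eye(8)))   # V / sqrt(L) is unitary at roots of unity
```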
Existing methods for rational interpolation and least-squares fitting
Rational interpolation is a classical problem in numerical analysis and many algorithms have been proposed, such as those by Jacobi [26], Neville and one based on continued fractions [44, Sect. 2.2]. Here we review those that can be considered among the most practical and stable. For more information on algorithms that are not explained, we refer to [12, Ch. 5], [44, Sect. 2.2] and [37, p. 59].
We first clarify what is meant by rational interpolation and least-squares fitting.
Rational interpolation With sample points $\gamma_i$ for $i=0,1,\dots,m+n$ (so $L=m+n+1$), the goal of rational interpolation is to find polynomials $p\in\mathcal{P}_m$ and $q\in\mathcal{P}_n$ satisfying the set of equations
$$\frac{p(\gamma_i)}{q(\gamma_i)}=f(\gamma_i),\qquad i=0,1,\dots,m+n.\tag{2.1}$$
However, as is well known [12, Ch. 5], [47, Ch. 26], (2.1) does not always have a solution $(p,q)$. To avoid difficulties associated with nonexistence, a numerical algorithm often starts with the linearized equation
$$p(\gamma_i)=f(\gamma_i)\,q(\gamma_i),\qquad i=0,1,\dots,m+n,\tag{2.2}$$
which always has solution(s), which all correspond to the same rational function $p/q$. Most methods discussed in this paper work with (2.2).
Rational least-squares fitting When we sample f at more than $m+n+1$ sample points, $L>m+n+1$, (2.2) has more equations than unknowns, and a natural approach is to find p, q such that
$$p(\gamma_i)\approx f(\gamma_i)\,q(\gamma_i),\qquad i=0,1,\dots,L-1.\tag{2.3}$$
This leads to a least-squares problem. Least-squares fitting is used throughout scientific computing, and it often leads to more robust algorithms than interpolation. For example, when function values contain random errors, polynomial least-squares fitting has the benefit of reducing the variance in the outcome as compared with interpolation [14, Sect. 4.5.5].
One main message of this paper is that the precise formulation of the least-squares problem (2.3) is crucial for numerical stability. For example, the minimizers of $\sum_i|p(\gamma_i)-f(\gamma_i)q(\gamma_i)|^2$ and $\sum_i|f(\gamma_i)-p(\gamma_i)/q(\gamma_i)|^2$ are clearly different. As we describe below, our method works with $\sum_i d_i^2\,|p(\gamma_i)-f(\gamma_i)q(\gamma_i)|^2$ for an $L\times L$ diagonal matrix $D=\operatorname{diag}(d_0,\dots,d_{L-1})$ chosen so that
$$d_i=\frac{1}{\max(|f(\gamma_i)|,\,\beta)},\qquad i=0,1,\dots,L-1.\tag{2.4}$$
Here $\beta$ is the median value of $|f(\gamma_0)|,\dots,|f(\gamma_{L-1})|$. This choice is crucial for establishing numerical stability.
Naive method
Perhaps the most straightforward, "naive" method for rational interpolation is to find the coefficients a, b of $p(z)=\sum_{j=0}^{m}a_jz^j$ and $q(z)=\sum_{j=0}^{n}b_jz^j$ by writing out (2.2) as a matrix equation
$$V_{m+1}\,a=F\,V_{n+1}\,b,\tag{2.5}$$
where $F=\operatorname{diag}(f(\gamma_0),\dots,f(\gamma_{L-1}))$ and $V_{m+1},V_{n+1}$ are the first $m+1$ and $n+1$ columns of V, the Vandermonde matrix of size $L\times L$ as in (1.1). To obtain (2.5), note that the (partial) Vandermonde matrices map the coefficients to value space (i.e., $V_{m+1}a=(p(\gamma_0),\dots,p(\gamma_{L-1}))^T$), in which "multiplication by f" corresponds simply to "multiplication by F". Equation (2.5) is thus a matrix formulation of rational interpolation (2.2) in value space.
Solving (2.5) amounts to finding a null vector $\begin{bmatrix}a\\ b\end{bmatrix}$ of the matrix
$$C:=\big[\,V_{m+1}\ \ {-F\,V_{n+1}}\,\big].\tag{2.6}$$
Sometimes the matrix C has null space of dimension larger than 1; in this case all the null vectors of C give the same rational function p / q [12, Sect. V.3.A].
To find the poles of f once $\hat q$ is obtained, we find the roots of the polynomial $\hat q$, for example via the companion linearization [22]. When a non-monomial polynomial basis is chosen, other linearizations such as comrade and confederate are available [5, 22].
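As an illustration of this rootfinding step, the following sketch builds the companion matrix of q in the monomial basis explicitly (numpy.roots performs an equivalent computation); the function name is ours.

```python
import numpy as np

# Minimal sketch: roots of q(z) = b0 + b1*z + ... + bn*z^n via the companion
# matrix of its monic normalization (monomial basis); numpy.roots does the
# same internally. The helper name is ours.
def roots_via_companion(b):
    b = np.asarray(b, dtype=complex)
    b = b / b[-1]                      # make q monic
    n = len(b) - 1
    C = np.zeros((n, n), dtype=complex)
    C[1:, :-1] = np.eye(n - 1)         # ones on the subdiagonal
    C[:, -1] = -b[:n]                  # last column: -b0, ..., -b_{n-1}
    return np.linalg.eigvals(C)

print(np.sort(roots_via_companion([2, -3, 1])))   # q(z) = (z-1)(z-2): roots 1, 2
```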
The above process (2.6) extends easily to the oversampled case, in which $L>m+n+1$ and the matrix C above is of size $L\times(m+n+2)$. In this case the matrix in (2.6) has at least as many rows as columns, and does not necessarily have a null vector. Then the task is to perform a least-squares fitting, which we do by finding the right singular vector corresponding to the smallest singular value of the matrix C, which for later use we state as an optimization problem:
$$\min_{\|[a;\,b]\|_2=1}\ \left\|\,C\begin{bmatrix}a\\ b\end{bmatrix}\right\|_2.\tag{2.7}$$
Here the normalization $\|[a;b]\|_2=1$ is imposed to rule out the trivial solution $a=0,\ b=0$.
We shall consider a scaled formulation of (2.7), which left-multiplies the matrix in the objective function by a suitably chosen diagonal matrix D:
$$\min_{\|[a;\,b]\|_2=1}\ \left\|\,DC\begin{bmatrix}a\\ b\end{bmatrix}\right\|_2.\tag{2.8}$$
Note that (2.7) and (2.8) have the same solution when the optimal objective value is zero, but otherwise they differ; in the oversampled case a nonzero optimal value is the usual situation. Numerically, the two formulations behave very differently even when $L=m+n+1$.
The dominant cost is the SVD (more precisely, computing the right singular vector corresponding to the smallest singular value) of C or the scaled matrix DC, requiring $O(L(m+n)^2)$ operations.
The naive method (2.5) is mentioned for example in [10], but seems to be rarely used in practice, and we are unaware of previous work that explicitly investigates the least-squares formulation (2.7) or its scaled variant (2.8). Nonetheless, in Sect. 5 we shall show that the scaled formulation (2.8) is numerically stable for rational interpolation (i.e., computing p, q) for a suitable choice of D. In this paper we refer to (2.8) as the scaled naive method (or just naive method).
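A minimal NumPy sketch of the scaled naive method follows. The specific scaling $d_i=1/\max(\beta,|f(\gamma_i)|)$ with $\beta$ the median of $|f(\gamma_i)|$ follows our reconstruction of (2.4) and is an assumption, as is the test function.

```python
import numpy as np

def naive_poles(fvals, gam, m, n):
    """Sketch of the scaled naive method (2.8): type-(m, n) linearized fit from
    samples fvals = f(gam), then poles = roots(q). The median-based scaling is
    our reconstruction of (2.4)."""
    f = np.asarray(fvals, dtype=complex)
    g = np.asarray(gam, dtype=complex)
    V = np.vander(g, N=max(m, n) + 1, increasing=True)
    d = 1.0 / np.maximum(np.median(np.abs(f)), np.abs(f))    # diagonal of D
    C = np.hstack([V[:, :m + 1], -f[:, None] * V[:, :n + 1]])
    x = np.linalg.svd(d[:, None] * C)[2][-1].conj()          # smallest right singular vector
    a, b = x[:m + 1], x[m + 1:]                               # coefficients of p and q
    return np.roots(b[::-1]), (a, b)                          # np.roots expects descending order

gam = np.exp(2j * np.pi * np.arange(6) / 6)                   # L = m + n + 1 = 6
f = lambda z: 1.0 / ((z - 0.5) * (z + 0.3j)) + z              # rational, type (3, 2)
poles, _ = naive_poles(f(gam), gam, 3, 2)
print(np.sort_complex(poles))                                 # ~ [-0.3j, 0.5]
```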
Another method that relies on finding a null vector of a matrix is described in [41], whose matrix elements are defined via the divided differences. Analyzing stability for this method appears to be complicated and is an open problem.
Chebfun’s ratinterp
Chebfun [18] is a MATLAB package for working with functions based primarily on polynomial interpolation, but it also provides basic routines for rational functions. In particular, the ratinterp command runs a rational interpolation or least-squares fitting algorithm for the linearized equation (2.3), as outlined below.
We start again with the matrix equation in the naive method (2.6), which we rewrite as $V_{m+1}a=FV_{n+1}b$. Expanding $V_{m+1}$ to the full Vandermonde matrix V, the equation becomes
$$V\begin{bmatrix}a\\ 0\end{bmatrix}=F\,V_{n+1}\,b.\tag{2.9}$$
Now when the sample points are the Lth roots of unity $\gamma_j=\exp(2\pi\mathrm{i}j/L)$ for $j=0,\dots,L-1$, and we use the monomial basis $\phi_j(z)=z^j$, we can use the FFT to efficiently multiply by V or $V^*$ ($V^*$ denotes the Hermitian conjugate), and left-multiplying (2.9) by $\frac1LV^*$ gives
$$\begin{bmatrix}a\\ 0\end{bmatrix}=\frac1L\,V^*F\,V_{n+1}\,b.\tag{2.10}$$
The multiplication by $\frac1LV^*$ brings the equation back to coefficient space, and so unlike the naive method (2.5) given in value space, (2.10) is a formulation of rational interpolation in coefficient space. Note that the matrix $\frac1LV^*FV_{n+1}$ can be formed in $O(L(n+1)\log L)$ operations using the FFT. An analogous result holds for Chebyshev points using the Chebyshev polynomial basis [6, 46, Ch. 8].
By (2.10), b is a null vector of the bottom-left part of $\frac1LV^*FV_{n+1}$, which has one more column than rows in the interpolation case $L=m+n+1$. Then the task is to find b such that
$$\frac1L\,\hat V_{L-m-1}^{\,*}\,F\,V_{n+1}\,b=0,\tag{2.11}$$
where $\hat V_{L-m-1}$ denotes the last $L-m-1$ columns of V (as before, $V_{n+1}$ is the first $n+1$ columns).
Again, in the oversampled case a least-squares fitting can be done by finding the smallest singular value and its right singular vector of the matrix .
As in the naive method, ratinterp finds the poles by finding the roots of q via the eigenvalues of the companion (when sampled at roots of unity) or colleague (Chebyshev points) matrices.
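The following sketch carries out the coefficient-space computation (2.10)–(2.11) with explicit matrices in place of the FFT; it is our simplified reconstruction, not Chebfun's ratinterp implementation, and the test function is ours.

```python
import numpy as np

def ratinterp_like(fvals, gam, m, n):
    """Sketch of the coefficient-space formulation (2.10)-(2.11) at the L-th
    roots of unity: q's coefficients are a null vector of the bottom L-m-1 rows
    of (1/L) V^* F V_{n+1}, and p's coefficients come from the top m+1 rows.
    Explicit matrices are used instead of the FFT."""
    g = np.asarray(gam, dtype=complex)
    f = np.asarray(fvals, dtype=complex)
    L = len(g)
    V = np.vander(g, N=L, increasing=True)
    Fhat = V.conj().T @ (f[:, None] * V[:, :n + 1]) / L
    b = np.linalg.svd(Fhat[m + 1:, :])[2][-1].conj()   # null vector of the bottom block
    a = Fhat[:m + 1, :] @ b
    return np.roots(b[::-1]), (a, b)

gam = np.exp(2j * np.pi * np.arange(6) / 6)
f = lambda z: 1.0 / ((z - 0.5) * (z + 0.3j)) + z        # type (3, 2)
poles, _ = ratinterp_like(f(gam), gam, 3, 2)
print(np.sort_complex(poles))                            # ~ [-0.3j, 0.5]
```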
RKFIT
The recent work by Berljafa and Güttel [7, 8] introduces RKFIT, a toolbox for working with matrices and rational functions based on rational Krylov decompositions. Given matrices $A,F\in\mathbb{C}^{N\times N}$ and a vector $\mathbf{b}\in\mathbb{C}^{N}$, RKFIT is designed to find a rational matrix function r(A) such that $r(A)\mathbf{b}\approx F\mathbf{b}$ by solving
| 2.12 |
where the elementwise weight matrix in (2.12) can be specified by the user. The objective function in (2.12) is called the absolute misfit in [8]. In the special case where $A=\Gamma=\operatorname{diag}(\gamma_0,\dots,\gamma_{L-1})$, $\mathbf{b}=(1,\dots,1)^T$ and $F=\operatorname{diag}(f(\gamma_0),\dots,f(\gamma_{L-1}))$, RKFIT seeks to solve the optimization problem
$$\min_{p\in\mathcal{P}_m,\ q\in\mathcal{P}_n}\ \sum_{i=0}^{L-1}\left|f(\gamma_i)-\frac{p(\gamma_i)}{q(\gamma_i)}\right|^2.\tag{2.13}$$
RKFIT solves (2.13) by an iterative process: starting with an initial guess for poles (e.g. ) that determines a temporary , form a rational Krylov decomposition and solve (2.13) over via computing an SVD. Using the obtained solution, RKFIT then updates the pole estimates and , then repeats the process until convergence is achieved. See [8] for details, which shows RKFIT can deal with more general problems, for example with multiple vectors b and matrices F.
Note that (2.13) has the flavor of dealing with the original rational approximation problem (2.1) rather than the linearized version (2.2). We observe, nonetheless, that (2.13) becomes very close to (2.8) (the same except for the normalization) if we take $D=\operatorname{diag}(1/|q(\gamma_0)|,\dots,1/|q(\gamma_{L-1})|)$. As we discuss in Sect. 5, the choice of D (and hence of the corresponding weight in (2.12)) is crucial for numerical stability. Indeed, RKFIT is not stable with the default parameters when used for scalar rational approximation, but the user can input an appropriate D (which depends on f) to achieve stability.
Automatic type determination via oversampling
A significant feature of Chebfun’s polynomial approximation process for a continuous function f is that the numerical degree can be obtained automatically by oversampling. This allows the user to obtain the polynomial approximant by taking just the function f as input, without prior information on the (numerical) degree of f.
Specifically, when the user inputs chebfun(f) for a function handle f, an adaptive process is executed to find the appropriate degree: Chebfun first samples f at $2^s+1$ Chebyshev points for a modest integer s, examines the leading Chebyshev coefficients of the interpolant, and if they have not decayed sufficiently, increments s by 1 to sample at roughly twice as many points, repeating until the leading Chebyshev coefficients decay to O(u). For details see [2]. We emphasize the important role that oversampling plays for determining the degree; the coefficient decay is observed only after f is sampled at more points than necessary to obtain the polynomial interpolant.
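A simplified sketch of this doubling strategy follows (not Chebfun's actual implementation; the stopping rule, constants and use of chebfit are our simplifications).

```python
import numpy as np
from numpy.polynomial import chebyshev as Cheb

def poly_degree_detect(f, tol=1e-13, smax=16):
    """Simplified sketch of adaptive degree detection: sample f at 2**s + 1
    Chebyshev points, compute the interpolant's Chebyshev coefficients, and
    stop once the highest-degree coefficients have decayed below tol."""
    for s in range(4, smax):
        N = 2 ** s
        x = np.cos(np.pi * np.arange(N + 1) / N)     # Chebyshev points on [-1, 1]
        c = Cheb.chebfit(x, f(x), N)                  # degree-N interpolant
        scale = np.max(np.abs(c))
        if np.all(np.abs(c[-3:]) < tol * scale):      # crude decay test
            return int(np.max(np.nonzero(np.abs(c) > tol * scale)))
    return N                                          # f not resolved up to degree 2**(smax-1)

print(poly_degree_detect(lambda x: np.exp(x) * np.sin(5 * x)))   # a modest numerical degree
```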
For rational interpolation or approximation, we argue that it is possible to determine an appropriate type for a rational approximant just as in the polynomial case by oversampling, although the process is not simply to look at coefficients but is instead based on the SVD of a certain matrix. Related studies exist: Antoulas and Anderson [1] find a minimum-degree interpolant in the barycentric representation by examining a so-called Löwner matrix, given a set of sample points. Similar techniques have been used in Chebfun's ratinterp [21] and padeapprox [20], and in RKFIT [8], which are designed for removing spurious root-pole pairs, rather than for finding the type of a rational approximant.
Type determination by oversampling and examining singular values
Suppose that we sample a rational function f at sufficiently many points L, so that L is larger than both $m+n+1$ and $m^*+n^*+1$. We initially take $(m,n)$ as tentative upper bounds for the degrees. Then, as in the naive method (2.6), we compute the null space of C (which is square or tall, corresponding to the oversampled case). In "Appendix B" we examine the rank of the matrix C as the integers m, n vary, which shows that, assuming L is taken large enough (recalling that $(m^*,n^*)$ is the exact type of f),
- If $m<m^*$ or $n<n^*$, then
$$\dim\operatorname{null}(C)=0.\tag{3.1}$$
- If $m\geq m^*$ and $n\geq n^*$, then
$$\dim\operatorname{null}(C)=\min(m-m^*,\,n-n^*)+1.\tag{3.2}$$
(See [8, Thm. 3.1] for a similar result in the RKFIT setting.) Note how this result gives us information about the type of a rational f: by the first result, if $\dim\operatorname{null}(C)=0$, we need to take m, n larger, along with L. On the other hand, if $\dim\operatorname{null}(C)>1$, then (3.2) shows how to reduce n so that there is no redundancy: $n\leftarrow n-(\dim\operatorname{null}(C)-1)$ should give us the correct $n^*$, provided that m was set large enough. In floating-point arithmetic, if several singular values of C are negligible then we reduce n by their number minus one and repeat the process, which will eventually give us the correct $n^*$ provided that $m\geq m^*$. Once $n^*$ is determined, we can find $m^*$ as the smallest integer m such that the matrix C has a null vector. $m^*$ can be obtained also by looking at the leading coefficients of the computed p, but we have found this SVD-based approach to be more reliable. We emphasize the important role played by oversampling, which is necessary for (3.1) and (3.2) to hold.
The above process would find the exact type in exact arithmetic if f is rational. In practice, f may not be rational, and we compute the null space dimension numerically as the number of singular values that are smaller than a tolerance, to find a "numerical type" of f, which is the type of a rational function p/q such that $p/q\approx f$ to within the tolerance in $\Omega$. It is worth noting that "numerical type" is an ambiguous notion: for example, (1) approximants of two different types may be equally good as approximants to f in the domain $\Omega$, and (2) if f is analytic in $\Omega$, polynomials would suffice if the degree is taken large enough, but rational functions give much better approximants if singularities lie near $\Omega$, see [3, Sect. 6]. (1) suggests that the "smallest" type is not uniquely defined without further restriction. A natural approach is to find an approximant with the smallest possible n (since we do not want unnecessary poles), but (2) suggests that this may lead to an approximant p/q of excessively high numerator degree m.
Given f, we attempt to find a rational approximant with as few poles as possible, within a controlled amount of computational effort. Specifically, our Algorithm 3.1 below finds a rational approximant p / q of type with the following properties:
There exists a rational function $p/q\in\mathcal{R}_{m,n}$ such that $|p(\gamma_i)-f(\gamma_i)q(\gamma_i)|\leq\mathrm{tol}\cdot\|f\|$ at the sample points, with p, q normalized as in (2.8), and
No rational function $\tilde p/\tilde q$ with $\deg\tilde q<n$ and $\deg\tilde p\leq2m$ satisfies the analogous bound.
In other words, no rational function with lower denominator degree is a good approximant unless the numerator degree is more than doubled. In what follows, we use these defaults unless otherwise mentioned.
Numerically, in practice, we shall show in Sect. 4.2 that it is important that a preprocessing step is carried out before examining the singular values of C. Specifically, we first scale f as $f\leftarrow f/\beta$, where $\beta$ is the median of $|f(\gamma_0)|,\dots,|f(\gamma_{L-1})|$, so that the median of the scaled $|f(\gamma_i)|$ is 1, and left-multiply a diagonal matrix D so that each row of DC has roughly the same norm:
$$D=\operatorname{diag}(d_0,\dots,d_{L-1}),\qquad d_i=\frac{1}{\max(1,\,|f(\gamma_i)|)}.\tag{3.3}$$
This choice of D is the same as the one we use in the scaled naive method (2.8) for stability, to be justified in Sect. 5. Diagonal scaling has the effect of reducing the condition number (when ill-conditioning is caused by the entries having widely varying magnitudes rather than the rows being linearly dependent), and a simple scaling that makes the rows have identical norms is known to be nearly optimal [16, 50]; the scaling in (3.3) achieves this approximately.
For further stability, we orthogonalize the two block columns by the "thin" QR factorizations:1 $DV_{m+1}=Q_1R_1$ and $DFV_{n+1}=Q_2R_2$, where $Q_1\in\mathbb{C}^{L\times(m+1)}$ and $Q_2\in\mathbb{C}^{L\times(n+1)}$ have orthonormal columns. Then we define
$$\tilde C:=\big[\,Q_1\ \ {-Q_2}\,\big]\tag{3.4}$$
and determine the rational function type by the singular values of $\tilde C$. Note that (3.2) continues to hold with C replaced by $\tilde C$ in exact arithmetic.
Summarizing, Algorithm 3.1 is the pseudocode for our type determination algorithm.
In Algorithm 3.1 we increase the number of sample points L by a factor 2 until $\tilde C$ has a nontrivial (numerical) null vector. Doubling the points allows us to reuse the previously sampled values when the $\gamma_i$ are roots of unity; for the same reason, when sampling at Chebyshev points on $[-1,1]$ (this variant replaces the Lth roots of unity in step 2 by L Chebyshev points), we sample at $2^s+1$ points as in Chebfun.
We note that (3.1) and (3.2) assume that sufficiently many sample points are taken so that . If this does not hold, it is possible that although or , causing Algorithm 3.1 to wrongly conclude f is of a lower type. Fortunately, even if , it is unlikely that , as this requires that at points, where p, q together have degrees of freedom. Similarly, a tall rectangular matrix is unlikely to have nontrivial null vectors: a random rectangular matrix is full rank with probability one, and well conditioned with high probability if the aspect ratio is safely above 1 [39]. The default value was chosen to ensure is always tall rectangular.
The description of Step 3(b) is not necessarily the most efficient: we can instead take for some , if this results in . In step 4, we can use bisection to determine the smallest integer m. The worst-case cost is thus computing SVDs.
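To make the procedure concrete, here is a heavily simplified sketch of the type-determination idea (our reconstruction of Algorithm 3.1, without the doubling of L or the bisection over m; the scaling and the matrix of (3.4) follow the reconstruction above and are assumptions).

```python
import numpy as np

def scaled_orthogonalized_block(fvals, gam, m, n):
    """The matrix of (3.4) as reconstructed here: scale the rows of
    [V_{m+1}, -F V_{n+1}], then orthonormalize each block column by thin QR."""
    f = np.asarray(fvals, dtype=complex)
    g = np.asarray(gam, dtype=complex)
    V = np.vander(g, N=max(m, n) + 1, increasing=True)
    d = 1.0 / np.maximum(np.median(np.abs(f)), np.abs(f))
    Q1, _ = np.linalg.qr(d[:, None] * V[:, :m + 1])
    Q2, _ = np.linalg.qr((d * f)[:, None] * V[:, :n + 1])
    return np.hstack([Q1, -Q2])

def typefind_sketch(f, tol=1e-12, L=16):
    """Simplified type determination: oversample, shrink n by the redundancy
    seen in the negligible singular values (cf. (3.2)), then shrink m while a
    near-null vector survives. For illustration only."""
    gam = np.exp(2j * np.pi * np.arange(L) / L)
    fv = f(gam)
    m = n = L // 2 - 2                     # tentative upper bounds, matrix stays tall
    while True:
        sig = np.linalg.svd(scaled_orthogonalized_block(fv, gam, m, n), compute_uv=False)
        k = int(np.sum(sig < tol * sig[0]))
        if k == 0:
            raise ValueError("no near-null vector: sample at more points")
        if k == 1:
            break
        n -= k - 1                         # remove redundant denominator degrees
    while m > 0:
        sig = np.linalg.svd(scaled_orthogonalized_block(fv, gam, m - 1, n), compute_uv=False)
        if sig[-1] > tol * sig[0]:
            break
        m -= 1
    return m, n

print(typefind_sketch(lambda z: 1.0 / ((z - 0.5) * (z + 0.3j)) + z))   # expect (3, 2)
```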
When the evaluation of f incurs nonnegligible (relative) error, tol should be adjusted accordingly. The output indicates the error bound; a successful degree detection implies .
Mathematically in exact arithmetic, the matrix C or having null space of dimension greater than 1 indicates the presence of a spurious root-pole pair, and in fact the coefficients of p, q obtained from any null vector of C are known to result in the same rational function p / q. In finite precision arithmetic, however, this property gets lost and a numerical null vector gives a function p / q that may be far from the function f. Furthermore, the accuracy of a computed singular vector is known to be inversely proportional to the gap between the corresponding singular value and the rest [43, Ch. 5]. Besides making the solution unique, finding the smallest possible degree has the additional benefit of widening the distance between the smallest and the second smallest singular values.
Experiments with oversampling for degree determination
Here we illustrate typefind through some numerical experiments. For illustration purposes, instead of doubling as in Algorithm 3.1, we formed for each integer with , and examined the resulting output type without doubling . For convenience, below we refer to this process as typefind(f,tol,L), where the number of sample points L is an input.
When f is a rational function We first examine the simplest case where f is a rational function
| 3.5 |
where and are equispaced on the circle of radius 0.9 centered at the origin. f is a rational function of exact type . Figure 1 shows the types obtained by typefind(f,tol,L) as we increase the number of sample points L.
Fig. 1.
Types of the rational approximants found by typefind(f,tol,L) for rational function (3.5), as the number of sample points L is varied (throughout, ). The red circles indicate that typefind(f,tol,L) found that the number of sample points is insufficient. The vertical black dashed line indicates the number of sampled points taken by the automatic degree determination process typefind(f,tol); here
Observe that with 13 sample points or more, the algorithm correctly finds the type (4, 5) of the rational function f. With five sample points, however, typefind(f,tol,L) erroneously concludes that the function is of lower type; this is an artifact of the symmetry of the function (which disappears e.g. by changing the location of one pole), and illustrates the importance of oversampling. We will come back to this issue in Fig. 5.
Fig. 5.
, with 50 poles inside the unit disk. Left: numerical degrees of the rational approximants. Black dashed line indicates the number of sampled points in the degree determination process Algorithm 3.1; here . ratfun(f) returns the incorrect type (0, 2); see comment in the text. Middle: error of computed poles with ratfun(f,gam) sampling 128 points. Right: sample points, poles and roots
Algorithm 3.1 samples at points to determine the type of the rational approximant. Although 16 is larger than the smallest number of sample points to theoretically obtain the rational interpolant p / q if the degree were known, we believe this is a small price to pay for an automated degree-finding algorithm.2
When f is a meromorphic function The situation becomes more complicated when f is not rational but merely meromorphic. For example consider
| 3.6 |
We take an example again with . f can be regarded as being of numerical type where , because the exponential function can be resolved to O(u) accuracy by a degree polynomial in the unit disk. Moreover, we expect that by increasing the denominator degree one can reduce the numerator degree for the same approximation quality, so we could also approximate f in the unit disk by a type rational function where are modest integers such as 1, 2.
Figure 2 shows the numerical degrees obtained by typefind(f,tol,L), which confirms this observation. Algorithm 3.1 (i.e., typefind(f,tol)) outputs the type by sampling at points. Our polefinder ratfun (described in Sect. 4) computes nine poles, five of which approximate the correct poles to within and four of which have absolute value larger than 10. The same is true of all the types found by for ; this example suggests they are all appropriate types, illustrating the nonunique nature of the numerical type.
Fig. 2.
Type found by typefind(f,tol,L) for a meromorphic function f (3.6)
When f is an analytic function with poles near Finally, we consider the function
| 3.7 |
which is analytic in the unit disk , therefore a polynomial p exists such that for any . However, as described in [3, Sect. 6], rational functions do a much better job of approximating analytic functions with a singularity lying near , and (3.7) is such an example. Indeed, to achieve for a polynomial p, we need , whereas with rationals, is achieved for a . Figure 3 shows the types obtained by typefind(f,tol,L), which outputs the type (16, 1) for . The output would become for if we take , but typefind(f,tol) terminates doubling the sample points once with , giving type (13, 3). Again, the two extra poles are far outside the unit disk.
Fig. 3.
Types found by typefind(f,tol,L) for the function f in (3.7), analytic in the unit disk
See Fig. 7 for a function with exact poles far from the unit disk, along with other experiments in Sect. 6.
Fig. 7.
f with a pole far outside the unit disk. Left: numerical degrees of the rational approximants. ratfun(f) samples at points and returns the type (4, 5). Middle: error of computed poles . The pole that eventually gets lost by ratfun and naive corresponds to , the pole far from the sample points. Right: sample points, poles and roots
Interpretations as optimization problems
We have emphasized the role of the diagonal scaling D in the discussion in Sect. 3.1. Here we reassess its significance from the viewpoint of optimization problems. Let us consider the meaning of the smallest singular value of C in (2.6), allowing for the oversampled case. As discussed in (2.7), it has the characterization
| 3.8 |
and the solution is obtained by the corresponding right singular vector. Since we have by the definition of C, its smallest singular value is equal to the optimal value of the following optimization problem:
| 3.9 |
where . Note that the constraint in (3.9) changes depending on the choice of the polynomial basis, and the norm in the constraint differs from that of the objective function .
Recall that for stability, instead of C, we work with the scaled-orthogonalized matrix in (3.4). We claim that the smallest singular value of is equal to the optimal value of the following optimization problem:
| 3.10 |
where d(z) is a function such that is equal to the ith diagonal element of D.
To verify the claim, we express as
| 3.11 |
where the last equality comes from the orthonormality of the columns of and . From the definition of and , we have
Hence, in (3.11) is equal to the optimal value of the problem given by (3.10). We note that, if the optimal value is sufficiently small, then the optimal solutions p and q are scaled so that , because .
We can also show similarly (and more easily) for the scaled (but not orthogonalized) matrix DC in (3.3) that is equal to the optimal value of
| 3.12 |
The optimization problems (3.9), (3.10) and (3.12) differ in the following respects:
The objective function in (3.10) and (3.12) is scaled so that .
The constraint in (3.10) does not depend on the choice of the polynomial basis.
In (3.10), the objective function and constraint employ the same norm .
The diagonal scaling in the objective function is crucial for numerical stability, as we show in Sect. 5. The independence of the constraint from the polynomial basis is due to the QR factorization, which "automatically" chooses polynomial bases $\{\tilde\phi_j\}$ and $\{\tilde\psi_j\}$ for p and q respectively, for which discrete orthonormality is achieved: for p, the vectors $\big(d_i\,\tilde\phi_j(\gamma_i)\big)_{i=0}^{L-1}$ (the columns of $Q_1$) are orthonormal, so $\sum_i d_i^2\,\tilde\phi_j(\gamma_i)\overline{\tilde\phi_k(\gamma_i)}=\delta_{jk}$ (the Kronecker delta), and similarly for q, the vectors $\big(d_i f(\gamma_i)\,\tilde\psi_j(\gamma_i)\big)_{i=0}^{L-1}$ (the columns of $Q_2$) are orthonormal. Note that the two bases for p, q are different, and they depend on the function f and the sample points $\{\gamma_i\}$. Working with orthonormal matrices has numerical benefits, as we shall illustrate in Sect. 5.3. Together with the fact that the objective function and constraint are defined with respect to the same norm $\|\cdot\|$, this "scaled and QR'd" approach results in a natural and numerically stable interpolation. For these reasons, we argue that (3.10) is a natural way to formulate our problem.
Note, however, that the scaled naive method (2.8) works with (3.12), not (3.10). No QR factorization is performed in (2.8), because if one used it, the null vector of $\tilde C$ would no longer give the coefficients a, b as in (2.8). Although we could retrieve a, b by applying the inverse transformation with respect to the R factors in the QR factorizations, this leads to numerical instability when $R_1,R_2$ are ill-conditioned. In the next section we shall overcome this difficulty by formulating an algorithm that directly computes the poles, bypassing the coefficient vectors a, b. The resulting algorithm ratfun essentially works with (3.10), but is immune to the difficulty associated with the change of polynomial basis.
Polefinding via a generalized eigenvalue problem
We now describe our eigenvalue-based algorithm for finding the poles of f. Here we take as given, assumed to be obtained by Algorithm 3.1 or given as inputs.
Formulating polefinding as an eigenproblem
We consider finding the poles of $f\approx p/q$, i.e., the roots of q(z). Denote the desired poles by $\lambda_k$ for $k=1,\dots,n$.
As before we start with the linearized interpolation equation (2.2). Here we consider interpolation where $L=m+n+1$; we treat the oversampled case later in Sect. 4.3. The key idea is to make a pole $\lambda$, the sought quantity, appear explicitly in the equation to be solved. To this end we rewrite q(z) using $s(z):=q(z)/(z-\lambda)$, which is also a polynomial, as
$$q(z)=(z-\lambda)\,s(z).\tag{4.1}$$
We can then express (2.2) as
$$p(\gamma_i)=f(\gamma_i)(\gamma_i-\lambda)\,s(\gamma_i),\qquad i=0,1,\dots,L-1,\tag{4.2}$$
which is the crucial guiding equation for our algorithm. The equations (4.2) can be written as a matrix equation using the Vandermonde matrix as
$$V_{m+1}\,a=F\,(\Gamma-\lambda I)\,V_n\,c,\tag{4.3}$$
where $\Gamma=\operatorname{diag}(\gamma_0,\dots,\gamma_{L-1})$, c is the vector of coefficients of the polynomial $s\in\mathcal{P}_{n-1}$, $F=\operatorname{diag}(f(\gamma_0),\dots,f(\gamma_{L-1}))$ as before, and $V_i$ is the first i columns of the Vandermonde matrix. Just as in the naive method (2.5), we obtain (4.3) by mapping into value space using the Vandermonde matrix, then noting that in value space, "z-multiplication" is "$\Gamma$-multiplication" and "f-multiplication" is "F-multiplication". Thus (4.3) formulates rational interpolation again in value space, but now with the pole $\lambda$ appearing explicitly.
Of course, $\lambda$ is unknown in (4.3), and treating it as an unknown we arrive at the generalized eigenvalue problem
$$A\begin{bmatrix}a\\ c\end{bmatrix}=\lambda B\begin{bmatrix}a\\ c\end{bmatrix},\qquad A=\big[\,V_{m+1}\ \ {-\Gamma FV_n}\,\big],\quad B=\big[\,O\ \ {-FV_n}\,\big],\tag{4.4}$$
where O is the zero matrix of size $L\times(m+1)$.
Since the matrix B clearly has null space of dimension $m+1$, the eigenproblem (4.4) has $m+1$ eigenvalues at infinity. By construction, we expect the finite eigenvalues to contain information about the poles. The next result shows indeed that the finite eigenvalues of (4.4) are the poles of f.
Proposition 1
If f has exactly n poles counting multiplicities (i.e., $n=n^*$), then the matrix pencil $A-\lambda B$ in (4.4) is regular, and its finite eigenvalues coincide with the poles of f.
(Proof) Since has poles, f(z) has the expression
for some , where and does not coincide with any element of for , i.e., . It suffices to show that is singular if and only if is one of the roots of q. We can easily confirm the “if” part as follows. Let for a fixed integer k. Defining the coefficient vectors and such that and , we have
for , so it follows that has a nontrivial kernel, and hence, is singular for .
Next, for the “only if” part, suppose (4.4) holds for a nonzero and , where we write and . Then, it suffices to show that is one of the roots of q. Define polynomials and by and , respectively. We shall show that for some i, and that , for some nonzero scalar C. From (4.4), we have
for $i=0,1,\dots,L-1$. Multiplying both sides by q(z), we obtain
| 4.5 |
for . Since the left-hand side of (4.5) is a polynomial of degree at most and take on the value 0 at distinct points, it must be the zero polynomial, i.e., (4.5) holds for arbitrary . Hence, the polynomial is equal to the polynomial . Note that these two polynomials are not the zero polynomial since . Let be the roots of and the roots of . Since has the same roots as , we have . Since , we have . Since the number of roots of is at most , we have , so it follows that .
We have thus shown that for every and such that (4.4) holds, must be a pole of f. It hence follows that for any , the matrix is nonsingular, showing the matrix pencil is regular.
As shown in the proof, the eigenvectors of (4.4) have a special structure: the eigenvector corresponding to is
| 4.6 |
In the appendix we give further analysis of the eigenproblem, revealing the Kronecker canonical form. It shows in particular that the orders of the poles are equal to the multiplicities of the eigenvalues.
Techniques for efficient and stable solution of eigenproblem
We now discuss how to solve (4.4) in practice. We employ techniques to remove undesired eigenvalues at , and to achieve numerical stability.
Projecting out eigenvalues at infinity The generalized eigenvalue problem (4.4) has n finite eigenvalues along with $m+1$ eigenvalues at infinity. These eigenvalues at infinity can be projected out easily. Let $Q_\perp\in\mathbb{C}^{L\times(L-m-1)}$ be an orthonormal basis for the orthogonal complement of the column space of $V_{m+1}$, so that $Q_\perp^*V_{m+1}=O$. Then
$$\big(Q_\perp^*\,\Gamma FV_n\big)\,c=\lambda\,\big(Q_\perp^*\,FV_n\big)\,c\tag{4.7}$$
is an eigenvalue problem whose eigenvalues are the poles $\lambda_1,\dots,\lambda_n$ with corresponding eigenvectors c. To see this, recall (4.6) and note that (4.4) is equivalent to
$$V_{m+1}\,a=(\Gamma-\lambda I)\,FV_n\,c,\tag{4.8}$$
and so, taking the QR factorization $V_{m+1}=[Q,\ Q_\perp]\begin{bmatrix}R\\ O\end{bmatrix}$ and left-multiplying by $[Q,\ Q_\perp]^*$, we obtain
$$\begin{bmatrix}R\\ O\end{bmatrix}a=\begin{bmatrix}Q^*\\ Q_\perp^*\end{bmatrix}(\Gamma-\lambda I)\,FV_n\,c,\tag{4.9}$$
from which we can deflate the eigenvalues at infinity corresponding to the top block and solve the lower part, arriving at (4.7) with eigenvector c. Alternatively, (4.8) shows that the "residual" $(\Gamma-\lambda I)FV_nc$ lies in the column space of $V_{m+1}$, which means it is orthogonal to the columns of $Q_\perp$; (4.7) is a representation of this fact.
Diagonal scaling Generally, given an eigenvalue problem $Ax=\lambda Bx$, a well known technique for balancing elements of widely varying magnitudes is diagonal scaling.
As with the scaled naive method (2.8), we left-multiply by a diagonal matrix D and work with the pencil $(DA,DB)$, with D chosen so that each row of $D[A,\ B]$ has about the same norm. In Sect. 5 we show that this scaling makes our approach numerically stable.
Orthogonalization As alluded to at the end of Sect. 3.3, the final technique that we use for improved stability, which is inapplicable in the naive method, is orthogonalization. As in (3.4), we take the thin QR factorizations $DV_{m+1}=Q_1R_1$ and $DFV_n=Q_2R_2$, where $Q_1,Q_2$ have orthonormal columns and are of the same size as $V_{m+1},V_n$. The rationale is that numerical errors are reduced by working with orthonormal matrices. These can be computed exploiting the Vandermonde structure, as explained after (3.4).
Applying scaling and orthogonalization to (4.9), the eigenvalue problem we solve becomes
$$\big(Q_{1\perp}^{\,*}\,\Gamma\,Q_2\big)\,w=\lambda\,\big(Q_{1\perp}^{\,*}\,Q_2\big)\,w,\qquad w=R_2\,c,\tag{4.10}$$
where $Q_{1\perp}$ is an orthonormal basis for the orthogonal complement of the column space of $DV_{m+1}$. This is an $n\times n$ eigenproblem when $L=m+n+1$; its eigenvalues are precisely the sought poles.
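Putting the pieces together, a sketch of the resulting computation for the square case $L=m+n+1$ might look as follows; the scaling and the exact form of the projected pencil follow our reconstruction of (4.4)–(4.10) and are assumptions, as is the test function.

```python
import numpy as np
from scipy.linalg import qr, eigvals

def ratfun_poles_sketch(fvals, gam, m, n):
    """Sketch of the projected, scaled, orthogonalized eigenproblem as
    reconstructed in (4.4)-(4.10), square case L = m + n + 1: scale rows,
    take the orthogonal complement of the p-block to deflate the infinite
    eigenvalues, and solve the resulting n x n generalized eigenproblem."""
    g = np.asarray(gam, dtype=complex)
    f = np.asarray(fvals, dtype=complex)
    V = np.vander(g, N=max(m + 1, n), increasing=True)
    d = 1.0 / np.maximum(np.median(np.abs(f)), np.abs(f))     # diagonal scaling D
    Qfull, _ = qr(d[:, None] * V[:, :m + 1])                   # full QR of D V_{m+1}
    Qperp = Qfull[:, m + 1:]                                    # orthogonal complement
    Q2, _ = qr((d * f)[:, None] * V[:, :n], mode='economic')    # thin QR of D F V_n
    A = Qperp.conj().T @ (g[:, None] * Q2)                      # Qperp^* Gamma Q2
    B = Qperp.conj().T @ Q2                                      # Qperp^* Q2
    return eigvals(A, B)                                         # eigenvalues = poles

gam = np.exp(2j * np.pi * np.arange(6) / 6)                      # L = m + n + 1 = 6
f = lambda z: 1.0 / ((z - 0.5) * (z + 0.3j)) + z                 # type (3, 2)
print(np.sort_complex(ratfun_poles_sketch(f(gam), gam, 3, 2)))   # ~ [-0.3j, 0.5]
```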
Recall that as a consequence of this orthogonalization, the eigenvector of (4.10) goes through the change of basis with respect to . This severely affects the naive method (for which the singular vector is the sought quantity), but not our algorithm (for which the eigenvalues are sought).
Use of FFT? In the practically important cases where the sample points are at roots of unity or Chebyshev points, we can use the FFT to efficiently obtain the matrices in (4.7), as discussed in Sect. 2.2.
However, we shall not use the FFT in this work, for two reasons. First, while the FFT significantly speeds up the matrix-matrix multiplications, this is not essential to the overall algorithm, which inevitably invokes an eigensolver (or an SVD) requiring $O(L(m+n)^2)$ operations. Indeed [35] designs the algorithm to facilitate the use of the FFT, but again the saving is attenuated by the SVD step.
The second, more fundamental, reason is stability. We shall see in Sect. 5 and through numerical experiments that diagonal scaling is crucial for stability. Unfortunately, using the FFT makes diagonal scaling inapplicable.
Pole exactly at sample point When a pole happens to coincide exactly with a sample point $\gamma_i$, we have $f(\gamma_i)=\infty$ and the eigenvalue problem breaks down due to infinite elements in the matrices. However, this should be a "happy" breakdown, rather than a difficulty. In this case we can simply take $\gamma_i$ to be a computed pole, and work with the function $(z-\gamma_i)f(z)$, reducing the denominator degree by one. An alternative and equally valid approach is to replace $\gamma_i$ by a nearby sample point, and proceed as usual.
Oversampling and least-squares fitting
As with previous algorithms, it is often recommended to take advantage of oversampled values at more than $m+n+1$ points, $L>m+n+1$, and perform a least-squares fitting. This is true especially in our context, where the degree-finding process in Algorithm 3.1 has oversampled f to find the type, and it is natural to try to reuse the computed quantities.
Consider finding the poles of $f\approx p/q$ with $L>m+n+1$ sample points. We form the matrices as in the previous section (4.4), with $V_{m+1}\in\mathbb{C}^{L\times(m+1)}$, $V_n\in\mathbb{C}^{L\times n}$ and $F,\Gamma\in\mathbb{C}^{L\times L}$. We proceed as in (4.10) and apply projection, scaling, and orthogonalization to obtain matrices as in (4.10), but these matrices are now nonsquare, of size $(L-m-1)\times n$: they have more rows than columns since $L>m+n+1$. Under the assumption that f has n poles, there exists a nonzero w satisfying (4.10) if and only if $\lambda$ is one of the poles of f; this can be shown as in Proposition 1. Hence, in theory, we can compute the poles of f by solving the rectangular eigenvalue problem
| 4.11 |
However, traditional methods for generalized eigenvalue problems such as the QZ algorithm [19, Ch. 7], [31] are not applicable to (4.11) since the pencil is rectangular.
To solve the rectangular eigenvalue problem (4.11), we use the recent algorithm by Ito and Murota [25]. The idea is to find perturbations with smallest so that the pencil has eigenpairs:
| 4.12 |
The resulting algorithm computes the SVD , then solves the square generalized eigenvalue problem
| 4.13 |
This corresponds to taking and , hence . See [25] for details.
Pseudocode
Summarizing, the following is the pseudocode for our polefinding algorithm ratfun.
By default, the sample points are the roots of unity (once is specified); other choices are allowed such as Chebyshev points on . We justify the scaling in step 2 and the choice of the diagonal matrix D in Sect. 5.2.
When the domain of interest is far from the origin, it is recommended that one work with a shifted function so that the domain becomes near the origin (this affects the Vandermonde matrix, in particular its condition number).
Efficiency
The dominant costs of Algorithm 4.1 are in the QR factorizations, forming , computing the SVD (4.14) and solving the eigenproblem (4.15). These are all or less, using standard algorithms in numerical linear algebra. This is comparable in complexity with other approaches, as we summarized in Table 1.
Input/output parameters
Combined with the degree determination process Algorithm 3.1, ratfun lets us find poles and rational interpolants with the minimum input requirement: just the function f. Our algorithm ratfun (described in detail in Sect. 4) adapts to the specifications as necessary when the user inputs more information such as the location of the sample points and type of the rational approximants. Below we detail the process for three types of inputs:
Minimum input requirement: poles = ratfun(f).
Function and sample points: poles = ratfun(f,gam).
Function, sample points and degrees: poles = ratfun(f,gam,m,n).
Minimum input requirement poles = ratfun(f) When the function f is the only input the algorithm first determines the numerical type of the rational approximant by Algorithm 3.1, then runs the polefinding algorithm to be described in Sect. 4. By default, we take the sample points to be roots of unity; Chebyshev points can be chosen by invoking ratfun(f,‘c’).
Inputs are function and sample points poles = ratfun(f,gam) When the sample points are specified by gam the algorithm first runs the degree finder typefind(f,tol,L) with , and gives a warning if the number of sample points appears to be insufficient , indicated by . Regardless, the algorithm proceeds with solving the generalized eigenvalue problem to obtain approximate poles and p, q with . We note that the backward errors and have magnitudes , which is not necessarily O(tol) in this case (see Sect. 5 for details on backward errors).
Full input: function, sample points and degrees poles = ratfun(f,gam,m,n) When the degrees are further specified the algorithm directly solves the (rectangular or square when ) generalized eigenvalue problem to obtain the poles and p, q.
Outputs
The full output information is [poles,cp,cq,type,roots]=ratfun(f), in which poles are the computed poles, cp,cq are the vectors of coefficients of the polynomials p, q in the monomial basis, type is a 2-dimensional vector of the computed type, and roots are the computed roots.
We next discuss how to compute the roots and finding .
Computing the roots
One situation that Algorithm 4.1 did not deal with is when the roots of f are sought. We suggest two approaches for rootfinding, depending on whether poles are also sought or not.
Finding roots only First, when only the roots are of interest, we can invoke Algorithm 4.1 to find the poles of 1 / f. Alternatively, the roots can be computed from f by defining and starting from the guiding equation [recall (4.2)]
| 4.16 |
which, as before, can be rewritten as a generalized eigenvalue problem with . For brevity we omit the details, as the formulation is analogous to that for (4.4) and (4.11).
Finding poles and roots When both the poles and roots are required, we suggest the following. First compute the poles as in Algorithm 4.1. Then we find the roots by solving for the equation
| 4.17 |
Here is the same as above, and we form from the expression using the poles that have previously been computed. Equation (4.17) can be rearranged to , which we write in matrix form as
| 4.18 |
This is again a rectangular generalized eigenvalue problem. This has one irrelevant eigenvalue at infinity, and the problem can again be solved via an SVD. Since the matrices involved are of smaller size than (4.4) and (4.11), this process is cheaper than finding the poles of 1 / f.
Finding
To find the coefficient vectors and , we can take advantage of the eigenvector structure (4.6) to extract from any eigenvector, along with , from which we obtain via (4.1). Note that to do this we need to give up the QR factorization in step 4 of Algorithm 4.1. Equally effective and stable is to invoke the scaled naive method (2.8), which gives directly (our current code adopts this approach). A word of caution is that eigenvectors are sensitive to perturbation if (but not only if) the corresponding eigenvalues are nearly multiple.
We note that there are many other ways of representing a rational function . Since ratfun can compute the poles and roots as described above, one effective representation is to take
| 4.19 |
in which we store the constant c and the roots and poles .
Mathematical equivalence with previous algorithms: interpolation-based and Hankel eigenproblem
Here we briefly discuss the connection between our eigenvalue problem and existing ones. We shall show that the eigenproblem (4.7), when , is equivalent in exact arithmetic to the generalized eigenvalue problem of Hankel matrices derived in [29, 40], which are in turn equivalent to Chebfun’s ratinterp as shown in [3]. Essentially, both our algorithm and ratinterp find the roots of q such that interpolates f at the sample points.
We shall show that the eigenvalues and right eigenvectors of (4.7) and those of the Hankel matrix pencil are the same. Before proving this claim, we briefly review the Hankel eigenproblem approach, which originates in work of Delves and Lyness [15] and Kravanja et al. [27, 28]; see also [3]. In this algorithm, one computes the discretized moments
$$\mu_k\approx\frac{1}{2\pi\mathrm{i}}\oint_{|z|=1}z^k f(z)\,dz,\qquad k=0,1,\dots,2n-1,$$
and then solves the generalized eigenvalue problem with Hankel matrices
$$H_<\,x=\lambda\,H\,x,\qquad H=\big[\mu_{i+j}\big]_{i,j=0}^{n-1},\quad H_<=\big[\mu_{i+j+1}\big]_{i,j=0}^{n-1}.\tag{4.20}$$
We write this as for simplicity. The pencil can be written using a contour integral as
| 4.21 |
If f is meromorphic in the unit disk and has poles , then the poles are eigenvalues of . Indeed, defining and letting be its coefficient vector as in (4.3) we obtain
| 4.22 |
since is analytic in the unit disk.
The contour integral (4.22) needs to be discretized in a practical computation. If we use the standard trapezoidal rule evaluating at roots of unity for to approximate , the computed pencil becomes
| 4.23 |
where and as before. Hence if f is a rational function , we have
The ith element of the final vector is for , which is equal to the evaluation of . Now the -point trapezoidal rule is exact if the integrand is polynomial of degree or below [49, Cor. 2.3]. Therefore, if then . Thus also for the discretized pencil , is again an eigenvector if with with .
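For reference, here is a self-contained sketch of this contour-integral polefinder (discretized moments plus the Hankel generalized eigenproblem (4.20)); the moment formula uses the trapezoidal rule on the unit circle as described above, and the test function is ours.

```python
import numpy as np
from scipy.linalg import eigvals

def hankel_poles(f, n, L=64):
    """Sketch of the contour-integral (Hankel) polefinder: discretized moments
    mu_k ~ (1/(2*pi*i)) * integral of z^k f(z) dz over the unit circle via the
    L-point trapezoidal rule, then the generalized eigenproblem (4.20)."""
    gam = np.exp(2j * np.pi * np.arange(L) / L)
    fv = f(gam)
    mu = np.array([np.mean(gam ** (k + 1) * fv) for k in range(2 * n)])
    H  = np.array([[mu[i + j]     for j in range(n)] for i in range(n)])
    Hs = np.array([[mu[i + j + 1] for j in range(n)] for i in range(n)])
    return eigvals(Hs, H)

print(np.sort_complex(hankel_poles(lambda z: 1.0 / ((z - 0.5) * (z + 0.3j)) + z, 2)))
# ~ [-0.3j, 0.5]
```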
This shows that the eigenproblems and in (4.9) have the same eigenvalues and eigenvectors, thus are equivalent, i.e., there exists a nonsingular matrix W such that and .
Despite the mathematical equivalence, we reiterate that the numerical behavior of the algorithms is vastly different. Crucially, the left-multiplication by in (4.23) mixes up the magnitudes of , resulting in the instability due to near-pole sampling. This will be made precise in the next section.
Numerical stability
A crucial aspect of any numerical algorithm is stability [23]. It is common, and often inevitable for problems that are potentially ill-conditioned, to investigate backward stability (as opposed to analyzing the forward error in the outcome itself), in which we ask whether a computed output is guaranteed to be the exact solution of a slightly perturbed input.
The great success of polynomial interpolation of a continuous function f at roots of unity (for approximation in the unit disk) or Chebyshev points (on an interval) is due to its combined efficiency and stability: a degree-n polynomial interpolation can be done in $O(n\log n)$ operations employing the Chebyshev polynomials and the FFT [6]. Moreover, since the scaled FFT matrix has condition number 1, the process is numerically stable, and we obtain an interpolant $\hat p$ satisfying
$$|\hat p(\gamma_i)-f(\gamma_i)|=O(u)\,\|f\|\tag{5.1}$$
at every sample point $\gamma_i$; this holds regardless of f. Suppose further that the interpolation is successful (with smooth f, good points $\{\gamma_i\}$ and basis $\{\phi_j\}$) in that $\|f-\hat p\|_\Omega=O(u\|f\|_\Omega)$, where $\|g\|_\Omega:=\max_{z\in\Omega}|g(z)|$ for a domain $\Omega$. Then with a stable rootfinding algorithm for $\hat p$, one obtains stability in the computed roots: $(\hat p+\Delta p)(\hat r_i)=0$ with $\|\Delta p\|=O(u\|f\|)$. This shows the $\hat r_i$ are the exact roots of a slightly perturbed input f. Rootfinding algorithms with proven stability include the companion [51] (for monomials) and colleague linearizations [32] (for Chebyshev).
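The polynomial analogue in code (our example; numpy's chebroots works with a colleague-type companion matrix, so both the interpolation and the rootfinding step are numerically benign):

```python
import numpy as np
from numpy.polynomial import chebyshev as Cheb

# Interpolate at Chebyshev points, then find roots in the Chebyshev basis.
n = 40
x = np.cos(np.pi * np.arange(n + 1) / n)               # Chebyshev points on [-1, 1]
f = lambda t: np.sin(8 * t) * np.exp(t)
c = Cheb.chebfit(x, f(x), n)                            # degree-n interpolant
r = Cheb.chebroots(c)                                   # all roots of the interpolant
true_zeros = np.arange(-2, 3) * np.pi / 8               # zeros of sin(8t) in [-1, 1]
print(max(np.min(np.abs(r - z)) for z in true_zeros))   # each zero recovered to ~1e-14
```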
For rational interpolation and polefinding, to our knowledge, stability in the context of polefinding and rational interpolation has been rarely discussed; [35], which connects the inaccuracies with the presence of ill-conditioned matrices, is one of the few, but their argument does not treat the backward stability of the rational interpolants . Here we attempt to make a step forward and analyze backward stability for rational interpolation algorithms.
First we need to elucidate our goal. The presence of poles complicates the situation because, for example is infinity unless we compute the poles exactly, and this is true even for the linearized version . For this reason, sometimes rational interpolation is thought to be inherently ill-posed for a stable computation.
There is a natural workaround here: we allow for perturbations in both the numerator and denominator polynomials, $\hat p+\Delta p$ and $\hat q+\Delta q$. We then analyze whether the rational interpolation is satisfied with small backward errors, that is,
$$(\hat p+\Delta p)(\gamma_i)=f(\gamma_i)\,(\hat q+\Delta q)(\gamma_i),\qquad \frac{\|\Delta p\|}{\|\hat p\|}=O(u),\quad \frac{\|\Delta q\|}{\|\hat q\|}=O(u),\tag{5.2}$$
for $i=0,1,\dots,L-1$. As before, we work with the linearized formulation.
Definition 1
Let f be a meromorphic function. Given sample points $\{\gamma_i\}_{i=0}^{L-1}$ and computed polynomials $\hat p\in\mathcal{P}_m$, $\hat q\in\mathcal{P}_n$, we say that $\hat p/\hat q$ is a stable rational interpolant of f if there exist functions $\Delta p$, $\Delta q$ such that
$$(\hat p+\Delta p)(\gamma_i)=f(\gamma_i)\,(\hat q+\Delta q)(\gamma_i),\qquad \frac{\|\Delta p\|}{\|\hat p\|}=O(u),\quad \frac{\|\Delta q\|}{\|\hat q\|}=O(u),\qquad i=0,1,\dots,L-1.\tag{5.3}$$
We note that the requirement here is a rather weak condition: for example, it does not require that are close to the correct p, q when . Nonetheless, we shall see that many previous algorithms fail to satisfy them. We now give a necessary and sufficient condition for stability that is easy to work with.
Lemma 1
| 5.4 |
is a necessary and sufficient condition for to be a stable rational interpolant at satisfying (5.3), for .
(Proof) Suppose (5.4) is satisfied. Then, defining and by
| 5.5 |
we obtain (5.3). Conversely, if and satisfy (5.3), then we have
| 5.6 |
This proves the claim.
Below we analyze the stability of algorithms based on Lemma 1. In Sects. 5.1 and 5.2, to avoid the jarring complications due to the ill-conditioning of the Vandermonde matrix, we discuss the case where the sample points are the roots of unity and the polynomial basis is the monomials . Essentially the same argument carries over to other sets of sample points employed with an appropriate polynomial basis , such as Chebyshev-points sampling employing the Chebyshev polynomial basis.
Instability of previous algorithms
Here we illustrate with the example of Chebfun's ratinterp that previous algorithms can be numerically unstable, i.e., they do not necessarily satisfy (5.4) in Lemma 1. Recall that ratinterp computes the denominator coefficient vector b in (2.11) as a null vector of $\frac1L\hat V_{L-m-1}^{\,*}FV_{n+1}$.
Let us explain the numerical issue here. Let be the computed null vector. Consider the Eq. (2.10) left-multiplied by the Vandermonde matrix V, which is unitary times . Taking into account the numerical errors, the equation can be written as
| 5.7 |
which we rewrite using as
| 5.8 |
The vectors and are zero when is equal to the exact , but due to numerical errors. Indeed, we see that the ith element of is , which is precisely the linearized interpolation residual in (5.4).
Now, the computed null vector of the matrix in (2.11) obtained by a stable algorithm such as the SVD generally satisfies the normwise condition
| 5.9 |
Now since $V/\sqrt{L}$ is unitary, left-multiplication by V changes norms only by the factor $\sqrt{L}$. Thus the residual in value space has norm of order $u\max_i|f(\gamma_i)|$, which indicates that if $|f(\gamma_j)|\gg1$ for some j (a sample point near a pole), then the interpolation residual at some sample point can be as large as (for a constant c) $cu\max_i|f(\gamma_i)|$,
which violates the condition (5.4) for stability.
Although we do not present the details, such instability is present in most algorithms, including the unscaled naive method and RKFIT (with default weight ).
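A small self-contained experiment in the spirit of this discussion (our construction, not an experiment from the paper): one sample point is placed within 1e-8 of a pole, and the maximum relative linearized residual is compared for the unscaled formulation (2.7) and the scaled one (2.8). The analysis above predicts the unscaled residual to be orders of magnitude larger; exact numbers will vary.

```python
import numpy as np

def linearized_fit(fvals, gam, m, n, scaled):
    """Type-(m, n) linearized fit: smallest right singular vector of C (2.7)
    or of D*C (2.8, median-based scaling as reconstructed earlier)."""
    f = np.asarray(fvals, dtype=complex)
    V = np.vander(np.asarray(gam, dtype=complex), N=max(m, n) + 1, increasing=True)
    Cmat = np.hstack([V[:, :m + 1], -f[:, None] * V[:, :n + 1]])
    d = 1.0 / np.maximum(np.median(np.abs(f)), np.abs(f)) if scaled else np.ones(len(f))
    x = np.linalg.svd(d[:, None] * Cmat)[2][-1].conj()
    return x[:m + 1], x[m + 1:]

m, n, L = 18, 1, 20
gam = np.exp(2j * np.pi * np.arange(L) / L)
pole = gam[5] * (1 + 1e-8)                       # pole 1e-8 away from a sample point
fv = 1.0 / (gam - pole) + np.exp(gam)
V = np.vander(gam, N=max(m, n) + 1, increasing=True)
for scaled in (False, True):
    a, b = linearized_fit(fv, gam, m, n, scaled)
    pvals, qvals = V[:, :m + 1] @ a, V[:, :n + 1] @ b
    rel = np.max(np.abs(pvals - fv * qvals) / (np.abs(pvals) + np.abs(fv * qvals)))
    print('scaled  ' if scaled else 'unscaled', rel)
```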
Diagonal scaling and stability of ratfun and scaled naive method
Let us reconsider the eigenvalue problem (4.4) from a similar viewpoint, and we shall show that our approach of solving (4.4) employing diagonal scaling is immune to the instability just discussed, and ratfun gives a stable rational interpolation.
For simplicity we rewrite the eigenvalue problem (4.4) with diagonal scaling.3 as . By the backward stability of the standard QZ algorithm, each computed eigenpair satisfies
| 5.10 |
in which denotes a constant of magnitude O(u).
To establish stability we need two preparations. First, we use an appropriate scaling of f. We can clearly scale for any without changing the poles and roots, and the analysis below will show that a good choice is one such that . To be precise, it suffices to have
| 5.11 |
which means and This means we expect holds at most of the sample points. In practice, we achieve (5.11) by sampling at sufficiently many points and taking to be the median value ; this is adopted in the pseudocode of ratfun, Step 2 of Algorithm 4.1.
Second, as mentioned before, we choose the diagonal scaling matrix D as in (2.4), so that (since we scale f s.t. ) the jth diagonal is
| 5.12 |
We are now ready to state our main stability result.
Theorem 1
Let A, B be as defined in (4.4) with , where and . Let be as in (5.12), and let with be a computed eigenpair such that (5.10) holds. Partition , where . Defining , and with coefficient vector , suppose that . Then is a stable rational interpolant of f, that is, (5.4) is satisfied.
(Proof) By (5.10) we have
| 5.13 |
where we used the fact that and are all O(1). Now recalling (4.2), the ith element of is
This represents a scaled interpolation error, which is by (5.13). Since are roots of unity we have and , so in view of Lemma 1, it suffices to show that
| 5.14 |
Now since , using the assumption and the fact , which follows from , we have and . Using these in Eq. (5.14) divided by , we see that it suffices to establish , which indeed holds due to the choice of diagonal scaling (5.12).
Since the above argument is valid for every , we conclude from Lemma 1 that is a stable rational interpolant of f.
We emphasize the crucial role that diagonal scaling plays in the stability analysis. We also note that the scaling such that is actually not necessary for the reduced eigenproblem (4.7) (without the diagonal scaling D), which is invariant under the scaling .
Stability of scaled naive method The scaled naive method can also be proven to be stable. In this case the analysis is even simpler as the jth row of , where C is as in (2.6), represents
| 5.15 |
That is, the residual of each row is exactly the scaled interpolation error. Thus a null vector computed in a stable manner under the same assumptions as above [(5.10) and (5.11)] is automatically a stable rational interpolant.
However, for finding the poles, the additional process of finding the roots of q is necessary, and this can be a cause for further numerical instability. We discuss this further in Sect. 5.3.
Barycentric formula Finally, we mention the rational interpolation based on the barycentric formula [9–11, 41]
| 5.16 |
where are called the barycentric weights. For a general w (e.g. for randomly chosen w) the rational function r(z) in (5.16) is of type . However, by choosing appropriate weights w one obtains an interpolant of the desired type; Berrut and Mittelmann [10] show how such w can be found as a null vector of a matrix related to as in (2.6). Antoulas and Anderson [1] introduce an algorithm for computing w to interpolate , where are taken to be half of the sample points, and hence interpolation is achieved at 2n points. The recent AAA algorithm [33] chooses the points in a greedy manner to reduce the linearized error in the rational approximant.
As noted in [10], at the sample points the barycentric formula (5.16) essentially gives an exact interpolation function, in that at all (this holds regardless of the choice of w as long as ). However, this is due to the representation of the rational function; finding the poles and obtaining p, q from (5.16) would induce further numerical errors. Below we focus our attention on algorithms that work with the coefficients of the rational interpolant.
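For concreteness, evaluating an interpolant in the standard barycentric rational form r(z) = (sum_j w_j f_j/(z - z_j)) / (sum_j w_j/(z - z_j)) can be sketched in MATLAB as follows (this illustrates the representation used in [9-11, 41], not our coefficient-based approach; zj, fj and w denote the sample points, sampled values and weights):

function val = baryeval(z, zj, fj, w)
% Evaluate the barycentric rational interpolant at a scalar point z.
d = z - zj;
k = find(d == 0, 1);
if ~isempty(k)
    val = fj(k);                        % exact interpolation at the sample points
else
    val = sum(w.*fj./d) / sum(w./d);
end
end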
Accuracy of polefinder and effect of orthogonalization
For rational interpolation, we have described and identified two stable methods: ratfun and the scaled naive method.
Let us now turn to polefinding, and focus on the accuracy of the computed poles. ratfun finds the poles while simultaneously computing the rational approximant. By contrast, in the scaled naive method (or practically any other existing method for rational approximation) we first find the denominator polynomial q, then compute its roots. Intuitively this two-step approach should be more susceptible to numerical instability, and here we illustrate that this is indeed the case.
We compare two algorithms: the scaled naive method and ratfun. In ratfun, the QR factorization in Step 3 of Algorithm 4.1 implicitly performs a change of basis for the polynomial q, so that discrete weighted orthogonality is achieved in the new basis. A possible disadvantage is that the computed eigenvector in (4.15) contains the coefficients in the changed basis. Crucially, however, for polefinding this is not an issue at all, because the eigenvalues are unaffected. The change of basis is rather a benefit, because in the new basis the matrices are well-conditioned, which reduces numerical errors.
By contrast, this change-of-basis cannot be done for the naive method, because it requires the coefficients of q in a polynomial basis that is easy to work with for computing the roots.
Numerical example We illustrate the above discussion with an example. Let f be a rational function of the form (3.5) with poles at equispaced points on with . We use the monomial basis but sample at Chebyshev points, whose number we vary. We are therefore employing a “wrong” polynomial basis for these sample points; the “correct” one is the Chebyshev polynomials.
Figure 4 shows the results for the two algorithms: the errors in the computed poles for , as the number of sample points is varied. The diagonal scaling (5.12) is used for both algorithms; without it, the accuracy is significantly worse than in the figures.
Fig. 4.
Error of 20 computed poles for ratfun and the naive method using the monomial basis. ratfun implicitly and automatically uses the appropriate polynomial basis to obtain accurate poles
ratfun clearly gives significantly more accurate results. The inaccuracy of the naive method is mainly due to the wrong choice of basis used to represent q. For example, if the “correct” Chebyshev basis is used instead, the red plots become only slightly worse than those of ratfun. Indeed, when looking for the real roots of a polynomial, one is often advised to work with Chebyshev polynomials rather than monomials.
The point here is that ratfun automatically finds an appropriate basis for the problem at hand: given a function f and a set of sample points, the QR factorization constructs the appropriate basis. Indeed, if the QR factorization is omitted in ratfun, the accuracy deteriorates significantly.
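A small standalone experiment (not part of ratfun; the size L = 60 is chosen only for illustration) shows why the basis matters: at Chebyshev points the monomial Vandermonde matrix is exponentially ill-conditioned, whereas the Chebyshev basis, or the orthonormal basis produced implicitly by a QR factorization, is well-conditioned.

L = 60;
x = cos(pi*(0:L-1)'/(L-1));                    % Chebyshev points on [-1,1]
Vmon  = repmat(x,1,L).^repmat(0:L-1,L,1);      % monomial basis 1, x, ..., x^(L-1)
Vcheb = cos(acos(x)*(0:L-1));                  % Chebyshev basis T_0(x), ..., T_(L-1)(x)
[Q,~] = qr(Vmon,0);                            % orthonormal basis obtained from QR
disp([cond(Vmon) cond(Vcheb) cond(Q)])         % huge, modest, and 1 up to roundoff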
Numerical experiments
We present further numerical experiments to illustrate the behavior of ratfun. We compare
ratfun: Algorithm 4.1,
The scaled naive method (2.8) with diagonal scaling (5.12), shown as naive,
Chebfun’s ratinterp command, shown as chebfun,
RKFIT [7, 8] with diagonal inputs , and , with the default choice and maximum number of iterations set to 10 (increasing this made no noticeable difference).
All the experiments were conducted in MATLAB R2014a using IEEE double precision arithmetic with , on a desktop machine with a four-core Intel Core i7 processor and 16GB RAM.
For “easy” problems like those in Sect. 3.2, all the algorithms compute the poles and approximants reliably. Below we thus focus on more challenging problems.
High-degree examples We consider a moderately high-degree example, where we take f as in (3.5) with . The results are in Fig. 5.
With a sufficient number of sample points , ratfun finds the type of the rational approximant and computes the poles and roots stably. Here and below, the roots are computed simply by finding the poles of 1/f by ratfun; the other approaches described in Sect. 4.6 performed similarly.
It is worth observing that most computed roots turned out to lie on a circle of radius about 0.5. This may look bizarre at first sight, as one easily sees that the only zero of f is at . This can be explained by eigenvalue perturbation theory: the zero of f has multiplicity , so the eigenvalue problem for computing it attempts to find an eigenvalue of algebraic multiplicity 49. Now it is well known [43, Ch. 5] that an eigenvalue of algebraic multiplicity k and geometric multiplicity 1, which is essentially an eigenvalue of a Jordan block, can be perturbed by O(ε^{1/k}) by a perturbation of norm ε in the matrix. The QZ algorithm computed the roots in a backward stable manner, but the resulting O(u) perturbation is enough to move them by roughly u^{1/49} ≈ 0.48. The reason the roots appear to lie systematically on a circle is that the eigenvalues of a Jordan block are extremely sensitive to a perturbation in the bottom-left element, but much less so in other positions.
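The underlying perturbation fact is easy to reproduce in isolation (a toy illustration unrelated to the matrices in (4.4)): a tiny perturbation of a k-by-k Jordan block moves its defective eigenvalue onto a circle of radius about the kth root of the perturbation size.

k = 5;
J = diag(ones(k-1,1),1);       % k-by-k Jordan block with eigenvalue 0
E = zeros(k); E(k,1) = 1e-10;  % perturbation of norm 1e-10 in the bottom-left entry
abs(eig(J + E))                % all eigenvalues have modulus close to (1e-10)^(1/5) = 1e-2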
Note in the left figure that with insufficient sample points the type finder returns an incorrect type. In view of Lemma 4, this happens when the number of sample points L is less than the necessary , but the function f and the sample points happen (e.g. by symmetry) to make the matrix C rank-deficient, so that at the function behaves as if it were a lower-type rational function. The same was observed in Fig. 1, and the problem is pronounced here. Indeed, the degree determination Algorithm 3.1 indicated a numerical degree of (0, 2) after sampling initially at the eighth roots of unity. We have not overcome this issue completely; such difficulties are present even in the polynomial case [48], for example when a highly oscillatory function f happens to be 0 at all the initial sample points. Perhaps some further insurance policy is needed to ensure that the type obtained by Algorithm 3.1 is appropriate, such as sampling f at a few more random points [2]. One could also run typefind(f,tol,L) for neighboring values of L and accept the result only if they agree. These remedies, while effective, cannot be proven to be fool-proof.
Nonetheless, this is a rather contrived example with high symmetry, which would not generically arise. For example, if we take the residues of each term in (3.6) to be random numbers, we obtain Fig. 6, for which an appropriate type is chosen. In both cases, once a sufficient number of sample points is taken, ratfun finds the correct poles and rational interpolant.
Fig. 6.
with random residues . Left: numerical degrees of the rational approximants. ratfun(f) samples at points and returns the correct type (49, 50). Middle: error of computed poles . Right: sample points, poles and roots
When f has poles far away from sample points Another possible difficulty is when f has a pole far away from the sample points.
To examine the behavior of our algorithm in such cases we take f as in (3.5) of type (4, 5), but we now set one pole to be far by taking . Figure 7 shows the results.
Again, with a sufficient number of sample points we obtain a rational approximant of the correct type (4, 5). In the middle error plot in Fig. 7, the poles inside the unit disk are computed accurately to machine precision. By contrast, the pole is computed with poorer accuracy. Loss of accuracy for poles outside the unit disk is a typical phenomenon, and the accuracy worsens rapidly if we take larger or let f be of higher type. This observation can be explained via eigenvalue conditioning analysis, which shows that the condition numbers of the eigenvalues of (4.4) corresponding to poles outside the unit disk grow exponentially with base and exponent , whereas those of eigenvalues inside the unit circle decrease (slowly) with . The analysis is presented in “Appendix C”.
Recall that ratfun finds a numerical type by Algorithm 3.1. As explained in Sect. 3.1, there can be other numerical types for f that may be appropriate: indeed, if we sample at many more points than necessary (i.e., typefind(f,tol,L) with ), ratfun eventually ignores the outlying pole and converges to a rational function of type where is large. That is, the computed outcome has a lower denominator degree than the exact type 5; recall the experiment with (3.7) and the similar discussion there. This can be explained as follows. By a standard result in complex analysis [34, Ch. 9], inside the unit disk a meromorphic function f can be written as
| 6.1 |
The sum is taken over the poles inside the unit disk. Here is a power series, obtained e.g. as a Taylor series of , which converges inside a disk centered at the origin with radius , where is the pole closest to the origin besides those with ; here . Therefore, near the sample points (the unit circle), f behaves like a sum of terms with together with an analytic function.
From a practical viewpoint, this example suggests that we should locate the sample points near the poles of interest. For example, we can find the pole accurately in the above example by taking the sample points to lie on a circle centered around 10.
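For instance, one could sample on a circle enclosing the distant pole (a hedged sketch; the centre c, radius R and count L below are illustrative, and f is the function handle as before):

c = 10; R = 1; L = 64;                    % centre, radius and number of samples
zj = c + R*exp(2i*pi*(0:L-1)'/L);         % sample points on a circle around z = 10
fj = arrayfun(f, zj);                     % sampled values of f
% these samples can then be fed to the interpolation/polefinding step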
When a sample point is near a pole This example illustrates how existing algorithms lose accuracy when a sample point is near a pole.
We form a rational function where the roots and poles are generated randomly to lie in the unit disk. Here we take , and let the sample points be equispaced points on the unit circle. We then reset one pole to be , forcing it to lie close to a sample point.
ratfun and the naive method compute the poles much more accurately than the other methods. This is largely due to the diagonal scaling discussed in Sect. 5; indeed, if we turn off the diagonal scaling and take , the accuracy of both ratfun and the naive method deteriorates to about the same level as Chebfun’s ratinterp and RKFIT.
We believe that with RKFIT, which allows for tuning various inputs and parameters, it is possible to obtain accurate results if appropriate parameters are provided, such as ; recall the discussion in Sect. 2.3. The point here is that our analysis revealed an appropriate choice (Fig. 8).
Fig. 8.
f with a pole close to a sample point. ratfun(f) samples at points and returns the correct type. Left: numerical degrees of the rational approximants. Middle: error of computed poles . Right: sample points, poles and roots
Rational functions with poles of order When f is meromorphic but has poles of order , the generalized eigenvalue problems (4.4) and (4.15) have an eigenvalue of the same multiplicity . Here we examine the behavior of our algorithm in such cases.
We generate the function f simply by squaring the function in (3.5), that is, with . Then f has 5 poles, all of which are of order 2.
Observe that all the algorithms, including ratfun, find the poles with accuracy , which is what one would expect from a backward stable algorithm: poles of order 2 result in an eigenvalue with a Jordan block of size 2, and a perturbation of size O(u) in the matrices perturbs such an eigenvalue by O(u^{1/2}) (Fig. 9).
Fig. 9.
f with double poles. Left: numerical degrees of the rational approximants. Middle: error of computed poles . Right: sample points, poles and roots
Non-meromorphic functions Although we have started our discussion assuming f is a meromorphic function in the unit disk, our algorithm can be applied to f with singularities other than poles, as long as f can be evaluated at the sample points. We now explore such cases by examining functions with a branch cut, or an essential singularity.
First let f have a log-type branch cut
| 6.2 |
which has a branch cut on . Figure 10 shows the results. Observe that spurious poles and roots appear along the branch cut; we suspect this is related to a similar phenomenon known for Padé approximants of functions with branch cuts [42].
Fig. 10.
f with a log-type branch cut. ratfun(f) samples at points and determines the numerical type (14, 14). Left: numerical degrees of the rational approximants. Right: sample points, poles and roots
For a function with an essential singularity, we examine the standard example
| 6.3 |
The results are in Fig. 11. Again, spurious poles and roots appear near the singularity point 0, but away from the singularity f is bounded and analytic, and is well approximated by the rational interpolant. This is no surprise as behaves as a completely analytic function on the unit circle.
Fig. 11.
f with an essential singularity. Left: numerical degrees of the rational approximants. ratfun(f) samples at points and determines the numerical type (7, 7). Right: sample points, poles and roots
Sample points at Chebyshev points In this example the sample points are taken to be Chebyshev points and the polynomial basis consists of Chebyshev polynomials . This is numerically recommended when most poles lie on the real axis. Here f is again as in (3.5), with 6 equispaced poles on , along with complex poles at and . The results are in Fig. 12. For such functions, sampling at Chebyshev points gives better accuracy than sampling at roots of unity.
Fig. 12.
Sampled at Chebyshev points. ratfun(f) samples at points and determines the correct type (8, 7). The pole at loses accuracy as we sample more and increase
Although not shown in the figure, the accuracy of poles far from worsens rapidly as the poles lie farther away, or the function type increases. This is analogous to the observation made in Fig. 7: the poles far from the sample points will eventually get ignored (here the poles that converge are those within a narrow ellipse that covers the real interval ).
Speed illustration Here we examine the speed and accuracy of ratfun for high-degree rational functions. We take f to be as in (3.5) with poles at the Chebyshev points, scaled by , and vary the number of poles (i.e., the degree of q) from 100 to 1000. We sample at the Chebyshev points. In order to examine the breakdown of the runtime we report the runtime for (1) ratfun(f), which takes only the function as input (and hence starts by finding the type), and (2) ratfun(f,m,n), which is given the correct type (and hence bypasses the type determination). The results are in Fig. 13. This example illustrates that ratfun can work with rational functions of quite high degree, and that the degree determination step often takes up a dominant part of the runtime.
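The two timed calls correspond to the following usage (a sketch of the interface as described in the text; timings are machine-dependent):

tic; ratfun(f);        t_auto  = toc;   % type found automatically from f (includes degree determination)
tic; ratfun(f, m, n);  t_given = toc;   % correct type (m, n) supplied, degree determination bypassed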
Fig. 13.
High-degree example, accuracy of computed poles (left) and runtime (right)
Eigenvalues of a matrix via the resolvent One use of rational approximation and polefinding that has been attracting recent interest [4] is in finding the eigenvalues of a matrix A or matrix pencil by finding the poles of the projected resolvent or , where u, v are some (usually random) vectors. We have applied our algorithm to this problem and observed that it works. However, it is usually not superior to the algorithm presented in [4], which combines a rational filter function with a block subspace whose dimension is proportional to the estimated number of eigenvalues in the region of interest. The distinctive feature of [4] (and also of the FEAST eigensolver [36]) is that the algorithm works with the subspaces instead of the function , and this is crucial for overcoming the difficulty associated with a nearly multiple eigenvalue. We suspect that an extension of our algorithm to work with block subspaces would be possible; we leave this for future work.
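A minimal sketch of this use case (the test matrix A, probe vectors u, v and the final call to a polefinder are illustrative only; the interface is as described in Sect. 4):

n = 100;
A = randn(n)/sqrt(n);                 % test matrix; its eigenvalues are the poles of f
u = randn(n,1); v = randn(n,1);       % random probe vectors
f = @(z) v' * ((z*eye(n) - A) \ u);   % projected resolvent; its poles are eigenvalues of A
% poles = ratfun(f);                  % conceptually: apply the polefinder to f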
Acknowledgements
We thank Nick Trefethen for providing many suggestions, particularly on clarifying what is meant by a numerical type. We are grateful to Amit Hochman for suggesting the use of Arnoldi orthogonalization for the QR factorizations, Stefan Güttel for discussions on RKFIT, and Anthony Austin and Olivier Sète for their comments on an early draft. We thank the referees for their valuable suggestions.
The Kronecker canonical form
Here we analyze in detail the generalized eigenvalue problem (4.4) and derive its Kronecker canonical form [17, Ch. 4.5.2]. It shows in particular that multiple poles (if any) can be computed along with their multiplicities, at least in exact arithmetic.
Here, let f be a rational function , where p(z) and q(z) have no common divisors except for constants (i.e., is an irreducible expression). Then p(z) and q(z) are in the following form:
| A.1 |
where are distinct and . Each is a root of f of multiplicity and each is a pole of order . Note that and since . For simplicity we analyze the case where .
Proposition 2
The matrix pencil is strictly equivalent to the matrix pencil
where
| A.3 |
| A.4 |
(Proof) It suffices to show that there exist nonsingular constant matrices satisfying . We will construct such P and Q in two steps: (1) construct satisfying
| A.5 |
for , and (2) construct , and such that (using MATLAB notation)
| A.6 |
| A.7 |
Then, P and Q defined by
| A.8 |
satisfy .
Before discussing Step (1), let us introduce some notation. For functions g(z) in z and sample points , define the “vector of values” as
| A.9 |
Similarly, for and , define the “vector of coefficients” as
| A.10 |
Then, it holds for arbitrary that
| A.11 |
(1) We will construct satisfying (A.5) for each . For , define polynomials by
| A.12 |
where we define . Then, for , the polynomial satisfies
From this equation and (A.11), we see that defined by
| A.13 |
satisfy
which means that .
(2) Define , and by
| A.14 |
| A.15 |
| A.16 |
By substituting into (A.11), we obtain for an arbitrary , which implies (A.6). Since (A.11) gives
| A.17 |
for , we have (A.7).
From (A.5), (A.6) and (A.7), P and Q defined by (A.8) satisfy .
It remains to show that P and Q are nonsingular. This is proven in the next lemma.
Lemma 2
The matrices P and Q defined by (A.13), (A.16), (A.14), (A.15) and (A.8) are nonsingular.
(Proof) Since is a zero matrix and is an upper triangular matrix with non-zero diagonal entries, is nonsingular if and only if is nonsingular. We shall prove that R is nonsingular by showing that implies . Write . If , from the definitions (A.13), (A.15) of and , all the coefficients of the polynomial
| A.18 |
are equal to zero, and hence is the zero polynomial. Therefore, the rational function
| A.19 |
is also the zero function. This means that
| A.20 |
| A.21 |
which implies that all the elements in x are zero. It follows from the above argument that P is nonsingular.
We prove that Q is nonsingular similarly by showing that implies . Write . If , from the definitions (A.13) and (A.16) of and S, we have
| A.22 |
for . By multiplying both sides by q(z), we have
| A.23 |
for . Since is a polynomial of degree at most and take on the value 0 at distinct points, must be the zero polynomial. Using this fact we obtain as in the case of P, and hence Q is nonsingular.
Corollary 1
The Kronecker canonical form of the pencil is as follows:
| A.24 |
where , , ,
| A.25 |
| A.26 |
and is defined by (A.3) .
(Proof) We can transform the lower-right block of to obtain the Kronecker canonical form as follows:
Rank of the rational interpolation matrix
Here we analyze the rank of the matrix in (2.6) and derive (3.2). For a rational function sampled at , define by , where . Then we have
| B.1 |
where we define for functions h.
In this section we focus on the case where r is a rational function, and assume that r has no poles coinciding with the sample points . When is an irreducible expression, the degrees of p, q are uniquely determined, depending only on r. For these and , we say that r is of exact type .
Below we summarize the properties of the matrix when r is a rational function of exact type and has an irreducible expression . Note that r is not necessarily in .
From (B.1), we have
| B.2 |
Let . Defining a subspace of by
| B.3 |
we obtain
| B.4 |
Lemma 3
If , i.e., and , then .
(Proof) It suffices to show that . Since , we have
| B.5 |
Since p and q are relatively prime, any polynomial in is a multiple of pq. Hence, we have , where . This means , and hence by (B.5) .
Lemma 4
If r does not belong to , i.e., or , then .
(Proof) This follows from (B.5) and the fact that .
We now prove results on the rank of the matrix that we use for finding the type of the rational approximant r in our algorithm. The first result shows that if we take m, n large enough so that , then the rank of gives the information on .
Proposition 3
Assume that is of exact type and is an irreducible expression of r. Then, we have
| B.6 |
where .
(Proof) This follows immediately from (B.4) and Lemma 3.
Note that is implied by the assumptions in Proposition 3. From (B.6) we see that the rank of is
| B.7 |
as long as , which always holds in our algorithm in which . Thus we obtain , which is (3.2).
The next result shows that if we do not take m, n large enough then this is indicated by not having a null vector, provided we still sample at sufficiently many points.
Proposition 4
Suppose that r of exact type does not belong to , i.e., or . If , then the rank of is equal to , i.e., has full column rank.
(Proof) This follows from (B.4) and Lemma 4.
The above two propositions indicate that we can obtain the type of r by combining (1) sufficient sample points, and (2) adjusting m, n so that has null space of exactly dimension 1. This is the crux of the degree determination process described in Sect. 3.1.
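The following sketch captures this idea (only a schematic version of Algorithm 3.1; build_C is a hypothetical helper assembling the matrix in (2.6) from the samples fvals at points zj, and nmax, tol are a hypothetical upper bound and working tolerance):

for n = 0:nmax
    m = n;                                 % e.g. scan near-diagonal candidate types
    C = build_C(fvals, zj, m, n);          % matrix of (2.6) for candidate type (m, n)
    s = svd(C);
    if s(end) <= tol*s(1) && s(end-1) > tol*s(1)
        break;                             % numerical null space of dimension exactly one
    end
end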
Analysis of our generalized eigenvalue problem
The above results provide information on the building-block eigenvalue problem (4.4) in terms of its regularity and eigenvalues. We say that a (possibly rectangular) matrix pencil is regular if the matrix has full column rank for some value of .
Proposition 5
Suppose that f is a rational function of exact type with and , and . If or , then the matrix pencil is regular and its finite eigenvalues coincide with the poles of f.
(Proof) The matrix pencil is equal to .
If is a pole of f, then is of exact type , and consequently, we have
by Proposition 3, hence is rank deficient.
Conversely, if is not a pole of f, then is of exact type , and hence by Proposition 4, has full column rank.
We are thus able to correctly compute the poles of f provided that we take one of to be the correct value .
Condition number of eigenvalues
Here we analyze the condition number of the eigenvalues of the matrix pencil , which we write here simply as . For simplicity we focus on the case where the sample points are roots of unity, and examine the conditioning as is fixed and the number of sample points grows along with . We shall show that the eigenvalues outside the unit disk become increasingly ill-conditioned, which explains the observation in Fig. 7.
Assume that is irreducible and f has simple poles . We consider the square generalized eigenvalue problem (4.4) where (i.e., the denominator degree is fixed to the correct value) and . For , has an eigenvalue equal to for each k. We will investigate how the condition number of each eigenvalue changes as we increase (and hence also ).
The absolute condition number of a simple eigenvalue of a matrix pencil is known [45] to be proportional to , where y and x are the corresponding left and right eigenvectors. Thus we need to identify the eigenvectors for .
The right eigenvector such that is given by , where . This satisfies .
To find the left eigenvector, first note that as in Proposition 3 we have
Now since are th roots of unity, the vector
satisfies , indicating is the left eigenvector.
Hence, we have
| C.1 |
This implies that is an approximate value of
| C.2 |
In fact, the trapezoidal rule approximation to I with sample points is given by
| C.3 |
Now suppose that , that is, lies outside the unit disk. Then the integrand is analytic in the disk and has a simple pole on the circle , and hence the trapezoidal rule approximation satisfies (see [49] for the trapezoidal rule and its properties)
| C.4 |
From (C.2), (C.3) and (C.4), we have
| C.5 |
Furthermore, since and , the condition number of the eigenvalue satisfies
| C.6 |
This means that the condition numbers of eigenvalues outside the unit circle grow exponentially as we use more sample points.
On the other hand, if the eigenvalue is inside the unit disk, i.e., , then the condition number of behaves differently. Indeed, since we have
| C.7 |
we see that the condition number of the eigenvalue satisfies
| C.8 |
This is in sharp contrast to (C.6). Equations (C.6) and (C.8) imply that, when the number of sample points grows, the computed eigenvalues outside the unit disk lose accuracy exponentially, while those inside do not.
We confirmed this in our numerical experiments as the figures in Sect. 6 show.
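The exponential behavior in (C.4) can also be checked in isolation with the simplest integrand having a single pole at a point beta outside the unit circle; the trapezoidal rule value then decays like |beta|^(-L). This is a standalone illustration in the spirit of [49], not the exact integrand in (C.2):

beta = 1.5; Ls = 8:8:64; err = zeros(size(Ls));
for k = 1:numel(Ls)
    L = Ls(k);
    zj = exp(2i*pi*(0:L-1)'/L);            % L-th roots of unity
    err(k) = abs(mean(zj./(zj - beta)));   % trapezoidal rule for the contour integral of
end                                        % 1/(2*pi*1i) * 1/(z-beta) over |z|=1, exact value 0
disp([err; beta.^(-Ls)])                   % the two rows decay at the same exponential rate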
Footnotes
We note that these QR factorizations can be computed by exploiting the Vandermonde-like structure of . Namely, when the basis is degree-graded, i.e., , the column space of is equal to the Krylov subspace , where b is the first column of . An orthonormal basis for the Krylov subspace can thus be computed using the Arnoldi process [19, Sect. 10.5], as done for example in [24, App. A]. The same holds for .
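A hedged MATLAB sketch of this Arnoldi construction, under the Krylov-structure assumption stated above (zj denotes the vector of sample points, b the first column, k the number of columns; this is only one way to implement the suggestion):

function Q = krylov_basis(zj, b, k)
% Build an orthonormal basis of span{b, diag(zj)*b, ..., diag(zj)^(k-1)*b}
% by the Arnoldi process, avoiding an explicit Vandermonde-type matrix.
Q = b/norm(b);
for j = 1:k-1
    w = zj .* Q(:,j);          % apply diag(zj): raise the "degree" by one
    w = w - Q*(Q'*w);          % orthogonalize against previous columns
    w = w - Q*(Q'*w);          % one reorthogonalization step for robustness
    Q(:,j+1) = w/norm(w);
end
end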
When reducing the number of sample points is of primary importance (i.e., when sampling f is expensive), we can proceed as follows: having sampled f at L points, take for integers with , re-form and examine the condition ; if this is satisfied for some , there exists an acceptable rational approximant of type .
For simplicity, we mainly analyze the scaled version of (4.4), without employing the projection (4.7) and QR factorization.
Yuji Nakatsukasa is supported by JSPS as an Overseas Research Fellow.
Contributor Information
Shinji Ito, Email: s-ito@me.jp.nec.com.
Yuji Nakatsukasa, Email: nakatsukasa@maths.ox.ac.uk.
References
1. Antoulas AC, Anderson BDQ. On the scalar rational interpolation problem. IMA J. Math. Control Inf. 1986;3(2–3):61–88. doi:10.1093/imamci/3.2-3.61
2. Aurentz, J.L., Trefethen, L.N.: Chopping a Chebyshev series. ArXiv e-prints 1512.01803. Submitted to ACM Trans. Math. Softw. (2015)
3. Austin AP, Kravanja P, Trefethen LN. Numerical algorithms based on analytic function values at roots of unity. SIAM J. Numer. Anal. 2014;52(4):1795–1821. doi:10.1137/130931035
4. Austin AP, Trefethen LN. Computing eigenvalues of real symmetric matrices with rational filters in real arithmetic. SIAM J. Sci. Comput. 2015;37(3):A1365–A1387. doi:10.1137/140984129
5. Barnett S. Polynomials and Linear Control Systems. New York: Marcel Dekker Inc.; 1983.
6. Battles Z, Trefethen LN. An extension of MATLAB to continuous functions and operators. SIAM J. Sci. Comput. 2004;25(5):1743–1770. doi:10.1137/S1064827503430126
7. Berljafa M, Güttel S. Generalized rational Krylov decompositions with an application to rational approximation. SIAM J. Matrix Anal. Appl. 2015;36(2):894–916. doi:10.1137/140998081
8. Berljafa, M., Güttel, S.: The RKFIT algorithm for nonlinear rational approximation, MIMS EPrint 2015.38 (2015)
9. Berrut, J.-P., Baltensperger, R., Mittelmann, H.D.: Recent developments in barycentric rational interpolation. In: Trends and Applications in Constructive Approximation, pp. 27–51. Springer (2005)
10. Berrut J-P, Mittelmann HD. Matrices for the direct determination of the barycentric weights of rational interpolation. J. Comput. Appl. Math. 1997;78(2):355–370. doi:10.1016/S0377-0427(96)00163-X
11. Berrut J-P, Trefethen LN. Barycentric Lagrange interpolation. SIAM Rev. 2004;46(3):501–517. doi:10.1137/S0036144502417715
12. Braess D. Nonlinear Approximation Theory. Berlin: Springer; 1986.
13. Cauchy, A.L.: Sur la formule de Lagrange relative à l'interpolation. Analyse algébrique, Paris (1821)
14. Dahlquist G, Björck A, Anderson N. Numerical Methods. Englewood Cliffs: Prentice-Hall; 1974.
15. Delves LM, Lyness JN. A numerical method for locating the zeros of an analytic function. Math. Comput. 1967;21:543–560. doi:10.1090/S0025-5718-1967-0228165-4
16. Demmel J. The condition number of equivalence transformations that block diagonalize matrix pencils. SIAM J. Numer. Anal. 1983;20(3):599–610. doi:10.1137/0720040
17. Demmel J. Applied Numerical Linear Algebra. Philadelphia: SIAM; 1997.
18. Driscoll TA, Hale N, Trefethen LN. Chebfun Guide. Oxford: Pafnuty Publications; 2014.
19. Golub GH, Van Loan CF. Matrix Computations. 4th ed. Baltimore: The Johns Hopkins University Press; 2012.
20. Gonnet P, Güttel S, Trefethen LN. Robust Padé approximation via SVD. SIAM Rev. 2013;19(2):160–174.
21. Gonnet P, Pachón R, Trefethen LN. Robust rational interpolation and least-squares. Electron. Trans. Numer. Anal. 2011;38:146–167.
22. Good IJ. The colleague matrix, a Chebyshev analogue of the companion matrix. Q. J. Math. 1961;12(1):61–68. doi:10.1093/qmath/12.1.61
23. Higham NJ. Accuracy and Stability of Numerical Algorithms. 2nd ed. Philadelphia: SIAM; 2002.
24. Hochman A, Leviatan Y, White JK. On the use of rational-function fitting methods for the solution of 2D Laplace boundary-value problems. J. Comput. Phys. 2013;238:337–358. doi:10.1016/j.jcp.2012.08.015
25. Ito S, Murota K. An algorithm for the generalized eigenvalue problem for nonsquare matrix pencils by minimal perturbation approach. SIAM J. Matrix Anal. Appl. 2016;37(1):409–419. doi:10.1137/14099231X
26. Jacobi CGJ. Über die Darstellung einer Reihe gegebner Werthe durch eine gebrochne rationale Function. Journal für die reine und angewandte Mathematik. 1846;30:127–156. doi:10.1515/crll.1846.30.127
27. Kravanja P, Sakurai T, Van Barel M. On locating clusters of zeros of analytic functions. BIT Numer. Math. 1999;39(4):646–682. doi:10.1023/A:1022387106878
28. Kravanja P, Van Barel M. A derivative-free algorithm for computing zeros of analytic functions. Computing. 1999;63(1):69–91. doi:10.1007/s006070050051
29. Kravanja P, Van Barel M. Computing the Zeros of Analytic Functions. Lecture Notes in Mathematics, vol. 1727. Berlin: Springer; 2000.
30. Martins N, Lima LTG, Pinto HJCP. Computing dominant poles of power system transfer functions. IEEE Trans. Power Syst. 1996;11(1):162–170. doi:10.1109/59.486093
31. Moler CB, Stewart GW. An algorithm for generalized matrix eigenvalue problems. SIAM J. Numer. Anal. 1973;10(2):241–256. doi:10.1137/0710024
32. Nakatsukasa, Y., Noferini, V.: On the stability of computing polynomial roots via confederate linearizations, MIMS EPrint 2014.49. To appear in Math. Comput. (2014)
33. Nakatsukasa, Y., Sète, O., Trefethen, L.N.: The AAA algorithm for rational approximation. Technical report. Submitted to SIAM J. Sci. Comput. (2016)
34. Needham T. Visual Complex Analysis. Oxford: Oxford University Press; 1998.
35. Pachón R, Gonnet P, Van Deun J. Fast and stable rational interpolation in roots of unity and Chebyshev points. SIAM J. Numer. Anal. 2012;50(3):1713–1734. doi:10.1137/100797291
36. Polizzi E. Density-matrix-based algorithm for solving eigenvalue problems. Phys. Rev. B. 2009;79(11):115112. doi:10.1103/PhysRevB.79.115112
37. Powell MJD. Approximation Theory and Methods. Cambridge: Cambridge University Press; 1981.
38. Rommes J, Martins N. Efficient computation of transfer function dominant poles using subspace acceleration. IEEE Trans. Power Syst. 2006;21(3):1218. doi:10.1109/TPWRS.2006.876671
39. Rudelson, M., Vershynin, R.: Non-asymptotic theory of random matrices: extreme singular values. In: Proceedings of the International Congress of Mathematicians, vol. III, pp. 1576–1602. Hindustan Book Agency, New Delhi (2010). ArXiv:1003.2990
40. Sakurai T, Sugiura H. A projection method for generalized eigenvalue problems using numerical integration. J. Comput. Appl. Math. 2003;159(1):119–128. doi:10.1016/S0377-0427(03)00565-X
41. Schneider C, Werner W. Some new aspects of rational interpolation. Math. Comput. 1986;47(175):285–299. doi:10.1090/S0025-5718-1986-0842136-8
42. Stahl H. The convergence of Padé approximants to functions with branch points. J. Approx. Theory. 1997;91(2):139–204. doi:10.1006/jath.1997.3141
43. Stewart GW, Sun J-G. Matrix Perturbation Theory (Computer Science and Scientific Computing). Cambridge: Academic Press; 1990.
44. Stoer J, Bulirsch R. Introduction to Numerical Analysis. Berlin: Springer; 2002.
45. Tisseur F. Backward error and condition of polynomial eigenvalue problems. Linear Algebra Appl. 2000;309(1):339–361. doi:10.1016/S0024-3795(99)00063-4
46. Trefethen LN. Spectral Methods in MATLAB. Philadelphia: SIAM; 2000.
47. Trefethen LN. Approximation Theory and Approximation Practice. Philadelphia: SIAM; 2013.
48. Trefethen, L.N.: The doublelength flag. Chebfun examples (2015). http://www.chebfun.org/examples/cheb/DoublelengthFlag.html
49. Trefethen LN, Weideman JAC. The exponentially convergent trapezoidal rule. SIAM Rev. 2014;56(3):385–458. doi:10.1137/130932132
50. Van Der Sluis A. Condition numbers and equilibration of matrices. Numer. Math. 1969;14(1):14–23. doi:10.1007/BF02165096
51. Van Dooren P, Dewilde P. The eigenstructure of an arbitrary polynomial matrix: computational aspects. Linear Algebra Appl. 1983;50:545–579. doi:10.1016/0024-3795(83)90069-1
52. Wilkinson JH. The perfidious polynomial. In: Golub GH, editor. Studies in Numerical Analysis. Washington, DC: Mathematical Association of America; 1984. pp. 1–28.