Abstract
The Loewner framework is one of the most successful data-driven model order reduction techniques. If N is the cardinality of a given data set, the so-called Loewner and shifted Loewner matrices 𝕃 and 𝕃_s can be defined by solely relying on information encoded in the considered data set and they play a crucial role in the computation of the sought rational model approximation. In particular, the singular value decomposition of a linear combination of 𝕃 and 𝕃_s provides the tools needed to construct accurate models which fulfill important approximation properties with respect to the original data set. However, for highly-sampled data sets, the dense nature of 𝕃 and 𝕃_s leads to numerical difficulties, namely the failure to allocate these matrices in certain memory-limited environments or excessive computational costs. Even though they do not possess any sparsity pattern, the Loewner and shifted Loewner matrices are extremely structured and, in this paper, we show how to fully exploit their Cauchy-like structure to reduce the cost of computing accurate rational models while avoiding the explicit allocation of 𝕃 and 𝕃_s. In particular, the use of the hierarchically semiseparable format allows us to remarkably lower both the computational cost and the memory requirements of the Loewner framework, obtaining a novel scheme whose costs grow much more slowly with N than those of the standard approach.
Keywords: Loewner framework, Data-driven model order reduction, Cauchy-like matrices, HSS matrices
Introduction
The Loewner framework, originally proposed in [30] for solving the generalized realization problem coupled with tangential interpolation, was successfully employed for data-driven model order reduction from frequency domain data [26]. Measurements of the frequency response are available in several communities: electrical engineering (impedance, admittance or scattering parameters [26]), mechanical and civil engineering (structural and vibro-acoustic frequency response functions [35] or frequency response measurements of thermal systems [10]), to name a few. The first step in the Loewner framework consists in setting up the data matrices and building the Loewner and shifted Loewner matrices entry-wise based on the chosen partition into right and left data, followed by computing the singular value decomposition (SVD) of a linear combination of these matrices and forming the model by projection, using the dominant singular triplets. The main advantages of the Loewner framework over existing approaches are, on the one hand, its system identification capabilities, in the sense that the order of the system can be deduced from the singular value drop, and, on the other hand, its potential in dealing with systems with a large number of inputs and outputs efficiently thanks to the incorporation of tangential interpolation. The main drawbacks, however, are the large storage requirements paired with the significant CPU cost inherent to the full SVD computation for data sets with a large number of measurements (large values of N are common in industrial applications). To bypass these inconveniences, greedy-type approaches were proposed in [26], thus reducing the memory requirements from the O(N²) needed for storing the dense Loewner and shifted Loewner matrices, and the computational cost from the O(N³) required by the full SVD, to quantities that scale with the order n of the model, where N is the size of the data set.
Taking advantage of numerical linear algebra tools to reduce storage and computational requirements for the Loewner framework is another avenue worth exploring due to the inherent structure embedded in the albeit dense Loewner and shifted Loewner matrices. The factored ADI-Galerkin method for computing these matrices as solutions to certain Sylvester equations with a factored right-hand side was investigated in [18]. Such a scheme computes low-rank approximations to the dense Loewner matrix to speed up the SVD computation. However, in [18] no results about the accuracy of the computed reduced models are reported. Moreover, the memory constraints coming from the allocation of 𝕃 and 𝕃_s are still present. Alternatively, one can focus on accelerating solely the step of the SVD calculation by employing Krylov methods (see, e.g., [3, 19, 25, 41] to name a few), by using the randomized SVD [31] to compute the dominant singular triplets instead of the full SVD or other types of inexact SVD-type decompositions (adaptive cross approximation [4], particularly suited for hierarchical matrices, or a CUR decomposition [11] as in [22, 38]).
The novel approach proposed in this paper tackles the issue of the memory requirements, at the same time as reducing the CPU cost of the Loewner framework while maintaining the accuracy of the standard approach for large values of the number of measurements. As the Loewner and shifted Loewner matrices satisfy Sylvester equations with diagonal coefficient matrices, they are, in fact, Cauchy-like matrices, obtained as the Hadamard product between a Cauchy matrix and low-rank right-hand sides. Extensive research has been devoted to fully exploiting the rich structure of Cauchy matrices. Several algorithms for computing the matrix-vector product can be found in the literature and many avoid assembling the full matrix (see, e.g., [7, 15, 17, 33]). Hierarchically semiseparable matrices (HSS) have been deemed efficient for approximating Cauchy matrices with a low off-diagonal rank [33, 34]. HSS and other rank-structured matrices are widely used in developing fast algorithms for algebraic operations (matrix-vector multiplications, matrix factorizations, matrix inversion, etc., see, e.g., [8, 33, 34, 44, 47] and references therein) used as building blocks for the solution of certain problems like linear systems of equations [48], eigenvalue problems [45], linear and quadratic matrix equations [23, 27], and many more. For our application, the approximation of the Cauchy matrix in HSS format considerably decreases the computational cost of matrix-vector products involving a linear combination of the Loewner and shifted Loewner matrices needed for the partial SVD computation, while avoiding forming them explicitly. All results involving HSS-matrices presented in this paper have been obtained by means of the hm-toolbox [28].
The employment of an HSS-representation of the Cauchy matrix may introduce some inexactness in our scheme and this has to be taken into account in the iterative SVD computation. The use of inexact matrix-vector products within iterative procedures has been the subject of numerous research papers: Krylov techniques for solving linear systems and matrix equations [6, 24, 32, 40, 43], eigenvalue problems [13, 39], or an inexact variant of the Lanczos bidiagonalization for the computation of leading singular triplets of a generic matrix function [14]. In our case, we do not need an accurate approximation of the singular triplets, but rather need the spaces spanned by the computed left and right singular vectors to be meaningful, so that the obtained reduced model inherits the desired approximation properties (see, e.g., [2, 21]).
The remainder of the paper is structured as follows. Section 2 provides a review of the Loewner framework, whereas Sect. 3 presents results showcasing the special structure of the Loewner and shifted Loewner matrices as Cauchy-like matrices and their approximation as hierarchically semiseparable matrices allowing for efficient, inexact matrix-vector products in the partial SVD computation. Section 4 presents the results of our numerical experiments and Sect. 5 concludes the paper.
Review of the Loewner Framework
The Loewner framework has been proposed to address the rational interpolation/approximation problem. In the control community, this is referred to as system identification from frequency domain measurements and is stated below.
Problem Statement (Rational approximation in the complex plane) Given N points λ_1, …, λ_N in the complex plane (which can represent angular frequencies if the λ_i lie on the imaginary axis) and the corresponding transfer function measurements for a system with q inputs and p outputs:

{(λ_i, S_i) : S_i = H(λ_i) ∈ ℂ^{p×q}, i = 1, …, N},    (1)

with p and q assumed to be much smaller than N, the problem amounts to finding the rational transfer function H̃ which approximates the data:

H̃(λ_i) ≈ S_i,  i = 1, …, N.    (2)

Thus, the transfer function H̃ evaluated for the Laplace variable s = λ_i should be close (in some norm) to the corresponding measurement S_i. Several equivalent representations are possible for the rational transfer function, namely pole-residue, pole-zero, state-space or descriptor-form.
Most systems of interest are real, with their transfer function satisfying the complex conjugate condition H(s̄) = \overline{H(s)}. Hence, we add complex conjugate measurements to the set (1).
We proceed by presenting the Loewner framework as a solution scheme addressing the rational approximation problem. The first step in the Loewner framework [26, 30] is partitioning the data in two disjoint sets. This partition influences the conditioning of the problem [21, Ch. 2.1] and finding the optimal partition for each data set is beyond the scope of this paper. The most natural partitions are summarized in the following (assuming an even number of measurements N and sampling points sorted in ascending order with respect to their absolute value):
- Half&Half: the first half of the data in one set and the other half in the second set:
{λ_1, …, λ_{N/2}} and {λ_{N/2+1}, …, λ_N},    (3)
and, correspondingly,
{S_1, …, S_{N/2}} and {S_{N/2+1}, …, S_N}.    (4)
- Odd&Even: data with odd indices in the first set and data with even indices in the second set:
{λ_1, λ_3, …, λ_{N−1}} and {λ_2, λ_4, …, λ_N},    (5)
and, correspondingly,
{S_1, S_3, …, S_{N−1}} and {S_2, S_4, …, S_N}.    (6)
The first set on the right in (3) and (5) comprises the right points, denoted by λ_j, j = 1, …, k, while the second set comprises the left points μ_i, i = 1, …, k, with k = N/2. This splitting into right and left points is related to the concept of tangential interpolation, which is explained in the following paragraph.
The following step in the Loewner framework is choosing tangential directions as vectors which transform matrix data into vector data: right tangential directions are column vectors r_j ∈ ℂ^q such that w_j = H(λ_j)r_j ∈ ℂ^p, whereas left tangential directions are row vectors ℓ_i ∈ ℂ^{1×p} such that v_i = ℓ_i H(μ_i) ∈ ℂ^{1×q}. The column vectors w_j are referred to as right vector data, while the row vectors v_i are referred to as left vector data. For simplicity, tangential directions can be chosen as alternating columns/rows of the identity matrix [26], resulting in vector data being columns and rows of the original matrix data in (1).
Remark 1
For scalar data obtained from single-input single-output (SISO) systems (p = q = 1), the tangential directions r_j and ℓ_i are simply equal to 1.
Remark 2
If the loss of information due to utilizing a single tangential direction per measurement, instead of the whole matrix S_i, does not allow one to obtain an accurate approximation, one can employ the original matrix S_i. This is equivalent to considering several tangential directions for the same point. To obtain block right matrix data for S_i, the corresponding frequency should be repeated q times as a right point and all columns of the identity matrix of size q should be considered as right directions. Similarly, to obtain block left matrix data for S_i, the corresponding frequency should be repeated p times as a left point and all rows of the identity matrix of size p should be considered as left directions.
With this notation in place, the Loewner matrix 𝕃 ∈ ℂ^{k×k} is defined entry-wise as

𝕃_{ij} = (v_i r_j − ℓ_i w_j)/(μ_i − λ_j),  1 ≤ i, j ≤ k,    (7)

and the shifted Loewner matrix 𝕃_s ∈ ℂ^{k×k} is defined as

(𝕃_s)_{ij} = (μ_i v_i r_j − λ_j ℓ_i w_j)/(μ_i − λ_j),  1 ≤ i, j ≤ k.    (8)
Note that the numerators are scalar quantities as they are obtained by taking inner products.
The quantities defined previously are collected into the following matrices

M = diag(μ_1, …, μ_k) ∈ ℂ^{k×k},   Λ = diag(λ_1, …, λ_k) ∈ ℂ^{k×k},    (9)

R = [r_1, …, r_k] ∈ ℂ^{q×k},  W = [w_1, …, w_k] ∈ ℂ^{p×k},  L = [ℓ_1^T, …, ℓ_k^T]^T ∈ ℂ^{k×p},  V = [v_1^T, …, v_k^T]^T ∈ ℂ^{k×q}.    (10)
By construction, the Loewner and shifted Loewner matrices satisfy the following Sylvester equations:

M𝕃 − 𝕃Λ = VR − LW,   M𝕃_s − 𝕃_sΛ = MVR − LWΛ,    (11)

as well as the following relations:

𝕃_s = M𝕃 + LW,   𝕃_s = 𝕃Λ + VR,    (12)

which will prove useful in our proposed matrix-free matrix-vector product approach.
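The entry-wise definitions (7)-(8) and the identities (11)-(12) are easy to check numerically. The following Python/NumPy sketch (the paper's experiments use Matlab) does so for a hypothetical SISO example H(s) = 1/(s+1) + 2/(s+3) sampled at real points, so that all tangential directions equal 1:

```python
import numpy as np

# Hypothetical SISO transfer function used only for illustration
def H(s):
    return 1.0 / (s + 1.0) + 2.0 / (s + 3.0)

k = 6
lam = np.linspace(0.1, 1.0, k)       # right points lambda_j
mu = np.linspace(1.1, 2.0, k)        # left points mu_i

# SISO: r_j = l_i = 1, hence w_j = H(lam_j) and v_i = H(mu_i)
w, v = H(lam), H(mu)

# Entry-wise definitions (7) and (8)
L  = (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])
Ls = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) \
     / (mu[:, None] - lam[None, :])

M, Lam = np.diag(mu), np.diag(lam)
V, R = v.reshape(k, 1), np.ones((1, k))     # stacked left data, right directions
Lmat, W = np.ones((k, 1)), w.reshape(1, k)  # stacked left directions, right data

# Sylvester equations (11)
print(np.allclose(M @ L - L @ Lam, V @ R - Lmat @ W))              # True
print(np.allclose(M @ Ls - Ls @ Lam, M @ V @ R - Lmat @ W @ Lam))  # True
# Relations (12)
print(np.allclose(Ls, M @ L + Lmat @ W), np.allclose(Ls, L @ Lam + V @ R))
```

The same checks carry over verbatim to the tangential MIMO case, with V, R, L, W built from the chosen directions.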
After introducing notation, we are ready to state the solution provided by the Loewner framework to the rational approximation problem. A (non-minimal) model for the transfer function in descriptor-form is given by

H̃(s) = W(𝕃_s − s𝕃)^{−1}V,    (13)

i.e., E = −𝕃, A = −𝕃_s, B = V, C = W. Since we have recast the original problem as a tangential interpolation problem, this transfer function satisfies the right and left interpolation conditions [30] H̃(λ_j)r_j = w_j and ℓ_i H̃(μ_i) = v_i, exactly. To obtain a minimal model, we perform a singular value decomposition

γ𝕃 − 𝕃_s = YΣX^*,    (14)

where γ ∈ ℂ is a given shift, Σ is diagonal and Y, X contain the left and right singular vectors, respectively. Choosing the order n of the truncated SVD (n is application-dependent), we define (in Matlab notation) Y_n = Y(:, 1:n) and X_n = X(:, 1:n). Finally, the model of size n in descriptor form is

Ê = −Y_n^*𝕃X_n,  Â = −Y_n^*𝕃_sX_n,  B̂ = Y_n^*V,  Ĉ = WX_n.    (15)
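As an illustration, the pipeline (7)-(15) can be carried out end to end on the hypothetical order-2 SISO example H(s) = 1/(s+1) + 2/(s+3); for noise-free data generated by a low-order rational function, the reduced model of order n = 2 should recover the underlying transfer function up to roundoff. A Python sketch (the paper uses Matlab):

```python
import numpy as np

def H(s):  # hypothetical order-2 SISO system, used as ground truth
    return 1.0 / (s + 1.0) + 2.0 / (s + 3.0)

k = 6
lam = np.linspace(0.1, 1.0, k); mu = np.linspace(1.1, 2.0, k)
w, v = H(lam), H(mu)
L  = (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])
Ls = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) \
     / (mu[:, None] - lam[None, :])
V, W = v.reshape(k, 1), w.reshape(1, k)

gamma = lam[0]                        # shift taken among the sampling points
Y, s, Xh = np.linalg.svd(gamma * L - Ls)
n = 2                                 # order read off the singular value drop
Yn, Xn = Y[:, :n], Xh.conj().T[:, :n]

# Reduced descriptor model (15)
E, A = -Yn.conj().T @ L @ Xn, -Yn.conj().T @ Ls @ Xn
B, C = Yn.conj().T @ V, W @ Xn

def Ht(s):                            # reduced transfer function C (sE - A)^{-1} B
    return (C @ np.linalg.solve(s * E - A, B))[0, 0]

print(abs(Ht(0.5) - H(0.5)))          # tiny: exact recovery for noise-free data
```

Note that 0.5 is not among the sampling points, so the match there reflects exact recovery of the rational function, not mere interpolation.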
In the following section, we exploit the Cauchy-like structure of the Loewner and shifted Loewner matrices to design efficient approaches, both in terms of memory storage and CPU time, to compute the SVD in (14) by making use of hierarchical matrices.
Exploiting the Structure of 𝕃 and 𝕃_s
For data sets with a sizable number N of measurements, the construction of the large, dense Loewner and shifted Loewner matrices is demanding, both in terms of computational efforts as well as storage requirements. The computation of each entry of 𝕃 and 𝕃_s using (7) and (8) yields a total cost of O(N²) floating point operations (FLOPs) for assembling the entire 𝕃 and 𝕃_s matrices. The number of nonzero entries in 𝕃 and 𝕃_s is O(N²), much larger than the memory requirements for storing the data in M, Λ, R, W, L, and V. Besides these excessive storage requirements, there are also considerations to be made regarding the CPU time required for the SVD computation of the matrix γ𝕃 − 𝕃_s in (14). Especially for large dimensional problems, for which we expect a fast decay of the singular values, it is preferable to compute only the first n singular triplets, thus avoiding wasting resources in computing the full SVD. To this end, many iterative methods have been developed for computing partial SVDs; see, e.g., [3, 19, 25, 41] to name a few. The bottleneck in these approaches is the matrix-vector product with the coefficient matrix, namely γ𝕃 − 𝕃_s in our case. This operation costs O(N²) FLOPs due to the dense pattern of 𝕃 and 𝕃_s.
This section tackles the cost reduction of performing a matrix-vector product with γ𝕃 − 𝕃_s while avoiding the explicit allocation of 𝕃 and 𝕃_s. The proposed strategy is supported by a thorough analysis of the computational cost, showing that, for very large data sets for which carrying out the full SVD is intractable, our strategy leads to remarkable reductions in both the computational efforts and the storage demand for building minimal realizations in the Loewner framework.
Hadamard Product and Cauchy Matrices
We present novel results which exploit the particular structure of the Loewner and shifted Loewner matrices. These developments involve the Sylvester equations (11) with diagonal coefficient matrices M and Λ.
Theorem 1

The Loewner and shifted Loewner matrices 𝕃 and 𝕃_s satisfying the Sylvester equations in (11) are such that

𝕃 = Σ_{j=1}^{q} D_{v̂_j} C D_{r̂_j} − Σ_{j=1}^{p} D_{ℓ̂_j} C D_{ŵ_j},    (16)

and

𝕃_s = Σ_{j=1}^{q} D_{Mv̂_j} C D_{r̂_j} − Σ_{j=1}^{p} D_{ℓ̂_j} C D_{Λŵ_j},    (17)

where D_x := diag(x) for a vector x, and C denotes the following Cauchy matrix

C_{ij} = 1/(μ_i − λ_j),  1 ≤ i, j ≤ k,

while the vectors v̂_j ∈ ℂ^k and ℓ̂_j ∈ ℂ^k denote the j-th columns of V and L, respectively, so that

V = [v̂_1, …, v̂_q],   L = [ℓ̂_1, …, ℓ̂_p].

Similarly, the vectors r̂_j ∈ ℂ^k and ŵ_j ∈ ℂ^k are the (transposed) j-th rows of R and W, respectively, namely

R^T = [r̂_1, …, r̂_q],   W^T = [ŵ_1, …, ŵ_p].
Proof

The Loewner and shifted Loewner matrices 𝕃 and 𝕃_s are Cauchy-like matrices as they are obtained by taking the Hadamard product (denoted by ∘) between the Cauchy matrix C and the right-hand sides of the Sylvester equations in (11). In particular,

𝕃 = C ∘ (VR − LW),   𝕃_s = C ∘ (MVR − LWΛ).    (18)

An important property of the Hadamard product reads as follows. For any matrix A ∈ ℂ^{m×n} and vectors x ∈ ℂ^m, y ∈ ℂ^n, it holds

A ∘ (xy^T) = D_x A D_y.

This, along with the low-rank structure of VR − LW and MVR − LWΛ, yields the results in (16) and (17).
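Both the Hadamard-product property and the resulting factorizations are easy to verify numerically. A Python sketch on a hypothetical SISO example (q = p = 1, all tangential directions equal to 1, so that VR − LW = v1^T − 1w^T):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))
x, y = rng.standard_normal(5), rng.standard_normal(4)
# Hadamard identity: A o (x y^T) = D_x A D_y
print(np.allclose(A * np.outer(x, y), np.diag(x) @ A @ np.diag(y)))  # True

def H(s):  # hypothetical SISO transfer function
    return 1.0 / (s + 1.0) + 2.0 / (s + 3.0)

k = 6
lam = np.linspace(0.1, 1.0, k); mu = np.linspace(1.1, 2.0, k)
v, w = H(mu), H(lam)                     # left/right vector data
C = 1.0 / (mu[:, None] - lam[None, :])   # Cauchy matrix
L = (v[:, None] - w[None, :]) * C        # entry-wise definition (7)

# (18): L = C o (V R - L W); for SISO, V R - L W = v 1^T - 1 w^T
print(np.allclose(L, C * (np.outer(v, np.ones(k)) - np.outer(np.ones(k), w))))
# (16): L = D_v C D_1 - D_1 C D_w = D_v C - C D_w
print(np.allclose(L, np.diag(v) @ C - C @ np.diag(w)))
```

All three checks print True; the same decomposition with p + q terms holds in the MIMO case.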
Corollary 1

Given a vector b ∈ ℂ^k and γ ∈ ℂ, we have

(γ𝕃 − 𝕃_s)b = (Σ_{j=1}^{q} D_{v̂_j} C D_{r̂_j} − Σ_{j=1}^{p} D_{ℓ̂_j} C D_{ŵ_j}) D_γ b − V(Rb),

where D_γ = γI_k − Λ, with I_k the k × k identity matrix.
Proof

Thanks to (12), we can write

(γ𝕃 − 𝕃_s)b = (γ𝕃 − 𝕃Λ − VR)b = 𝕃(γI_k − Λ)b − V(Rb) = 𝕃D_γ b − V(Rb).

The result follows by substituting the expression of 𝕃 given in Theorem 1 in the equation above.
Corollary 1 shows that the majority of the computational cost of performing the matrix-vector multiplication (γ𝕃 − 𝕃_s)b amounts to computing matrix-vector products with the Cauchy matrix C.
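In code, the matrix-free product of Corollary 1 reads as follows (Python sketch, hypothetical SISO example as before; the products with C are carried out here with the dense Cauchy matrix, standing in for the fast structured products discussed next):

```python
import numpy as np

def H(s):  # hypothetical SISO transfer function
    return 1.0 / (s + 1.0) + 2.0 / (s + 3.0)

k = 6
lam = np.linspace(0.1, 1.0, k); mu = np.linspace(1.1, 2.0, k)
v, w = H(mu), H(lam)
C = 1.0 / (mu[:, None] - lam[None, :])
L = (v[:, None] - w[None, :]) * C
Ls = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) * C

gamma, b = 2.5, np.linspace(1.0, 2.0, k)  # arbitrary shift and test vector

# (gamma*L - Ls) b = L D_gamma b - V (R b), without ever forming L or Ls
bg = (gamma - lam) * b                    # D_gamma b, D_gamma = gamma*I - Lam
y = v * (C @ bg) - C @ (w * bg)           # L (D_gamma b) via (16): D_v C - C D_w
y -= v * b.sum()                          # subtract V (R b); for SISO, R b = sum(b)

print(np.allclose(y, (gamma * L - Ls) @ b))   # True
```

Only diagonal scalings and two products with C are needed per tangential direction, which is what makes a fast Cauchy matvec the dominant building block.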
Extensive research has been devoted to fully exploiting the rich structure of Cauchy matrices. Several algorithms for computing the matrix-vector product can be found in the literature and many avoid assembling the full matrix (see, e.g., [7, 15, 17, 33]). In the next section we recall the strategy presented by Pan in [33, 34] to represent C in terms of a hierarchically semiseparable (HSS) matrix. Even though the novel scheme proposed in this paper does not depend on the strategy employed for performing the matrix-vector product with C—as long as it is efficient—we believe that the HSS framework may be advantageous as, in principle, many matrix-vector products with C are needed for computing a (partial) SVD of the matrix γ𝕃 − 𝕃_s.
We conclude this section with the following remarks.
Remark 3
The number n of singular triplets needed to be computed to achieve the minimal realization in (15) is difficult to estimate a-priori. However, the expression of 𝕃 and 𝕃_s in terms of the Hadamard product can be useful to this end. Indeed, another important property of the Hadamard product is that, for any matrices A and B, rank(A ∘ B) ≤ rank(A) · rank(B). Therefore,

rank(𝕃) ≤ rank(C) · rank(VR − LW) ≤ rank(C)(p + q),

and similarly for 𝕃_s. Thus, we have

rank(𝕃) ≤ (p + q) · rank(C),   rank(𝕃_s) ≤ (p + q) · rank(C).    (19)

In general, the Cauchy matrix C is full rank so this inequality is trivially satisfied. However, depending on the partitioning of the points into left and right sets (as in (3) and (5)), C can be numerically low-rank (see, e.g., [33, Theorem 5], [5, 9]). If k_C denotes the numerical rank of C, then (p + q)k_C is a rough estimate for the numerical rank of 𝕃. Oftentimes, the underlying dynamical system is of much lower complexity, thus allowing for the computation of a minimal realization of reduced order n. One can also use insight into the system itself or count the number of peaks in the frequency response to estimate n (for systems with poles having dominant imaginary parts).
Remark 4
The expression of 𝕃 and 𝕃_s in terms of the Hadamard product provides us with an upper bound on the spectral norm of the Loewner and shifted Loewner matrices. Indeed, the spectral norm is submultiplicative with respect to the Hadamard product [20, Theorem 5.5.1], hence

‖𝕃‖_2 = ‖C ∘ (VR − LW)‖_2 ≤ ‖C‖_2 ‖VR − LW‖_2 ≤ ‖C‖_F ‖VR − LW‖_2,

where ‖C‖_F denotes the Frobenius norm of C. Note that ‖VR − LW‖_2 can be computed cheaply, e.g., by a power method exploiting the low rank of VR − LW.

Similarly,

‖𝕃_s‖_2 ≤ ‖C‖_F ‖MVR − LWΛ‖_2.
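A quick numerical check of this norm bound, ‖𝕃‖₂ ≤ ‖C‖₂ ‖VR − LW‖₂, which follows from the Hadamard submultiplicativity of the spectral norm (Python sketch, hypothetical SISO data):

```python
import numpy as np

def H(s):  # hypothetical SISO transfer function
    return 1.0 / (s + 1.0) + 2.0 / (s + 3.0)

k = 6
lam = np.linspace(0.1, 1.0, k); mu = np.linspace(1.1, 2.0, k)
v, w = H(mu), H(lam)
C = 1.0 / (mu[:, None] - lam[None, :])
L = (v[:, None] - w[None, :]) * C
F = np.outer(v, np.ones(k)) - np.outer(np.ones(k), w)  # V R - L W (rank <= 2)

norm_L = np.linalg.norm(L, 2)
bound = np.linalg.norm(C, 2) * np.linalg.norm(F, 2)    # Hadamard submultiplicativity
print(norm_L <= bound)                                  # True
print(bound <= np.linalg.norm(C, 'fro') * np.linalg.norm(F, 2))  # ||C||_2 <= ||C||_F
```

Since F has rank at most p + q, its spectral norm is cheap to estimate even for large k.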
Remark 5
Low-rank approximations to 𝕃 and 𝕃_s may be computed by adaptive cross approximation [4], particularly suited for hierarchical matrices, the CUR decomposition [11] as in [22, 38], or related schemes. These approaches select a certain number of columns and rows of the original matrices in a greedy fashion based on various heuristics, and a core matrix is utilised to compute a low-rank approximation. If a given threshold on the desired accuracy of the computed approximation is provided as an input, these algorithms often construct matrices whose rank is much larger than that of the target matrices 𝕃 and 𝕃_s. On the other hand, by fixing the rank k of the approximation (assuming an estimate of the rank of 𝕃 and 𝕃_s is available), the accuracy we achieve may be very low, affecting the reliability of the computed reduced models.
Hierarchically Semiseparable (HSS) Representation of a Cauchy Matrix
The literature on HSS matrices is rather vast and technical (see, e.g., [8, 33, 34, 44, 47] and references therein). Here we recall only the main properties of this class of matrices and their role in the efficient representation of Cauchy matrices. Such a technique is also closely related to the Fast Multipole Method (FMM). We refer the interested reader to, e.g., [8, 9] for more details on the interconnection between HSS matrices and FMM.
Definition 1
[34, Definition 27] Let A be an N × N matrix, with l being the maximum rank of all its subdiagonal blocks, namely the blocks of all sizes lying strictly below the block diagonal, and u the maximum rank of all its superdiagonal blocks, namely the blocks of all sizes lying strictly above the block diagonal, respectively. Then, A is (l, u)-HSS if its diagonal blocks consist of O((l + u)N) entries.
The (l, u)-HSS representation of a matrix A is very advantageous whenever l and u are small. For instance, it allows us to express A in terms of O((l + u)N) parameters, avoiding storing its N² entries. Moreover, a whole, efficient HSS arithmetic has been developed in the last decades (see, e.g., [8, 47]). For instance, the computational cost of the matrix-vector product Ax amounts to O((l + u)N) FLOPs. If A is nonsingular, its inverse is also an (l, u)-HSS matrix that can be computed in O((l + u)²N) FLOPs (see, e.g., [34, Section 6]).
To fully exploit the HSS framework for our purposes, we wish to represent the Cauchy matrix C in terms of an HSS matrix with a low off-diagonal rank. In light of Corollary 1, this would considerably decrease the computational cost of the matrix-vector products involving γ𝕃 − 𝕃_s while avoiding forming the dense matrices 𝕃 and 𝕃_s.
The construction of an HSS approximation C̃ to C is rather involved and the magnitude of the HSS-rank of the computed C̃ strictly depends on the partitioning of the frequencies along with the accuracy that has been selected for the actual computation of C̃. Given two parameters μ and λ, with |λ − c| < |μ − c| for a suitable center c, the cardinal relation underlying this construction is the following

1/(μ − λ) = 1/((μ − c) − (λ − c)) = Σ_{s=0}^{m−1} (λ − c)^s / (μ − c)^{s+1} + O((|λ − c|/|μ − c|)^m),

and the approximation obtained by neglecting the error term can be represented in low-rank format thanks to the separability (in μ and λ) of the first term in the last step of the relation above. See, e.g., [34, Section 8] for further details on the computation of an HSS-representation of a Cauchy matrix. In this paper we employ the readily available hm-toolbox [28].
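The low-rank (separable) nature of this truncated geometric-series expansion on well-separated point clusters can be illustrated directly. A Python sketch with hypothetical clusters; c is the center of the λ-cluster and m the number of retained terms:

```python
import numpy as np

# Two well-separated clusters of points, as for an off-diagonal Cauchy block
lam = np.linspace(0.0, 1.0, 50)
mu = np.linspace(10.0, 12.0, 50)
c, m = 0.5, 8   # expansion center and number of retained terms

C = 1.0 / (mu[:, None] - lam[None, :])

# Rank-m separable approximation: sum_s (lam-c)^s / (mu-c)^(s+1) = U @ Vt
U = np.column_stack([1.0 / (mu - c) ** (s + 1) for s in range(m)])  # 50 x m
Vt = np.vstack([(lam - c) ** s for s in range(m)])                  # m x 50

err = np.linalg.norm(C - U @ Vt) / np.linalg.norm(C)
print(err < 1e-8)   # True: the off-diagonal block is numerically low-rank
```

The error decays like (|λ − c|/|μ − c|)^m, so a handful of terms suffices whenever the two clusters are well separated; this is exactly the mechanism exploited blockwise by the HSS construction.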
Example 1
We investigate the impact of the most commonly-used frequency partitions (Half&Half and Odd&Even) on the HSS-rank of the computed C̃ for a mechanical structure. We emphasize that the most effective partition is problem-dependent and its choice is still an open problem, beyond the scope of this paper. However, in [12], [21, Ch. 2.1] the authors suggest the use of partitions with interleaving frequencies like Odd&Even in order to avoid the introduction of an "artificial" ill-conditioning.

We consider the Flexible Aircraft data set [36] from the MORwiki [42]. This dataset contains 421 frequency values expressed in rad/s and the corresponding measurements of the transfer function. We disregard the last data point and consider the remaining 420 frequencies. As this is a mechanical structure, the considered frequencies lie in the low end of the spectrum, as opposed to electrical systems, for which frequencies typically span the GHz range.
To avoid complex arithmetic, it is preferable and more advantageous to perform a change of basis when dealing with sampling points on the imaginary axis. By defining
| 20 |
we obtain matrices with real entries:
where (·)^* stands for the complex conjugate transpose. These quantities satisfy analogous expressions as in (11) and (12). Unfortunately, the transformed matrices M and Λ are no longer diagonal and this represents a major drawback in taking advantage of the Sylvester equations (11) for a fast computation of 𝕃 and 𝕃_s. However, M² and Λ² are diagonal and given by
| 21 |
By multiplying the first equation in (11) by M on the left and, afterwards, multiplying it by Λ on the right and adding the results together, a new Sylvester equation with diagonal coefficient matrices is obtained:

M²𝕃 − 𝕃Λ² = M(VR − LW) + (VR − LW)Λ.    (22)

By performing the same operations on the second equation in (11), a similar Sylvester equation is obtained for the shifted Loewner matrix:

M²𝕃_s − 𝕃_sΛ² = M(MVR − LWΛ) + (MVR − LWΛ)Λ.    (23)
In the following, we refer to this as the Odd&Even (real) partition.
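The derivation of (22) can be verified numerically. The sketch below uses the earlier hypothetical SISO data with diagonal M and Λ for simplicity (in the real-basis setting of this section, M and Λ denote the transformed quantities); note that the new right-hand side has rank at most 2(p + q):

```python
import numpy as np

def H(s):  # hypothetical SISO transfer function
    return 1.0 / (s + 1.0) + 2.0 / (s + 3.0)

k = 6
lam = np.linspace(0.1, 1.0, k); mu = np.linspace(1.1, 2.0, k)
v, w = H(mu), H(lam)
L = (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])
M, Lam = np.diag(mu), np.diag(lam)
RHS = np.outer(v, np.ones(k)) - np.outer(np.ones(k), w)   # V R - L W

# Multiply (11) by M on the left, by Lam on the right, and add: equation (22)
lhs = M @ M @ L - L @ Lam @ Lam
rhs = M @ RHS + RHS @ Lam
print(np.allclose(lhs, rhs))   # True
```

The same manipulation applied to the second equation in (11) yields (23).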
We recall the three different partitions of the sampling points:

- Half&Half: {s_1, …, s_{N/2}} and {s_{N/2+1}, …, s_N}.
- Odd&Even: {s_1, s_3, …, s_{N−1}} and {s_2, s_4, …, s_N}.
- Odd&Even (Real): the squared points {s_1², s_3², …, s_{N−1}²} and {s_2², s_4², …, s_N²}, stemming from the diagonal matrices Λ² and M² in (21).
For each partition, we compute the corresponding Cauchy matrix C̃ directly in HSS format, without assembling the full C beforehand, by means of the function hss of the hm-toolbox, whose inputs are the N-dimensional vectors containing the left and right frequencies, respectively. We then calculate its HSS-rank. In Table 1 we report the HSS-rank of the matrix C̃ for the partitions mentioned above. Thanks to the small dimension of the dataset, we are able to compute the full Cauchy matrix C and document its (standard) rank along with the relative error of C̃ with respect to C. As expected, having two disjoint sets of frequencies like in the Half&Half partition leads to a Cauchy matrix whose (standard) rank is low. This does not happen in the other two scenarios we examine, so that taking advantage of the HSS format is necessary to achieve memory-saving representations of C. The results in Table 1 show that a good accuracy in terms of the relative error can be achieved for all three frequency partitions. Nevertheless, the HSS-rank of C̃ is significantly lower for the Odd&Even (Real) partition, most likely due to the squaring of the frequencies performed in Odd&Even (Real), which leads to a fast decay in the magnitude of the off-diagonal entries of C. Hence, for a fixed threshold, the off-diagonal blocks of the Cauchy matrix associated to the Odd&Even (Real) partition can be approximated by matrices having a smaller rank than those associated to the other two scenarios we examined.
Table 1.
Example 1. HSS-rank and relative error of the HSS representation C̃ for different frequency partitions, along with the rank of C

| | Half&Half | Odd&Even | Odd&Even (Real) |
|---|---|---|---|
| HSS-rank of C̃ | 32 | 30 | 13 |
| rank of C | 36 | 420 | 210 |
| relative error | 2.68e–12 | 2.61e–11 | 6.62e–13 |
In Fig. 1 we display the absolute value—on a logarithmic scale—of the entries of the Cauchy matrix C stemming from the different partitions. The same scale has been used in all three panels, reinforcing the observation that the Odd&Even (Real) partition exhibits the fastest decay in the magnitude of the off-diagonal entries of C.
Fig. 1.
Example 1. Absolute value—on a logarithmic scale—of the entries of the Cauchy matrix stemming from the three different frequency partitions we have examined
Efficient, Inexact Matrix-Vector Products
Whenever the Cauchy matrix C admits an accurate approximation in terms of a low-rank HSS matrix C̃, the computational cost of performing the matrix-vector product (γ𝕃 − 𝕃_s)b can be significantly reduced.
Proposition 1

Let C̃ be an (l, u)-HSS matrix that approximates the Cauchy matrix C accurately. If 𝕃 and 𝕃_s satisfy the Sylvester equations in (11), then

(γ𝕃 − 𝕃_s)b = 𝕃̃D_γ b − V(Rb) + ξ,    (24)

where 𝕃̃ := Σ_{j=1}^{q} D_{v̂_j} C̃ D_{r̂_j} − Σ_{j=1}^{p} D_{ℓ̂_j} C̃ D_{ŵ_j} and ξ collects the error due to replacing C with C̃. Moreover, the computational cost of performing

𝕃̃D_γ b − V(Rb)    (25)

amounts to O((p + q)(l + u)N) FLOPs.
Proof

From the result in Corollary 1, we can write

(γ𝕃 − 𝕃_s)b = (Σ_{j=1}^{q} D_{v̂_j} C D_{r̂_j} − Σ_{j=1}^{p} D_{ℓ̂_j} C D_{ŵ_j}) D_γ b − V(Rb) = 𝕃̃D_γ b − V(Rb) + ξ,

where ξ = (Σ_{j=1}^{q} D_{v̂_j} (C − C̃) D_{r̂_j} − Σ_{j=1}^{p} D_{ℓ̂_j} (C − C̃) D_{ŵ_j}) D_γ b. This proves the first part of Proposition 1. To conclude, by making use of the property that the matrix-vector product with an (l, u)-HSS matrix costs O((l + u)N) FLOPs and that VR has rank at most q, a direct computation shows that the number of operations needed to perform (25) amounts to O((p + q)(l + u)N) FLOPs, which proves the second claim in Proposition 1.
As before, analogous results can be obtained for 𝕃 and 𝕃_s satisfying (22) and (23), respectively.
Proposition 1 shows that, whenever the HSS-rank of C̃ is small, the matrix-vector product (γ𝕃 − 𝕃_s)b can be well-approximated by the expression in (25) while dramatically reducing the computational complexity from O(N²) FLOPs to O((p + q)(l + u)N) FLOPs. However, when this approximation is used within our favorite iterative procedure for computing a partial SVD of γ𝕃 − 𝕃_s, the inexactness introduced by neglecting the term ξ should be taken into account.
The use of inexact matrix-vector products within certain iterative procedures has been the subject of numerous research papers: Krylov techniques for solving linear systems and matrix equations [6, 24, 32, 40, 43], eigenvalue problems [13, 39], or an inexact variant of the Lanczos bidiagonalization for the computation of some leading singular triplets of a generic matrix function can be found in [14]. With the goal to decrease the computational cost of the overall procedure, these studies show that the accuracy of the matrix-vector product can be relaxed (becoming more and more inaccurate) as iterations proceed. In our framework, the inexactness introduced by approximating (γ𝕃 − 𝕃_s)b with (25) is fixed throughout the entire iterative procedure and mainly depends on the accuracy of C̃, which is often high, as shown in Example 1. Therefore, the approximation

(γ𝕃 − 𝕃_s)b ≈ 𝕃̃D_γ b − V(Rb)

does not greatly affect the accuracy of the computed singular triplets (see Sect. 4). Moreover, in our case, we do not need an accurate approximation of the singular triplets of γ𝕃 − 𝕃_s. The main goal is to have meaningful spaces spanned by the computed left and right singular vectors so that the obtained reduced model inherits the desired approximation properties. Moreover, as shown in [21, Corollary 1.4], [2, Proposition 8.25], in the case of noise-free measurements of a low-order rational function, even general projectors, not necessarily obtained from the SVD, can be employed for identifying the underlying function.
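In practice, the approximate product (25) is simply wrapped as an operator and handed to an iterative SVD solver. A Python/SciPy sketch (the paper uses Matlab's svds); here the dense Cauchy matrix C stands in for the HSS approximation C̃, and the data are the hypothetical SISO example used earlier:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, svds

def H(s):  # hypothetical SISO transfer function
    return 1.0 / (s + 1.0) + 2.0 / (s + 3.0)

k = 50
lam = np.linspace(0.1, 1.0, k); mu = np.linspace(1.1, 2.0, k)
v, w = H(mu), H(lam)
C = 1.0 / (mu[:, None] - lam[None, :])   # stand-in for the HSS approximation
gamma = 2.5

def mv(b):    # (gamma*L - Ls) b = L D_gamma b - V (R b), matrix-free
    b = np.ravel(b)
    bg = (gamma - lam) * b
    return v * (C @ bg) - C @ (w * bg) - v * b.sum()

def rmv(b):   # transpose action (gamma*L - Ls)^T b, needed by svds
    b = np.ravel(b)
    y = C.T @ (v * b) - w * (C.T @ b)    # L^T b via (16) transposed
    return (gamma - lam) * y - np.full(k, v @ b)

Aop = LinearOperator((k, k), matvec=mv, rmatvec=rmv, dtype=np.float64)
s_iter = svds(Aop, k=2, return_singular_vectors=False)

# Reference: singular values of the explicitly assembled matrix
L = (v[:, None] - w[None, :]) * C
Ls = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) * C
s_full = np.linalg.svd(gamma * L - Ls, compute_uv=False)
print(np.allclose(np.sort(s_iter), np.sort(s_full[:2])))   # True
```

Replacing the two dense products with C (and C^T) by their fast structured counterparts turns every iteration of the partial SVD into an O((p + q)(l + u)N) operation.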
Remark 6
In Remark 3 we suggested to use the value (p + q)k_C, where k_C is the numerical rank of C, to decide on the number n of singular triplets of γ𝕃 − 𝕃_s needed for the reduced model. For interlaced partitions, as is the case with Odd&Even and Odd&Even (real) (see Table 1), the numerical (standard) rank of the Cauchy matrix is large, in general. Hence, the value (p + q)(l + u) may instead be employed for the computation of a meaningful reduced model whenever C can be well-approximated by an (l, u)-HSS matrix C̃. Moreover, the HSS-rank of C̃ is obtained as a byproduct of the construction of C̃.
Remark 7
If C admits an accurate approximation in terms of an (l, u)-HSS matrix C̃, the expression in Theorem 1 shows that 𝕃 can also be well-approximated by an HSS matrix 𝕃̃ whose HSS-rank is at most (p + q)(l + u). Even though the computational cost of the matrix-vector product 𝕃̃b would still be O((p + q)(l + u)N) FLOPs, using the HSS approximation of 𝕃 may be very advantageous whenever linear systems with 𝕃 need to be solved (see, e.g., the procedure presented in [12] for the pseudospectra computation). Indeed, as mentioned in Sect. 3.2, the computation of the inverse of an HSS matrix costs O((p + q)²(l + u)²N) FLOPs. Once 𝕃̃^{−1} is computed, we need only O((p + q)(l + u)N) FLOPs to perform 𝕃̃^{−1}b.
Remark 8
We would like to mention that we did not observe any numerical issue related to the matrix-vector product (25) and its stability during our vast numerical testing. Moreover, one may want to perform the matrix-vector product with γ𝕃 − 𝕃_s in a parallel environment to achieve better computational performance. The strumpack package may be employed to this end. See also, e.g., [37]. However, such a parallel approach has not been used in the numerical experiments presented in Sect. 4.
Numerical Results
In this section we present numerical experiments illustrating the potential of the proposed approach.
In Example 2, we compare our approach to standard procedures employed in the Loewner framework. Recall that the main steps in the standard approach involve forming the full Loewner and shifted Loewner matrices 𝕃 and 𝕃_s and computing the SVD of γ𝕃 − 𝕃_s. This SVD can be either computed in full, followed by keeping only the n dominant singular vectors, or only these n singular vectors can be obtained by means of an iterative procedure, where the matrix-vector product with γ𝕃 − 𝕃_s is needed. In the following, we report the overall running time, considering the construction step (Construction), i.e., the computation of 𝕃 and 𝕃_s in the standard approach and of C̃ in our approach, as well as the reduction step (Reduction), involving the SVD computation followed by projection to obtain the reduced matrices in (15). In terms of memory requirements, for our approach, this involves the allocation of C̃ in the HSS format, while for the standard approach, we report the storage required for 𝕃 and 𝕃_s.
In Table 2 we recall the computational cost of the construction and reduction steps of both the standard approach, based on either a full or a partial SVD, and the novel one presented in this paper along with their memory requirements.
Table 2.
Computational cost of the construction (Construction) and reduction (Reduction) steps of the different approaches we test along with their storage demand (Storage). The computational cost of the construction of C̃ can be found, e.g., in [28, Table 1]
| Construction | Reduction | Storage | |
|---|---|---|---|
| Full svd | |||
| svds w/ | |||
| svds w/ |
Lastly, the accuracy of the reduced models is reported in terms of the normalized error

err = (Σ_{i=1}^{N} ‖H(λ_i) − H̃(λ_i)‖_F²)^{1/2} / (Σ_{i=1}^{N} ‖H(λ_i)‖_F²)^{1/2},

where ‖·‖_F denotes the Frobenius norm. Similar results in terms of accuracy are attained for other error measures; however, we decided not to document them here, for the sake of brevity.
In Example 3, we compare our novel strategy to the one presented in [18], which makes use of the low-rank ADI-Galerkin method for computing the Loewner matrix as the solution to (11). Such a scheme computes low-rank approximations to the dense Loewner matrix to speed up the SVD computation; however, the memory constraints originating from the allocation of 𝕃 and 𝕃_s are still present.
Results were obtained by running Matlab R2020b [29] on a MacBook Pro with an Intel Core i9 processor running at 2.3 GHz and 16 GB of RAM. All computations involving HSS matrices employed the hm-toolbox [28] with the default settings and with the threshold for off-diagonal truncation set to .
Example 2
We consider a synthetic problem for which we can control the order of the original system (n), the number of inputs and outputs (p), as well as the number of measurements (N). The system dynamics are generated randomly, with poles in complex conjugate pairs. In particular:
the real part of the poles is random with mean and standard deviation ; the imaginary part is also random, with mean and standard deviation .
the residues associated with each pole are rank-1 matrices, obtained as outer products of two random vectors, both having real part with mean 0 and standard deviation 10, while the imaginary part has mean 0 and standard deviation .
Measurement points are logarithmically distributed between and rad/sec. Last, but not least, random noise with a prescribed signal-to-noise ratio was added to the transfer function evaluations to obtain the measurement matrices. We adopt the Odd&Even (real) partition of the frequencies, as it achieves satisfactory approximation results while eliminating complex arithmetic. Tangential directions are chosen as unit vectors (rows and columns of the identity matrix of size p).
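A data set of this kind can be generated along the following lines; since the exact means, standard deviations, frequency range, and SNR of the original setup were lost in extraction, the values below are merely illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_system(n, p):
    """Random order-n dynamics: stable poles in complex conjugate pairs
    with rank-1 residues (all distribution parameters are placeholders)."""
    re = -np.abs(rng.normal(10.0, 5.0, n // 2))          # stable real parts
    im = np.abs(rng.normal(100.0, 50.0, n // 2))
    poles = np.concatenate([re + 1j * im, re - 1j * im])
    res = [np.outer(rng.normal(0, 10, p) + 1j * rng.normal(0, 10, p),
                    rng.normal(0, 10, p) + 1j * rng.normal(0, 10, p))
           for _ in range(n // 2)]
    residues = res + [r.conj() for r in res]             # matching conjugates
    return poles, residues

def transfer(poles, residues, s):
    """Evaluate H(s) = sum_k R_k / (s - pole_k) at one point s."""
    return sum(r / (s - pk) for pk, r in zip(poles, residues))

def add_noise(H, snr_db):
    """Entry-wise additive complex noise at a prescribed SNR (in dB)."""
    sigma = np.abs(H) * 10.0 ** (-snr_db / 20.0)
    noise = rng.standard_normal(H.shape) + 1j * rng.standard_normal(H.shape)
    return H + sigma * noise / np.sqrt(2.0)

freqs = np.logspace(0, 4, 64)                            # measurement grid (rad/sec)
poles, residues = random_system(n=4, p=2)
data = np.array([add_noise(transfer(poles, residues, 1j * w), 40.0)
                 for w in freqs])
```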
We compare the proposed approach to the traditional Loewner framework, in which the Loewner and shifted Loewner matrices are formed and their full SVD is computed, as well as to the alternative approach in which, after building these matrices, a partial SVD is computed with the Matlab function svds. The comparison is carried out on various instances of the data set described above, for different values of N, p, and n. The command svds was employed with the left starting vector (same notation as in Theorem 1) instead of a random starting vector, which is the default setting.
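The partial SVD only requires matrix-vector products with the matrix whose dominant singular vectors are sought, which is precisely what makes a structured representation attractive. The following Python sketch mimics this setting with SciPy's svds applied to a LinearOperator; its v0 option plays the role of the non-random starting vector (the specific vector of Theorem 1 is not reproduced here).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, svds

rng = np.random.default_rng(1)
N, r = 300, 20
M = rng.standard_normal((N, r)) @ rng.standard_normal((r, N))   # rank-r stand-in

# only products with M and M^T are exposed: M itself never has to be
# stored densely when fast (e.g. HSS) matvecs are available
Mop = LinearOperator((N, N), matvec=lambda x: M @ x,
                     rmatvec=lambda x: M.T @ x)

v0 = np.ones(N)                       # illustrative non-random starting vector
U, s, Vh = svds(Mop, k=10, v0=v0)     # 10 dominant singular triplets
```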
Figure 2 presents the memory requirements for storing the full Loewner and shifted Loewner matrices (in red), as opposed to storing the HSS approximation in our approach (in blue), along with the storage needed to allocate the data matrices, for increasing numbers of inputs and outputs p (in black). We point out that for values of N larger than , we were not able to allocate the full matrices on the employed laptop (this value, however, depends on the available RAM of the machine). For instances where these matrices can be allocated, Fig. 2 shows that the memory requirements of the proposed approach are always much lower than those of the standard scheme. Moreover, in contrast to the memory required for the data matrices, the storage demanded by the HSS representation is independent of p.
Fig. 2.

Example 2. Memory requirements in Megabytes to store , , , and the data matrices (, , , , , ) for different values of N and p
We report the results of the comparison between the different approaches in terms of run time in Table 3, for the number of measurements N varying between 1 000 and 100 000, the number of inputs and outputs p taking the values 1, 5, and 10, and the number of poles being 50 or 100. The symbol "–" indicates the instances for which we were not able to compute the reduced model (15): for N ≥ 50 000, the full Loewner and shifted Loewner matrices cannot be allocated, and for N ≥ 30 000 their full SVD cannot be computed. Such constraints are not relevant to our proposed strategy. It is pertinent to remark the following:
the CPU time of the full SVD approach does not depend on p and n, only on N, as expected from Table 2: indeed, the cost of building the Loewner and shifted Loewner matrices is quadratic in N, whereas the full SVD demands O(N^3) FLOPs; the full SVD approach is rarely the fastest method (this can happen only for very modest values of N in the considered range);
the CPU time of the full assembly of the Loewner matrices followed by the svds Matlab command does not depend on p, only on N and n, as expected from Table 2: the construction of the Loewner and shifted Loewner matrices costs O(N^2) FLOPs, whereas the computational effort of the partial SVD depends on n, leading to a more demanding procedure for large n; it is usually the fastest approach for (very) modest values of N in the considered range;
the HSS rank of the Cauchy matrix approximation only depends on the frequency samples, hence on N, because, in our scenario, the sampling interval is the same but the distribution of points inside the interval differs for each N; for the same samples, the construction of the HSS approximation may produce slightly different ranks due to the randomness induced by the adaptive cross approximation procedure (for instance, for , , and , the rank is 28, while for the remaining values of n and p it is 27); moreover, the HSS rank increases with N;
our proposed approach is as accurate as the first two approaches, highlighting the fact that the HSS approximation does not lead to significant losses in the approximation properties of the reduced model (15); clearly, our approach cannot be more accurate than the traditional Loewner framework, especially when the full SVD is performed;
last, but not least, the CPU time of the proposed solution depends linearly on p and n (Table 2), making it the fastest method for large values of N; moreover, no memory constraints arise for N up to 100 000.
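The remark on the HSS rank can be probed numerically: the off-diagonal blocks of a Cauchy matrix built on interleaved, logarithmically spaced imaginary points couple well-separated frequency clusters and are therefore numerically low-rank. A small self-contained check (sizes and thresholds are our own choices):

```python
import numpy as np

# Cauchy matrix C_{ji} = 1/(mu_j - lam_i) on interleaved, log-spaced
# imaginary points, mimicking the Odd&Even partition of the frequencies
N = 512
freq = np.logspace(0, 4, 2 * N)
mu, lam = 1j * freq[0::2], 1j * freq[1::2]
C = 1.0 / (mu[:, None] - lam[None, :])

# an off-diagonal block couples low frequencies (rows) with high
# frequencies (columns) only, so its singular values decay rapidly
B = C[: N // 2, N // 2:]
sv = np.linalg.svd(B, compute_uv=False)
numrank = int(np.sum(sv / sv[0] > 1e-12))   # numerical rank << block size
```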
Fig. 3.
Example 2. Left: Frequency response obtained with the standard and proposed approaches (in black and blue, respectively) versus the measurements (in red) for , , and . Right: Error plots for the models obtained with the standard and proposed approaches (in black and blue, respectively) for , , and (Color figure online)
Table 3.
Example 2. Computational time (in seconds) and -error achieved by each approach for different values of N (number of samples), p (number of inputs and outputs), and n (order of the underlying system and of the model) on the employed laptop
| N | p | n | Full svd Time (s) | -error | svds Time (s) | -error | HSS-rank | Proposed Time (s) | -error |
|---|---|---|---|---|---|---|---|---|---|
| 1 000 | 1 | 50 | 0.28 | 3.62e-10 | 0.20 | 3.62e-10 | 15 | 1.07 | 3.62e-10 |
| 3 000 | 10.69 | 3.71e-10 | 3.51 | 3.71e-10 | 19 | 3.16 | 3.71e-10 | ||
| 5 000 | 20.79 | 3.7e-10 | 12.29 | 3.7e-10 | 21 | 6.99 | 3.7e-10 | ||
| 10 000 | 158.41 | 3.71e-10 | 68.40 | 3.71e-10 | 22 | 15.55 | 3.71e-10 | ||
| 15 000 | 554.98 | 3.73e-10 | 209.31 | 3.73e-10 | 24 | 22.86 | 3.73e-10 | ||
| 29 000 | 4674.35 | 3.74e-10 | 1590.37 | 3.74e-10 | 26 | 52.33 | 3.74e-10 | ||
| 30 000 | – | – | 1827.45 | 3.74e-10 | 26 | 50.68 | 3.74e-10 | ||
| 40 000 | – | – | 11214.41 | 3.74e-10 | 27 | 71.44 | 3.74e-10 | ||
| 50 000 | – | – | – | – | 27 | 91.88 | 3.74e-10 | ||
| 100 000 | – | – | – | – | 30 | 189.71 | 3.75e-10 | ||
| 1 000 | 1 | 100 | 0.21 | 9.63e-11 | 0.22 | 9.63e-11 | 15 | 1.59 | 9.64e-11 |
| 3 000 | 10.66 | 1.01e-10 | 5.84 | 1.01e-10 | 19 | 5.65 | 1.01e-10 | ||
| 5 000 | 20.62 | 1.01e-10 | 18.89 | 1.01e-10 | 21 | 12.77 | 1.01e-10 | ||
| 10 000 | 156.76 | 1.01e-10 | 93.16 | 1.01e-10 | 22 | 28.47 | 1.02e-10 | ||
| 50 000 | – | – | – | – | 27 | 155.46 | 1.03e-10 | ||
| 100 000 | – | – | – | – | 30 | 321.84 | 1.03e-10 | ||
| 1 000 | 5 | 50 | 0.27 | 3.71e-10 | 0.19 | 3.71e-10 | 15 | 1.20 | 3.72e-10 |
| 3 000 | 10.69 | 3.17e-10 | 3.56 | 3.17e-10 | 19 | 4.59 | 3.17e-10 | ||
| 5 000 | 20.81 | 3.01e-10 | 12.29 | 3.01e-10 | 21 | 9.51 | 3.02e-10 | ||
| 10 000 | 157.75 | 2.92e-10 | 68.43 | 2.92e-10 | 22 | 19.67 | 2.92e-10 | ||
| 50 000 | – | – | – | – | 28 | 107.74 | 2.88e-10 | ||
| 100 000 | – | – | – | – | 30 | 230.17 | 2.88e-10 | ||
| 1 000 | 5 | 100 | 0.26 | 2.52e-10 | 0.25 | 2.52e-10 | 15 | 2.21 | 2.52e-10 |
| 3 000 | 10.69 | 1.41e-10 | 5.90 | 1.41e-10 | 19 | 8.62 | 1.41e-10 | ||
| 5 000 | 20.78 | 1.35e-10 | 18.97 | 1.35e-10 | 21 | 17.73 | 1.35e-10 | ||
| 10 000 | 157.75 | 1.33e-10 | 93.58 | 1.33e-10 | 22 | 36.84 | 1.33e-10 | ||
| 50 000 | – | – | – | – | 28 | 197.72 | 1.31e-10 | ||
| 100 000 | – | – | – | – | 30 | 421.02 | 1.3e-10 | ||
| 1 000 | 10 | 50 | 0.26 | 6.57e-10 | 0.18 | 6.57e-10 | 15 | 1.61 | 6.57e-10 |
| 3 000 | 11.05 | 3.2e-10 | 3.67 | 3.2e-10 | 19 | 5.58 | 3.2e-10 | ||
| 5 000 | 20.83 | 2.73e-10 | 12.25 | 2.73e-10 | 21 | 10.94 | 2.73e-10 | ||
| 10 000 | 159.13 | 2.68e-10 | 69.34 | 2.68e-10 | 22 | 23.09 | 2.68e-10 | ||
| 50 000 | – | – | – | – | 27 | 132.64 | 2.56e-10 | ||
| 100 000 | – | – | – | – | 30 | 293.55 | 2.54e-10 | ||
| 1 000 | 10 | 100 | 0.24 | 5.27e-10 | 0.24 | 5.27e-10 | 15 | 2.88 | 5.25e-10 |
| 3 000 | 10.68 | 1.78e-10 | 5.94 | 1.78e-10 | 19 | 10.45 | 1.78e-10 | ||
| 5 000 | 20.84 | 1.73e-10 | 18.97 | 1.73e-10 | 21 | 20.50 | 1.73e-10 | ||
| 10 000 | 157.48 | 1.65e-10 | 93.63 | 1.65e-10 | 22 | 42.97 | 1.65e-10 | ||
| 50 000 | – | – | – | – | 27 | 248.91 | 1.58e-10 | ||
| 100 000 | – | – | – | – | 30 | 552.66 | 1.58e-10 | ||
In Fig. 4 (left) we plot the computational time of the three approaches for , , and different values of N. Even though these are the same results as those reported in Table 3, Fig. 4 (left) clearly exposes the trend of the full SVD scheme, that of the svds scheme, and the behaviour of the proposed approach. In Fig. 4 (right) we depict, on a logarithmic scale, the running time of the proposed procedure for and different values of N and p, clearly exhibiting a linear dependency on p and an almost linear dependency on N.
Fig. 4.
Example 2. Left: Computational time achieved by the different approaches for , , and N. Right: Computational time achieved by our novel procedure for , and different values of N and p
Example 3
In this example we compare the novel strategy presented in this paper to the fast Loewner SVD scheme illustrated in [18]. We consider the same data set as in Example 2, this time with and a random . Since the models resulting from the Loewner framework have , a realization of size is needed to approximate the system with [26, 30].
In [18], a Galerkin-ADI method is applied to the Sylvester equation (11) satisfied by the Loewner matrix. At the k-th iteration, a low-rank approximation , , , to the Loewner matrix is computed. If denotes the SVD of , then the matrices and can be used in place of and in (15) to compute the reduced model. The method is stopped whenever the norm of the residual matrix , i.e., the left-hand side of the Sylvester equation with the Loewner matrix replaced by its low-rank approximation , falls below a certain threshold . In the results that follow we employ , as done in [18]. At each iteration, the SVD of is truncated to keep only the significant singular values.
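The truncation step, i.e., the SVD of a matrix available only in low-rank factored form, can be carried out without ever forming the dense product: thin QR factorizations of the factors reduce the problem to a small SVD. A generic sketch of this standard trick (factor names are ours, not those of [18]):

```python
import numpy as np

def svd_of_low_rank(Z1, S, Z2):
    """SVD of the product Z1 @ S @ Z2.T (N x N, rank <= k) computed
    from the factors alone: two thin QRs plus a k x k SVD."""
    Q1, R1 = np.linalg.qr(Z1)
    Q2, R2 = np.linalg.qr(Z2)
    u, s, vh = np.linalg.svd(R1 @ S @ R2.T)
    return Q1 @ u, s, Q2 @ vh.T        # left vectors, values, right vectors

rng = np.random.default_rng(2)
N, k = 200, 6
Z1, Z2 = rng.standard_normal((N, k)), rng.standard_normal((N, k))
S = np.diag(rng.random(k) + 1.0)
U, s, V = svd_of_low_rank(Z1, S, Z2)
```

The cost is dominated by the two thin QRs, so it is linear in N rather than quadratic.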
We consider the Half&Half partition of the frequencies, as this is the most favorable scenario for the scheme from [18]: this partition often leads to rather fast convergence of the Galerkin-ADI method in terms of the number of iterations, so that a small approximation space is constructed. If different partitions were used, the Galerkin-ADI method would have to be equipped with a rather involved divide-and-conquer scheme; see [18]. On the other hand, as illustrated in Example 1, the Half&Half partition leads to higher values of the HSS-rank than the Even&Odd partition, with a consequent increase in the computational effort of our scheme. In addition, as in [18], our tests employ complex arithmetic and do not solve the corresponding Sylvester equation (22) with real-coefficient matrices.
In Table 4 we report the results for , , and different values of N. Notice that, even though the Galerkin-ADI approach efficiently computes the approximation spaces, the construction of the reduced model (15) still requires allocating both the full Loewner and shifted Loewner matrices. Therefore, severe memory constraints hold for the Galerkin-ADI scheme as well, and for , we are not able to allocate these matrices with complex entries on the machine used for running the tests.
Table 4.
Example 3. Number of iterations, computational time (in seconds) solely of the Galerkin-ADI iteration scheme together with the total time (including building the data, the full Loewner and shifted Loewner matrices and the projection step) as well as the -error achieved by the Galerkin-ADI approach. In comparison, we list the HSS-rank, the total time (in seconds) as well as the -error of the novel scheme presented in this paper for different values of N (number of samples), , and
| N | # of Iter. | Scheme Time (s) | Galerkin-ADI Total Time (s) | -error | HSS-rank | Proposed Total Time (s) | -error |
|---|---|---|---|---|---|---|---|
| 5 000 | 5 | 2.93 | 9.54 | 1.55e-2 | 42 | 100.21 | 2.06e-9 |
| 10 000 | 5 | 5.4 | 28.61 | 2.76e-2 | 46 | 174.04 | 1.81e-9 |
| 15 000 | 5 | 10.80 | 149.02 | 4.06e-2 | 49 | 280.11 | 1.31e-9 |
| 20 000 | 5 | 14.67 | 357.01 | 9.45e-2 | 50 | 340.42 | 1.17e-9 |
| 25 000 | 6 | 23.38 | 691.94 | 1.18e-2 | 52 | 426.34 | 9.55e-10 |
| 30 000 | 6 | 30.67 | 1229.48 | 1.17e-1 | 52 | 540.20 | 9.06e-10 |
Even though the Galerkin-ADI approach is faster for , the computed approximation spaces are quite poor: the resulting reduced models are always 7 orders of magnitude less accurate than the ones constructed by our approach. The paper [18] validates the Galerkin-ADI scheme on a system with randomly generated poles for various orders n and numbers of samples N, but does not discuss the accuracy of the resulting models. Moreover, in terms of CPU time, our results are comparable to the ones in [18] when considering solely the Galerkin-ADI iteration, disregarding the steps of building the full matrices and projecting them to obtain the reduced model.
The remarkable difference in the accuracy attained by the two approaches makes any computational comparison rather pointless. Nevertheless, we point out that the computational time of the Galerkin-ADI approach grows quadratically with N, due to the need to assemble and store the full Loewner and shifted Loewner matrices, while the almost linear dependency on N of the computational cost of our novel approach can be evidenced once again from the timings reported in Table 4.
Several ideas could be implemented to improve the accuracy of the models obtained with the Galerkin-ADI approach. To ensure the fairest possible comparison with our novel approach, each of these ideas is tested separately below, exploring all the possibilities to enhance the Galerkin-ADI approach from [18].
First, the tolerance for solving the Sylvester equation via Galerkin-ADI can be set to a value comparable to the noise level for an SNR of 120, namely . Results are detailed in Table 5 only for the case , , and , as the trend is obvious from this one example. While the accuracy of the model slightly improves with respect to the results obtained for , the number of iterations increases considerably, leading to low-rank factors of much larger dimensions, for which the SVD becomes costly. Hence, the CPU cost of the scheme explodes, making it no longer viable. In any case, even for a tolerance close to the noise level, the accuracy of the model remains several orders of magnitude worse than with our proposed technique ( versus ).
Table 5.
Example 3. Number of iterations, computational time (in seconds) solely of the Galerkin-ADI iteration scheme together with the total time (including building the data, the full Loewner and shifted Loewner matrices and the projection step) as well as the -error achieved by the Galerkin-ADI approach for , , and
| | # of Iter. | Scheme Time (s) | Total Time (s) | -error |
|---|---|---|---|---|
| | 5 | 2.93 | 9.54 | 1.55e-2 |
| | 51 | 1648.02 | 1655.01 | 2.20e-3 |
Second, it is advisable to compute the projection subspaces from a linear combination of the Loewner and shifted Loewner matrices, namely rather than only , as the Loewner matrix encodes only the strictly rational part, while adding the shifted Loewner matrix provides all the information on the system, including its polynomial part (the -term). We apply the low-rank Galerkin-ADI method to the Sylvester equation fulfilled by , thus computing a matrix such that . Results are detailed in Table 6 for the case , , , and . For all instances considered, the results are comparable in terms of CPU time to those obtained when considering solely the Sylvester equation satisfied by the Loewner matrix in the Galerkin-ADI iteration (listed in the first line of Table 6 for reference), while in terms of accuracy they are slightly worse. For this example, the sole benefit of using a linear combination might be its system identification properties: in principle, a sharp drop in the singular values reveals the degree of the underlying system.
Table 6.
Example 3. Number of iterations, computational time (in seconds) solely of the Galerkin-ADI iteration scheme together with the total time (including building the data, the full Loewner and shifted Loewner matrices and the projection step) as well as the -error achieved by the Galerkin-ADI approach on the Sylvester equations satisfied by , and , for , , , and
| | # of Iter. | Scheme Time (s) | Total Time (s) | -error |
|---|---|---|---|---|
| | 5 | 2.93 | 9.54 | 1.55e-2 |
| | 6 | 3.96 | 10.51 | 5.22e-2 |
| , | 4 | 2.97 | 9.51 | 9.23e-2 |
| , | 4 | 2.89 | 9.34 | 9.23e-2 |
| , | 4 | 2.95 | 9.55 | 9.23e-2 |
The third avenue worth exploring is employing real arithmetic and the corresponding Sylvester equations (22) and (23). Table 7 shows the results obtained using real arithmetic, both for the Galerkin-ADI scheme and for our proposed method. For reference, the first line in Table 7 lists the results previously obtained in complex arithmetic. For the method in [18], the cost of the scheme mostly increases, due to the more complicated Sylvester equations (22) and (23). The CPU cost of building the data matrices and the full Loewner and shifted Loewner matrices also increases, yielding a total cost far higher than in complex arithmetic. In some instances, the accuracy improves slightly. On the other hand, real arithmetic makes the HSS-rank of the Cauchy matrix approximation much smaller, with a remarkable impact on the CPU time and almost no effect on the model accuracy when using our novel approach.
Table 7.
Example 3. Number of iterations, computational time (in seconds) solely of the Galerkin-ADI iteration scheme together with the total time (including building the data, the full Loewner and shifted Loewner matrices and the projection step) as well as the -error achieved by the Galerkin-ADI approach. In comparison, we list the HSS-rank, the total time (in seconds) as well as the -error of the novel scheme presented in this paper for different values of N (number of samples), , and when employing real arithmetic
| | # of Iter. | Scheme Time (s) | Galerkin-ADI Total Time (s) | -error | HSS-rank | Proposed Total Time (s) | -error |
|---|---|---|---|---|---|---|---|
| complex | 5 | 2.93 | 9.54 | 1.55e-2 | 42 | 100.21 | 2.06e-9 |
| | 3 | 1.76 | 50.43 | 1.03e-2 | 24 | 81.56 | 2.05e-9 |
| | 6 | 3.61 | 52.42 | 2.66e-3 | 24 | 80.97 | 2.05e-9 |
| , | 7 | 18.57 | 65.84 | 2.53e-3 | 24 | 80.84 | 2.05e-9 |
| , | 5 | 15.13 | 63.63 | 1.87e-2 | 24 | 82.11 | 2.05e-9 |
| , | 6 | 16.47 | 66.72 | 9.01e-2 | 24 | 82.95 | 2.05e-9 |
We conclude this example by mentioning that a hybrid approach may be fruitful. In particular, our novel approach can be employed to avoid storing the large and dense Loewner and shifted Loewner matrices; the Galerkin-ADI scheme can then be used, instead of svds, to compute the dominant singular vectors, thereby retaining the ability to identify the order of the underlying system. The accuracy, however, will not be comparable to that of our proposed approach. We implemented this idea and list the CPU times of the various steps in Table 8, together with the resulting accuracy, for Galerkin-ADI applied to the Sylvester equation (22) in real arithmetic with for , , and . Plots of the responses of our proposed approach, the Galerkin-ADI scheme as proposed in [18], and the hybrid approach are shown in Fig. 5. Even though the general shape of the response is well captured, some resonances are not modeled accurately, as expected from the much higher model errors reported earlier. This is seen more clearly in the error plots in Fig. 6.
Table 8.
Example 3. Computational time (in seconds) of the three individual steps in the hybrid approach: setting up of the data matrices, the Galerkin-ADI iteration scheme and projection to obtain the reduced model, together with the total time as well as the -error for , , and in real arithmetic
| Data matrices Time(s) | Galerkin-ADI Scheme Time(s) | Projection Time(s) | Total Time(s) | -error |
|---|---|---|---|---|
| 0.8 | 1.76 | 4.65 | 7.21 | 2.1e-2 |
Fig. 5.
Example 3. Frequency response of the model (in black) and the measurements (in red) for , , and using our proposed approach, Galerkin-ADI as in [18] and the hybrid approach, employing real arithmetic
Fig. 6.
Example 3. Error plots for , , and using our proposed approach, Galerkin-ADI as in [18] and the hybrid approach, employing real arithmetic
Conclusion
By exploiting the Cauchy-like structure of the Loewner and shifted Loewner matrices, a novel strategy for reducing the computational costs and the memory requirements of the Loewner framework has been proposed. In particular, the use of the HSS format leads to tremendous savings in the storage demand and computational effort of the overall scheme. Indeed, except for the construction step, whose cost is polylogarithmic in N, both the memory requirements and the computational cost of iteratively performing the SVD now depend linearly on the cardinality of the considered data set.
The success of our procedure strongly relies on the capability of representing the Cauchy matrix as an HSS matrix with low-rank off-diagonal blocks. Even though we restricted ourselves to showing how different, but common, partitions of the frequencies affect the HSS-rank, a thorough analysis of this connection may be beneficial. Such an interesting but challenging study will need to take into account several diverse aspects, like the compressibility of the Cauchy matrix in the HSS format, the conditioning of the involved matrices, and the approximation properties of the underlying partition of the frequencies.
We have always computed the HSS representation at high accuracy. Results very similar to those reported in the previous sections are also obtained with as the low-rank truncation threshold. However, we believe that the use of less accurate, and thus lower-rank, HSS representations, and the study of their effect on the accuracy of the overall scheme, is another interesting research direction worth pursuing, depending on the application at hand.
The strategy presented in this paper can be applied to more sophisticated problems as long as the Loewner and shifted Loewner matrices maintain a Cauchy-like structure. In particular, our approach can be employed with minor modifications in model order reduction of parametrized [21], linear switched [16], and bilinear systems [1].
Acknowledgements
We are indebted to Leonardo Robol for his help with [28] and for fruitful discussions about the topic of this paper. His assistance is greatly appreciated. We also thank Peter Benner and Jens Saak for insightful comments on earlier versions of the manuscript. The first author is a member of the Italian INdAM research group GNCS.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Declarations
Conflicts of interest
The authors have no conflicts of interest to declare that are relevant to the content of this article.
Ethical approval
The research presented in this paper is based upon work supported by the National Science Foundation under Grant No. DMS-1439786 while both the authors were in residence at the Institute for Computational and Experimental Research in Mathematics (ICERM) in Providence, RI, during the Model and Dimension Reduction in Uncertain and Dynamic Systems program. Even though the second half of the program had to be performed virtually due to the restrictions caused by the COVID-19 pandemic, we are extremely grateful to the organizers of the program and the whole staff of ICERM for doing whatever possible to maintain an exciting, fruitful, and high-quality working environment.
The datasets and algorithms generated during and/or analysed during the current study are available from the corresponding author on reasonable request. Moreover, the approach presented in this paper will be included in the hm-toolbox in the near future.
Footnotes
The number of nonzero entries in the data matrices and amounts to and to for and .
In [5, Section 4.3], some results on the numerical rank of are presented provided .
Roughly speaking, such a threshold is related to the computation of the low-rank approximations to the off-diagonal blocks of (see, e.g., [46, Corollary 4.3], [23, Theorem 4.7]).
Notice that all the results stated in the previous sections still hold also for the Odd&Even (real) partition with straightforward modifications due to the different expression of the right-hand side in (22)–(23).
Following Definition 1, this function returns .
We employ the Matlab functions svd and svds, respectively.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Contributor Information
Davide Palitta, Email: palitta@mpi-magdeburg.mpg.de.
Sanda Lefteriu, Email: sanda.lefteriu@imt-lille-douai.fr.
References
- 1.Antoulas AC, Gosea IV, Ioniţă AC. Model reduction of bilinear systems in the Loewner framework. SIAM J. Sci. Comput. 2016;38:B889–B916. doi: 10.1137/15M1041432. [DOI] [Google Scholar]
- 2.Antoulas, A.C., Lefteriu, S., Ioniţă, A.C.: A tutorial introduction to the Loewner framework for model reduction, ch. 8, pp. 335–376
- 3.Baglama J, Reichel L. Augmented implicitly restarted Lanczos bidiagonalization methods. SIAM J. Sci. Comput. 2005;27:19–42. doi: 10.1137/04060593X. [DOI] [Google Scholar]
- 4.Bebendorf M. Approximation of boundary element matrices. Numer. Math. 2000;86:565–589. doi: 10.1007/PL00005410. [DOI] [Google Scholar]
- 5.Beckermann B, Townsend A. Bounds on the singular values of matrices with displacement structure. SIAM Rev. 2019;61:319–344. doi: 10.1137/19M1244433. [DOI] [Google Scholar]
- 6.Bouras A, Frayssé V. Inexact matrix-vector products in Krylov methods for solving linear systems: a relaxation strategy. SIAM J. Matrix Anal. Appl. 2005;26:660–678. doi: 10.1137/S0895479801384743. [DOI] [Google Scholar]
- 7.Carrier J, Greengard L, Rokhlin V. A fast adaptive multipole algorithm for particle simulations. SIAM J. Sci. Stat. Comput. 1988;9:669–686. doi: 10.1137/0909044. [DOI] [Google Scholar]
- 8.Chandrasekaran, S., Dewilde, P., Gu, M., Lyons, W., Pals, T.: A fast solver for HSS representations via sparse matrices, SIAM J. Matrix Anal. Appl. 29, 67–81 (2006/07)
- 9.Chandrasekaran S, Gu M, Sun X, Xia J, Zhu J. A superfast algorithm for Toeplitz systems of linear equations. SIAM J. Matrix Anal. Appl. 2007;29:1247–1266. doi: 10.1137/040617200. [DOI] [Google Scholar]
- 10.Derakhtenjani AS, Candanedo JA, Chen Y, Dehkordi VR, Athienitis AK. Modeling approaches for the characterization of building thermal dynamics and model-based control: a case study. Science and Technology for the Built Environment. 2015;21:824–836. [Google Scholar]
- 11.Drineas P, Mahoney MW, Muthukrishnan S. Relative-error matrix decompositions. SIAM J. Matrix Anal. Appl. 2008;30:844–881. doi: 10.1137/07070471X. [DOI] [Google Scholar]
- 12.Embree, M., Ioniţă, A.C.: Pseudospectra of Loewner matrix pencils, To appear, Realization and Model Reduction of Dynamical Systems: A Festschrift in Honor of the 70th Birthday of Thanos Antoulas (2019)
- 13.Freitag, M.A., Spence, A.: Convergence theory for inexact inverse iteration applied to the generalised nonsymmetric eigenproblem, Electron. Trans. Numer. Anal. 28, 40–64 (2007/08)
- 14.Gaaf SW, Simoncini V. Approximating the leading singular triplets of a large matrix function. Appl. Numer. Math. 2017;113:26–43. doi: 10.1016/j.apnum.2016.10.015. [DOI] [Google Scholar]
- 15.Gohberg I, Olshevsky V. Fast algorithms with preprocessing for matrix-vector multiplication problems. J. Complexity. 1994;10:411–427. doi: 10.1006/jcom.1994.1021. [DOI] [Google Scholar]
- 16.Gosea IV, Petreczky M, Antoulas AC. Data-driven model order reduction of linear switched systems in the Loewner framework. SIAM J. Sci. Comput. 2018;40:B572–B610. doi: 10.1137/17M1120233. [DOI] [Google Scholar]
- 17.Greengard L, Rokhlin V. A fast algorithm for particle simulations. J. Comput. Phys. 1987;73:325–348. doi: 10.1016/0021-9991(87)90140-9. [DOI] [Google Scholar]
- 18.Hochman, A.: Fast singular-value decomposition of Loewner matrices for state-space macromodeling, in 2015 IEEE 24th Electrical Performance of Electronic Packaging and Systems (EPEPS), pp. 177–180 (2015)
- 19.Hochstenbach ME. A Jacobi–Davidson type SVD method. SIAM J. Sci. Comput. 2001;23:606–628. doi: 10.1137/S1064827500372973. [DOI] [Google Scholar]
- 20.Horn, R., Johnson, C.: Topics in Matrix Analysis. Cambridge Univ. Press (1991)
- 21.Ioniţă, A.C.: Lagrange rational interpolation and its applications to approximation of large-scale dynamical systems, PhD thesis, Rice University (2013)
- 22.Karachalios D, Gosea IV, Antoulas AC. Data-driven approximation methods applied to non-rational functions. Proc. Appl. Math. Mech. 2018;18:1. doi: 10.1002/pamm.201800368. [DOI] [Google Scholar]
- 23.Kressner D, Massei S, Robol L. Low-rank updates and a divide-and-conquer method for linear matrix equations. SIAM J. Sci. Comput. 2019;41:A848–A876. doi: 10.1137/17M1161038. [DOI] [Google Scholar]
- 24.Kürschner, P., Freitag, M.: Inexact methods for the low rank solution to large scale Lyapunov equations, BIT Numerical Mathematics (2020)
- 25.Larsen R. Lanczos bidiagonalization with partial reorthogonalization. DAIMI Rep. Ser. 1998;27:1. [Google Scholar]
- 26.Lefteriu S, Antoulas AC. A new approach to modeling multiport systems from frequency-domain data. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2010;29:14–27. doi: 10.1109/TCAD.2009.2034500. [DOI] [Google Scholar]
- 27.Massei S, Palitta D, Robol L. Solving rank-structured Sylvester and Lyapunov equations. SIAM J. Matrix Anal. Appl. 2018;39:1564–1590. doi: 10.1137/17M1157155. [DOI] [Google Scholar]
- 28.Massei S, Robol L, Kressner D. hm-toolbox: MATLAB software for HODLR and HSS matrices. SIAM J. Sci. Comput. 2020;42:C43–C68. doi: 10.1137/19M1288048. [DOI] [Google Scholar]
- 29.MATLAB, version 9.9.0.1467703 (R2020b), The MathWorks Inc., Natick, Massachusetts (2020)
- 30.Mayo AJ, Antoulas AC. A framework for the solution of the generalized realization problem. Linear Algebra Appl. 2007;405:634–662. doi: 10.1016/j.laa.2007.03.008. [DOI] [Google Scholar]
- 31.Nakatsukasa, Y.: Fast and stable randomized low-rank matrix approximation (2020). ArXiv preprint arXiv:2009.11392
- 32.Palitta D, Kürschner P. On the convergence of low-rank Krylov methods. Numer. Algorithm. 2021;88:1383–1417. doi: 10.1007/s11075-021-01080-2. [DOI] [Google Scholar]
- 33.Pan, V.Y.: Fast approximate computations with Cauchy matrices, polynomials and rational functions, in Computer Science - Theory and Applications, Hirsch, E.A., Kuznetsov, S.O., Pin, J.-É., Vereshchagin, N.K. eds., Cham, Springer International Publishing, pp. 287–299 (2014)
- 34.Pan VY. Transformations of matrix structures work again. Linear Algebra Appl. 2015;465:107–138. doi: 10.1016/j.laa.2014.09.004. [DOI] [Google Scholar]
- 35.Peeters B, Van der Auweraer H, Guillaume P, Leuridan J. The PolyMAX frequency-domain method: a new standard for modal parameter estimation? Shock. Vib. 2004;11:395–409. doi: 10.1155/2004/523692. [DOI] [Google Scholar]
- 36.Poussot-Vassal, C., Quero, C., Vuillemin, P.: Data-driven approximation of a high fidelity gust-oriented flexible aircraft dynamical model. IFAC-PapersOnLine, 51, pp. 559–564. 9th Vienna International Conference on Mathematical Modelling (2018)
- 37. Rouet F-H, Li XS, Ghysels P, Napov A. A distributed-memory package for dense hierarchically semi-separable matrix computations using randomization. ACM Trans. Math. Softw. 2016;42:1. doi: 10.1145/2930660
- 38. Sahouli, M., Dounavis, A.: Iterative Loewner matrix macromodeling using CUR decomposition for noisy frequency responses. In: 2019 IEEE 28th Conference on Electrical Performance of Electronic Packaging and Systems (EPEPS), pp. 1–3 (2019)
- 39. Simoncini V, Eldén L. Inexact Rayleigh quotient-type methods for eigenvalue computations. BIT. 2002;42:159–182. doi: 10.1023/A:1021930421106
- 40. Simoncini V, Szyld DB. Theory of inexact Krylov subspace methods and applications to scientific computing. SIAM J. Sci. Comput. 2003;25:454–477. doi: 10.1137/S1064827502406415
- 41. Stoll M. A Krylov–Schur approach to the truncated SVD. Linear Algebra Appl. 2012;436:2795–2806. doi: 10.1016/j.laa.2011.07.022
- 42. The MORwiki Community: MORwiki – Model Order Reduction Wiki. http://modelreduction.org
- 43. van den Eshof J, Sleijpen GLG. Inexact Krylov subspace methods for linear systems. SIAM J. Matrix Anal. Appl. 2004;26:125–153. doi: 10.1137/S0895479802403459
- 44. Vandebril R, Van Barel M, Golub G, Mastronardi N. A bibliography on semiseparable matrices. Calcolo. 2005;42:249–270. doi: 10.1007/s10092-005-0107-z
- 45. Vogel J, Xia J, Cauley S, Balakrishnan V. Superfast divide-and-conquer method and perturbation analysis for structured eigenvalue solutions. SIAM J. Sci. Comput. 2016;38:A1358–A1382. doi: 10.1137/15M1018812
- 46. Xi Y, Xia J, Cauley S, Balakrishnan V. Superfast and stable structured solvers for Toeplitz least squares via randomized sampling. SIAM J. Matrix Anal. Appl. 2014;35:44–72. doi: 10.1137/120895755
- 47. Xia J, Chandrasekaran S, Gu M, Li XS. Fast algorithms for hierarchically semiseparable matrices. Numer. Linear Algebra Appl. 2010;17:953–976. doi: 10.1002/nla.691
- 48. Xia J, Chandrasekaran S, Gu M, Li XS. Superfast multifrontal method for large structured linear systems of equations. SIAM J. Matrix Anal. Appl. 2010;31:1382–1411. doi: 10.1137/09074543X