. 2020 Apr;368:112503. doi: 10.1016/j.cam.2019.112503

Support and approximation properties of Hermite splines

Julien Fageot a, Shayan Aziznejad a, Michael Unser a, Virginie Uhlmann a,b,
PMCID: PMC6919321  PMID: 32255895

Abstract

In this paper, we formally investigate two mathematical aspects of Hermite splines that are relevant to practical applications. We first demonstrate that Hermite splines are maximally localized, in the sense that the size of their support is minimal among pairs of functions with identical reproduction properties. Then, we precisely quantify the approximation power of Hermite splines for the reconstruction of functions and their derivatives. It is known that the Hermite and B-spline approximation schemes have the same approximation order. More precisely, their approximation error vanishes as $O(T^4)$ when the step size $T$ goes to zero. In this work, we show that they actually have the same asymptotic approximation error constants, too. Therefore, they have identical asymptotic approximation properties. Hermite splines combine optimal localization and excellent approximation power, while retaining interpolation properties and closed-form expressions, in contrast to existing similar functions. These findings shed new light on the convenience of Hermite splines in the context of computer graphics and geometrical design.

Keywords: Hermite interpolation, Minimum-support property, Approximation error

1. Introduction

In his seminal 1973 monograph on cardinal interpolation and spline functions [1], I.J. Schoenberg explains and characterizes B-spline interpolation, which still inspires researchers and yields exciting applications nowadays. In the same work, he also sets the basis of Hermite interpolation [2], [3]. In the classical B-spline framework, a continuous-domain function is constructed from a discrete sequence of samples [4], [5], [6]. By contrast, the Hermite interpolation problem involves two sequences of discrete samples. They impose constraints not only on the resulting interpolated function but also on its derivatives up to a given order.

Curves in the plane or tensor-product surfaces in space can be constructed from one-dimensional interpolation schemes by interpolating along each spatial coordinate. The practical value of Hermite splines in this context is to offer tangential control on the interpolated curve. This can be easily understood through their link with Bézier curves [7]. The latter lie at the heart of vector graphics and are popular tools for computer-aided geometrical design and modeling [8], [9], [10]. Because of their small support, Hermite splines are also an interesting option for the design of multiwavelets, which are wavelets with multiple generators [11], [12]. In practice, Hermite splines thus provide a suitable solution to a number of problems, whether with respect to simplicity of construction, efficiency, or convenience. This hands-on intuition can be translated into formal properties of Hermite splines and mathematically characterized. We give as examples the joint interpolation properties of Hermite splines (see Section 1.2), which ensure that, at integer values, the interpolated function exactly matches the sequences of samples and derivative samples that were used to build it; their smoothness properties [13], which guarantee low curvature of the interpolated curve under some mild conditions; and their statistical optimality (in terms of MMSE) for the reconstruction of second-order Brownian motion from direct and first-derivative samples [14]. In that spirit, we investigate in this work the theoretical counterpart of two additional features that are observed to grant Hermite splines their practical usefulness.

1.1. Contributions

Our contributions state the minimal-support property of Hermite splines and investigate their approximation power. In the following, we describe the practical observations that motivate them, the results themselves, and related works.

Minimal-support property.

The short support of Hermite splines is an important feature that makes them attractive in practice. The size of the support relates to the local extent of modifications on the continuously defined spline curve. In Section 2, we formally demonstrate that Hermite splines have the minimal support among all basis functions that generate cubic and quadratic splines. When dealing with B-splines, there is a tradeoff between the ability to reproduce smooth functions, which increases with the B-spline polynomial degree, and the possibility to allow for more or less sharp transitions, which decreases with the degree. As the question of function reproduction is a central concept when studying both minimal support and approximation errors, we provide a formal definition of it in Section 2.1. On the one hand, cubic splines efficiently reproduce smooth functions and capture $C^2$ transitions, but lack the flexibility to capture $C^1$ transitions. On the other hand, quadratic splines have a lesser approximation power, but are preferred when dealing with less smooth ($C^1$) transitions. Hermite splines combine these two strengths in one scheme and are, in terms of support size, better than two-function schemes, including the one composed of the classical cubic and quadratic B-splines. In addition, we also show that one necessarily requires two generators to achieve this optimality. This result relates to similar ones involving a single generator [15], [16].

Rate of decay of the approximation error.

Hermite splines can provide faithful approximations reasonably fast as the number of parameters increases. This feature relates to the rate of decay of the error of approximation. Numerous works approach these questions by restricting themselves to a specific interpolation framework, such as [17], [18] for the specific case of Hermite approximation. Relying on $L_\infty$ norms, they provide precise estimations of the optimal bound on the approximation error, but do not allow for comparisons with other schemes. In contrast, approaches have been developed for single [19] and multiple generators [20] to provide a unifying comparison setting. They offer generalized measures that can be applied to a wide variety of basis functions to estimate approximation constants. Hermite interpolation, however, violates some of the core assumptions of those analyses. Here, we take strong inspiration from those previous works and provide a novel study of Hermite approximation. We deploy an analogous analysis strategy while relaxing the boundedness assumptions and considering additional spline approximation schemes. In Section 3, we precisely quantify the rate of decay of the error of approximation of the Hermite scheme and quantitatively estimate the corresponding approximation constants. Hermite-spline interpolation offers excellent approximation properties when it comes to the reconstruction of a function and its first derivative. It is actually close to achieving the minimal error obtained when the approximation procedure is modified to correspond to the orthogonal projector. The investigation of the error of approximation on the derivative relates to [21], [22], although it follows a completely different line: while [21], [22] focused on the reconstruction of the derivative from signal samples, the Hermite scheme grants direct access to the function and derivative samples, allowing one to reconstruct the derivative in a multifunction setting.

Pioneering works cover the study of spline schemes to an impressive degree of generality [23], [24], [25], [26]. They include results on minimum support and approximation errors in a large variety of cases. Regarding minimum support, these previous results are however restricted to single-generator schemes. To the best of our knowledge, there are no previous works that consider multiple generators and cover the Hermite case. These pioneering works also do not focus on providing a way to quantify the approximation-error constants, thus preventing one from comparing schemes of the same order.

A better understanding of the approximation and support properties of Hermite splines has several useful consequences. For instance, wavelet schemes are commonly built from Hermite splines [27], [28], so that the precise characterization of the approximation power of wavelet bases is important in practical applications such as image compression [29]. Hermite splines are also used to construct parametric deformable contours in the context of image segmentation [13], [30], where their small support allows for the representation of open curves with natural conditions at their extremities [31].

1.2. Hermite splines

Schoenberg defines the cardinal cubic-Hermite-interpolation problem as follows [2], [3]: knowing the discrete sequences of numbers $c[k]$ and $d[k]$, $k\in\mathbb{Z}$, we look for a continuously defined function $f_{\mathrm{Her}}(t)$, $t\in\mathbb{R}$, that satisfies $f_{\mathrm{Her}}(k)=c[k]$ and $f_{\mathrm{Her}}'(k)=d[k]$ for all $k\in\mathbb{Z}$, such that $f_{\mathrm{Her}}$ is piecewise polynomial of degree at most 3 and once differentiable with continuous derivative at the integers. The existence and uniqueness of the solution is guaranteed [2, Theorem 1] for any sequences $c=(c[k])$ and $d=(d[k])$ bounded by a polynomial, but we shall restrict ourselves to sequences in $\ell_2(\mathbb{Z})$ thereafter. In [3], it is shown that the Hermite spline $f_{\mathrm{Her}}$ associated to the sequences $c$ and $d$ can be expressed as

$f_{\mathrm{Her}}(t)=\sum_{k\in\mathbb{Z}}\big(c[k]\,\phi_1(t-k)+d[k]\,\phi_2(t-k)\big),$ (1)

where the functions $\phi_1$ and $\phi_2$ are given by

$\phi_1(t)=(2|t|+1)(|t|-1)^2\,\mathbb{1}_{0\le|t|\le1},$ (2)
$\phi_2(t)=t\,(|t|-1)^2\,\mathbb{1}_{0\le|t|\le1}.$ (3)

In addition to their fairly simple analytical expression, the cubic Hermite splines have other important properties. First, they are of finite support in $[-1,1]$. Moreover, the generating functions $\phi_1$, $\phi_2$ and their derivatives $\phi_1'$, $\phi_2'$ satisfy the joint interpolation conditions

$\phi_1(k)=\delta[k],\quad\phi_2(k)=0,\quad\phi_1'(k)=0,\quad\phi_2'(k)=\delta[k],$ (4)

for all $k\in\mathbb{Z}$, where $\delta[k]$ is the discrete unit impulse. The functions and their first derivatives are depicted in Fig. 1, where the interpolation properties can easily be observed. The functions $\phi_1$ and $\phi_2$ are deeply intertwined, as $c[k]=f_{\mathrm{Her}}(k)$ and $d[k]=f_{\mathrm{Her}}'(k)$ in (1). The cubic Hermite splines are differentiable with continuous derivatives at the integer knots $t=k$. As a result, functions generated by cubic Hermite splines are $C^1$-continuous piecewise-cubic polynomials with knots at integer locations.
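As a quick sanity check, the closed forms (2)–(3) and the joint interpolation conditions (4) can be verified numerically. The following sketch (plain NumPy; the function names are ours, not from the paper) also confirms that the sum (1) interpolates the sequences $c$ and $d$ at the integers:

```python
import numpy as np

def phi1(t):
    """Cubic Hermite spline phi_1(t) = (2|t| + 1)(|t| - 1)^2 on [-1, 1], zero elsewhere; cf. (2)."""
    t = np.asarray(t, dtype=float)
    a = np.abs(t)
    return np.where(a <= 1, (2 * a + 1) * (a - 1) ** 2, 0.0)

def phi2(t):
    """Cubic Hermite spline phi_2(t) = t(|t| - 1)^2 on [-1, 1], zero elsewhere; cf. (3)."""
    t = np.asarray(t, dtype=float)
    a = np.abs(t)
    return np.where(a <= 1, t * (a - 1) ** 2, 0.0)

def hermite_interp(t, c, d):
    """f_Her(t) = sum_k (c[k] phi1(t - k) + d[k] phi2(t - k)), as in (1), for finite sequences."""
    t = np.asarray(t, dtype=float)
    return sum(c[k] * phi1(t - k) + d[k] * phi2(t - k) for k in range(len(c)))

# Joint interpolation conditions (4); derivatives checked by centered finite differences.
h = 1e-6
assert abs(phi1(0) - 1) < 1e-12 and abs(phi2(0)) < 1e-12
assert abs(phi1(1)) < 1e-12 and abs(phi2(1)) < 1e-12
assert abs((phi1(h) - phi1(-h)) / (2 * h)) < 1e-5        # phi1'(0) = 0
assert abs((phi2(h) - phi2(-h)) / (2 * h) - 1) < 1e-5    # phi2'(0) = 1

# f_Her matches the samples c[k] and the derivative samples d[k] at the integers.
rng = np.random.default_rng(0)
c, d = rng.standard_normal(5), rng.standard_normal(5)
for k in range(5):
    assert abs(hermite_interp(k, c, d) - c[k]) < 1e-12
for k in range(1, 4):
    fd = (hermite_interp(k + h, c, d) - hermite_interp(k - h, c, d)) / (2 * h)
    assert abs(fd - d[k]) < 1e-4
```

The finite differences are only a convenience here; the derivatives of (2)–(3) are also available in closed form.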

Fig. 1. Cubic Hermite splines $\phi_1$ and $\phi_2$. The two functions and their derivatives vanish at the integers, with the exception of $\phi_1(0)=1$ and $\phi_2'(0)=1$ (interpolation properties). They are supported in $[-1,1]$.

2. Minimum-support property of Hermite splines

B-splines are known to be maximally localized, meaning that they are compactly supported with minimal support among functions with the same approximation properties [32], [33]. Hermite splines possess a similar fundamental minimal-support property: they are of minimal support among the pairs of functions that generate both quadratic and cubic splines (Theorem 3). In addition, there exists no single generating function that can take this role (Proposition 5). This demonstrates that Hermite splines are maximally localized for the purpose of representing piecewise-cubic functions that are continuously differentiable, as exploited, for instance, in image processing for the design of deformable parametric contours [13].

2.1. Integer-shift-invariant spaces and support properties

Consider a set of $N\ge1$ functions $\boldsymbol{\varphi}=(\varphi_1,\dots,\varphi_N)\in(L_2(\mathbb{R}))^N$ and define the space of functions

$V(\boldsymbol{\varphi})=\Big\{\sum_{i=1}^{N}\sum_{k\in\mathbb{Z}}c_i[k]\,\varphi_i(\cdot-k)\ :\ \boldsymbol{c}=(c_1,\dots,c_N)\in(\ell_2(\mathbb{Z}))^N\Big\}.$ (5)

We say that the $\varphi_i$ are basis functions of the set $V(\boldsymbol{\varphi})$. The space $V(\boldsymbol{\varphi})$ is integer-shift-invariant in the sense that $f(\cdot-k)$ is in $V(\boldsymbol{\varphi})$ for any $f\in V(\boldsymbol{\varphi})$ and $k\in\mathbb{Z}$ [23], [24], [25], [34].

We consider integer-shift-invariant spaces generated by a single function ($N=1$) or two functions ($N=2$), which correspond to B-splines with simple knots and to Hermite splines, respectively. We only consider spaces (5) for which the family $\{\varphi_i(\cdot-k)\}_{1\le i\le N,\,k\in\mathbb{Z}}$ is a Riesz basis and satisfies

$A\sum_{i=1}^{N}\sum_{k\in\mathbb{Z}}|c_i[k]|^2\ \le\ \Big\|\sum_{i=1}^{N}\sum_{k\in\mathbb{Z}}c_i[k]\,\varphi_i(\cdot-k)\Big\|_{L_2}^2\ \le\ B\sum_{i=1}^{N}\sum_{k\in\mathbb{Z}}|c_i[k]|^2$ (6)

for any $c_i\in\ell_2(\mathbb{Z})$ with $i=1,\dots,N$ and for some positive and finite constants $0<A\le B<\infty$. This ensures that any $f\in V(\boldsymbol{\varphi})$ has a unique and stable representation in the basis $\{\varphi_i(\cdot-k)\}_{1\le i\le N,\,k\in\mathbb{Z}}$.

Any function $f\in V(\boldsymbol{\varphi})$ can be reproduced by the family $\{\varphi_i(\cdot-k)\}_{1\le i\le N,\,k\in\mathbb{Z}}$ in the sense that there exist sequences $c_i\in\ell_2(\mathbb{Z})$ such that $f=\sum_{i,k}c_i[k]\,\varphi_i(\cdot-k)$. However, such functions are necessarily in $L_2(\mathbb{R})$ according to (6). It is interesting to investigate the reproduction properties beyond $L_2(\mathbb{R})$; in particular, the ability to reproduce polynomials. For this purpose, one needs to extend the notion. We say that a function $f$ can be reproduced by $(\varphi_i(\cdot-k))_{1\le i\le N,\,k\in\mathbb{Z}}$ if there exist sequences $c_i$ such that $f(t)=\sum_{i,k}c_i[k]\,\varphi_i(t-k)$ holds for all $t\in\mathbb{R}$. What is hidden here is that the sum should be well-defined at any time $t$. When the basis functions are compactly supported, the sum is actually finite for every fixed $t\in\mathbb{R}$ and any sequence $c_i$. In this section, we restrict ourselves to compactly supported basis functions since our goal is to characterize basis functions of minimal support. For non-compactly supported functions, the same notion is valid up to the condition that the sum is absolutely convergent for any $t\in\mathbb{R}$. The reproduction of polynomials will play a crucial role all along the paper. It is formalized in Definition 1.

Definition 1

A family of $N\ge1$ basis functions $\boldsymbol{\varphi}=(\varphi_1,\dots,\varphi_N)$ is of order $L$ if they satisfy $\int_{\mathbb{R}}(1+|t|)^L|\varphi_n(t)|\,\mathrm{d}t<\infty$ for $1\le n\le N$ and if they can reproduce polynomial functions up to degree $(L-1)$, meaning that, for $\ell=0,\dots,(L-1)$, there exist sequences $c_{i,\ell}$ such that

$\forall t\in\mathbb{R},\quad t^\ell=\sum_{i=1}^{N}\sum_{k\in\mathbb{Z}}c_{i,\ell}[k]\,\varphi_i(t-Nk).$ (7)

It is worth noting that the sequence of coefficients is not required to take the values of the polynomial at the knots but, instead, that the interpolated function and the polynomial must coincide. Condition (7) is known to be equivalent to the so-called Strang–Fix conditions and is very classical in approximation theory [35]. The hypothesis that $\int_{\mathbb{R}}(1+|t|)^L|\varphi_n(t)|\,\mathrm{d}t<\infty$ ensures that the Fourier transform of the $\varphi_n$ admits continuous and bounded derivatives up to order $L$, which is actually crucial in the Strang–Fix formulation. This technical condition appears and is discussed in [20, Section II-D]. Note that it is clearly satisfied as soon as the functions are compactly supported and locally integrable, as will be the case in the rest of this section. In Section 3, we shall consider basis functions that are possibly non-compactly supported.

Support of B-splines.

Under the Riesz condition, a natural question is whether the basis functions $\boldsymbol{\varphi}$ are able to exactly reproduce entire classes of functions. The possibility to perfectly reproduce polynomials is of crucial importance. The constant function 1 can be reproduced if and only if the basis functions satisfy the partition of unity, which is the minimal requirement for a practical approximation scheme [36].

The polynomial B-spline of order $L\ge1$ is classically known to be able to reproduce the polynomials $t^\ell$ for every $\ell=0,\dots,(L-1)$. Following [37], we denote it by $\beta^{L-1}$. In particular, the cubic B-spline (of order $L=4$) can perfectly reproduce any polynomial of degree at most 3. The ability of the basis functions $\boldsymbol{\varphi}$ to perfectly reproduce polynomials is intimately linked to their approximation power, as will be developed in Section 3. B-splines are actually the most localized functions that satisfy this property, as formalized in Proposition 2.
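The reproduction property behind Proposition 2 is easy to probe numerically. The sketch below uses the centered cubic B-spline (one common normalization; the paper's $\beta^3$ may be shifted) and checks the partition of unity and the reproduction of $t$ and $t^2$. The corrected sequence $c[k]=k^2-1/3$ for degree 2 is our choice for this centered normalization, in line with the remark after Definition 1 that the coefficients need not be the samples of the polynomial:

```python
import numpy as np

def beta3(t):
    """Centered cubic B-spline (order L = 4), supported in [-2, 2]."""
    a = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(a)
    m1 = a <= 1
    out[m1] = 2 / 3 - a[m1] ** 2 + a[m1] ** 3 / 2
    m2 = (a > 1) & (a <= 2)
    out[m2] = (2 - a[m2]) ** 3 / 6
    return out

t = np.linspace(-3, 3, 1201)
ks = np.arange(-8, 9)  # enough integer shifts to cover [-3, 3]

# Partition of unity: sum_k beta3(t - k) = 1 (reproduction of degree 0) ...
assert np.max(np.abs(sum(beta3(t - k) for k in ks) - 1)) < 1e-12

# ... reproduction of t with c[k] = k (degree 1) ...
assert np.max(np.abs(sum(k * beta3(t - k) for k in ks) - t)) < 1e-12

# ... and of t^2 with the corrected sequence c[k] = k^2 - 1/3 (degree 2).
assert np.max(np.abs(sum((k ** 2 - 1 / 3) * beta3(t - k) for k in ks) - t ** 2)) < 1e-12
```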

Proposition 2

Let $\varphi\in L_2(\mathbb{R})$ be a compactly supported function such that $(\varphi(\cdot-k))_{k\in\mathbb{Z}}$ is a Riesz basis and can reproduce polynomials up to degree $(L-1)\ge0$. Then, the support of $\varphi$ is at least of size $L$.

In particular, the B-spline of order L, whose support is of size L, is optimal in terms of support localization among basis functions that are able to reproduce polynomials of degree at most (L1).

This result is classical in approximation theory: Schoenberg showed that B-splines effectively have the adequate approximation order [1]. A complete characterization of the piecewise-polynomial functions of minimal support with a given approximation order can be found in [16, Theorem 1]. To the best of our knowledge, very little is known about the localization of basis functions when N>1, which is what we propose to investigate. In the multifunction scheme, we shall characterize reproduction properties by considering the reproduction of B-splines instead of polynomials. This takes advantage of the well-known reproduction properties of B-splines, which are inherited by any family that is able to reproduce them.

2.2. Minimal-support properties for two basis functions

The Hermite splines $\phi_1$ and $\phi_2$, given by (2), (3), are able to reproduce both $\beta^2$ and $\beta^3$, the quadratic and cubic B-splines of order 3 and 4, respectively [13]. In particular, this means that $V(\phi_1,\phi_2)$ contains the polynomials of degree at most 3. Many other pairs of basis functions, starting with $\beta^2$ and $\beta^3$ themselves, can also reproduce quadratic and cubic splines. In line with Proposition 2, the investigation of the support of pairs of functions that have the same reproduction properties as the Hermite splines follows naturally. This boils down to the study of basis functions for which $\beta^2,\beta^3\in V(\varphi_1,\varphi_2)$.
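Because the Hermite interpolant on each unit interval is the unique cubic matching the values and first derivatives at its endpoints, any $C^1$ piecewise-cubic function with integer knots, in particular the causal quadratic and cubic B-splines, is recovered exactly from its knot values and derivatives. The following numerical check (causal B-spline normalizations with supports $[0,3]$ and $[0,4]$ assumed; the coefficient values were computed by us from the piecewise polynomials) illustrates that the reproduction sequences are finitely supported:

```python
import numpy as np

def phi1(t):
    a = np.abs(t)
    return np.where(a <= 1, (2 * a + 1) * (a - 1) ** 2, 0.0)

def phi2(t):
    a = np.abs(t)
    return np.where(a <= 1, t * (a - 1) ** 2, 0.0)

def beta2(t):  # causal quadratic B-spline, support [0, 3], knots at the integers
    t = np.asarray(t, dtype=float)
    return np.select(
        [(t >= 0) & (t < 1), (t >= 1) & (t < 2), (t >= 2) & (t < 3)],
        [t ** 2 / 2, (-2 * t ** 2 + 6 * t - 3) / 2, (3 - t) ** 2 / 2], 0.0)

def beta3(t):  # causal cubic B-spline, support [0, 4], knots at the integers
    t = np.asarray(t, dtype=float)
    return np.select(
        [(t >= 0) & (t < 1), (t >= 1) & (t < 2), (t >= 2) & (t < 3), (t >= 3) & (t < 4)],
        [t ** 3 / 6, (-3 * t ** 3 + 12 * t ** 2 - 12 * t + 4) / 6,
         (3 * t ** 3 - 24 * t ** 2 + 60 * t - 44) / 6, (4 - t) ** 3 / 6], 0.0)

t = np.linspace(-1, 5, 6001)

# beta2 from its values and derivatives at the integer knots:
vals2 = [0, 1 / 2, 1 / 2, 0]           # beta2(k), k = 0..3
ders2 = [0, 1, -1, 0]                  # beta2'(k)
rec2 = sum(v * phi1(t - k) + d * phi2(t - k)
           for k, (v, d) in enumerate(zip(vals2, ders2)))
assert np.max(np.abs(rec2 - beta2(t))) < 1e-12

# beta3 likewise:
vals3 = [0, 1 / 6, 2 / 3, 1 / 6, 0]    # beta3(k), k = 0..4
ders3 = [0, 1 / 2, 0, -1 / 2, 0]       # beta3'(k)
rec3 = sum(v * phi1(t - k) + d * phi2(t - k)
           for k, (v, d) in enumerate(zip(vals3, ders3)))
assert np.max(np.abs(rec3 - beta3(t))) < 1e-12
```

The agreement is exact up to rounding, which also illustrates why condition (10) of Theorem 3 below is automatically satisfied in this case.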

We characterize the support of such pairs of functions in Theorem 3. This result then allows us to deduce the minimal-support property of Hermite splines in Corollary 4.

Theorem 3

Let $\varphi_1,\varphi_2\in L_2(\mathbb{R})$ be two compactly supported basis functions. We assume that

$\beta^2(t)=\sum_{k\in\mathbb{Z}}\big(a_k\,\varphi_1(t-k)+b_k\,\varphi_2(t-k)\big),$ (8)
$\beta^3(t)=\sum_{k\in\mathbb{Z}}\big(c_k\,\varphi_1(t-k)+d_k\,\varphi_2(t-k)\big),$ (9)

with reproduction sequences a,b,c,d that satisfy

$\sum_{k\in\mathbb{Z}}|k|^3\,\big(|a_k|+|b_k|+|c_k|+|d_k|\big)<\infty.$ (10)

In particular, the quadratic and cubic B-splines $\beta^2,\beta^3$ are in $V(\varphi_1,\varphi_2)$. Then, we have that

$|\mathrm{Supp}\,\varphi_1|+|\mathrm{Supp}\,\varphi_2|\ \ge\ 4.$ (11)

Proof

First of all, one can restrict oneself to compactly supported basis functions $\varphi_1$ and $\varphi_2$ (otherwise, $|\mathrm{Supp}\,\varphi_1|+|\mathrm{Supp}\,\varphi_2|=\infty$). If one of the basis functions, for instance $\varphi_2$, is identically zero, then the cubic spline $\beta^3\in V(\varphi_1)$. This means in particular that the basis function $\varphi_1$ reproduces polynomials up to degree 3, and its support is therefore at least of size four [16, Theorem 1]. Hence, we again have that $|\mathrm{Supp}\,\varphi_1|+|\mathrm{Supp}\,\varphi_2|=|\mathrm{Supp}\,\varphi_1|\ge4$. We now assume that $\varphi_1$ and $\varphi_2$ are not identically 0.

Step 1. We show that the extreme points of the supports of $\varphi_1$ and $\varphi_2$ are integers. For $x=a,b,c,d$, we set $X(\omega)=\sum_{k\in\mathbb{Z}}x_k\,\mathrm{e}^{-\mathrm{j}\omega k}$, the $2\pi$-periodic Fourier transform of the sequence $x$. Condition (10) ensures that $X$ has a periodic continuous third derivative. In the Fourier domain, (8), (9) become

$\hat{\beta^2}(\omega)=\Big(\frac{1-\mathrm{e}^{-\mathrm{j}\omega}}{\mathrm{j}\omega}\Big)^3=A(\omega)\,\hat{\varphi}_1(\omega)+B(\omega)\,\hat{\varphi}_2(\omega),$ (12)
$\hat{\beta^3}(\omega)=\Big(\frac{1-\mathrm{e}^{-\mathrm{j}\omega}}{\mathrm{j}\omega}\Big)^4=C(\omega)\,\hat{\varphi}_1(\omega)+D(\omega)\,\hat{\varphi}_2(\omega).$ (13)

We set $\det(\omega)=A(\omega)D(\omega)-B(\omega)C(\omega)$, which is itself a function with continuous third derivative. From (12), (13), we obtain that

$\det(\omega)\,\hat{\varphi}_1(\omega)=D(\omega)\Big(\frac{1-\mathrm{e}^{-\mathrm{j}\omega}}{\mathrm{j}\omega}\Big)^3-B(\omega)\Big(\frac{1-\mathrm{e}^{-\mathrm{j}\omega}}{\mathrm{j}\omega}\Big)^4,$ (14)
$\det(\omega)\,\hat{\varphi}_2(\omega)=-C(\omega)\Big(\frac{1-\mathrm{e}^{-\mathrm{j}\omega}}{\mathrm{j}\omega}\Big)^3+A(\omega)\Big(\frac{1-\mathrm{e}^{-\mathrm{j}\omega}}{\mathrm{j}\omega}\Big)^4.$ (15)

From (14), we deduce that, at least when $\det(\omega)$ does not vanish, we have the relation

$(\mathrm{j}\omega)^4\,\hat{\varphi}_1(\omega)=(\mathrm{j}\omega)F(\omega)D(\omega)-(1-\mathrm{e}^{-\mathrm{j}\omega})F(\omega)B(\omega),$ (16)

where $F(\omega)=\frac{(1-\mathrm{e}^{-\mathrm{j}\omega})^3}{\det(\omega)}$. The strategy of the proof is to show that the function $F$ is continuous and periodic, and that (16) is therefore valid for any $\omega\in\mathbb{R}$. We study $F$ in two steps: (i) first, we show that $\det(\omega)$ does not vanish for $\omega\notin2\pi\mathbb{Z}$; and (ii) we then demonstrate that $F$ has a limit at 0 by considering the Taylor expansion of $\det(\omega)$.

(i) Let us start with the first issue. We show that $\det(\omega)\ne0$ for $\omega\notin2\pi\mathbb{Z}$ by contradiction. Let us fix $\omega_0\in(0,2\pi)$ and assume that $\det(\omega_0)=0$. We set $\alpha=\frac{1-\mathrm{e}^{-\mathrm{j}\omega_0}}{\mathrm{j}\omega_0}$ and $\beta=\frac{1-\mathrm{e}^{-\mathrm{j}\omega_0}}{\mathrm{j}(\omega_0+2\pi)}$. Then, $\alpha\ne0$, $\beta\ne0$, and $\alpha\ne\beta$, while, by periodicity, $\det(\omega_0)=\det(\omega_0+2\pi)=0$. Hence, (14) for $\omega=\omega_0$ and $(\omega_0+2\pi)$ implies that

$\begin{pmatrix}\alpha^3 & -\alpha^4\\ \beta^3 & -\beta^4\end{pmatrix}\begin{pmatrix}D(\omega_0)\\ B(\omega_0)\end{pmatrix}=\begin{pmatrix}0\\ 0\end{pmatrix}.$ (17)

The matrix being invertible (with determinant $\alpha^3\beta^3(\alpha-\beta)\ne0$), we deduce that $D(\omega_0)=B(\omega_0)=0$. Similarly, (15) with $\omega=\omega_0$ and $(\omega_0+2\pi)$ implies that $A(\omega_0)=C(\omega_0)=0$. Injecting this in (12) with $\omega=\omega_0$, we deduce that $\hat{\beta^2}(\omega_0)=\alpha^3=0$, which contradicts $\alpha\ne0$.

(ii) We now study $\det(\omega)$ around the origin. The function admits a third-order continuous derivative; hence, it admits the Maclaurin expansion

$\det(\omega)=\det(0)+{\det}^{(1)}(0)\,\omega+\frac{1}{2}{\det}^{(2)}(0)\,\omega^2+\frac{1}{6}{\det}^{(3)}(0)\,\omega^3+o(\omega^3).$ (18)

Assume by contradiction that ${\det}^{(p)}(0)=0$ for $p=0,1,2,3$. Then, (18) gives that $\det(\omega)=o(\omega^3)$ around 0. From (14), we remark that

$\frac{\det(\omega)\,(\mathrm{j}\omega)^3}{(1-\mathrm{e}^{-\mathrm{j}\omega})^3}\,\hat{\varphi}_1(\omega)=D(\omega)-B(\omega)\,\frac{1-\mathrm{e}^{-\mathrm{j}\omega}}{\mathrm{j}\omega}.$ (19)

The function $\frac{\det(\omega)(\mathrm{j}\omega)^3}{(1-\mathrm{e}^{-\mathrm{j}\omega})^3}$ vanishes at $\omega=0$ because $\det$ does and $\lim_{\omega\to0}\frac{(\mathrm{j}\omega)^3}{(1-\mathrm{e}^{-\mathrm{j}\omega})^3}=1$. Therefore, the left term in (19) vanishes when $\omega$ goes to 0. Now, by periodicity, around $2\pi$, we have that $\det(\omega)=o((\omega-2\pi)^3)$. Hence, $\frac{\det(\omega)}{(1-\mathrm{e}^{-\mathrm{j}\omega})^3}=o(1)$ around $2\pi$ and, again, the left term in (19) is also vanishing when $\omega$ goes to $2\pi$.

We deduce that the right term in (19) vanishes for both $\omega=0$ and $\omega=2\pi$. In other terms, we have that

$0=D(0)-B(0)\quad\text{and}\quad 0=D(2\pi)$ (20)

and, since $D(2\pi)=D(0)$ by periodicity, this implies that $D(0)=B(0)=0$. A similar reasoning shows that $A(0)=C(0)=0$. From (13) with $\omega=0$, we then obtain that $\hat{\beta^3}(0)=0$, which is false.

As a consequence, at least one of the derivatives in the Maclaurin expansion (18) is nonzero, from which we easily deduce that $F(\omega)=\frac{(1-\mathrm{e}^{-\mathrm{j}\omega})^3}{\det(\omega)}$ has a limit (possibly 0) at the origin. The function $F$ is well-defined and continuous for $\omega\notin2\pi\mathbb{Z}$, continuously extendable at 0, and is therefore a continuous periodic function.

At this stage, we have obtained that (16) is valid for any $\omega\in\mathbb{R}$. The functions $F(\omega)D(\omega)$ and $(1-\mathrm{e}^{-\mathrm{j}\omega})F(\omega)B(\omega)$ are $2\pi$-periodic, hence their inverse Fourier transforms are sums of Dirac impulses located at the integers. It means in particular that we have, in the time domain, that

$\varphi_1^{(4)}(t)=\sum_{k\in\mathbb{Z}}\big(y_k\,\delta(t-k)+z_k\,\delta'(t-k)\big),$ (21)

where $y$ and $z$ are the Fourier sequences of $-(1-\mathrm{e}^{-\mathrm{j}\omega})F(\omega)B(\omega)$ and $F(\omega)D(\omega)$, respectively. Since $\varphi_1^{(4)}$ is compactly supported, like $\varphi_1$, only finitely many $y_k$ and $z_k$ are non-zero. Then, $\varphi_1$ is a compactly supported function whose fourth derivative has a support with integer extreme points (due to (21)), and therefore has a support with integer extreme points, too. The same reasoning applies for $\varphi_2$, which concludes this part of the proof.

Step 2. We know that the supports of $\varphi_1$ and $\varphi_2$ are of the form $[a,b]$ with $a<b$, $a,b\in\mathbb{Z}$. By contradiction, we assume that $|\mathrm{Supp}\,\varphi_1|+|\mathrm{Supp}\,\varphi_2|<4$. Then, one of the two basis functions has a support of size one, for instance $\varphi_1$. We also assume without loss of generality that $\mathrm{Supp}\,\varphi_1=[0,1]$, implying that only $y_0,y_1,z_0,z_1$ are possibly nonzero in (21). Going back to the Fourier domain, one obtains that

$(\mathrm{j}\omega)^4\,\hat{\varphi}_1(\omega)=y_0+y_1\mathrm{e}^{-\mathrm{j}\omega}+\mathrm{j}\omega\,\big(z_0+z_1\mathrm{e}^{-\mathrm{j}\omega}\big).$ (22)

The function $\varphi_1$ is compactly supported. Its Fourier transform is hence infinitely smooth, and we can write the Maclaurin expansion of both sides of (22) up to order 3. This gives

$o(\omega^3)=(y_0+y_1)+\mathrm{j}\omega\,(-y_1+z_0+z_1)+\omega^2\Big(-\frac{y_1}{2}+z_1\Big)+\mathrm{j}\omega^3\Big(\frac{y_1}{6}-\frac{z_1}{2}\Big)+o(\omega^3).$ (23)

In particular, we obtain the relations

$y_0+y_1\ =\ z_0+z_1-y_1\ =\ z_1-\frac{y_1}{2}\ =\ \frac{y_1}{6}-\frac{z_1}{2}\ =\ 0.$ (24)

This imposes that $y_0=y_1=z_0=z_1=0$, hence that $\varphi_1^{(4)}=0$; the function $\varphi_1$ would then be a compactly supported polynomial, which is absurd since $\varphi_1$ is not identically zero. Finally, it shows that $|\mathrm{Supp}\,\varphi_1|+|\mathrm{Supp}\,\varphi_2|\ge4$, as expected.  □

Condition (10) plays an important role in our proof by imposing some regularity in the Fourier domain. In practice, one even expects that compactly supported basis functions generate the B-splines $\beta^2$ and $\beta^3$ with finitely many nonzero coefficients, in which case (10) is automatically satisfied. However, we believe that (10) can be relaxed to some extent. From Theorem 3, we easily deduce that Hermite splines have the minimal-support property.

Corollary 4

The Hermite splines (ϕ1,ϕ2) are of minimal support among the pairs of functions that are able to reproduce both quadratic and cubic B-splines with reproduction sequences satisfying (10).

Proof

From [13, Appendix A], we know that Hermite splines can reproduce both quadratic and cubic B-splines; hence, $\beta^2$ and $\beta^3$ are in $V(\phi_1,\phi_2)$ with compactly supported reproduction sequences that obviously satisfy (10). The supports of $\phi_1$ and $\phi_2$ are of size two, which implies that $|\mathrm{Supp}\,\phi_1|+|\mathrm{Supp}\,\phi_2|=4$. Finally, the pair $(\phi_1,\phi_2)$ is maximally localized due to (11).  □

It is worth noting that the supports of the pair of Hermite splines jointly have the same size as the support of the single B-spline $\beta^3$. However, $\beta^3$ alone has lesser reproduction properties. Being of class $C^2$, it is in particular unable to reproduce the quadratic spline $\beta^2$, which only has $C^1$ transitions at the integers. The simplest way of reproducing $\beta^2,\beta^3$ is to consider the basis pair $(\beta^2,\beta^3)$ itself, which is not maximally localized since the sum of the supports is 7. An important additional remark is that two functions are needed to reproduce both cubic and quadratic splines, as formalized in Proposition 5.

Proposition 5

There exists no single function $\varphi\in L_2(\mathbb{R})$ that is able to reproduce $\beta^2$ and $\beta^3$ with summable reproduction sequences.

Proof

By contradiction, let us assume that there exists $\varphi$ such that $\beta^2=\sum_{k\in\mathbb{Z}}a_k\,\varphi(\cdot-k)$ and $\beta^3=\sum_{k\in\mathbb{Z}}b_k\,\varphi(\cdot-k)$ with $a,b\in\ell_1(\mathbb{Z})$. Then, the Fourier transforms $A(\mathrm{e}^{\mathrm{j}\omega})$ and $B(\mathrm{e}^{\mathrm{j}\omega})$ are continuous $2\pi$-periodic functions. In the Fourier domain, we have that

$\Big(\frac{1-\mathrm{e}^{-\mathrm{j}\omega}}{\mathrm{j}\omega}\Big)^3=A(\mathrm{e}^{\mathrm{j}\omega})\,\hat{\varphi}(\omega),\qquad\Big(\frac{1-\mathrm{e}^{-\mathrm{j}\omega}}{\mathrm{j}\omega}\Big)^4=B(\mathrm{e}^{\mathrm{j}\omega})\,\hat{\varphi}(\omega).$ (25)

Set $\omega_0\in(0,2\pi)$ and $\omega_1=\omega_0+2\pi$. The relation (25) imposes that $A(\mathrm{e}^{\mathrm{j}\omega_i})$, $B(\mathrm{e}^{\mathrm{j}\omega_i})$, and $\hat{\varphi}(\omega_i)$ are non-zero for $i=0,1$, and

$\frac{1-\mathrm{e}^{-\mathrm{j}\omega_i}}{\mathrm{j}\omega_i}\,A(\mathrm{e}^{\mathrm{j}\omega_i})\,\hat{\varphi}(\omega_i)=B(\mathrm{e}^{\mathrm{j}\omega_i})\,\hat{\varphi}(\omega_i).$ (26)

After simplification, we deduce that

$\mathrm{j}\omega_i=(1-\mathrm{e}^{-\mathrm{j}\omega_i})\,\frac{A(\mathrm{e}^{\mathrm{j}\omega_i})}{B(\mathrm{e}^{\mathrm{j}\omega_i})}.$ (27)

The right term in (27) is equal for ω0 and ω1 by periodicity, while the left term is not. This contradicts our initial assumption and implies Proposition 5.  □

3. Approximation properties of Hermite splines

Several approaches have been proposed to characterize the behavior of the approximation error for single [19] and multiple generators [20]. They however assume technical conditions that Hermite interpolation does not satisfy. We therefore formulate new theoretical tools to quantify the asymptotic constant of the error of approximation of this scheme. Since functions are estimated from derivative samples in the Hermite case, we also propose to investigate the approximation error on the first derivative. In our setting, other existing approximation schemes can be considered as well, allowing us to relate the excellent approximation properties of Hermite splines to those of other schemes such as cubic B-splines and interlaced derivative sampling.

3.1. Generalized sampling and error of approximation

The approximation of a continuously defined signal from a collection of its samples in a generalized-sampling scheme relies on two ingredients: some basis functions $\boldsymbol{\varphi}=(\varphi_1,\dots,\varphi_N)\in(L_2(\mathbb{R}))^N$ forming a Riesz basis in the sense of (6), and some sampling functions $\tilde{\boldsymbol{\varphi}}=(\tilde{\varphi}_1,\dots,\tilde{\varphi}_N)$ that are rapidly decaying generalized functions. The sampling functions include rapidly decaying $L_2$-integrable functions together with the Dirac impulse $\delta$, its derivative, and their shifts. We recall that an $L_2$ function is rapidly decaying if it decays faster than the inverse of any polynomial at infinity. A generalized function in $\mathcal{S}'(\mathbb{R})$ is rapidly decaying if its convolution with any infinitely differentiable and rapidly decaying function is a rapidly decaying function [38]. In particular, a rapidly decaying (generalized) function has an infinitely differentiable Fourier transform, a property on which we shall rely thereafter. The term generalized is motivated by the fact that sampling functions allow one to access more than just the values of the signal at the sampling points [39].

The set of pairs of basis and sampling functions fully characterizes an approximation scheme. Moreover, because we aim at a fair comparison, the quantity of information per unit of time ought to be equal among the considered schemes. This requires us to slightly adapt the definition of V(φ) given in (5). From now, we shall consider

$W(\boldsymbol{\varphi})=\Big\{\sum_{i=1}^{N}\sum_{k\in\mathbb{Z}}c_i[k]\,\varphi_i(\cdot-Nk)\ :\ \boldsymbol{c}=(c_1,\dots,c_N)\in(\ell_2(\mathbb{Z}))^N\Big\}.$ (28)

The added parameter $N$ ensures that, for any $N\ge1$, there is on average a single degree of freedom on each interval of size one. The sampling and reconstruction problem is then formally defined as follows: the function $f$ is reconstructed by its approximation $\tilde{f}$ associated to the basis functions $\boldsymbol{\varphi}$ and the sampling functions $\tilde{\boldsymbol{\varphi}}$, defined as

$\tilde{f}=\sum_{i=1}^{N}\sum_{k\in\mathbb{Z}}\big\langle f,\tilde{\varphi}_i(\cdot-Nk)\big\rangle\,\varphi_i(\cdot-Nk)\ \in\ W(\boldsymbol{\varphi}).$ (29)

We denote by $\mathrm{Q}_{\boldsymbol{\varphi}}^{\tilde{\boldsymbol{\varphi}}}$ the operator such that $\mathrm{Q}_{\boldsymbol{\varphi}}^{\tilde{\boldsymbol{\varphi}}}f=\tilde{f}$. Hermite-spline approximation thus corresponds to $N=2$ with basis functions $\varphi_1(t)=\phi_1(t/2)$ and $\varphi_2(t)=2\phi_2(t/2)$, and sampling functions $\tilde{\varphi}_1=\delta$ and $\tilde{\varphi}_2=\delta'$, which give access to the samples $f(2k)$ and the derivative samples $f'(2k)$.

The best approximation of a given scheme is obtained when the pairs of sampling and basis functions are chosen such that $\tilde{f}$ is the orthogonal projection of $f$ onto $W(\boldsymbol{\varphi})$. This imposes a particular condition [20] on the sampling functions $\tilde{\boldsymbol{\varphi}}$, namely, that

$\tilde{\boldsymbol{\varphi}}=(\varphi_{1,d},\dots,\varphi_{N,d})=\mathcal{F}^{-1}\big\{\mathbf{G}_{\boldsymbol{\varphi}}(\cdot)^{-1}\hat{\boldsymbol{\varphi}}\big\},$ (30)

where $\mathbf{G}_{\boldsymbol{\varphi}}$ is the Gram matrix of size $N\times N$ associated to $\boldsymbol{\varphi}$, given for $\omega\in\mathbb{R}$ by

$\mathbf{G}_{\boldsymbol{\varphi}}(\omega)=\sum_{k\in\mathbb{Z}}\hat{\boldsymbol{\varphi}}(\omega+2k\pi)\,\overline{\hat{\boldsymbol{\varphi}}}^{T}(\omega+2k\pi).$ (31)

This particular collection of sampling functions is called the dual functions associated to $\boldsymbol{\varphi}$ and is denoted by $\boldsymbol{\varphi}_d$. Note that $\mathbf{G}_{\boldsymbol{\varphi}}(\omega)$ is invertible for every $\omega\in\mathbb{R}$ and, therefore, (30) is meaningful, because (6) is equivalent to $0<A\le\lambda_{\min}(\omega)\le\lambda_{\max}(\omega)\le B<\infty$, where $\lambda_{\min}(\omega)$ (respectively, $\lambda_{\max}(\omega)$) is the minimum (respectively, maximum) eigenvalue of $\mathbf{G}_{\boldsymbol{\varphi}}(\omega)$ [40]. When $\tilde{\boldsymbol{\varphi}}$ is defined as in (30), the operator $\mathrm{Q}_{\boldsymbol{\varphi}}^{\tilde{\boldsymbol{\varphi}}}$ is the orthogonal projector onto $W(\boldsymbol{\varphi})$, denoted by $\mathrm{P}_{\boldsymbol{\varphi}}=\mathrm{Q}_{\boldsymbol{\varphi}}^{\boldsymbol{\varphi}_d}$. In this situation, (29) is reformulated as

$\tilde{f}=\mathrm{P}_{\boldsymbol{\varphi}}f=\sum_{i=1}^{N}\sum_{k\in\mathbb{Z}}\big\langle f,\varphi_{i,d}(\cdot-Nk)\big\rangle\,\varphi_i(\cdot-Nk).$ (32)

To simplify the notation, we shall write $\mathrm{P}=\mathrm{P}_{\boldsymbol{\varphi}}$ and $\mathrm{Q}=\mathrm{Q}_{\boldsymbol{\varphi}}^{\tilde{\boldsymbol{\varphi}}}$ thereafter. The quality of the approximation is then evaluated in terms of the error of approximation, which is expressed as

$\mathcal{E}_{\boldsymbol{\varphi}}^{\tilde{\boldsymbol{\varphi}}}(f)=\|f-\tilde{f}\|_{L_2}=\|f-\mathrm{Q}f\|_{L_2}.$ (33)

When $\tilde{f}=\mathrm{P}f$, the error is denoted by $\mathcal{E}_{\boldsymbol{\varphi}}(f)$. A direct implication is that $\mathcal{E}_{\boldsymbol{\varphi}}(f)=\mathcal{E}_{\boldsymbol{\varphi}}^{\boldsymbol{\varphi}_d}(f)\le\mathcal{E}_{\boldsymbol{\varphi}}^{\tilde{\boldsymbol{\varphi}}}(f)$, with equality if and only if $\tilde{\boldsymbol{\varphi}}=\boldsymbol{\varphi}_d$.
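In the single-generator case $N=1$, the Gram "matrix" (31) is scalar and can be computed directly. The sketch below evaluates it for the cubic B-spline, whose Fourier transform has modulus $|2\sin(\omega/2)/\omega|^4$, and estimates the Riesz bounds $A$ and $B$ of (6) as the extrema of $\mathbf{G}_{\varphi}$ over one period (the truncation of the sum over $k$ and the scanning grid are our choices):

```python
import numpy as np

def gram_cubic_bspline(omega, K=50):
    """Scalar Gram matrix (31) for N = 1 and the cubic B-spline:
    G(w) = sum_k |beta3_hat(w + 2 pi k)|^2, with |beta3_hat(w)|^2 = (2 sin(w/2) / w)^8.
    The sum over k is truncated to |k| <= K."""
    w = omega + 2 * np.pi * np.arange(-K, K + 1)
    return np.sum((2 * np.sin(w / 2) / w) ** 8)

# Riesz bounds A and B of (6): min and max of G over one period.  By symmetry and
# periodicity, scanning (0, pi] suffices; w = 0 is approached but not hit, to avoid
# the removable singularity of the formula.
ws = np.linspace(1e-6, np.pi, 2001)
G = np.array([gram_cubic_bspline(w) for w in ws])
A, B = G.min(), G.max()
assert 0.05 < A < 0.06      # attained around w = pi
assert 0.99 < B <= 1.0      # attained around w = 0
```

The fact that $A$ is well away from 0 illustrates why the cubic B-spline generates a stable (though not orthonormal) basis.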

Up to now, we have considered approximation schemes with (generalized) samples taken on the integer grid, which corresponds to the sampling step $T=1$. This parameter affects the coarseness of the approximation: when $T\to0$, the error is expected to vanish. For $T>0$, the approximation space (28) becomes

$W_T(\boldsymbol{\varphi})=\Big\{\sum_{i=1}^{N}\sum_{k\in\mathbb{Z}}c_i[k]\,\varphi_i\Big(\frac{\cdot}{T}-Nk\Big)\ :\ \boldsymbol{c}\in(\ell_2(\mathbb{Z}))^N\Big\}$ (34)

and the approximation of f is given by

$\tilde{f}_T=\mathrm{Q}_Tf=\sum_{i=1}^{N}\sum_{k\in\mathbb{Z}}\Big\langle f,\frac{1}{T}\tilde{\varphi}_i\Big(\frac{\cdot}{T}-Nk\Big)\Big\rangle\,\varphi_i\Big(\frac{\cdot}{T}-Nk\Big),$ (35)

with resulting error

$\mathcal{E}_{\boldsymbol{\varphi}}^{\tilde{\boldsymbol{\varphi}}}(f,T)=\|f-\mathrm{Q}_Tf\|_{L_2}.$ (36)

The orthogonal projector (32) and its associated error are easily reformulated accordingly.

A number of hypotheses on the basis functions $\boldsymbol{\varphi}$ and the sampling functions $\tilde{\boldsymbol{\varphi}}$ have to be met in order to study the errors $\mathcal{E}_{\boldsymbol{\varphi}}(f,T)$ and $\mathcal{E}_{\boldsymbol{\varphi}}^{\tilde{\boldsymbol{\varphi}}}(f,T)$ in terms of rate of decay and asymptotic constant. The first one is the Riesz-basis condition (6), which ensures a unique and stable representation. The second one is the order of the basis functions. We shall consider basis functions $\boldsymbol{\varphi}$ of a given order in the sense of Definition 1.

When this condition is met, the optimal error $\mathcal{E}_{\boldsymbol{\varphi}}(f,T)$ decays at least as fast as $T^L$ [20]. The last important condition is that the sampling and basis functions are quasi-biorthonormal of order $L$.

Definition 6

Two families of basis functions $\boldsymbol{\varphi}$ and sampling functions $\tilde{\boldsymbol{\varphi}}$ are quasi-biorthonormal of order $L$ if the basis functions are of order $L$ and, for the dual functions $\boldsymbol{\varphi}_d$ given by (30) and all $\ell=0,\dots,(L-1)$, we have that

$\int_{\mathbb{R}}t^\ell\,\tilde{\boldsymbol{\varphi}}(t)\,\mathrm{d}t=\int_{\mathbb{R}}t^\ell\,\boldsymbol{\varphi}_d(t)\,\mathrm{d}t.$ (37)

It is worth noting that (37) is a slight abuse of notation, since the $\tilde{\varphi}_i$ are not necessarily defined pointwise. However, they are assumed to be rapidly decaying generalized functions and can therefore be evaluated against a slowly growing smooth function such as $t\mapsto t^\ell$. The right term in (37) is well-defined due to the condition $\int_{\mathbb{R}}(1+|t|)^L|\varphi_n(t)|\,\mathrm{d}t<\infty$ in Definition 1, as discussed in [20].

The quasi-biorthonormality ensures that the decay of the error $\mathcal{E}_{\boldsymbol{\varphi}}^{\tilde{\boldsymbol{\varphi}}}(f,T)$ is also bounded by $T^L$ [20]. Finally, with the rate of decay of the error of approximation under control in full generality, the asymptotic constant can be obtained as

$C_{\boldsymbol{\varphi}}^{\tilde{\boldsymbol{\varphi}}}(f)=\lim_{T\to0}\frac{\mathcal{E}_{\boldsymbol{\varphi}}^{\tilde{\boldsymbol{\varphi}}}(f,T)}{T^L},$ (38)

which, in practice, can be computed relying on a Fourier-domain approximation-error kernel. For more details, we refer the interested reader to [41], [42], [43] and references therein.
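The $O(T^L)$ decay with $L=4$ can be observed directly for the Hermite scheme: halving the step $T$ should divide the error by roughly $2^4=16$, so that the limit (38) is finite. A minimal experiment (the Gaussian test function and the discretization of the $L_2$ norm are our choices):

```python
import numpy as np

def phi1(t):
    a = np.abs(t)
    return np.where(a <= 1, (2 * a + 1) * (a - 1) ** 2, 0.0)

def phi2(t):
    a = np.abs(t)
    return np.where(a <= 1, t * (a - 1) ** 2, 0.0)

f = lambda t: np.exp(-t ** 2)           # smooth test function (our choice)
df = lambda t: -2 * t * np.exp(-t ** 2)

def hermite_error(T, t):
    """Discretized L2 error of the step-T Hermite scheme
    f_T = sum_k f(kT) phi1(./T - k) + T f'(kT) phi2(./T - k)."""
    ks = np.arange(int(t[0] / T) - 1, int(t[-1] / T) + 2)
    approx = sum(f(k * T) * phi1(t / T - k) + T * df(k * T) * phi2(t / T - k) for k in ks)
    h = t[1] - t[0]
    return np.sqrt(h * np.sum((f(t) - approx) ** 2))

t = np.linspace(-8.0, 8.0, 160001)
ratio = hermite_error(0.5, t) / hermite_error(0.25, t)
assert 12 < ratio < 20   # halving T divides the error by about 2^4 = 16
```

The observed ratio is close to 16 but not exactly 16, since (38) only holds in the limit $T\to0$.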

3.2. Approximation constants of irregular sampling schemes

Our goal in this section is to extend the main results of [20] so as to include Hermite-spline approximation. The original framework is indeed restricted to sampling functions $\tilde{\boldsymbol{\varphi}}$ with bounded Fourier transforms. While this allows one to consider the case of interpolation, it excludes Hermite-spline approximation since the Fourier transform $\mathrm{j}\omega$ of $\delta'$ is unbounded.

In what follows, it will be useful to consider Sobolev spaces of integer order N. For N ≥ 0, we define W₂^N(ℝ) as the space of functions f such that

$\int_{\mathbb{R}} |\hat{f}(\omega)|^2 \big(1 + \omega^2\big)^N \,\mathrm{d}\omega < \infty.$ (39)

For N = 0, this corresponds to the space L₂(ℝ). In general, f ∈ W₂^N(ℝ) if and only if f, …, f^{(N)} ∈ L₂(ℝ). For technical reasons, we also consider Sobolev spaces of fractional order γ ≥ 0, for which we simply replace the integer N by a nonnegative real number γ in (39). Fractional Sobolev spaces are intimately connected to fractional derivatives in the following sense: the fractional derivative of order γ ≥ 0 is defined in the Fourier domain by

$\mathcal{F}\{f^{(\gamma)}\}(\omega) = (\mathrm{j}\omega)^{\gamma}\, \hat{f}(\omega)$ (40)

for any f ∈ W₂^γ(ℝ). By definition of the Sobolev space, one has that f^{(γ)} ∈ L₂(ℝ) as soon as f ∈ W₂^γ(ℝ). In general, we have the Parseval-type relation $\|f^{(\gamma)}\|_{L_2}^2 = \int_{\mathbb{R}} |\hat{f}(\omega)|^2 |\omega|^{2\gamma}\,\mathrm{d}\omega$.
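As an illustration of (40), the fractional derivative can be computed on a uniform grid by going through the discrete Fourier transform. The sketch below is our own minimal example (not code from the paper); it assumes periodic samples and uses the principal branch of (jω)^γ:

```python
import numpy as np

def fractional_derivative(f, gamma, dx):
    """Order-gamma derivative of uniformly sampled periodic data: multiply
    the DFT spectrum by (jw)^gamma, as in (40), then transform back."""
    w = 2 * np.pi * np.fft.fftfreq(f.size, d=dx)   # angular frequencies
    return np.fft.ifft((1j * w) ** gamma * np.fft.fft(f)).real

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
dx = x[1] - x[0]
d1 = fractional_derivative(np.sin(x), 1.0, dx)        # classical derivative of sin
half = fractional_derivative(np.sin(x), 0.5, dx)      # half-derivative
d_half_twice = fractional_derivative(half, 0.5, dx)   # half-derivative applied twice
```

Here d1 matches cos numerically, and applying the half-derivative (γ = 1/2) twice reproduces it, consistent with (jω)^{1/2}(jω)^{1/2} = jω.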

Our development follows the key contributions of [20]; we therefore only detail the adaptations that are required in our case. We start with a brief summary of the general approach, which is common to many works on approximation theory in shift-invariant spaces. We first introduce the kernels associated with the functions (φ, φ~) as

$E_{\min}(\omega) = 1 - \hat{\boldsymbol{\varphi}}^{H}(\omega)\, \mathbf{G}_{\boldsymbol{\varphi}}^{-1}(\omega)\, \hat{\boldsymbol{\varphi}}(\omega),$ (41)
$E_{\mathrm{res}}(\omega) = \big(\hat{\tilde{\boldsymbol{\varphi}}} - \hat{\boldsymbol{\varphi}}_d\big)^{H}(\omega)\, \mathbf{G}_{\boldsymbol{\varphi}}(\omega)\, \big(\hat{\tilde{\boldsymbol{\varphi}}} - \hat{\boldsymbol{\varphi}}_d\big)(\omega),$ (42)
$E(\omega) = E_{\min}(\omega) + E_{\mathrm{res}}(\omega),$ (43)

where G_φ is the Gram matrix (31). The kernel E_min relates to the minimum-error case achieved by the orthogonal projector (i.e., φ~ = φ_d), and E_res to the residual error that arises when using sampling functions that differ from the dual functions. We furthermore note that E_res(ω) = 0 when φ~ = φ_d, as expected. The key ideas behind relying on E are as follows.

  • The kernel E measures the approximation power of a reasonable approximation scheme (φ, φ~), in the sense that $\|f - Q_T f\|_{L_2} \approx \big( \int_{\mathbb{R}} |\hat{f}(\omega)|^2\, E(T\omega)\, \mathrm{d}\omega \big)^{1/2}$ for small T > 0.

  • The precise behavior of $\|f - Q_T f\|_{L_2}$ is then deduced from that of E around the origin. It depends on the approximation order L of the scheme and typically behaves like C T^L. The constant C depends on the function f to approximate and on the scheme (φ, φ~) via the Taylor expansion of E.
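These kernels can be probed numerically. The sketch below is our own sanity check (not code from the paper); it evaluates E_min and E for the scalar case of cubic-spline interpolation, for which φ̂(ω) = (sin(ω/2)/(ω/2))⁴ and the sampling sequence is the direct B-spline filter, so that the Fourier transform of the sampling function is 1/B(ω) with B(ω) = (2 + cos ω)/3:

```python
import numpy as np

def bspline3_hat(w):
    # Fourier transform of the centered cubic B-spline: (sin(w/2)/(w/2))^4
    return np.sinc(w / (2 * np.pi)) ** 4

def kernels(w, K=64):
    """E_min (41) and E = E_min + E_res (43) for cubic-spline interpolation
    (N = 1, so the Gram matrix is the scalar A(w) = sum_n phi_hat(w + 2*pi*n)^2)."""
    n = np.arange(-K, K + 1)
    phis = bspline3_hat(w + 2 * np.pi * n)          # aliased replicas of phi_hat
    tail = np.sum(phis[n != 0] ** 2)                # sum over n != 0 (avoids cancellation)
    phi0 = bspline3_hat(w)
    A = phi0 ** 2 + tail                            # Gram function
    e_min = tail / A                                # (41): equals 1 - phi0^2 / A
    e_res = A * (3.0 / (2.0 + np.cos(w)) - phi0 / A) ** 2   # (42), sampling FT = 1/B
    return e_min, e_min + e_res

w = 0.02
e_min, e_tot = kernels(w)
c2_min = e_min / w ** 8     # squared constant of the orthogonal projection
c2_tot = e_tot / w ** 8     # squared constant of the interpolation scheme
```

Both ratios stabilize as ω → 0, confirming the O(ω^{2L}) = O(ω⁸) behavior of E at the origin for this order-4 scheme; in our runs their quotient approaches 10/3, the square of the gap between the interpolation scheme and the orthogonal projection.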

We give a precise meaning to these two points in Propositions 8 and 9. Before that, we recall an important lemma taken from [20], which will play a fundamental role in our proofs.

Lemma 7

Let f ∈ W₂^{L+1}(ℝ). For k ≥ 0, we set $\hat{f}_k(\omega) = \hat{f}(\omega)\, \mathbb{1}_{k/T \leq |\omega| < (k+1)/T}$. Then, the following relations hold:

$\hat{f} = \sum_{k \geq 0} \hat{f}_k;$ (44)
$\|f_k - Q_T f_k\|_{L_2}^2 = \int_{\mathbb{R}} |\hat{f}_k(\omega)|^2\, E(T\omega)\, \mathrm{d}\omega \quad \text{for } k \geq 0; \text{ and}$ (45)
$\big| \|f - Q_T f\|_{L_2} - \|f_0 - Q_T f_0\|_{L_2} \big| \leq \sum_{k > 0} \|f_k - Q_T f_k\|_{L_2}.$ (46)

Equality (44) is obvious. The next two relations come from [20, Theorem 1]; we have simply reformulated the results with our notation. First, it is proven in [20, Theorem 1] that $\|f - Q_T f\|_{L_2}^2 = \int_{\mathbb{R}} |\hat{f}(\omega)|^2 E(T\omega)\,\mathrm{d}\omega$ as soon as $\hat{f}(\omega)\hat{f}(\omega - n/T) = 0$ for every ω ∈ ℝ and every nonzero n ∈ ℤ, a condition that is satisfied by each f_k by construction, giving (45). Moreover, the inequality (46) appears in the proof of [20, Theorem 1] (see (63) therein).

Proposition 8

Let (φ, φ~) be a set of N basis and sampling functions that are quasi-biorthonormal and provide an approximation scheme of order L. We assume moreover that the kernel E given by (43) satisfies

$|E(\omega)| \leq C^2 \max\big(1, |\omega|^{2p}\big)$ (47)

for some C > 0, some integer 0 ≤ p ≤ L, and every ω ∈ ℝ. Then, for every f ∈ W₂^{L+1}(ℝ), we have that

$\|f - Q_T f\|_{L_2} = \left( \int_{\mathbb{R}} |\hat{f}(\omega)|^2\, E(T\omega)\, \mathrm{d}\omega \right)^{1/2} + O(T^{L+1}).$ (48)

The case p = 0 corresponds to a bounded kernel E and can be found in [20, Theorem 1].

Proof

We recall that $\hat{f}_k$ is defined as $\hat{f}_k(\omega) = \hat{f}(\omega)\, \mathbb{1}_{k/T \leq |\omega| < (k+1)/T}$, and that $f_k = \mathcal{F}^{-1}\{\hat{f}_k\}$. Then, we have that

$\left| \|f - Q_T f\|_{L_2} - \Big( \int_{\mathbb{R}} |\hat{f}(\omega)|^2 E(T\omega)\,\mathrm{d}\omega \Big)^{1/2} \right|
\leq \big| \|f - Q_T f\|_{L_2} - \|f_0 - Q_T f_0\|_{L_2} \big| + \left| \|f_0 - Q_T f_0\|_{L_2} - \Big( \int_{\mathbb{R}} |\hat{f}(\omega)|^2 E(T\omega)\,\mathrm{d}\omega \Big)^{1/2} \right|
\overset{(i)}{\leq} \sum_{k>0} \|f_k - Q_T f_k\|_{L_2} + \left| \|f_0 - Q_T f_0\|_{L_2} - \Big( \int_{\mathbb{R}} |\hat{f}(\omega)|^2 E(T\omega)\,\mathrm{d}\omega \Big)^{1/2} \right|
= (\mathrm{I}) + (\mathrm{II}),$ (49)

where we used (46) in (i). As a consequence, (48) follows if one shows that the two terms (I) and (II) in (49) are O(TL+1). Using (45), we have that

$\|f_k - Q_T f_k\|_{L_2}^2 = \int_{k/T \leq |\omega| < (k+1)/T} |\hat{f}_k(\omega)|^2\, E(T\omega)\,\mathrm{d}\omega
= \int_{k/T \leq |\omega| < (k+1)/T} |\hat{f}_k(\omega)|^2\, |\omega|^{2(L+1)}\, \frac{E(T\omega)}{T^{2p}|\omega|^{2p}} \cdot \frac{T^{2p}}{|\omega|^{2(L-p+1)}}\,\mathrm{d}\omega
\leq C^2\, T^{2p} \Big(\frac{k}{T}\Big)^{-2(L-p+1)} \int_{\mathbb{R}} |\hat{f}_k(\omega)|^2\, |\omega|^{2(L+1)}\,\mathrm{d}\omega
= C^2\, T^{2(L+1)}\, k^{-2(L-p+1)}\, \|f_k^{(L+1)}\|_{L_2}^2,$ (50)

where the inequality is due to |ω| ≥ k/T over the domain and to (47), which implies that $C^2 \geq \sup_{\omega \in \mathbb{R}} \frac{E(\omega)}{\max(1, |\omega|^{2p})} \geq \sup_{|\omega| \geq 1} \frac{E(\omega)}{|\omega|^{2p}}$. Summing over k > 0, we deduce that

$(\mathrm{I}) \leq C\, T^{L+1} \sum_{k>0} \frac{1}{k^{L-p+1}}\, \|f_k^{(L+1)}\|_{L_2}
\leq C\, T^{L+1} \Big( \sum_{k>0} \frac{1}{k^{2(L-p+1)}} \Big)^{1/2} \Big( \sum_{k>0} \|f_k^{(L+1)}\|_{L_2}^2 \Big)^{1/2}
= C\, T^{L+1}\, \sqrt{\zeta\big(2(L-p+1)\big)}\, \|(f - f_0)^{(L+1)}\|_{L_2} = O(T^{L+1}),$ (51)

where the second inequality is derived from Cauchy–Schwarz, and ζ is the Riemann zeta function. For the second term, we remark that, again due to [20, (27)], $\|f_0 - Q_T f_0\|_{L_2}^2 = \int_{|\omega| \leq 1/T} |\hat{f}(\omega)|^2 E(T\omega)\,\mathrm{d}\omega$. Therefore, using the relation $\big| \|g\|_{L_2} - \|h\|_{L_2} \big| \leq \|g - h\|_{L_2}$ (a consequence of the Minkowski inequality), we deduce that

$(\mathrm{II}) \leq \left( \int_{|\omega| > 1/T} |\hat{f}(\omega)|^2\, E(T\omega)\,\mathrm{d}\omega \right)^{1/2}
= \left( \int_{|\omega| > 1/T} |\hat{f}(\omega)|^2\, |\omega|^{2(L+1)}\, \frac{E(T\omega)}{T^{2p}|\omega|^{2p}} \cdot \frac{T^{2p}}{|\omega|^{2(L-p+1)}}\,\mathrm{d}\omega \right)^{1/2}
\leq C\, T^{p}\, T^{L-p+1} \left( \int_{|\omega| > 1/T} |\hat{f}(\omega)|^2\, |\omega|^{2(L+1)}\,\mathrm{d}\omega \right)^{1/2}
= C\, T^{L+1}\, \|(f - f_0)^{(L+1)}\|_{L_2} = O(T^{L+1}),$ (52)

where the inequality once more follows from |ω| ≥ 1/T over the domain and from our assumption (47). Combining (51) and (52) in (49) completes the proof. □

Our next result connects the expression $\int_{\mathbb{R}} |\hat{f}(\omega)|^2 E(T\omega)\,\mathrm{d}\omega$ to the Maclaurin expansion of E.

Proposition 9

Let (φ, φ~) be a set of N basis and sampling functions that are quasi-biorthonormal and provide an approximation scheme of order L. We assume moreover that the kernel E given by (43) is (2L+1)-times continuously differentiable and satisfies

$|E^{(2L+1)}(\omega)| \leq C^2 \max\big(1, |\omega|^{2p}\big)$ (53)

for some C > 0, some integer 0 ≤ p ≤ L, and every ω ∈ ℝ. Then, for every f ∈ W₂^{L+p+1/2}(ℝ), we have that

$\int_{\mathbb{R}} |\hat{f}(\omega)|^2\, E(T\omega)\,\mathrm{d}\omega = \frac{E^{(2L)}(0)}{(2L)!}\, \|f^{(L)}\|_{L_2}^2\, T^{2L} + O(T^{2L+1}).$ (54)

Proof

The kernel E being symmetric, we deduce that E^{(2k+1)}(0) = 0 for k = 0, …, (L−1). Moreover, the quasi-biorthonormality of (φ, φ~) ensures that E^{(2k)}(0) = 0 for k = 0, …, (L−1), together with E^{(2L)}(0) ≥ 0. Therefore, E being (2L+1)-times continuously differentiable, its Maclaurin expansion is given by

$E(\omega) = \frac{E^{(2L)}(0)}{(2L)!}\, \omega^{2L} + \frac{E^{(2L+1)}(\theta\omega)}{(2L+1)!}\, \omega^{2L+1},$ (55)

with θ = θ(ω) ∈ (0, 1). Using (55), we deduce that

$\int_{\mathbb{R}} |\hat{f}(\omega)|^2\, E(T\omega)\,\mathrm{d}\omega
= \frac{E^{(2L)}(0)}{(2L)!}\, T^{2L} \int_{\mathbb{R}} |\hat{f}(\omega)|^2\, \omega^{2L}\,\mathrm{d}\omega
+ \frac{T^{2L+1}}{(2L+1)!} \int_{\mathbb{R}} |\hat{f}(\omega)|^2\, \omega^{2L+1}\, E^{(2L+1)}\big(\theta(T\omega)\, T\omega\big)\,\mathrm{d}\omega
= \frac{E^{(2L)}(0)}{(2L)!}\, \|f^{(L)}\|_{L_2}^2\, T^{2L}
+ \frac{T^{2L+1}}{(2L+1)!} \int_{\mathbb{R}} |\hat{f}(\omega)|^2\, \omega^{2L+1}\, E^{(2L+1)}\big(\theta(T\omega)\, T\omega\big)\,\mathrm{d}\omega.$ (56)

Due to (53) and 0 < θ(Tω) < 1,

$\big| E^{(2L+1)}\big(\theta(T\omega)\, T\omega\big) \big| \leq C^2 \max\big(1, T^{2p}|\omega|^{2p}\big) \leq C^2 \big(1 + T^{2p}|\omega|^{2p}\big),$ (57)

from which we deduce that $\big| \int_{\mathbb{R}} |\hat{f}(\omega)|^2\, \omega^{2L+1}\, E^{(2L+1)}\big(\theta(T\omega)\, T\omega\big)\,\mathrm{d}\omega \big| \leq C^2 \big( \|f^{(L+1/2)}\|_{L_2}^2 + T^{2p}\, \|f^{(L+p+1/2)}\|_{L_2}^2 \big)$. Note that we refer here to fractional derivatives, as defined in (40). Injecting this into (56) implies that

$\left| \int_{\mathbb{R}} |\hat{f}(\omega)|^2\, E(T\omega)\,\mathrm{d}\omega - \frac{E^{(2L)}(0)}{(2L)!}\, \|f^{(L)}\|_{L_2}^2\, T^{2L} \right|
\leq \frac{C^2\, T^{2L+1}}{(2L+1)!} \Big( \|f^{(L+1/2)}\|_{L_2}^2 + T^{2p}\, \|f^{(L+p+1/2)}\|_{L_2}^2 \Big) = O(T^{2L+1}),$ (58)

which concludes the proof. □

Finally, we conclude with an extension of [20, Theorem 4] to sampling functions that are not necessarily bounded in the Fourier domain.

Theorem 10

We consider an approximation scheme (φ,φ~) with N basis and sampling functions such that

  • the basis functions are rapidly decaying L₂ functions such that the family $\{\varphi_i(\cdot - Nk)\}_{1 \leq i \leq N,\, k \in \mathbb{Z}}$ is a Riesz basis in the sense of (6), with approximation power of order L ≥ 1 (see Definition 1);

  • the sampling functions are rapidly decaying generalized functions such that (φ,φ~) is quasi-biorthonormal of order L (see Definition 6);

  • there exists an integer 0 ≤ p ≤ L such that, for any ω ∈ ℝ, any 0 ≤ k ≤ L, and any 1 ≤ i ≤ N,
    $\big| \hat{\tilde{\varphi}}_i^{(k)}(\omega) \big| \leq C \max\big(1, |\omega|^{p}\big).$ (59)

Then, for any f ∈ W₂^{L+max(p+1/2, 1)}(ℝ), we have that

$\|f - Q_T f\|_{L_2} \underset{T \to 0}{\sim} \sqrt{\frac{E^{(2L)}(0)}{(2L)!}}\; \|f^{(L)}\|_{L_2}\, T^{L}.$ (60)

The first two conditions in Theorem 10 ensure that the approximation scheme has an approximation power of order L. The last condition allows us to show that this approximation power is attained under mild conditions on the sampling functions, whose Fourier transforms may be unbounded.

Proof

We prove that the conditions of Theorem 10 imply that the hypotheses of both Propositions 8 and 9 are fulfilled.

Since φ and φ~ are rapidly decaying (generalized) functions, their Fourier transforms φ̂ and φ~̂ are infinitely differentiable. The same holds true for φ̂_d = G_φ^{-1} φ̂, since G_φ is a matrix-valued infinitely differentiable function with infinitely differentiable inverse G_φ^{-1}, due to the Riesz-basis condition. This implies that E is infinitely differentiable and, therefore, (2L+1)-times continuously differentiable.

The basis functions are rapidly decaying, hence in L₁(ℝ), which implies that φ̂ is bounded. The Riesz-basis condition then implies that both G_φ and its inverse are bounded as functions of ω. It follows that E_min(ω) is bounded, while E_res(ω) is dominated by $|(\hat{\tilde{\varphi}} - \hat{\varphi}_d)(\omega)|^2$, itself dominated by max(1, |ω|^{2p}) due to (59). This finally implies (47) for some constant C > 0. Similarly, for k ≤ L, the function φ̂^{(k)} is bounded, being the Fourier transform of t ↦ t^k φ(t), which is in L₁(ℝ) due to the rapid decay of φ. Again, this property is transferred to the derivatives of G_φ and of its inverse. Then, exploiting the Leibniz rule, one shows that E_min^{(2L+1)} is bounded, while E_res^{(2L+1)} is a sum of products made of bounded terms and of two terms of the form φ~̂_i^{(k)}(ω), which are controlled by (59). Putting things together, we deduce (53) for some constant C > 0.

Finally, the hypotheses of Propositions 8 and 9 are satisfied, implying (48) and (54), together with

$\|f - Q_T f\|_{L_2} = \sqrt{\frac{E^{(2L)}(0)}{(2L)!}}\; \|f^{(L)}\|_{L_2}\, T^{L} + O(T^{L+1/2}).$ (61)

This proves (60). □

Note that [20, Theorem 4] corresponds to Theorem 10 with p = 0, and with less restrictive assumptions on φ and φ~ (essentially, the rapid decay is replaced by a polynomial decay adapted to L). The condition f ∈ W₂^{L+max(p+1/2, 1)}(ℝ) reflects the cost of covering the scenario of possibly irregular sampling functions with unbounded Fourier transforms.

3.3. Approximation properties of Hermite splines

We can now evaluate the approximation error on f in different frameworks, including the Hermite scheme. In our analysis, for a fixed pair (φ, φ~), we also quantify the error on the derivative f′ when we approximate it by (Q_T f)′. We therefore study the quantities $\|f - Q_T f\|_{L_2}$ and $\|f' - (Q_T f)'\|_{L_2}$. Knowing the order of approximation L ≥ 1, the quality of the approximation is quantified by the two asymptotic constants

$C_{\varphi}^{\tilde{\varphi}} = \begin{pmatrix} C_{\varphi,1}^{\tilde{\varphi}} \\ C_{\varphi,2}^{\tilde{\varphi}} \end{pmatrix} = \begin{pmatrix} \lim_{T \to 0} T^{-L}\, \|f^{(L)}\|_{L_2}^{-1}\, \|f - Q_T f\|_{L_2} \\ \lim_{T \to 0} T^{-(L-1)}\, \|(f')^{(L-1)}\|_{L_2}^{-1}\, \|f' - (Q_T f)'\|_{L_2} \end{pmatrix}.$ (62)

The asymptotic constant C_{φ,2}^{φ~} can be computed with the same tools as C_{φ,1}^{φ~}. Using integration by parts, we indeed have that

$(Q_T f)' = \frac{1}{T} \sum_{i=1}^{N} \sum_{k \in \mathbb{Z}} \Big\langle f,\, \tfrac{1}{T}\tilde{\varphi}_i\big(\tfrac{\cdot}{T} - Nk\big) \Big\rangle\, \varphi_i'\big(\tfrac{\cdot}{T} - Nk\big)$ (63)
$= \frac{1}{T} \sum_{i=1}^{N} \sum_{k \in \mathbb{Z}} \Big\langle f',\, \tfrac{1}{T}\tilde{\varphi}_{i,\mathrm{int}}\big(\tfrac{\cdot}{T} - Nk\big) \Big\rangle\, \varphi_i'\big(\tfrac{\cdot}{T} - Nk\big),$ (64)

where the new sampling functions are best defined in the Fourier domain as

$\hat{\tilde{\varphi}}_{i,\mathrm{int}}(\omega) = \frac{1}{\mathrm{j}\omega}\, \hat{\tilde{\varphi}}_i(\omega).$ (65)

As one power of T is lost in the differentiation process, the rate of decay of the error on f′ is (L−1). Note that the apparent singularity of (65) around ω = 0 is counterbalanced in the analysis, since one only approximates functions g = f′ that are derivatives of functions f, hence for which ĝ(ω) = jω f̂(ω).
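This cancellation can be made explicit by a one-line Fourier-domain computation (a sketch; shifts, normalizations, and the overall sign conventions of the pairing are omitted):

```latex
% The spectrum of g = f' is \hat{g}(\omega) = j\omega \hat{f}(\omega), so the
% 1/(j\omega) factor introduced in (65) cancels exactly in the pairing:
\hat{g}(\omega)\,\hat{\tilde{\varphi}}_{i,\mathrm{int}}(\omega)
  = \mathrm{j}\omega\,\hat{f}(\omega)\cdot
    \frac{1}{\mathrm{j}\omega}\,\hat{\tilde{\varphi}}_i(\omega)
  = \hat{f}(\omega)\,\hat{\tilde{\varphi}}_i(\omega).
```

The integrand that actually enters the Parseval pairing is therefore regular at ω = 0, and the error analysis for f′ proceeds exactly as for f.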

In the Hermite framework, for which N = 2, the sampling functions are taken as φ~₁ = δ and φ~₂ = δ′, and the basis functions as (2) and (3). For comparison purposes, we also consider two relevant schemes that fit our analysis framework: classical cubic B-splines and interlaced derivative sampling. Cubic B-spline approximation corresponds to N = 1, with $\tilde{\varphi} = \sum_{k \in \mathbb{Z}} (b^3)^{-1}[k]\, \delta(\cdot - k)$, where (b³)^{-1} is the direct B-spline filter and φ = β³ the cubic B-spline [37]. Interlaced derivative sampling can be defined in the more general framework of generalized sampling without band-limiting constraints [41]. In this setting, N = 2 and the sampling functions correspond to φ~₁ = δ and φ~₂ = δ′(· − 1/2). The basis functions are constructed from the cubic B-spline to allow for a fair comparison and are given by

φ^1(ω)=3e2jω1+ejω42ω4, (66)
φ^2(ω)=e2jω1+ejω41+ejω4+ejω22e2jωω4 (67)

in the Fourier domain. A comprehensive description of their derivation is provided in [43]. In particular, the family (φ₁, φ₂) is known to be of order 4. It is worth noting that the sampling functions (φ~₁, φ~₂) = (δ, δ′(· − 1/2)) do not correspond to the dual functions in these frameworks. This is easily motivated by practical considerations: the sampling process is, in practice, implemented with digital filtering, which excludes the dual functions due to their continuous nature. The dual functions can, however, still be constructed from the Gram matrix following (30), so as to estimate the optimal approximation error.

We now state the approximation power of these three schemes. The results are known for cubic B-splines [19] and are included for comparison purposes. For Hermite approximation and interlaced sampling, they are deduced from Theorem 10; they are not covered by the multi-generator framework of [20], whose hypotheses exclude the use of derivative samples.

Proposition 11

Let (φ,φ~) be one of the three approximation schemes considered above. Then, we have that

$\|f - Q_T f\|_{L_2} \underset{T \to 0}{\sim} \frac{1}{72\sqrt{70}}\, \|f^{(4)}\|_{L_2}\, T^4, \qquad
\|f' - (Q_T f)'\|_{L_2} \underset{T \to 0}{\sim} \frac{1}{12\sqrt{210}}\, \|(f')^{(3)}\|_{L_2}\, T^3,$ (68)

for every f ∈ W₂⁵(ℝ) (cubic B-splines) or f ∈ W₂^{11/2}(ℝ) (Hermite splines or interlaced derivative sampling). Moreover, in the three cases, the optimal approximation scheme associated with φ leads to

$\|f - Q_T f\|_{L_2} \underset{T \to 0}{\sim} \sqrt{\tfrac{10}{3}}\; \|f - P_T f\|_{L_2}, \qquad
\|f' - (Q_T f)'\|_{L_2} \underset{T \to 0}{\sim} \|f' - (P_T f)'\|_{L_2},$ (69)

for every f ∈ W₂⁵(ℝ) (cubic B-splines) or f ∈ W₂^{11/2}(ℝ) (Hermite splines and interlaced derivative sampling).

Proof

The three frameworks are readily known to define approximation schemes of order L = 4 [13], [37], [41]. All the considered basis functions specify a Riesz basis, are rapidly decaying (Hermite and cubic B-splines are compactly supported, while the basis functions for interlaced derivative sampling, despite being non-compactly supported, are exponentially decaying [41]), and reproduce polynomials up to degree 3. The sampling functions are rapidly decaying generalized functions: they are compactly supported for the Hermite and interlaced derivative-sampling schemes, and, for cubic B-splines, we have seen that $\tilde{\varphi} = \sum_{k \in \mathbb{Z}} (b^3)^{-1}[k]\, \delta(\cdot - k)$, where the sequence (b³)^{-1} is known to be exponentially decaying [37], implying the result. In addition, (59) is clearly satisfied with p = 1 for the Hermite and interlaced derivative-sampling schemes, and with p = 0 for cubic splines. The conditions of Theorem 10 are therefore satisfied for L = 4, implying (60).

The value of E^{(2L)}(0) is obtained by computing the Maclaurin expansion of the kernel E.¹ The analysis of the approximation error on the derivative follows the same principle, the kernel now being associated with (φ, φ~_int) according to (65), which gives (68).

We obtain the asymptotic behavior of the optimal approximation errors $\|f - P_T f\|_{L_2}$ and $\|f' - (P_T f)'\|_{L_2}$, associated with φ~ = φ_d, in the same way, leading to (69). □
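As a cross-check of these constants, the Hermite scheme can be simulated directly. The sketch below is our own experiment (the periodic test function sin and the standard two-point cubic Hermite basis are our choices, not taken from the paper); it interpolates sin from its samples and derivative samples and measures the L₂ error:

```python
import numpy as np

def hermite_interp(fv, dv, T, x):
    """Piecewise-cubic Hermite interpolant on a uniform periodic grid:
    fv[k] = f(kT), dv[k] = f'(kT); evaluated at the points x."""
    n = fv.size
    s = x / T
    k = np.floor(s).astype(int) % n
    t = s - np.floor(s)                      # local coordinate in [0, 1)
    k1 = (k + 1) % n
    h00 = (1 + 2 * t) * (1 - t) ** 2         # standard cubic Hermite basis
    h10 = t * (1 - t) ** 2
    h01 = t ** 2 * (3 - 2 * t)
    h11 = t ** 2 * (t - 1)
    return h00 * fv[k] + T * h10 * dv[k] + h01 * fv[k1] + T * h11 * dv[k1]

def l2_error(nseg):
    T = 2 * np.pi / nseg
    grid = T * np.arange(nseg)
    x = np.linspace(0, 2 * np.pi, 200 * nseg, endpoint=False)
    err = np.sin(x) - hermite_interp(np.sin(grid), np.cos(grid), T, x)
    return np.sqrt(np.mean(err ** 2) * 2 * np.pi), T

e1, T1 = l2_error(32)
e2, T2 = l2_error(64)
rate = np.log2(e1 / e2)                      # empirical order of decay
const = e2 / (T2 ** 4 * np.sqrt(np.pi))      # ||sin''''||_{L2[0,2pi]} = sqrt(pi)
```

In our runs, the measured rate approaches L = 4 and the measured constant approaches 1/(72√70) ≈ 1.66·10⁻³, consistent with (68).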

Our findings call for three comments.

  • The three schemes being compared have the same approximation order, the same approximation constants, and the same optimal approximation constants (associated with (φ, φ_d)) for the reconstruction of both f and its derivative.

  • In all cases, the sampling functions result in a near-optimal asymptotic constant C_{φ,1}^{φ~} for the reconstruction of f. The minor discrepancy with respect to optimality is a factor of √(10/3) ≈ 1.83, which is expected since the sampling functions are not the dual functions. It is also remarkable that the reconstruction of the derivative, while not relying on the dual functions, achieves the optimal approximation constant C_{φ,2}^{φ~}.

  • Dual functions, although offering the smallest approximation error, have strong practical disadvantages. First, the φ_d are non-compactly supported splines. More importantly, they cannot easily be implemented, as they have no digital-filter equivalent (i.e., the computation of the ⟨f, φ_d⟩ depends on more than just knowing f and its derivative on a fixed grid, in contrast to the usual cubic B-splines and Hermite splines). Finally, dual functions do not possess a closed-form expression in general. For these reasons, the sampling functions φ~ classically used in the three considered approximation schemes, although non-optimal, are preferable in practice.

In Table 1, we sum up these findings, including the asymptotic constants for cubic B-splines, interlaced derivative sampling, and cubic Hermite splines, together with their comparison to the optimal constants. Hermite interpolation is thus not the only way of approximating a function and its first derivative, even if one wishes the error to remain close to optimal. The notable difference lies in the fact that the Hermite scheme provides basis functions that are simultaneously of finite support, which is not the case for interlaced derivative sampling (see (66) and (67)), and interpolating, which is not the case for cubic B-splines.

Table 1.

Comparison of approximation methods.

| Approximation method | Cubic B-splines | Interlaced derivative sampling | Hermite splines |
|---|---|---|---|
| Digital-filter implementation | yes | yes | yes |
| Interpolating | no | yes | yes |
| Finite support | yes | no | yes |
| Closed-form expression | yes | no | yes |
| Rate of decay ($L$) | 4 | 4 | 4 |
| Asymptotic constant ($C_{\varphi,1}^{\tilde{\varphi}}$) | $\frac{1}{72\sqrt{70}}$ | $\frac{1}{72\sqrt{70}}$ | $\frac{1}{72\sqrt{70}}$ |
| Ratio to optimal ($C_{\varphi,1}^{\tilde{\varphi}}$) | $\sqrt{3/10}$ | $\sqrt{3/10}$ | $\sqrt{3/10}$ |
| Rate of decay ($L-1$) | 3 | 3 | 3 |
| Asymptotic constant ($C_{\varphi,2}^{\tilde{\varphi}}$) | $\frac{1}{12\sqrt{210}}$ | $\frac{1}{12\sqrt{210}}$ | $\frac{1}{12\sqrt{210}}$ |
| Ratio to optimal ($C_{\varphi,2}^{\tilde{\varphi}}$) | 1 | 1 | 1 |

4. Concluding remarks

Our work focused on the formal investigation of two practical aspects of Hermite splines, namely, their short support and their good approximation properties. We showed that Hermite splines have minimal support among pairs of functions with similar reproduction properties, and we provided a framework to quantify their approximation power. These results not only allow us to prove that Hermite splines are asymptotically identical to cubic B-splines, but also offer a general framework for the quantitative approximation of functions and their derivatives.

In summary, Hermite splines are found to offer an approximation scheme that (1) has the same approximation power as the well-known cubic B-splines, (2) is interpolating (possibly with the derivative), and (3) is based on maximally localized, compactly supported basis functions. The price to pay is the need for two basis functions instead of a single one, and the need to have access to derivative samples.

Acknowledgments

This work was supported by core funding from the European Molecular Biology Laboratory (EMBL), Germany, by the Swiss National Science Foundation, Switzerland, under Grants 200020_184646/1 and P2ELP2_181759, and by the European Research Council under Grant H2020-ERC (ERC grant agreement No 692726 - GlobalBioIm). The authors thank Yoann Pradat for useful remarks on the manuscript.

Footnotes

1

We relied on the technical computing software Mathematica 11 for this task.

Contributor Information

Julien Fageot, Email: julien.fageot@epfl.ch.

Shayan Aziznejad, Email: shayan.aziznejad@epfl.ch.

Michael Unser, Email: michael.unser@epfl.ch.

Virginie Uhlmann, Email: uhlmann@ebi.ac.uk.

References

  • 1. Schoenberg I.J. Cardinal Spline Interpolation. Society for Industrial and Applied Mathematics; Philadelphia, PA, USA: 1973.
  • 2. Lipow P., Schoenberg I.J. Cardinal interpolation and spline functions. III. Cardinal Hermite interpolation. Linear Algebra Appl. 1973;6:273–304.
  • 3. Schoenberg I.J., Sharma A. Cardinal interpolation and spline functions. V. The B-splines for cardinal Hermite interpolation. Linear Algebra Appl. 1973;7(1):1–42.
  • 4. Unser M., Aldroubi A., Eden M. B-spline signal processing: Part I—Theory. IEEE Trans. Signal Process. 1993;41(2):821–833.
  • 5. Unser M., Aldroubi A., Eden M. B-spline signal processing: Part II—Efficient design and applications. IEEE Trans. Signal Process. 1993;41(2):834–848.
  • 6. De Boor C. A Practical Guide to Splines. Springer-Verlag; New York, NY, USA: 1978.
  • 7. Farouki R. The Bernstein polynomial basis: A centennial retrospective. Comput. Aided Geom. Design. 2012;29(6):379–419.
  • 8. Prautzsch H., Boehm W., Paluszny M. Bézier and B-Spline Techniques. Springer-Verlag; Berlin, Germany: 2013.
  • 9. Farin G. Curves and Surfaces for CAGD: A Practical Guide. Morgan Kaufmann Publishers; Burlington, MA, USA: 2002.
  • 10. Böhm W., Farin G., Kahmann J. A survey of curve and surface methods in CAGD. Comput. Aided Geom. Design. 1984;1(1):1–60.
  • 11. Dahmen W., Han B., Jia R.-Q., Kunoth A. Biorthogonal multiwavelets on the interval: Cubic Hermite splines. Constr. Approx. 2000;16(2):221–259.
  • 12. Warming R., Beam R. Discrete multiresolution analysis using Hermite interpolation: Biorthogonal multiwavelets. SIAM J. Sci. Comput. 2000;22(4):1269–1317.
  • 13. Uhlmann V., Fageot J., Unser M. Hermite snakes with control of tangents. IEEE Trans. Image Process. 2016;25(6):2803–2816. doi: 10.1109/TIP.2016.2551363.
  • 14. Uhlmann V., Fageot J., Gupta H., Unser M. Statistical optimality of Hermite splines. In: Proceedings of the Eleventh International Workshop on Sampling Theory and Applications (SampTA'15), Washington DC, USA, 2015, pp. 226–230.
  • 15. Delgado-Gonzalo R., Thévenaz P., Unser M. Exponential splines and minimal-support bases for curve representation. Comput. Aided Geom. Design. 2012;29(2):109–128.
  • 16. Blu T., Thévenaz P., Unser M. MOMS: Maximal-order interpolation of minimal support. IEEE Trans. Image Process. 2001;10(7):1069–1080. doi: 10.1109/83.931101.
  • 17. Hall C. On error bounds for spline interpolation. J. Approx. Theory. 1968;1(2):209–218.
  • 18. Birkhoff G., Priver A. Hermite interpolation errors for derivatives. Stud. Appl. Math. 1967;46(1–4):440–447.
  • 19. Blu T., Unser M. Quantitative Fourier analysis of approximation techniques: Part I—Interpolators and projectors. IEEE Trans. Signal Process. 1999;47(10):2783–2795.
  • 20. Blu T., Unser M. Approximation error for quasi-interpolators and (multi-)wavelet expansions. Appl. Comput. Harmon. Anal. 1999;6(2):219–251.
  • 21. Condat L. Reconstruction of derivatives: Error analysis and design criteria. In: 19th European Signal Processing Conference. IEEE; 2011, pp. 839–843.
  • 22. Condat L., Möller T. Quantitative error analysis for the reconstruction of derivatives. IEEE Trans. Signal Process. 2011;59(6):2965–2969.
  • 23. de Boor C., DeVore R., Ron A. Approximation from shift-invariant subspaces of L2(Rd). Trans. Amer. Math. Soc. 1994;341(2):787–806.
  • 24. de Boor C., DeVore R., Ron A. The structure of finitely generated shift-invariant spaces in L2(Rd). J. Funct. Anal. 1994;119(1):37–78.
  • 25. de Boor C., DeVore R., Ron A. Approximation orders of FSI spaces in L2(Rd). Constr. Approx. 1998;14(4):631–652.
  • 26. Holtz O., Ron A. Approximation orders of shift-invariant subspaces of W2s(Rd). arXiv preprint math/0512609, 2005.
  • 27. Jia R.-Q., Liu S.-T. Wavelet bases of Hermite cubic splines on the interval. Adv. Comput. Math. 2006;25(1–3):23–39.
  • 28. Goodman T. Interpolatory Hermite spline wavelets. J. Approx. Theory. 1994;78(2):174–189.
  • 29. Antonini M., Barlaud M., Mathieu P., Daubechies I. Image coding using wavelet transform. IEEE Trans. Image Process. 1992;1(2):205–220. doi: 10.1109/83.136597.
  • 30. Uhlmann V., Unser M. Tip-seeking active contours for bioimage segmentation. In: Proceedings of the Twelfth IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI'15), Brooklyn NY, USA, 2015, pp. 544–547.
  • 31. Migliozzi D., Cornaglia M., Mouchiroud L., Uhlmann V., Unser M., Auwerx J., Gijs M. Multimodal imaging and high-throughput image-processing for drug screening on living organisms on-chip. J. Biomed. Opt. 2019;24(2):021205. doi: 10.1117/1.JBO.24.2.021205.
  • 32. Ron A. Factorization theorems for univariate splines on regular grids. Israel J. Math. 1990;70(1):48–68.
  • 33. Blu T., Thévenaz P., Unser M. MOMS: Maximal-order interpolation of minimal support. IEEE Trans. Image Process. 2001;10(7):1069–1080. doi: 10.1109/83.931101.
  • 34. Jetter K., Plonka G. A survey on L2-approximation order from shift-invariant spaces. In: Multivariate Approximation and Applications. Cambridge University Press; 2001, pp. 73–111.
  • 35. Strang G., Fix G. A Fourier analysis of the finite element variational method. In: Constructive Aspects of Functional Analysis. Cremonese; Rome, Italy: 1971, pp. 796–830.
  • 36. De Boor C., DeVore R. Partitions of unity and approximation. Proc. Amer. Math. Soc. 1985;93(4):705–709.
  • 37. Unser M. Splines: A perfect fit for signal and image processing. IEEE Signal Process. Mag. 1999;16(6):22–38.
  • 38. Schwartz L. Théorie des Distributions. Hermann; 1966.
  • 39. Papoulis A. Generalized sampling expansion. IEEE Trans. Circuits Syst. 1977;24(11):652–654.
  • 40. Aldroubi A. Oblique projections in atomic spaces. Proc. Amer. Math. Soc. 1996;124(7):2051–2060.
  • 41. Unser M., Zerubia J. A generalized sampling theory without band-limiting constraints. IEEE Trans. Circuits Syst. II. 1998;45(8):959–969.
  • 42. Unser M., Zerubia J. Generalized sampling: Stability and performance analysis. IEEE Trans. Signal Process. 1997;45(12):2941–2950.
  • 43. Uhlmann V. Landmark active contours for bioimage analysis: A tale of points and curves. EPFL Thesis no. 7951, Swiss Federal Institute of Technology Lausanne (EPFL), 2017.
