2023 Aug 7;21(2):26. doi: 10.1007/s43670-023-00065-7

Embracing off-the-grid samples

Oscar López 1, Özgür Yılmaz 2
PMCID: PMC10406720  PMID: 37560141

Abstract

Many empirical studies suggest that samples of continuous-time signals taken at locations randomly deviated from an equispaced grid (i.e., off-the-grid) can benefit signal acquisition, e.g., undersampling and anti-aliasing. However, explicit statements of such advantages and their respective conditions are scarce in the literature. This paper provides some insight on this topic when the sampling positions are known, with grid deviations generated i.i.d. from a variety of distributions. By solving a square-root LASSO decoder with an interpolation kernel we demonstrate the capabilities of nonuniform samples for compressive sampling, an effective paradigm for undersampling and anti-aliasing. For functions in the Wiener algebra that admit a discrete $s$-sparse representation in some transform domain, we show that $\mathcal{O}(s\,\mathrm{polylog}\,N)$ random off-the-grid samples are sufficient to recover an accurate $\frac{N}{2}$-bandlimited approximation of the signal. For sparse signals (i.e., $s\ll N$), this sampling complexity is a great reduction in comparison to equispaced sampling, where $\mathcal{O}(N)$ measurements are needed for the same quality of reconstruction (Nyquist–Shannon sampling theorem). We further consider noise attenuation via oversampling (relative to a desired bandwidth), a standard technique with limited theoretical understanding when the sampling positions are non-equispaced. By solving a least squares problem, we show that $\mathcal{O}(N\log N)$ i.i.d. randomly deviated samples provide an accurate $\frac{N}{2}$-bandlimited approximation of the signal with suppression of the noise energy by a factor $\frac{1}{\sqrt{\log N}}$.

Keywords: Nonuniform sampling, Sub-Nyquist sampling, Anti-aliasing, Jitter sampling, Compressive sensing, Dirichlet kernel

Introduction

The Nyquist–Shannon sampling theorem is perhaps the most impactful result in the theory of signal processing, fundamentally shaping the practice of acquiring and processing data [1, 2] (also attributed to Kotel’nikov [3], Ferrar [4], Cauchy [5], Ogura [6], Whittaker [7, 8]). In this setting, typical acquisition of a continuous-time signal involves taking equispaced samples at a rate slightly higher than a prescribed frequency $\omega$ Hz in order to obtain a bandlimited approximation via a quickly decaying kernel. Such techniques provide accurate approximations of (noisy) signals whose spectral energy is largely contained in the band $[-\omega/2,\omega/2]$ [5, 9–11].

As a consequence, industrial signal acquisition and post-processing methods tend to be designed to incorporate uniform sampling. However, such sampling schemes are difficult to honor in practice due to physical constraints and natural factors that perturb sampling locations from the uniform grid, i.e., nonuniform or off-the-grid samples. In response, nonuniform analogs of the noise-free sampling theorem have been developed, where an average sampling density proportional to the highest frequency $\omega$ of the signal guarantees accurate interpolation, e.g., Landau density [11–13]. However, non-equispaced samples are typically unwanted and regarded as a burden due to the extra computational cost involved in regularization, i.e., interpolating the nonuniform samples onto the desired equispaced grid.

On the other hand, many works in the literature have considered the potential benefits of deliberate nonuniform sampling [14–41]. Suppression of aliasing error, i.e., anti-aliasing, is a well-known advantage of randomly perturbed samples. For example, jittered sampling is a common technique for anti-aliasing that also provides a well distributed set of samples [30, 42–44]. To the best of the authors’ knowledge, this phenomenon was first noticed by Shapiro and Silverman [14] (also by Beutler [15, 16] and implicitly by Landau [12]) and remained unused in applications until rediscovered at Pixar Animation Studios by Cook [17]. According to our literature review, such observations remain largely empirical or arguably uninformative for applications. Closing this gap between theory and experiments would help the practical design of such widely used methodologies.

To this end, in this paper we propose a practical framework that allows us to concretely investigate the properties of randomly deviated samples for undersampling, anti-aliasing and general noise attenuation. To elaborate (see Sect. 1.1 for notation), let $\mathbf{f}:[-\frac12,\frac12)\to\mathbb{C}$ be our function of interest that belongs to some smoothness class. Our goal is to obtain a uniform discretization $f\in\mathbb{C}^N$, where an estimate of $f_k=\mathbf{f}(\frac{k-1}{N}-\frac12)$ will provide an accurate approximation $\mathbf{f}^\sharp(x)$ of $\mathbf{f}(x)$ for all $x\in[-\frac12,\frac12)$. We are given noisy non-equispaced samples $b=\tilde f+d\in\mathbb{C}^m$, where $\tilde f_k=\mathbf{f}(\frac{k-1}{m}-\frac12+\Delta_k)$ is the nonuniformly sampled signal and $d\in\mathbb{C}^m$ encompasses unwanted additive noise. In general, we will consider functions $\mathbf{f}$ with support on $[-\frac12,\frac12)$ whose periodic extension is in the Wiener algebra $\mathcal{A}(\Omega)$ [45], where by abuse of notation $\Omega$ denotes both the interval $[-\frac12,\frac12)$ and the torus $\mathbb{T}$.

To achieve undersampling and anti-aliasing, we assume our uniform signal admits a sparse (or compressible) representation along the lines of compressive sensing [46–48]. We say that $f$ is compressible with respect to a transform $\Psi\in\mathbb{C}^{N\times N}$ if there exists some $g\in\mathbb{C}^N$ such that $f=\Psi g$ and $g$ can be accurately approximated by an $s$-sparse vector ($s\ll N$). In this scenario, our methodology consists of constructing an interpolation kernel $\mathcal{S}\in\mathbb{R}^{m\times N}$ that achieves $\mathcal{S}f\approx\tilde f$ accurately for smooth signals, in order to define our estimate $\mathbf{f}^\sharp(x)$ using the discrete approximation $\Psi g^\sharp\approx f$ where

$g^\sharp:=\operatorname*{argmin}_{h\in\mathbb{C}^N}\;\lambda\|h\|_1+\sqrt{\tfrac{N}{m}}\,\|\mathcal{S}\Psi h-b\|_2$  (1)

and $\lambda\ge0$ is a parameter to be chosen appropriately. We show that for signals in the Wiener algebra and under certain distributions $\mathcal{D}$, if we have $m\in\mathcal{O}(s\,\mathrm{polylog}(N))$ off-the-grid samples with i.i.d. deviations $\{\Delta_k\}_{k=1}^m\sim\mathcal{D}$, then the approximation error $|\mathbf{f}(x)-\mathbf{f}^\sharp(x)|$ is proportional to $\|d\|_2$, the error of the best $s$-sparse approximation of $g$, and the error of the best $\frac{N}{2}$-bandlimited approximation of $\mathbf{f}$ in the Wiener algebra norm (see equation 6.1 in [45]). If $s\ll N$, the average sampling rate required for our result (step size $\sim\frac{1}{s\,\mathrm{polylog}(N)}$) provides a stark contrast to standard density conditions, where a rate proportional to the highest frequency $\omega\approx N$ (resulting in step size $\frac{1}{N}$) is needed for the same bandlimited approximation. This result is among the first to formally state the anti-aliasing nature of nonuniform sampling in the context of undersampling (see Sect. 3).

Removing the sparse signal model, we attenuate measurement noise (i.e., denoise) by defining $\mathbf{f}^\sharp(x)$ using the discrete estimate

$f^\sharp:=\operatorname*{argmin}_{h\in\mathbb{C}^N}\|\mathcal{S}h-b\|_2.$  (2)

In this context, our main result states that $m\approx N\log(N)$ i.i.d. randomly deviated samples provide approximation error $|\mathbf{f}(x)-\mathbf{f}^\sharp(x)|$ proportional to the noise level ($\frac{\|d\|_2}{\sqrt{\log(N)}}$) and the error of the best $\frac{N}{2}$-bandlimited approximation of $\mathbf{f}$ in the Wiener algebra norm. Thus, by nonuniform oversampling (relative to the desired $\frac{N}{2}$-bandwidth) we attenuate unwanted noise regardless of its structure. While uniform oversampling is a common noise filtration technique, our results show that general nonuniform samples also possess this denoising property (see Sect. 4).

The rest of the paper is organized as follows: Sect. 2 provides a detailed elaboration of our sampling scenario, signal model and methodology. Section 3 showcases our results for anti-aliasing and undersampling of compressible signals while Sect. 4 considers noise attenuation via oversampling. A comprehensive discussion of the results and implications is presented in Sect. 5. Several experiments and computational considerations are found in Sect. 6, followed by concluding remarks in Sect. 7. We postpone the proofs of our statements until Sect. 8. Before proceeding to the next section, we find it best to introduce the general notation that will be used throughout. However, each subsequent section may introduce additional notation helpful in its specific context.

Notation

We denote complex-valued functions of real variables using bold letters, e.g., $\mathbf{f}:\mathbb{R}\to\mathbb{C}$. For any integer $n\in\mathbb{N}$, $[n]$ denotes the set $\{\ell\in\mathbb{N}:1\le\ell\le n\}$. For $k,\ell\in\mathbb{N}$, $b_k$ indicates the $k$-th entry of the vector $b$, $D_{k\ell}$ denotes the $(k,\ell)$ entry of the matrix $D$, and $D_{k\cdot}$ (resp. $D_{\cdot\ell}$) denotes the $k$-th row (resp. $\ell$-th column) of the matrix. We reserve $x$ to denote real variables and write the complex exponential as $e(x):=e^{2\pi i x}$, where $i$ is the imaginary unit. For a vector $f\in\mathbb{C}^n$ and $1\le p<\infty$, $\|f\|_p:=\left[\sum_{k=1}^n|f_k|^p\right]^{1/p}$ is the $\ell_p$-norm, $\|f\|_\infty=\max_{k\in[n]}|f_k|$, and $\|f\|_0$ gives the total number of non-zero elements of $f$. For a matrix $X\in\mathbb{C}^{n\times m}$, $\sigma_k(X)$ denotes the $k$-th largest singular value of $X$ and $\|X\|:=\sigma_1(X)$ is the spectral norm. $\mathcal{A}(\Omega)$ is the Wiener algebra and $H^k(\Omega)$ is the Sobolev space $W^{k,2}(\Omega)$ (with domain $\Omega$), $\mathbb{S}^{n-1}$ is the unit sphere in $\mathbb{C}^n$, and the adjoint of a linear operator $A$ is denoted by $A^*$.

Assumptions and methodology

In this section we develop the signal model, deviation model and interpolation kernel, in Sects. 2.1, 2.2 and 2.3, respectively. This will allow us to proceed to Sects. 3 and 4, where the main results are elaborated. We note that the deviation model (Sect. 2.2) and the sparse signal model at the end of Sect. 2.1 only apply to the compressive sensing results in Sect. 3. However, the sampling-on-the-torus assumption in Sect. 2.1 does apply to the results in Sect. 4 as well.

Signal model

For the results in Sects. 3 and 4, let $\Omega=[-\frac12,\frac12)$ and let $\mathbf{f}:\Omega\to\mathbb{C}$ be the function of interest to be sampled. We assume $\mathbf{f}\in\mathcal{A}(\Omega)$ with Fourier expansion

$\mathbf{f}(x)=\sum_{\ell=-\infty}^{\infty}c_\ell\,e(\ell x),$  (3)

on $\Omega$. Note that our regularity assumption implies that

$\sum_{\ell=-\infty}^{\infty}|c_\ell|<\infty,$

which will be crucial for our error bounds. Further, $H^k(\Omega)\subset\mathcal{A}(\Omega)$ for $k\ge1$, so that our context applies to many signals of interest.

Henceforth, let $N\in\mathbb{N}$ be odd. We denote the discretized regular data vector by $f\in\mathbb{C}^N$, which is obtained by sampling $\mathbf{f}$ on the uniform grid $\tau=\{t_1,\dots,t_N\}\subset\Omega$ with $t_k:=\frac{k-1}{N}-\frac12$ (a collection of equispaced points), so that $f_k=\mathbf{f}(t_k)$. The vector $f$ will be our discrete signal of interest to recover via nonuniform samples, in order to ultimately obtain an approximation to $\mathbf{f}(x)$ for all $x\in\Omega$. Similar results can be obtained in the case $N$ is even; our current assumption is adopted to simplify the exposition.

The observed nonuniform data is encapsulated in the vector $\tilde f\in\mathbb{C}^m$ with underlying unstructured grid $\tilde\tau=\{\tilde t_1,\dots,\tilde t_m\}\subset\Omega$, where $\tilde t_k:=\frac{k-1}{m}-\frac12+\Delta_k$ is now a collection of generally non-equispaced points. The entries of the perturbation vector $\Delta\in\mathbb{R}^m$ define the pointwise deviations of $\tilde\tau$ from the equispaced grid $\{\frac{k-1}{m}-\frac12\}_{k=1}^m$, and $\tilde f_k=\mathbf{f}(\tilde t_k)$. Noisy nonuniform samples are given as

$b=\tilde f+d\in\mathbb{C}^m,$

where the noise model, $d$, does not incorporate off-the-grid corruption. We assume that we know $\tilde\tau$.

In order for the expansion (3) to remain valid for $x\in\tilde\tau$, we must impose $\tilde\tau\subset\Omega$. This is not possible for the general deviations $\Delta$ we wish to examine, so we instead adopt the torus as our sampling domain to ensure this condition.

Sampling on the torus: for all our results, we consider sampling schemes to be on the torus. In other words, we allow grid points $\tilde\tau$ to lie outside of the interval $[-\frac12,\frac12)$, but they will correspond to samples of $\mathbf{f}$ within $[-\frac12,\frac12)$ via a circular wrap-around. To elaborate, if $\mathbf{f}|_\Omega(x)$ is given as

$\mathbf{f}|_\Omega(x)=\begin{cases}\mathbf{f}(x)&\text{if }x\in[-\frac12,\frac12)\\[2pt]0&\text{if }x\notin[-\frac12,\frac12),\end{cases}$

then we define $\tilde{\mathbf{f}}(x)$ as the periodic extension of $\mathbf{f}|_\Omega(x)$ to the whole line:

$\tilde{\mathbf{f}}(x)=\sum_{\ell=-\infty}^{\infty}\mathbf{f}|_\Omega(x+\ell).$

We now apply samples generated from our deviations $\tilde\tau$ to $\tilde{\mathbf{f}}(x)$. Indeed, any $\tilde t_k$ generated outside of $\Omega$ will have $\tilde{\mathbf{f}}(\tilde t_k)=\mathbf{f}(t)$ for some $t\in\Omega$. In this way, we avoid restricting the magnitude of the entries of $\Delta$, and the expansion (3) will remain valid for any nonuniform samples generated.

Sparse signal model: for the results of Sect. 3 only, we impose a compressibility condition on $f\in\mathbb{C}^N$. To this end, let $\Psi\in\mathbb{C}^{N\times N}$ be a basis with $0<\sigma_N(\Psi)=:\alpha$ and $\sigma_1(\Psi)=:\beta$. We assume there exists some $g\in\mathbb{C}^N$ such that $f=\Psi g$, where $g$ can be accurately approximated by an $s\ll N$ sparse vector. To be precise, for $s\in[N]$ we define the error of the best $s$-sparse approximation of $g$ as

$\epsilon_s(g):=\min_{\|h\|_0\le s}\|h-g\|_1,$

and assume $s$ has been chosen so that $\epsilon_s(g)$ is within a prescribed error tolerance determined by the practitioner.

In Sect. 8.1, we will relax the condition that $\Psi$ be a basis by allowing full column rank matrices $\Psi\in\mathbb{C}^{N\times n}$ with $n\le N$. While such transforms are not typical in compressive sensing, we argue that they may be of practical interest, since our results will show that if $\Psi$ can be selected as a tall matrix then the sampling complexity will depend solely on its number of columns (i.e., the smallest dimension $n$).

The transform $\Psi$ will have to be coherent with respect to the 1D centered discrete Fourier basis $F\in\mathbb{C}^{N\times N}$ (see Sect. 2.3 for the definition of $F$). We define the DFT-incoherence parameter as

$\gamma=\max_{\ell\in[N]}\sum_{k=1}^N|\langle F_{k\cdot},\Psi_{\cdot\ell}\rangle|,$

which provides a uniform bound on the $\ell_1$-norm of the discrete Fourier coefficients of the columns of $\Psi$. This parameter will play a role in the sampling complexity of our result in Sect. 3, as a metric that quantifies the smoothness of our signal of interest. We discuss $\gamma$ in detail in Sect. 5.3, including examples for several transforms common in compressive sensing.

Deviation model

Section 3 will apply to random deviations $\Delta\in\mathbb{R}^m$ whose entries are i.i.d. with any distribution, $\mathcal{D}$, that obeys the following: for $\delta\sim\mathcal{D}$, there exists some $\theta\ge0$ such that for all integers $0<|j|\le\frac{N-1}{m}$ we have

$\sqrt{\tfrac{2N}{m}}\,\big|\mathbb{E}\,e(jm\delta)\big|\le\theta.$  (4)

This will be known as our deviation model. In our results, distributions with a smaller $\theta$ parameter will require fewer samples and provide reduced error bounds. We postpone further discussion of the deviation model until Sect. 5.2, where we will also provide examples of deviations that fit our criteria. We note that the deviation model is most relevant when $m<N$. The case $m\ge N$ is discussed in Sect. 4, which no longer requires this deviation model or the sparse signal model.

Dirichlet kernel

In Sects. 3 and 4, we model our nonuniform samples via an interpolation kernel $\mathcal{S}\in\mathbb{R}^{m\times N}$ that achieves $\mathcal{S}f\approx\tilde f$ accurately. We consider the Dirichlet kernel defined by $\mathcal{S}=\mathcal{N}F^*:\mathbb{C}^N\to\mathbb{C}^m$, where $F\in\mathbb{C}^{N\times N}$ is a 1D centered discrete Fourier transform (DFT) and $\mathcal{N}\in\mathbb{C}^{m\times N}$ is a 1D centered nonuniform discrete Fourier transform (NDFT, see [49, 50]) with normalized rows and non-harmonic frequencies chosen according to $\tilde\tau$. In other words, let $\tilde N=\frac{N-1}{2}$; then the $(k,\ell)\in[m]\times[N]$ entry of $\mathcal{N}$ is given as

$\mathcal{N}_{k\ell}=\frac{1}{\sqrt N}\,e(-\tilde t_k(\ell-\tilde N-1)).$

This NDFT is referred to as a nonuniform discrete Fourier transform of type 2 in [50]. Thus, the action of $\mathcal{S}$ on $f\in\mathbb{C}^N$ can be given as follows: we first apply the centered inverse DFT to our discrete uniform data

$\check f_u:=(F^*f)_u=\frac{1}{\sqrt N}\sum_{p=1}^N f_p\,e(t_p(u-\tilde N-1)),\quad u\in[N],$  (5)

followed by the NDFT in terms of $\tilde\tau$:

$(\mathcal{S}f)_k:=(\mathcal{N}\check f)_k=\sum_{u=1}^N\check f_u\,\mathcal{N}_{ku}=\frac{1}{\sqrt N}\sum_{u=1}^N\check f_u\,e(-\tilde t_k(u-\tilde N-1)),\quad k\in[m].$  (6)

Equivalently,

$(\mathcal{S}f)_k=\frac{1}{N}\sum_{p=1}^N f_p\,K(\tilde t_k-t_p),\quad k\in[m],$  (7)

where $K(x)=\frac{\sin(N\pi x)}{\sin(\pi x)}$ is the Dirichlet kernel. This equality is well known and holds by applying the geometric series formula upon expansion. This kernel is commonly used for trigonometric interpolation and is accurate when acting on signals that can be well approximated by trigonometric polynomials of finite order, as we show in the following theorem.

Theorem 1

Let $\mathcal{S}$, $f$ and $\tilde f$ be defined as above and let $\tilde t_k\in\Omega$ for some $k\in[m]$. If $\tilde t_k=t_p$ for some $p\in[N]$ then

$\big(\tilde f-\mathcal{S}f\big)_k=0$  (8)

and otherwise

$\big(\tilde f-\mathcal{S}f\big)_k=\sum_{|\ell|>\tilde N}c_\ell\Big(e(\ell\tilde t_k)-(-1)^{\lfloor(\ell+\tilde N)/N\rfloor}\,e(r(\ell)\,\tilde t_k)\Big),$  (9)

where $r(\ell)=\mathrm{rem}(\ell+\tilde N,N)-\tilde N$, with $\mathrm{rem}(p,q)$ giving the remainder after division of $p$ by $q$. As a consequence, if $\tilde t_k\in\Omega$ for all $k\in[m]$ then for any integer $1\le p<\infty$

$\|\tilde f-\mathcal{S}f\|_p\le 2m^{1/p}\sum_{|\ell|>\tilde N}|c_\ell|,$  (10)

and

$\|\tilde f-\mathcal{S}f\|_\infty\le 2\sum_{|\ell|>\tilde N}|c_\ell|.$  (11)

The proof of this theorem is postponed until Sect. 8.3. Therefore, the error due to $\mathcal{S}$ is proportional to the $\ell_1$-norm (or Wiener algebra norm) of the Fourier coefficients of $\mathbf{f}$ that correspond to frequencies larger than $\tilde N=\frac{N-1}{2}$. In particular, notice that if $c_\ell=0$ for all $|\ell|>\tilde N$ we obtain perfect interpolation, as expected from standard results in signal processing (i.e., bandlimited signals consisting of trigonometric polynomials with finite degree $\tilde N$). Despite the wide usage of trigonometric interpolation in applications [51–53], such a result with a sharp error bound does not seem to exist in the literature.

Notice that Theorem 1 only holds for $\tilde\tau\subset\Omega$, as restricted in Sect. 2.1. However, the result continues to hold for $\tilde\tau$ unrestricted if we sample on the torus as imposed in Sect. 2.1. Therefore, the error bound will always hold under our setup.
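To make the interpolation property of Theorem 1 concrete, the following sketch (ours, in Python/NumPy rather than the paper's MATLAB setup; all sizes are illustrative) builds the Dirichlet interpolation of a trigonometric polynomial of degree at most $\tilde N$ through the spectral route of Sect. 2.3 and checks that it reproduces the off-the-grid samples exactly, as (8)–(11) predict when the Fourier tail vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 31                       # odd, so N_tilde = (N - 1) / 2
N_t = (N - 1) // 2
m = 50

# Bandlimited test signal: trigonometric polynomial of degree <= N_tilde
c = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # c_{-N_t}, ..., c_{N_t}
freqs = np.arange(-N_t, N_t + 1)
f_cont = lambda x: (c[None, :] * np.exp(2j * np.pi * np.outer(x, freqs))).sum(axis=1)

# Uniform grid t_p = (p-1)/N - 1/2 and jittered off-the-grid points
t = np.arange(N) / N - 0.5
t_tilde = np.arange(m) / m - 0.5 + rng.uniform(-0.5 / m, 0.5 / m, m)

f = f_cont(t)                # discrete signal f_p = f(t_p)

# Dirichlet interpolation S f, computed spectrally: recover the centered
# Fourier coefficients from the N equispaced samples, then evaluate at t_tilde
c_hat = np.exp(-2j * np.pi * np.outer(freqs, t)) @ f / N
Sf = np.exp(2j * np.pi * np.outer(t_tilde, freqs)) @ c_hat

err = np.max(np.abs(Sf - f_cont(t_tilde)))
print(err)   # exact interpolation, up to round-off
```

The discrete orthogonality of the exponentials on the grid $\tau$ recovers the coefficients $c_\ell$ exactly, so the residual is round-off only; for a signal with energy beyond $\tilde N$, the same code would instead exhibit the tail error of (10)–(11).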

Anti-aliasing via nonuniform sampling

With the definitions and assumptions introduced in Sect. 2, our methodology in this section will consist of modeling our $m$ nonuniform measurements via $\mathcal{S}$ and approximating the $s$ largest coefficients of $f$ in $\Psi$ (in the representation $f=\Psi g$). This discrete approximation will provide an accurate estimate $\mathbf{f}^\sharp(x)$ of $\mathbf{f}(x)$ for all $x\in\Omega$, achieving precision comparable to that given by the best $\frac{N}{2}$-bandlimited approximation of $\mathbf{f}$ while requiring $m\ll N$ samples.

The following is a simplified statement, assuming that $\Psi$ is an orthonormal basis and $m\le N$. We focus on this cleaner result for ease of exposition, presented as a corollary of the main result in Sect. 8.1. The full statement also covers the case $m\ge N$ and allows for more general and practical choices of $\Psi$ that can reduce the sample complexity.

Theorem 2

Let $2\le s\le N$ and $m\le N$, where $m$ is the number of nonuniform samples. Under our signal model with Fourier expansion (3), let $\Psi\in\mathbb{C}^{N\times N}$ be an orthonormal basis with DFT-incoherence parameter $\gamma$. Define the interpolation kernel $\mathcal{S}$ as in Sect. 2.3, with the entries of $\Delta$ i.i.d. from any distribution satisfying our deviation model from Sect. 2.2 with $\theta<1$.

Define

$g^\sharp:=\operatorname*{argmin}_{h\in\mathbb{C}^N}\;\lambda\|h\|_1+\sqrt{\tfrac{N}{m}}\,\|\mathcal{S}\Psi h-b\|_2$  (12)

with

$0<\lambda\le\frac{1-\theta}{2\sqrt{2s}}.$

If

$m\ge C_1\,\frac{\gamma^2(1+\theta)}{(1-\theta)^2}\,s\log^4(C_2N),$  (13)

where $C_1$ and $C_2$ are absolute constants, then

$\|f-\Psi g^\sharp\|_2\le\frac{8\epsilon_s(g)}{\sqrt s}+4\lambda\sqrt s+\frac{8\sqrt2}{1-\theta}\sqrt{\tfrac{N}{m}}\,\|d\|_2+2\sqrt N\sum_{|\ell|>\frac{N-1}{2}}|c_\ell|$  (14)

with probability exceeding $1-\frac1N$.

Therefore, with $m\sim s\log^4(N)$ randomly perturbed samples we can recover $f$ with error (14) proportional to the sparse model mismatch $\epsilon_s(g)$, the noise level $\|d\|_2$, and the error of the best $\frac{N-1}{2}$-bandlimited approximation of $\mathbf{f}$ in the Wiener algebra norm (i.e., $\sum_{|\ell|>\frac{N-1}{2}}|c_\ell|$). As a consequence, we can approximate $\mathbf{f}(x)$ for all $x\in\Omega$, as stated in the following corollary.

Corollary 1

Let $h:\Omega\to\mathbb{C}^N$ be the vector-valued function defined entry-wise for $\ell\in[N]$ as

$h(x)_\ell:=\frac{1}{\sqrt N}\,e(-x(\ell-\tilde N-1)),$  (15)

and define the function $\mathbf{f}^\sharp:\Omega\to\mathbb{C}$ via

$\mathbf{f}^\sharp(x)=\langle h(x),F^*\Psi g^\sharp\rangle,$  (16)

where $g^\sharp$ is given by (12).

Then,  under the assumptions of Theorem 2,

$|\mathbf{f}(x)-\mathbf{f}^\sharp(x)|\le\frac{8\epsilon_s(g)}{\sqrt s}+4\lambda\sqrt s+\frac{8\sqrt2}{1-\theta}\sqrt{\tfrac{N}{m}}\,\|d\|_2+\Big(\frac{8\sqrt N\lambda\sqrt s+16\sqrt{2N}}{1-\theta}+2\Big)\sum_{|\ell|>\frac{N-1}{2}}|c_\ell|$  (17)

holds for all $x\in\Omega=[-\frac12,\frac12)$ with probability exceeding $1-\frac1N$.

The proof of this corollary is presented in Sect. 8.3. In the case $\epsilon_s(g)=\|d\|_2=0$, the results intuitively say that we can recover a $\frac{N-1}{2}$-bandlimited approximation of $\mathbf{f}$ with $\mathcal{O}(s\,\mathrm{polylog}(N))$ random off-the-grid samples. In the case of equispaced samples, $\mathcal{O}(N)$ measurements are needed for the same quality of reconstruction by the Nyquist–Shannon sampling theorem (or by Theorem 1 directly). Thus, for compressible signals with $s\ll N$, random nonuniform samples provide a significant reduction in sampling complexity (undersampling) and simultaneously allow recovery of frequency components exceeding the sampling density (anti-aliasing). See Sect. 5 for further discussion.

Notice that general denoising is not guaranteed in an undersampling scenario, due to the term $\sqrt{\frac{N}{m}}\,\|d\|_2$ in (14) and (17). In other words, one cannot expect the output estimate to reduce the measurement noise $\|d\|_2$, since the factor $\sqrt{\frac{N}{m}}\ge1$ appearing in our error bound implies an amplification of the input noise level. Such situations with limited samples are typical in compressive sensing, and this noise-amplifying behavior is demonstrated numerically in Sect. 6.3. In general a practitioner must oversample (i.e., $N<m$) to attenuate the effects of generic noise. However, Theorem 2 and Corollary 1 state that nonuniform samples specifically attenuate aliasing noise.
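As an informal numerical illustration of this undersampling phenomenon (ours, not the paper's experiment), the sketch below recovers an $s$-sparse trigonometric polynomial from $m<N$ jittered samples. For brevity we replace the square-root LASSO program (12) with orthogonal matching pursuit, a lightweight greedy stand-in; the sizes $N$, $m$, $s$ are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
N, m, s = 63, 40, 3           # ambient size, off-grid samples, sparsity
N_t = (N - 1) // 2
freqs = np.arange(-N_t, N_t + 1)

# s-sparse Fourier representation: the signal is a sum of s random harmonics
support = rng.choice(N, size=s, replace=False)
g = np.zeros(N, dtype=complex)
g[support] = rng.standard_normal(s) + 1j * rng.standard_normal(s)

# Noiseless jittered off-the-grid samples b_k = f(t~_k)
t_tilde = np.arange(m) / m - 0.5 + rng.uniform(-0.5 / m, 0.5 / m, m)
A = np.exp(2j * np.pi * np.outer(t_tilde, freqs))   # nonuniform Fourier matrix
b = A @ g

# Orthogonal matching pursuit: pick s atoms greedily, least squares on support
S_hat, r = [], b.copy()
for _ in range(s):
    S_hat.append(int(np.argmax(np.abs(A.conj().T @ r))))
    sub = A[:, S_hat]
    coef, *_ = np.linalg.lstsq(sub, b, rcond=None)
    r = b - sub @ coef

g_hat = np.zeros(N, dtype=complex)
g_hat[S_hat] = coef
print(np.linalg.norm(g_hat - g) / np.linalg.norm(g))  # small: support identified
```

Note the anti-aliasing at work: the average sampling rate ($m=40$) is below the bandwidth ($N=63$), yet the jitter decorrelates frequencies that an equispaced grid of $m$ points would alias onto one another.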

Denoising via nonuniform oversampling

In this section, we show that reduction in the noise level introduced during acquisition is possible given nonuniform samples whose average density exceeds the Nyquist rate (relative to a desired bandwidth). While the implications of this section are not surprising in the context of classical sampling theory, to the best of our knowledge such guarantees do not exist in the literature when the sampling points are nonuniform.

By removing the sparse signal model (Sect. 2.1) and the deviation model (Sect. 2.2), and requiring $m\gtrsim N$ off-the-grid samples (on the torus, see Sect. 2.1), we now use the numerically cheaper program of least squares. To reiterate, $\mathbf{f}\in\mathcal{A}(\Omega)$ with Fourier expansion $\sum_{\ell=-\infty}^{\infty}c_\ell\,e(\ell x)$ is our continuous signal of interest. With $N$ odd, $f\in\mathbb{C}^N$ is the discrete signal to be approximated, where $f_k=\mathbf{f}(t_k)$ for $t_k:=\frac{k-1}{N}-\frac12$. The vector $\tilde f\in\mathbb{C}^m$ encapsulates the nonuniformly sampled data, where $\tilde f_k=\mathbf{f}(\tilde t_k)$ for $\tilde t_k:=\frac{k-1}{m}-\frac12+\Delta_k$. Noisy nonuniform samples are given as

$b=\tilde f+d\in\mathbb{C}^m,$

where the additive noise model, $d$, does not incorporate off-the-grid corruption.

In this oversampling context, we provide a denoising result for a more general set of deviations.

Theorem 3

Let the entries of $\Delta$ be i.i.d. from any distribution and define

$f^\sharp:=\operatorname*{argmin}_{h\in\mathbb{C}^N}\|\mathcal{S}h-b\|_2.$  (18)

If $m=\kappa N\log(N)$ with $\kappa\ge\frac{4}{\log(e/2)}$, then

$\|f-f^\sharp\|_2\le\frac{2\sqrt2}{\sqrt{\kappa\log(N)}}\,\|d\|_2+4\sqrt{2N}\sum_{|\ell|>\frac{N-1}{2}}|c_\ell|$  (19)

with probability exceeding $1-\frac1N$.

The proof can be found in Sect. 8.2. In this scenario, we oversample relative to the $\frac{N-1}{2}$-bandlimited output by generating a set of samples with average density exceeding the Nyquist rate (step size $\frac1N$). With $m\approx\kappa N\log(N)$ for $\kappa\gtrsim1$, bound (19) tells us that we can diminish the measurement noise level $\|d\|_2$ by a factor $\frac{1}{\sqrt{\kappa\log(N)}}$. The oversampling parameter $\kappa$ may be varied for increased attenuation at the cost of denser sampling. We comment that the methodology from Sect. 3 with $m\ge N$ also allows for denoising and similar error bounds (see Theorem 4). However, focusing on the oversampling case distinctly provides simplified results with many additional benefits.

In particular, here the deviations Δ need not be from our deviation model in Sect. 2.2 and instead the result applies to perturbations generated by any distribution. This includes the degenerate distribution (deterministic), so the claim also holds in the case of equispaced samples. Furthermore, we no longer require the sparsity assumption and the result applies to all functions in the Wiener algebra. Finally, the recovery method (18) consists of standard least squares which can be solved cheaply relative to the square-root LASSO decoder (12) from the previous section.

We may proceed analogously to Corollary 1 and show that the output discrete signal $f^\sharp$ provides a continuous approximation

$\mathbf{f}^\sharp(x):=\langle h(x),F^*f^\sharp\rangle\approx\mathbf{f}(x)$

for all $x\in\Omega$, where $h(x)$ is defined in (15). The error of this estimate is bounded as

$|\mathbf{f}(x)-\mathbf{f}^\sharp(x)|\le\frac{2\sqrt2}{\sqrt{\kappa\log(N)}}\,\|d\|_2+\big(4\sqrt{2N}+2\big)\sum_{|\ell|>\frac{N-1}{2}}|c_\ell|,$

proportional to the error of the best $\frac{N-1}{2}$-bandlimited approximation in the Wiener algebra norm, while attenuating the introduced measurement noise. In this result, the structure of the deviated samples is quite general and accounts for many practical cases.

While related results exist in the equispaced case (see for example Sect. 4 of [10]), Theorem 3 is the first such statement in a general non-equispaced case. The result therefore provides insight into widely applied techniques for the removal of unwanted noise, without making any assumptions on the noise structure.
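The denoising effect of Theorem 3 is easy to observe numerically. The sketch below (ours, in Python/NumPy with hypothetical sizes) solves the least squares problem for the centered Fourier coefficients, which is unitarily equivalent to (18), from noisy jittered samples with $m\approx 4N\log N$, and reports the recovery error relative to the injected noise level.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 31                          # odd bandwidth parameter
N_t = (N - 1) // 2
m = int(4 * N * np.log(N))      # oversampled: m ~ kappa * N * log(N)
freqs = np.arange(-N_t, N_t + 1)

# Bandlimited ground truth and its uniform discretization f
c = rng.standard_normal(N) + 1j * rng.standard_normal(N)
t = np.arange(N) / N - 0.5
f = np.exp(2j * np.pi * np.outer(t, freqs)) @ c

# Noisy jittered samples b = f~ + d
t_tilde = np.arange(m) / m - 0.5 + rng.uniform(-0.5 / m, 0.5 / m, m)
A = np.exp(2j * np.pi * np.outer(t_tilde, freqs))
d = 0.1 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))
b = A @ c + d

# Least squares for the Fourier coefficients, then resample on the grid
c_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
f_hat = np.exp(2j * np.pi * np.outer(t, freqs)) @ c_hat

print(np.linalg.norm(f - f_hat) / np.linalg.norm(d))  # well below 1: noise attenuated
```

Projecting $m$ noisy samples onto only $N$ coefficients averages out the noise, which is precisely the mechanism behind the $\frac{1}{\sqrt{\kappa\log(N)}}$ factor in (19).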

Discussion

This section elaborates on several aspects of the results. Section 5.1 discusses relevant work in the literature. Section 5.2 provides examples of distributions that satisfy our deviation model and intuition of its meaning. Section 5.3 explores the γ parameter with examples of transformations Ψ that produce a satisfiable sampling complexity.

Related work

Several studies in the compressive sensing literature are similar to our results in Sect. 3 [53, 54]. In contrast to these references, we derive recovery guarantees for non-orthonormal systems (when $\theta\ne0$) while focusing the scope of the paper within the context of classical sampling theory (introducing error according to the bandlimited approximation). The work in [53] considers sampling of sparse trigonometric polynomials and overlaps with our application in the case $\Psi=F$. Our results generalize this work to allow for other signal models and sparsifying transforms. Furthermore, [53] assumes that the samples are chosen uniformly at random from a continuous interval or a discrete set of $N$ equispaced points. In contrast, our results pertain to general deviations from an equispaced grid with average sampling density $\sim s\,\mathrm{polylog}(N)$ and allow for many other distributions of the perturbations.

Deviation model

In this section, we present several examples of distributions that are suitable for our results in Sect. 3. Notice that our deviation model utilizes the characteristic function of a given distribution, evaluated at a finite set of points. This allows one to easily consider many distributions for our purpose by consulting the relevant and exhaustive literature of characteristic functions (see for example [55]).

  1. Uniform continuous: $\mathcal{D}=U[-\frac{1}{2m},\frac{1}{2m}]$ gives $\theta=0$. To generalize this example, we may take $\mathcal{D}=U[\mu-\frac{p}{2m},\mu+\frac{p}{2m}]$, for any $\mu\in\mathbb{R}$ and $p\in\mathbb{N}\setminus\{0\}$, to obtain $\theta=0$ (i.e., shift and dilate on the torus). Notice that with $p=m$, we obtain i.i.d. samples chosen uniformly from the whole interval $\Omega$ (as in [53]).

  2. Uniform discrete: $\mathcal{D}=U\{-\frac{1}{2m}+\frac{k}{m\bar n}\}_{k=0}^{\bar n-1}$ with $\bar n:=\frac{2(N-1)}{m}+1$ gives $\theta=0$. To generalize, we may shift and dilate, $\mathcal{D}=U\{\mu-\frac{p}{2m}+\frac{pk}{m\bar n}\}_{k=0}^{\bar n-1}$, for any $\mu\in\mathbb{R}$, $p\in\mathbb{N}\setminus\{0\}$ and integer $\bar n>\frac{2(N-1)}{pm}$. We obtain $\theta=0$ as well.

  3. Normal: $\mathcal{D}=\mathcal{N}(\mu,\bar\sigma^2)$, for any mean $\mu\in\mathbb{R}$ and variance $\bar\sigma^2>0$. Here
    $\theta=\sqrt{\tfrac{2N}{m}}\,e^{-2(\bar\sigma\pi m)^2}.$
    In particular, for fixed $\bar\sigma$, $m$ may be chosen large enough to satisfy the conditions of Theorem 4, and vice versa.
  4. Laplace: $\mathcal{D}=\mathcal{L}(\mu,b)$, for any location $\mu\in\mathbb{R}$ and scale $b>0$, gives
    $\theta=\sqrt{\tfrac{2N}{m}}\,\frac{1}{1+(2\pi bm)^2}.$
  5. Exponential: $\mathcal{D}=\mathrm{Exp}(\lambda)$, for any rate $\lambda>0$, gives
    $\theta=\sqrt{\tfrac{2N}{m}}\,\frac{1}{\sqrt{1+4\pi^2m^2\lambda^{-2}}}.$

In particular, notice that examples 1 and 2 include cases of jittered sampling [30, 42–44]. Indeed, with $p=1$ these examples partition $\Omega$ into $m$ regions of equal size and choose a point randomly from each region (in a continuous or discrete sense, respectively). The jittered sampling list can be expanded by considering other distributions to generate samples within each region.

In general we will have $\theta>0$, which implies deteriorated output quality and an increased number of required off-the-grid samples according to Theorem 2. Arguably, our deviation model introduces a notion of optimal jitter when the chosen distribution achieves $\theta=0$, which is ideal in our results. This observation may be of interest in the active literature on jittered sampling techniques [30].

Intuitively, $\theta$ measures how biased a given distribution is in generating deviations. If $\delta\sim\mathcal{D}$, then $|\mathbb{E}\,e(jm\delta)|\approx0$ means that the distribution is nearly centered and impartial. On the other hand, $|\mathbb{E}\,e(jm\delta)|\approx1$ gives the opposite interpretation, where the deviations will be generated favoring a certain direction in an almost deterministic sense. Our result is not applicable to such biased distributions, since the error bound in Theorem 2 grows without bound and becomes meaningless as $\theta\to1$.
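The $\theta$ values above can be sanity-checked numerically. The sketch below (ours; the sizes $N$, $m$ and the Monte Carlo setup are illustrative) estimates $|\mathbb{E}\,e(jm\delta)|$ by Monte Carlo for the uniform jitter of example 1, where it vanishes at every relevant integer $j$, and compares the Gaussian case of example 3 against its closed-form characteristic function.

```python
import numpy as np

rng = np.random.default_rng(3)
N, m = 2015, 155
n_mc = 1_000_000
j_vals = np.arange(1, (N - 1) // m + 1)     # integers 0 < j <= (N-1)/m

def char_abs(delta, j):
    """Monte Carlo estimate of |E e(j m delta)| = |E exp(2 pi i j m delta)|."""
    return abs(np.exp(2j * np.pi * j * m * delta).mean())

# Example 1: uniform jitter on [-1/(2m), 1/(2m)]; its characteristic function
# is sinc(j), which vanishes at every nonzero integer j (so theta = 0)
delta_u = rng.uniform(-0.5 / m, 0.5 / m, n_mc)
mc_uniform = max(char_abs(delta_u, j) for j in j_vals)

# Example 3: Gaussian deviations give |E e(jm delta)| = exp(-2 (sigma pi j m)^2)
sigma = 1.0 / (2 * m)
delta_g = rng.normal(0.0, sigma, n_mc)
mc_gauss = char_abs(delta_g, 1)
closed_form = np.exp(-2 * (sigma * np.pi * 1 * m) ** 2)

print(mc_uniform)                 # ~ 0: Monte Carlo noise only
print(mc_gauss, closed_form)      # agree to Monte Carlo accuracy
```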

Signal model

In this section we discuss the DFT-incoherence parameter $\gamma$, introduced in Sect. 2.1 as

$\gamma=\max_{\ell\in[n]}\sum_{k=1}^N|\langle F_{k\cdot},\Psi_{\cdot\ell}\rangle|,$

where we now let $\Psi\in\mathbb{C}^{N\times n}$ be a full column rank matrix with $n\le N$. The parameter $\gamma$ is a uniform upper bound on the $\ell_1$-norm of the discrete Fourier coefficients of the columns of $\Psi$. Since the decay of the Fourier coefficients of a function is related to its smoothness, intuitively $\gamma$ can be seen as a measure of the smoothness of the columns of $\Psi$. Implicitly, this also measures the smoothness of $\mathbf{f}$, since its uniform discretization admits a representation via this transformation: $f=\Psi g$.

Therefore, the role of $\gamma$ in the sampling complexity is clear: relatively small $\gamma$ implies that our signal of interest is smooth and therefore requires fewer samples. This observation is intuitive, since non-smooth functions will require additional samples to capture discontinuities, in accordance with the Gibbs phenomenon. This argument is validated numerically in Sect. 6.1, where we compare reconstruction via an infinitely differentiable ensemble (FFT) and a discontinuous wavelet (Daubechies 2).

We now consider several common choices for Ψ and discuss the respective γ parameter:

  1. $\Psi=F$ (the DFT), then $\gamma=1$, which is optimal. However, most appropriate and common is the choice $\Psi=F^*$, which can be shown to exhibit $\gamma\in\mathcal{O}(1)$ by a simple calculation.

  2. When $\Psi=H$ is the inverse 1D Haar wavelet transform, we have $\gamma\in\mathcal{O}(\log(N))$. In [56] it is shown that the inner products between rows of $F$ and rows of $H$ decay according to an inverse power law of the frequency (see Lemma 1 therein). A similar proof shows that $|\langle F_{k\cdot},H_{\cdot\ell}\rangle|\lesssim\frac{1}{|k|}$, which gives the desired upper bound for $\gamma$ via an integral comparison. Notice that these basis vectors have jump discontinuities, and yet we still obtain an acceptable DFT-incoherence parameter for nonuniform undersampling.

  3. $\Psi=I_N$ (the $N\times N$ identity) gives $\gamma=\sqrt N$. This is the worst-case scenario for normalized transforms since
    $\max_{v\in\mathbb{S}^{N-1}}\sum_{k=1}^N|\langle F_{k\cdot},v\rangle|=\max_{v\in\mathbb{S}^{N-1}}\sum_{k=1}^N|\langle F_{k\cdot},F^*v\rangle|=\max_{v\in\mathbb{S}^{N-1}}\sum_{k=1}^N|v_k|=\sqrt N.$
    In general, our smooth signals of interest are not fit for this sparsity model.
  4. Let $p\ge1$ be an integer, and consider matrices $\Psi$ whose columns are uniform discretizations of $p$-times differentiable functions, with $p-1$ periodic and continuous derivatives and a $p$-th derivative that is piecewise continuous. In this case $\gamma\in\mathcal{O}(\log(N))$ if $p=1$ and $\gamma\in\mathcal{O}(1)$ if $p\ge2$. For the sake of brevity we do not provide this calculation, but refer the reader to Section 2.8 in [57] for an informal argument.

Example 4 is particularly informative due to its generality and ability to somewhat formalize the intuition behind γ previously discussed. This example implies the applicability of our result to a general class of smooth functions that agree nicely with our signal model defined in Sect. 2.1 (functions in A(Ω)).
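To make $\gamma$ concrete, the following sketch (ours) computes the DFT-incoherence of two of the transforms above. For simplicity it uses a unitary, uncentered DFT; for these two examples the centered convention used in the paper yields the same values.

```python
import numpy as np

N = 64
F = np.fft.fft(np.eye(N)) / np.sqrt(N)        # unitary DFT matrix

def dft_incoherence(Psi):
    """gamma = max over columns of the l1-norm of the DFT of that column."""
    return np.max(np.sum(np.abs(F @ Psi), axis=0))

gamma_identity = dft_incoherence(np.eye(N))   # worst case: sqrt(N)
gamma_fourier = dft_incoherence(F.conj().T)   # F F* = I: optimal, gamma = 1

print(gamma_identity, np.sqrt(N))   # both equal sqrt(64) = 8
print(gamma_fourier)                # = 1 (optimal)
```

The identity columns are maximally spread in frequency (each has $N$ coefficients of modulus $1/\sqrt N$), while the inverse-DFT columns each concentrate on a single frequency, matching examples 1 and 3 above.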

Numerical experiments

In this section we present numerical experiments to explore several aspects of our methodology and results. Specifically, we consider the effects of the DFT-incoherence parameter and the $\theta$ parameter in Sects. 6.1 and 6.2, respectively. Section 6.3 investigates the noise attenuation properties of nonuniform samples. We first introduce several terms and models to describe the setup of the experiments. Throughout, we let $N=2015$ be the size of the uniformly discretized signal $f$.

Program (1) with $\lambda=\frac{1}{2\sqrt{2s}}$ is solved using CVX [58, 59], a MATLAB® optimization toolbox for solving convex problems. We implement the Dirichlet kernel using (7) directly to construct $\mathcal{S}$. We warn the reader that in this section we have not dedicated much effort to optimizing the numerical complexity of the interpolation kernel. For a faster implementation, we recommend instead applying the DFT/NDFT representation $\mathcal{S}=\mathcal{N}F^*$ (see Sect. 2.3) using the NFFT 3 software from [49] or its parallel counterpart [60].

Given output $f^\sharp=\Psi g^\sharp$ with true solution $f$, we consider the relative norm of the reconstruction error as a measure of output quality, given as

$\mathrm{Relative\ Error}=\frac{\|f-f^\sharp\|_2}{\|f\|_2}.$

Grid perturbations: to construct the nonuniform grid $\tilde\tau$, we introduce an irregularity parameter $\rho\ge0$. We define our perturbations by sampling from a uniform distribution, so that each $\Delta_k$ is drawn uniformly at random from $[-\frac{\rho}{m},\frac{\rho}{m}]$, independently for each $k\in[m]$. Off-the-grid samples $\tilde\tau$ are generated independently for each signal reconstruction experiment.

Complex exponential signal model: we consider bandlimited complex exponentials with random harmonic frequencies. With bandwidth $\omega=\frac{N-1}{2}=1007$ and sparsity level $s=50$, we generate $\omega\in\mathbb{Z}^s$ by choosing $s$ frequencies uniformly at random from $\{-\omega,-\omega+1,\dots,\omega\}$ and let

$\mathbf{f}(x)=\sum_{k=1}^s e(\omega_k x).$

We use the DFT as a sparsifying transform, $\Psi=F$, so that $g=\Psi^*f=\Psi^{-1}f$ is a 50-sparse vector. This transform is implemented using MATLAB's fft function. The frequency vector $\omega$ is generated randomly for each independent set of experiments. Note that in this case we have optimal DFT-incoherence parameter $\gamma=1$ (see Sect. 5.3).
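A scaled-down version of this signal model (our sketch, with hypothetical sizes $N=63$, $s=5$ in place of 2015 and 50) verifies the claimed sparsity of $g$ in the Fourier domain:

```python
import numpy as np

rng = np.random.default_rng(4)
N, s = 63, 5                      # scaled-down stand-ins for N = 2015, s = 50
N_t = (N - 1) // 2

# s random harmonic frequencies in {-N_t, ..., N_t}, without repetition
omega = rng.choice(np.arange(-N_t, N_t + 1), size=s, replace=False)

t = np.arange(N) / N - 0.5        # uniform grid t_p = (p-1)/N - 1/2
f = np.exp(2j * np.pi * np.outer(t, omega)).sum(axis=1)

# Fourier coefficients via the FFT: exactly s entries of unit magnitude
coeffs = np.fft.fft(f) / N
print(np.sum(np.abs(coeffs) > 0.5))   # 5
```

By the discrete orthogonality of the exponentials, each harmonic contributes a single unit-magnitude coefficient, so exactly $s$ entries of $g$ are nonzero.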

Gaussian signal model: We consider a non-bandlimited signal consisting of a sum of Gaussian functions. This signal model is defined as

$$f(x) = -e^{-100x^2} + e^{-100(x-.104)^2} - e^{-100(x+.217)^2}.$$

For this dataset, we use the Daubechies 2 wavelet as a sparsifying transform Ψ, implemented using the Rice Wavelet Toolbox [61]. This provides $g = \Psi^* f = \Psi^{-1} f$, which can be well approximated by a 50-sparse vector. In other words, all entries of g are non-zero, but $\epsilon_{50}(g) < .088 \approx \frac{\|f\|_2}{250}$ and, if $g_{50}$ is the best 50-sparse approximation of g, then $\|f - \Psi g_{50}\|_2 < .026 \approx \frac{\|f\|_2}{850}$. The smallest singular value of the transform is $\sigma_{2015}(\Psi) = 1$ and we have $\gamma \approx 36.62$, computed numerically.

Effect of DFT-incoherence

This section is dedicated to exploring the effect of the DFT-incoherence parameter in signal reconstruction. We consider the complex exponential and Gaussian signal models described above. Recall that in the complex exponential model we have $\Psi = F$ (the DFT) with optimal DFT-incoherence parameter $\gamma = 1$. In the Gaussian model, Ψ is the Daubechies 2 wavelet with $\gamma \approx 36.62$. Varying the number of nonuniform samples, we compare the quality of reconstruction under both signal models with their respective transforms in order to investigate the role of γ in the reconstruction error. We consider the sparsity level $s = 50$ and solve (1) with $\lambda = \frac{1}{2\sqrt{2s}} = \frac{1}{20}$, though the Gaussian signal model is not exactly 50-sparse in the Daubechies domain (see the last paragraph of this subsection for further discussion).

Here we set the irregularity parameter $\rho = \frac{1}{2}$ to generate the deviations (so that $\theta = 0$) and vary the average step size of the nonuniform samples. We do so by letting m vary through the set $\left\{\frac{N}{1.5}, \frac{N}{2}, \frac{N}{2.5}, \ldots, \frac{N}{10}\right\}$. For each fixed value of m, the average relative error is obtained by averaging the relative errors of 50 independent reconstruction experiments. The results are shown in Fig. 1, where we plot the average step size vs the average relative reconstruction error.

Fig. 1.

Fig. 1

Plot of average relative reconstruction error vs average step size for both signal models. In the complex exponential model ($\Psi = F$, the DFT) we have $\gamma = 1$, and in the Gaussian signal model we have $\gamma \approx 36.62$ (Daubechies 2 wavelet). Notice that the complex exponential model allows for reconstruction from larger step sizes in comparison to the Gaussian signal model

These experiments demonstrate the negative effect of larger DFT-incoherence parameters on signal reconstruction. Indeed, in Fig. 1 we see that the complex exponential model with $\gamma = 1$ allows for accurate reconstruction from larger step sizes. This is to be expected from Sect. 3, where the results imply that the Daubechies 2 wavelet will require more samples for successful reconstruction, in accordance with its parameter $\gamma \approx 36.62$.

To appropriately interpret these experiments, it is important to note that the signal from the Gaussian model is only compressible and does not exhibit an exactly 50-sparse representation in the Daubechies transform domain. Arguably, this may render the experiments of this section inappropriate for purely determining the effect of γ, since the impact of approximating the Gaussian signal with a 50-sparse vector may be significant and produce an unfair comparison (i.e., due to the sparse model mismatch term $\epsilon_{50}(g)$ appearing in our error bound (14)). This is important for the reader to keep in mind, but we argue that the effect of this mismatch is negligible, since in the Gaussian signal model with $g = \Psi^{-1}f$ we have $\epsilon_{50}(g) < \frac{\|f\|_2}{250}$ and, if $g_{50}$ is the best 50-sparse approximation of g, then $\|f - \Psi g_{50}\|_2 < \frac{\|f\|_2}{850}$. This argument can be further validated with modified numerical experiments where f does have a 50-sparse representation in the Daubechies domain, producing reconstruction errors with identical behavior and magnitude as those in Fig. 1. Therefore, we believe our results here are informative for understanding the impact of γ. For brevity, we do not present these modified experiments, since such an f would no longer satisfy the Gaussian signal model and would complicate our discussion.

Effect of the deviation model parameter

In this section we generate the deviations in a way that varies the deviation model parameter θ, in order to explore its effect on signal reconstruction. We only consider the complex exponential signal model for this purpose and fix $m = \lfloor\frac{N}{10}\rfloor = 201$.

We vary θ by generating deviations with irregularity parameter ρ varying over $\{.001, .002, .003, \ldots, .009\}\cup\{.01, .02, .03, \ldots, .5\}$. For each fixed ρ value we compute the average relative reconstruction error of 50 independent experiments. Notice that for each $k\in[m]$ and any j

$$\mathbb{E}\,e(jm\Delta_k) = \frac{\sin(2\pi j\rho)}{2\pi j\rho}.$$

Given ρ, we use this observation and definition (4) to compute the respective θ value by considering the maximum of the expression above over all $0 < |j| \le \frac{N-1}{m} = 10$. The relationship between ρ and θ is illustrated in Fig. 2 (right plot), where smaller irregularity parameters $\rho\approx 0$ produce larger deviation model parameters θ.
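The computation of θ from ρ can be sketched as follows. This is a hedged illustration assuming definition (4) normalizes the maximum of the expectation by $\frac{2N}{m}$, which is consistent with the range $\theta\in[0, 20.05)$ quoted below for $N = 2015$, $m = 201$:

```python
import numpy as np

def theta(rho, N=2015, m=201):
    # |E e(j*m*Delta)| = |sin(2*pi*j*rho) / (2*pi*j*rho)| for Delta ~ U[-rho/m, rho/m];
    # we take the maximum over 0 < |j| <= (N-1)/m and scale by 2N/m (our reading of (4)).
    j = np.arange(1, (N - 1) // m + 1)
    vals = np.abs(np.sin(2 * np.pi * j * rho) / (2 * np.pi * j * rho))
    return (2 * N / m) * vals.max()

assert theta(0.5) < 1e-12       # rho = 1/2 gives theta = 0, since sin(pi*j) = 0
assert theta(0.001) > 1         # tiny jitter pushes theta toward its ~20.05 ceiling
```

As ρ decreases toward 0 the sinc factor tends to 1 for every j, which is why nearly equispaced grids yield the largest θ values.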

Fig. 2.

Fig. 2

(Left) Plot of average relative reconstruction error vs corresponding θ parameter and (right) plot illustrating the relationship between the irregularity parameter ρ and the deviation model parameter θ. The plots emphasize via red outlines the θ values that satisfy the conditions of Theorem 2 (i.e., θ<1). Although our results only hold for three θ values (0, .409, .833),  the experiments demonstrate that accurate recovery is possible otherwise

According to (4), this allows $\theta\in[0, 20.05)$, which violates the assumption $\theta < 1$ of Theorem 2 and does not allow (1) to be implemented with a parameter in the required range

$$0 < \lambda \le \frac{\sqrt{1-\theta}}{2\sqrt{2s}}.$$

Despite this, we implement all experiments in this section with $\lambda = \frac{1}{2\sqrt{2s}} = \frac{1}{20}$ (where $s = 50$). Such a fixed choice may not provide a fair set of results, since the parameter is not adapted in any way to the deviation model. Regardless, the experiments will prove informative while revealing the robustness of the square-root LASSO decoder with respect to parameter selection.

Figure 2 plots θ vs average relative reconstruction error (left plot). In the plot, our main result (Theorem 2) is only strictly applicable in three cases (outlined in red, θ=0,.409,.833). However, the experiments demonstrate that decent signal reconstruction may be achieved when the condition θ<1 does not hold and the parameter λ is not chosen appropriately. Therefore, the applicability of the methodology goes beyond the restrictions of the theorem and the numerical results demonstrate the flexibility of the square-root LASSO decoder.

Noise attenuation

This section explores the robustness of the methodology when presented with measurement noise, in both the undersampled and oversampled cases relative to the target bandwidth $\frac{N-1}{2}$ (Sects. 3 and 4, respectively). We only solve the square-root LASSO problem (1) with $\lambda = \frac{1}{2\sqrt{2s}} = \frac{1}{20}$, and omit the least squares problem (18) for brevity. However, we note that both programs produce similar results and conclusions in the oversampled case (see Theorem 4). We only consider the bandlimited complex exponential signal model for this purpose. We generate additive random noise $d\in\mathbb{R}^m$ from a uniform distribution: each entry of d is i.i.d. from $\left[-\frac{\chi}{1000}, \frac{\chi}{1000}\right]$ where $\chi = \frac{\|f\|_1}{\sqrt{m}}$, chosen to keep $\|d\|_2$ relatively constant as m varies.
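The scaling of this noise model can be sketched numerically. This is a hedged illustration reading the scale as $\chi = \frac{\|f\|_1}{\sqrt{m}}$; the value of $\|f\|_1$ is a hypothetical stand-in, and the $\frac{1}{\sqrt{3}}$ factor comes from the variance of a uniform random variable:

```python
import numpy as np

rng = np.random.default_rng(2)
f_l1 = 37.0                                    # hypothetical stand-in for ||f||_1

def noise(m, rng):
    chi = f_l1 / np.sqrt(m)                    # chi = ||f||_1 / sqrt(m)
    return rng.uniform(-chi / 1000, chi / 1000, size=m)

# E||d||_2^2 = m * (chi/1000)^2 / 3, i.e. ||d||_2 ~ ||f||_1 / (1000*sqrt(3)) for every m:
target = f_l1 / (1000 * np.sqrt(3))
norms = [np.linalg.norm(noise(m, rng)) for m in (500, 2000, 8000)]
assert all(abs(n - target) / target < 0.1 for n in norms)
```

The $\frac{1}{\sqrt{m}}$ scale exactly cancels the growth of $\|d\|_2$ with m, so the input noise energy is comparable across all sampling rates in Fig. 3.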

We set $\rho = \frac{1}{2}$ to generate the deviations (so that $\theta = 0$) and vary the average step size of the nonuniform samples. We do so by letting m vary through the set $\left\{\frac{N}{.5}, \frac{N}{.75}, N, \ldots, \frac{N}{6.75}, \frac{N}{7}\right\}$; notice that only the first two cases correspond to oversampling. For each fixed value of m, the relative reconstruction error is obtained by averaging the result of 50 independent experiments. The results are shown in Fig. 3, where we plot the average step size vs the average relative reconstruction error and the average relative input noise level $\|d\|_2/\|f\|_2$.

Fig. 3.

Fig. 3

Plot of average relative reconstruction error ($\|f^\sharp - f\|_2/\|f\|_2$) vs average step size (blue curve) and average relative input measurement error ($\|d\|_2/\|f\|_2$) vs average step size (red curve). Notice that the first 13 step size values achieve noise attenuation, i.e., the reconstruction error is lower than the input noise level

The first two cases ($m = \frac{N}{.5}, \frac{N}{.75}$) correspond to oversampling and illustrate the results from Sect. 4 (and Theorem 4), where attenuation of the input noise level is achieved. Surprisingly, these experiments demonstrate that nonuniform undersampling also allows for denoising. This is seen in Fig. 3, where the values $m = \frac{N}{1.25}, \frac{N}{1.5}, \ldots, \frac{N}{3.5}$ correspond to sub-Nyquist rates and output an average relative reconstruction error smaller than the input measurement noise level. Thus, when nonuniform samples are not severely undersampled, the negative effects of random noise can be reduced.

Conclusions

This paper provides a concrete framework to study the benefits of random nonuniform samples for signal acquisition (in comparison to equispaced sampling), with explicit statements that are informative for practitioners. Related observations are extensive but largely empirical in the sampling theory literature. Therefore, this work supplies novel theoretical insights on this widely discussed phenomenon. In the context of compressive sensing, we extend the applicability of this acquisition paradigm by demonstrating how it naturally intersects with standard sampling techniques. We hope that these observations will prompt a broader usage of compressive sensing in real world applications that rely on classical sampling theory.

There are several avenues for future research. First, the overall methodology requires the practitioner to know the nonuniform sampling locations $\tilde{\tau}$ accurately. While this is typical for signal reconstruction techniques that involve non-equispaced samples, it would be of practical interest to extend the methodology in such a way that allows for robustness to inaccurate sampling locations, and even self-calibration. Further, as mentioned in Sect. 6, this work has not dedicated much effort to a numerically efficient implementation of the Dirichlet kernel S. This is crucial for large-scale applications, where a direct implementation of the Dirichlet kernel via its Fourier or Dirichlet representation (see [62]) may be too inefficient for practical purposes. As future work, it would be useful to consider other interpolation kernels with greater numerical efficiency (e.g., a low order Lagrange interpolation operator).

Finally, to explore the undersampling and anti-aliasing properties of nonuniform samples, our results here require a sparse signal assumption and adopt compressive sensing methodologies. However, most work that first discussed this nonuniform sampling phenomenon precedes the introduction of compressive sensing and does not explicitly impose sparsity assumptions. Therefore, to fully determine the benefits provided by off-the-grid samples it would be most informative to consider a more general setting, e.g., only relying on the smoothness of continuous-time signals. We believe the work achieved here provides a potential avenue to do so.

Proofs

We now provide proofs to all of our claims. In Sect. 8.1 we prove Theorem 2 via a more general result. Theorem 3 is proven in Sect. 8.2. Section 8.3 establishes the Dirichlet kernel error bounds in Theorem 1 and Corollary 1.

Proof of Theorem 2

In this section, we will prove a more general result than Theorem 2, assuming that Ψ is a full column-rank matrix and allowing $m \le N$. Theorem 2 will follow from Theorem 4 by taking $\alpha = \beta = 1$, $n = N$, and simplifying some terms.

Theorem 4

Let $2 \le s \le n \le N$ and let $\Psi\in\mathbb{C}^{N\times n}$ be a full column rank matrix with DFT-incoherence parameter γ and extreme singular values $\sigma_1(\Psi) =: \beta \ge \alpha := \sigma_n(\Psi) > 0$. Let the entries of Δ be i.i.d. from any distribution satisfying our deviation model with $\theta < 1$. Define

$$g^\sharp := \underset{h\in\mathbb{C}^n}{\operatorname{argmin}}\ \lambda\|h\|_1 + \sqrt{\frac{N}{m}}\,\|S\Psi h - b\|_2 \qquad (20)$$

with

$$0 < \lambda \le \frac{\alpha\sqrt{1-\theta}}{2\sqrt{2s}}.$$

If

$$m \ge C_1\,\frac{\gamma^2\beta^2(1+\theta)}{\alpha^4(1-\theta)^2}\,s\left[\log\left(\frac{C_2\,\gamma^2\beta^2(1+\theta)}{\alpha^4(1-\theta)^2}\,s + 2\right)\log^2\left(\frac{C_2\,\beta^2(1+\theta)}{\alpha^2(1-\theta)}\,s\right)\log(n) + \log(n)\right] \qquad (21)$$

where $C_1$ and $C_2$ are absolute constants, then

$$\|f - \Psi g^\sharp\|_2 \le \frac{8\beta\,\epsilon_s(g)}{\sqrt{s}} + 4\beta\lambda s + \frac{8\beta^2}{\alpha\sqrt{1-\theta}}\sqrt{\frac{N}{m}}\,\|d\|_2 + 2\sqrt{N}\sum_{|\ell| > \frac{N-1}{2}}|c_\ell|$$

with probability exceeding $1-\frac{1}{n}$.

This theorem generalizes Theorem 2 to more general transformations Ψ for sparse representation. This is more practical since the columns of Ψ need not be orthogonal; linear independence suffices (with knowledge of the singular values α, β). In particular, notice that (21) depends on n and does not involve N, as opposed to $m \gtrsim s\log^4(N)$ in (13). Since $n \le N$, this general result allows for a potential reduction in sample complexity if the practitioner can construct Ψ in such an efficient manner while still allowing a sparse and accurate representation of f.

Furthermore, notice that this more general result allows for oversampling, $m \ge n$ or $m \ge N$. If we apply Theorem 4 with $s = n$ then $\epsilon_s(g) = 0$ and we obtain an error bound similar to those in Sect. 4, reducing additive noise by a factor $\frac{\sqrt{N}}{\sqrt{n}\log^2(n)}$ from $m \approx n\log^4(n)$ off-the-grid samples. However, in this scenario the sparsifying transform is no longer of much relevance, and it is arguably best to consider the approach of Sect. 4, which removes the need to consider γ, β, α, and θ via a numerically cheaper methodology and a more general set of deviations.

To establish Theorem 4 we will consider the G-adjusted restricted isometry property (G-RIP) [63], defined as follows:

Definition 1

(G-adjusted restricted isometry property [63]) Let $1\le s\le n$ and let $G\in\mathbb{C}^{n\times n}$ be invertible. The s-th G-adjusted Restricted Isometry Constant (G-RIC) $\delta_{s,G}$ of a matrix $A\in\mathbb{C}^{m\times n}$ is the smallest $\delta>0$ such that

$$(1-\delta)\|Gv\|_2^2 \le \|Av\|_2^2 \le (1+\delta)\|Gv\|_2^2$$

for all $v\in\{z\in\mathbb{C}^n : \|z\|_0\le s\}$. If $0<\delta_{s,G}<1$ then the matrix A is said to satisfy the G-adjusted Restricted Isometry Property (G-RIP) of order s.

This property ensures that a measurement matrix is well conditioned on the set of all s-sparse signals, allowing for successful compressive sensing from $s\,\mathrm{polylog}(n)$ measurements. Once this is established for our measurement ensemble, Theorem 4 will follow by applying the following result:

Theorem 5

(Theorem 13.9 in [63]) Let $G\in\mathbb{C}^{n\times n}$ be invertible and let $A\in\mathbb{C}^{m\times n}$ have the G-RIP of order q and constant $0<\delta<1$, where

$$q = 2\left\lceil 4s\,\frac{1+\delta}{1-\delta}\,\|G\|^2\|G^{-1}\|^2\right\rceil. \qquad (22)$$

Let $g\in\mathbb{C}^n$, $y = Ag+d\in\mathbb{C}^m$, and $\lambda \le \frac{\sqrt{1-\delta}}{2\|G^{-1}\|\sqrt{s}}$. Then

$$g^\sharp = \underset{h\in\mathbb{C}^n}{\operatorname{argmin}}\ \lambda\|h\|_1 + \|Ah-y\|_2$$

satisfies

$$\|g^\sharp - g\|_2 \le \frac{8\,\epsilon_s(g)}{\sqrt{s}} + 8\left(\tfrac{1}{2}\right)\lambda s + \frac{\|G^{-1}\|}{1-\delta}\,\|d\|_2. \qquad (23)$$

We therefore obtain our main result if we establish the G-RIP for

$$A := \sqrt{N}\,S\,\Psi.$$

To do so, we note that our measurement ensemble is generated from a nondegenerate collection of independent families of random vectors. Such random matrices have been shown to possess the G-RIP in the literature. To be specific, a nondegenerate collection is defined as follows:

Definition 2

(Nondegenerate collection [63]) Let $A_1,\ldots,A_m$ be independent families of random vectors on $\mathbb{C}^n$. The collection $C = \{A_k\}_{k=1}^m$ is nondegenerate if the matrix

$$\frac{1}{m}\sum_{k=1}^m\mathbb{E}\,a_ka_k^*,$$

where $a_k\sim A_k$, is positive-definite. In this case, write $G_C\in\mathbb{C}^{n\times n}$ for its unique positive-definite square root.

Our ensemble fits this definition, with the rows of $\mathbf{N}\in\mathbb{C}^{m\times N}$ generated from a collection of m independent families of random vectors:

$$\mathbf{N}_k = \frac{1}{\sqrt{N}}\left(e\left(-\tilde{N}\left(\tfrac{k-1}{m}-\tfrac{1}{2}+\Delta_k\right)\right),\ e\left(-(\tilde{N}-1)\left(\tfrac{k-1}{m}-\tfrac{1}{2}+\Delta_k\right)\right),\ \ldots,\ e\left(\tilde{N}\left(\tfrac{k-1}{m}-\tfrac{1}{2}+\Delta_k\right)\right)\right)\quad\text{with}\quad\Delta_k\sim D.$$

Therefore, in our scenario, the k-th family $A_k$ independently generates the deviation $\Delta_k\sim D$ and produces a random vector of the form above as the k-th row of $\mathbf{N}$. This in turn also generates the rows of A independently, since the k-th row of A is given as $\sqrt{N}\,\mathbf{N}_kF\Psi$. To apply G-RIP results from the literature for such matrices, we will have to consider the coherence of our collection:

Definition 3

(Coherence of an unsaturated collection C [63]) Let $A_1,\ldots,A_m$ be independent families of random vectors, with smallest constants $\mu_1,\ldots,\mu_m$ such that

$$\|a_k\|_\infty^2 \le \mu_k$$

holds almost surely for $a_k\sim A_k$. The coherence of an unsaturated collection $C = \{A_k\}_{k=1}^m$ is

$$\mu(C) = \max_{k\in[m]}\mu_k.$$

In the above definition, a family $A_k$ is saturated if it consists of a single vector, and a collection is unsaturated if no family in the collection is saturated. In our context, it is easy to see that the condition $\theta < 1$ avoids saturation, so the definition above applies. The coherence of our collection of families will translate to the DFT-incoherence parameter defined in Sect. 2.1.

With these definitions in mind, we now state a simplified version of Theorem 13.12 in [63] that will show the G-RIP for our ensemble.

Theorem 6

Let $0 < \delta,\ \epsilon < 1$, $n \ge s \ge 2$, and let $C = \{A_k\}_{k=1}^m$ be a nondegenerate collection generating the rows of A. Suppose that

$$m \ge \tilde{c}_1\,\frac{\|G_C^{-1}\|^2\mu(C)\,s}{\delta^2}\left[\log\left(2\|G_C^{-1}\|^2\mu(C)\,s + 1\right)\log^2(s)\log(n) + \log(\epsilon^{-1})\right], \qquad (24)$$

where $\tilde{c}_1$ is an absolute constant. Then, with probability at least $1-\epsilon$, the matrix A has the G-RIP of order s with constant $\delta_{s,G}\le\delta$.

In conclusion, to obtain Theorem 4 we will first show that A is generated by a nondegenerate collection with unique positive-definite square root G. Establishing this will provide upper bounds for $\|G^{-1}\|$, $\|G\|$, and μ(C). At this point, Theorem 6 will provide A with the G-RIP, and subsequently Theorem 5 can be applied to obtain the error bounds.

To establish that the collection $C = \{A_k\}_{k=1}^m$ above is nondegenerate, it suffices to show that

$$\frac{1}{m}\mathbb{E}\|Aw\|_2^2 \le \beta^2(1+\theta)\|w\|_2^2 \quad\text{and}\quad \frac{1}{m}\mathbb{E}\|Aw\|_2^2 \ge \alpha^2(1-\theta)\|w\|_2^2 \qquad (25)$$

for all $w\in\mathbb{C}^n$. This will show that $\frac{1}{m}\mathbb{E}A^*A$ is positive-definite if the deviation model satisfies $\theta<1$. Further, let G be the unique positive-definite square root of $\frac{1}{m}\mathbb{E}A^*A$; then (25) will also show that

$$\|G\| \le \beta\sqrt{1+\theta}\quad\text{and}\quad\|G^{-1}\| \le \frac{1}{\alpha\sqrt{1-\theta}}. \qquad (26)$$

To this end, let $w\in\mathbb{C}^n$ and normalize $\tilde{\mathbf{N}} := \sqrt{N}\,\mathbf{N}$ so that, for $k\in[m]$ and $\ell\in[N]$,

$$\tilde{\mathbf{N}}_{k\ell} := e\big(\tilde{t}_k(\ell-\tilde{N}-1)\big).$$

Throughout, let $\tilde{\Delta}\in\mathbb{R}$ be an independent copy of the entries of $\Delta\in\mathbb{R}^m$. Then, with $v := F\Psi w$,

$$\begin{aligned}\frac{1}{m}\mathbb{E}\|Aw\|_2^2 &= \frac{1}{m}\mathbb{E}\|\tilde{\mathbf{N}}F\Psi w\|_2^2 := \frac{1}{m}\mathbb{E}\|\tilde{\mathbf{N}}v\|_2^2 = \mathbb{E}\frac{1}{m}\sum_{k=1}^m|\langle\tilde{\mathbf{N}}_k,v\rangle|^2 = \mathbb{E}\frac{1}{m}\sum_{k=1}^m\Big|\sum_{\ell=1}^N e\big(\tilde{t}_k(\ell-\tilde{N}-1)\big)v_\ell\Big|^2\\ &= \mathbb{E}\frac{1}{m}\sum_{k=1}^m\sum_{\ell=1}^N\sum_{\tilde{\ell}=1}^N e\big(\tilde{t}_k(\ell-\tilde{\ell})\big)v_\ell\bar{v}_{\tilde{\ell}} = \sum_{\ell=1}^N\sum_{\tilde{\ell}=1}^N v_\ell\bar{v}_{\tilde{\ell}}\,\mathbb{E}\frac{1}{m}\sum_{k=1}^m e\big(\tilde{t}_k(\ell-\tilde{\ell})\big)\\ &= \sum_{\ell=1}^N|v_\ell|^2 + \sum_{j=1}^{\lfloor(N-1)/m\rfloor}\ \sum_{\ell-\tilde{\ell}=jm}v_\ell\bar{v}_{\tilde{\ell}}\,\mathbb{E}\,e\big(jm(\tilde{\Delta}-1/2)\big) + \sum_{j=1}^{\lfloor(N-1)/m\rfloor}\ \sum_{\ell-\tilde{\ell}=-jm}v_\ell\bar{v}_{\tilde{\ell}}\,\mathbb{E}\,e\big(-jm(\tilde{\Delta}-1/2)\big).\end{aligned}$$

The last equality can be obtained as follows,

$$\begin{aligned}\mathbb{E}\frac{1}{m}\sum_{k=1}^m e\big(\tilde{t}_k(\ell-\tilde{\ell})\big) &= \mathbb{E}\frac{1}{m}\sum_{k=1}^m e\left(\left(\tfrac{k-1}{m}-\tfrac{1}{2}+\Delta_k\right)(\ell-\tilde{\ell})\right) = \frac{1}{m}\sum_{k=1}^m e\left(\left(\tfrac{k-1}{m}-\tfrac{1}{2}\right)(\ell-\tilde{\ell})\right)\mathbb{E}\,e\big(\Delta_k(\ell-\tilde{\ell})\big)\\ &= \frac{1}{m}\sum_{k=1}^m e\left(\left(\tfrac{k-1}{m}-\tfrac{1}{2}\right)(\ell-\tilde{\ell})\right)\mathbb{E}\,e\big(\tilde{\Delta}(\ell-\tilde{\ell})\big) = \mathbb{E}\,e\big((\tilde{\Delta}-1/2)(\ell-\tilde{\ell})\big)\sum_{k=1}^m\frac{1}{m}e\left(\tfrac{k-1}{m}(\ell-\tilde{\ell})\right)\\ &= \begin{cases}1 & \text{if } \ell=\tilde{\ell},\\ \mathbb{E}\,e\big(jm(\tilde{\Delta}-1/2)\big) & \text{if } \ell-\tilde{\ell}=jm,\ j\in\mathbb{Z}\setminus\{0\},\\ 0 & \text{otherwise.}\end{cases}\end{aligned}$$

The third equality uses the fact that $\mathbb{E}\,e(\Delta_k(\ell-\tilde{\ell})) = \mathbb{E}\,e(\tilde{\Delta}(\ell-\tilde{\ell}))$ for all $k\in[m]$, which allows this constant to be properly factored out of the sum in the fourth equality. The last equality is due to the geometric series formula.
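The geometric-series step — the equispaced part of $\tilde{t}_k$ averages to 1 on aliased offsets $\ell-\tilde{\ell}\in m\mathbb{Z}$ and to 0 otherwise — can be checked numerically. A small sketch (the function name `avg_phase` is ours):

```python
import numpy as np

def avg_phase(d, m):
    # (1/m) * sum_{k=1}^{m} e((k-1)/m * d) for an integer frequency offset d = l - l~
    k = np.arange(m)
    return np.exp(2j * np.pi * k * d / m).mean()

m = 7
for d in range(-20, 21):
    val = avg_phase(d, m)
    if d % m == 0:
        assert abs(val - 1) < 1e-12       # aliased offsets d = j*m survive
    else:
        assert abs(val) < 1e-12           # all other offsets average to zero
```

This is exactly why only the diagonal and the offsets $\ell-\tilde{\ell} = jm$ remain in the expectation above, each aliased offset carrying the jitter factor $\mathbb{E}\,e(jm(\tilde{\Delta}-1/2))$.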

Returning to our original calculation, we bound the last term using our deviation model assumptions

$$\begin{aligned}\Big|\sum_{j=1}^{\lfloor(N-1)/m\rfloor}\ \sum_{\ell-\tilde{\ell}=-jm}v_\ell\bar{v}_{\tilde{\ell}}\,\mathbb{E}\,e\big(-jm(\tilde{\Delta}-1/2)\big)\Big| &= \Big|\sum_{j=1}^{\lfloor(N-1)/m\rfloor}\sum_{\ell\in Q_j}v_\ell\bar{v}_{\ell+jm}\,\mathbb{E}\,e\big(-jm(\tilde{\Delta}-1/2)\big)\Big|\\ &\le \sum_{j=1}^{\lfloor(N-1)/m\rfloor}\sum_{\ell\in Q_j}|v_\ell||v_{\ell+jm}|\,\big|\mathbb{E}\,e\big(-jm(\tilde{\Delta}-1/2)\big)\big| \le \frac{\theta m}{2N}\sum_{j=1}^{\lfloor(N-1)/m\rfloor}\sum_{\ell\in Q_j}|v_\ell||v_{\ell+jm}|\\ &\le \frac{\theta m}{2N}\sum_{j=1}^{\lfloor(N-1)/m\rfloor}|\langle v,v\rangle| = \frac{\theta m\|v\|_2^2}{2N}\sum_{j=1}^{\lfloor(N-1)/m\rfloor}1 = \frac{\theta m\|v\|_2^2}{2N}\left\lfloor\frac{N-1}{m}\right\rfloor \le \frac{\theta\|v\|_2^2}{2}.\end{aligned}$$

Here $Q_j\subset[N]$ is the set of indices allowed according to j, i.e., those $\ell$ that satisfy $\ell\in[N]$ and $\ell+jm\in[N]$. The second inequality holds by our deviation model assumption (4).

The remaining sum (with $\ell-\tilde{\ell} = jm$) can be bounded similarly. Combining these inequalities with the singular values of Ψ, we obtain

$$\frac{1}{m}\mathbb{E}\|Aw\|_2^2 \le \|v\|_2^2 + 2\cdot\frac{\theta\|v\|_2^2}{2} := \|\Psi w\|_2^2(1+\theta) \le \beta^2\|w\|_2^2(1+\theta),$$

and

$$\frac{1}{m}\mathbb{E}\|Aw\|_2^2 \ge \alpha^2\|w\|_2^2(1-\theta).$$

We will apply this inequality and similar orthogonality properties in what follows (e.g., in Sect. 8.2), and ask the reader to keep this in mind.

To upper bound the coherence of the collection C, let $\tilde{\mathbf{N}}_kF\Psi = \sqrt{N}\,\mathbf{N}_kF\Psi \sim A_k$ as above. Then

$$\|\tilde{\mathbf{N}}_kF\Psi\|_\infty = \max_{\ell\in[n]}\big|\langle\tilde{\mathbf{N}}_k,(F\Psi)_\ell\rangle\big| \le \max_{\ell\in[n]}\|\tilde{\mathbf{N}}_k\|_\infty\,\|(F\Psi)_\ell\|_1 = \max_{\ell\in[n]}\sum_{k=1}^N|\langle F_k,\Psi_\ell\rangle| := \gamma,$$

and therefore

$$\mu(C) \le \gamma^2. \qquad (27)$$

The proof of Theorem 4 is now an application of Theorems 6 and 5 using the derivations above.

Proof of Theorem 4

We are considering the equivalent program

$$g^\sharp := \underset{h\in\mathbb{C}^n}{\operatorname{argmin}}\ \lambda\|h\|_1 + \frac{1}{\sqrt{m}}\|Ah - \sqrt{N}\,b\|_2.$$

From the arguments above, the rows of A are generated by a nondegenerate collection C with coherence bounded as in (27). The unique positive-definite square root of $\frac{1}{m}\mathbb{E}A^*A$, denoted G, satisfies the bounds (26).

We now apply Theorem 6 with $\delta = 1/2$, $\epsilon = n^{-1}$, and order

$$q = 2\left\lceil 4s\,\frac{1+\delta}{1-\delta}\,\|G\|^2\|G^{-1}\|^2\right\rceil.$$

By (26) and (27), if

$$m \ge \tilde{c}_1\,\frac{\gamma^2 q}{\delta^2\alpha^2(1-\theta)}\left[\log\left(\frac{2\gamma^2 q}{\alpha^2(1-\theta)}+1\right)\log^2(q)\log(n) + \log n\right], \qquad (28)$$

then (24) is satisfied and the conclusion of Theorem 6 holds. Therefore, with probability exceeding $1-n^{-1}$, A has the G-RIP of order q with constant $\delta_{q,G}\le\delta = 1/2$.

To show that our sampling assumption (21) satisfies (28), notice that by (26)

$$q = 2\left\lceil 12s\,\|G\|^2\|G^{-1}\|^2\right\rceil \le 2\left\lceil 12s\,\frac{\beta^2(1+\theta)}{\alpha^2(1-\theta)}\right\rceil \le 2\left(1+\frac{1}{24}\right)12s\,\frac{\beta^2(1+\theta)}{\alpha^2(1-\theta)} := \tilde{q}.$$

The last inequality holds since

$$12s\,\frac{\beta^2(1+\theta)}{\alpha^2(1-\theta)} \ge 12s \ge 24,$$

and for any real number $a \ge 24$ it holds that $\lceil a\rceil \le \left(1+\frac{1}{24}\right)a$. In (28), replace q with $\tilde{q}$. This provides our assumed sampling complexity, where expression (21) simplifies by absorbing all absolute constants into $C_1$ and $C_2$.

With the parameter λ chosen for (20), the conditions of Theorem 5 hold with $\delta = 1/2$ and we obtain the error bound

$$\|g^\sharp - g\|_2 \le \frac{8\,\epsilon_s(g)}{\sqrt{s}} + 8\left(\tfrac{1}{2}\right)\lambda s + \frac{2}{\alpha\sqrt{1-\theta}}\sqrt{\frac{N}{m}}\,\|Sf - b\|_2.$$

To finish, notice that

$$\|g^\sharp - g\|_2 \ge \frac{1}{\beta}\|\Psi(g^\sharp - g)\|_2 = \frac{1}{\beta}\|f - \Psi g^\sharp\|_2,$$

and

$$\|Sf - b\|_2 \le \|Sf - \tilde{f}\|_2 + \|d\|_2 \le 2\sqrt{m}\sum_{|\ell|>\frac{N-1}{2}}|c_\ell| + \|d\|_2,$$

where the last inequality holds by Theorem 1.

where the last inequality holds by Theorem 1.

To obtain Theorem 2 from Theorem 4, notice that in Theorem 2 we have $n = N$ and $\alpha = \beta = 1$. The assumption $m\le N$ gives that

$$N \ge \frac{\gamma^2(1+\theta)s}{(1-\theta)^2} \ge \frac{(1+\theta)s}{(1-\theta)^2},$$

which allows further simplification by combining all the logarithmic factors into a single $\mathrm{polylog}(N)$ term (introducing absolute constants where necessary). We note that the condition $m\le N$ is not needed and is only applied for ease of exposition in the introductory result.

Proof of Theorem 3

To establish the claim, we aim to show that

$$\inf_{v\in\mathbb{S}^{N-1}}\|Sv\|_2 \ge \delta > 0 \qquad (29)$$

holds with high probability. By optimality of $f^\sharp$, this will give

$$\|f^\sharp - f\|_2 \le \frac{1}{\delta}\|S(f^\sharp - f)\|_2 \le \frac{1}{\delta}\|Sf^\sharp - b\|_2 + \frac{1}{\delta}\|b - Sf\|_2 \le \frac{2}{\delta}\|Sf - b\|_2 \le \frac{2}{\delta}\|d\|_2 + \frac{4\sqrt{m}}{\delta}\sum_{|\ell|>\frac{N-1}{2}}|c_\ell|,$$

where the last inequality is due to our noise model and trigonometric interpolation error (Theorem 1).

To this end, we normalize by letting $\tilde{S} = \frac{1}{\sqrt{m}}\tilde{\mathbf{N}}F := \sqrt{\frac{N}{m}}\,\mathbf{N}F$ and note that when $m\ge N$ our sampling operator is isometric in the sense that

$$\mathbb{E}\,\tilde{S}^*\tilde{S} = F^*\left(\frac{1}{m}\mathbb{E}\,\tilde{\mathbf{N}}^*\tilde{\mathbf{N}}\right)F = I_N \qquad (30)$$

where $I_N$ is the $N\times N$ identity matrix. To see this, we use our calculations from the previous section (those establishing (25)) to obtain as before that, for $\ell,\tilde{\ell}\in[N]$,

$$\frac{1}{m}\mathbb{E}\big(\tilde{\mathbf{N}}^*\tilde{\mathbf{N}}\big)_{\ell\tilde{\ell}} = \frac{1}{m}\mathbb{E}\,\big\langle\tilde{\mathbf{N}}^{(\ell)},\tilde{\mathbf{N}}^{(\tilde{\ell})}\big\rangle = \frac{1}{m}\mathbb{E}\sum_{k=1}^m e\big(\tilde{t}_k(\ell-\tilde{\ell})\big) = \begin{cases}1 & \text{if } \ell=\tilde{\ell},\\ \mathbb{E}\,e\big(jm(\tilde{\Delta}-1/2)\big) & \text{if } \ell-\tilde{\ell}=jm,\ j\in\mathbb{Z}\setminus\{0\},\\ 0 & \text{otherwise.}\end{cases}$$

However, if $m\ge N$, notice that the middle case never occurs, since $|\ell-\tilde{\ell}| \le N-1 < m$ for all $\ell,\tilde{\ell}\in[N]$. Therefore, (30) holds.

With the isometry established, we may now proceed to the main component of the proof of Theorem 3.

Theorem 7

Let $m \ge \kappa N$ with $\kappa \ge \frac{2\log(N)}{\log(e/2)}$ and let the entries of Δ be i.i.d. from any distribution. Then

$$\inf_{v\in\mathbb{S}^{N-1}}\|Sv\|_2 \ge \sqrt{\frac{m}{2N}},$$

with probability exceeding $1-\frac{1}{N}$.

Proof

We will apply a matrix Chernoff inequality to lower bound the smallest eigenvalue of $\tilde{S}^*\tilde{S}$. To apply Theorem 1.1 in [64], notice that we can expand

$$\tilde{S}^*\tilde{S} = \sum_{k=1}^m \tilde{S}_k^*\tilde{S}_k,$$

which is a sum of independent, random, self-adjoint, and positive-semidefinite matrices. Our isometry condition (30) gives that $\mathbb{E}\,\tilde{S}^*\tilde{S} = I_N$ has extreme eigenvalues equal to 1; we stress that this holds because we assume $m\ge N$, as shown above. Further,

$$\big\|\tilde{S}_k^*\tilde{S}_k\big\| = \frac{1}{m}\big\|F^*\tilde{\mathbf{N}}_k^*\tilde{\mathbf{N}}_k F\big\| = \frac{1}{m}\big\|\tilde{\mathbf{N}}_k\big\|_2^2 = \frac{N}{m}.$$

Therefore, by Theorem 1.1 in [64] with $R = \frac{N}{m}$ and $\delta = \frac{1}{2}$, we obtain

$$P\left(\lambda_{\min}\big(\tilde{S}^*\tilde{S}\big) \le \frac{1}{2}\right) \le N\left(\frac{2}{e}\right)^{m/N}.$$

With $m \ge \kappa N$ and $\kappa \ge \frac{2\log(N)}{\log(e/2)}$, the left-hand side is upper bounded by $N^{-1}$. Since the singular values of $\tilde{S}$ are the square roots of the eigenvalues of $\tilde{S}^*\tilde{S}$, this establishes the result.
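A quick numerical sanity check of this bound can be sketched as follows. This is a hedged illustration, not part of the proof: it uses the jittered uniform deviation model with $\rho = \frac{1}{2}$ (Theorem 7 permits any i.i.d. deviations), drops the unitary factor F (which does not change singular values), and fixes a seed since the claim is only a high-probability one:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 15
kappa = int(np.ceil(2 * np.log(N) / np.log(np.e / 2)))   # ~18 for N = 15
m = kappa * N

# Rows of the NDFT factor: (1/sqrt(N)) * e(u * t_k) for u = -Ntilde..Ntilde
Nt = (N - 1) // 2
t = np.arange(m) / m - 0.5 + rng.uniform(-0.5 / m, 0.5 / m, size=m)  # jittered grid
u = np.arange(-Nt, Nt + 1)
Nmat = np.exp(2j * np.pi * np.outer(t, u)) / np.sqrt(N)

S_tilde = np.sqrt(N / m) * Nmat               # F is unitary, so it can be dropped here
sigma_min = np.linalg.svd(S_tilde, compute_uv=False).min()
assert sigma_min ** 2 >= 0.5                  # lambda_min(S~* S~) >= 1/2 (w.h.p.)
```

For this oversampled regime the empirical smallest eigenvalue is typically much closer to 1 than the guaranteed 1/2, reflecting the looseness of the Chernoff bound.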

With our remarks in the beginning of the section, we can now easily establish the proof of Theorem 3.

Proof of Theorem 3

Under our assumptions, apply Theorem 7 to obtain that

$$\inf_{v\in\mathbb{S}^{N-1}}\|Sv\|_2 \ge \sqrt{\frac{m}{2N}}$$

holds with the prescribed probability. This establishes (29) with $\delta = \sqrt{\frac{m}{2N}}$. The remainder of the proof follows from our outline at the beginning of the section.

Interpolation error of Dirichlet kernel: proof

In this section we provide the error term of our interpolation operator when applied to our signal model (Theorem 1) and also the error bound given in Corollary 1.

Proof of Theorem 1

We begin by showing (8), i.e., if $\tilde{t}_k = t_{\tilde{p}}$ for some $\tilde{p}\in[N]$ (our "nonuniform" sample lies on the equispaced interpolation grid) then the error is zero. This is easy to see by orthogonality of the complex exponentials: combining (5) and (6) (recall that $\tilde{N} = \frac{N-1}{2}$), we have

$$(Sf)_k = \langle f,S_k\rangle = \sum_{p=1}^N f_p S_{kp} = \frac{1}{N}\sum_{p=1}^N f_p\sum_{u=-\tilde{N}}^{\tilde{N}}e(ut_p)e(-u\tilde{t}_k) = \frac{1}{N}\sum_{p=1}^N f_p\sum_{u=-\tilde{N}}^{\tilde{N}}e(ut_p)e(-ut_{\tilde{p}}) = f_{\tilde{p}} = f(t_{\tilde{p}}) = f(\tilde{t}_k) = \tilde{f}_k.$$

The fourth equality holds since we are assuming t~k=tp~ for some p~[N].

We now deal with the general case (9). Recall the Fourier expansion of our underlying function

$$f(x) = \sum_{\ell=-\infty}^{\infty}c_\ell\,e(\ell x).$$

Again, using (5), (6) and the Fourier expansion at $f(t_p) = f_p$, we obtain

$$(Sf)_k = \langle f,S_k\rangle = \sum_{p=1}^N f_p S_{kp} := \frac{1}{N}\sum_{p=1}^N\sum_{\ell=-\infty}^{\infty}c_\ell\,e(\ell t_p)\sum_{u=-\tilde{N}}^{\tilde{N}}e(ut_p)e(-u\tilde{t}_k).$$

At this point, we wish to switch the order of summation and sum over all $p\in[N]$. We must assume the corresponding summands are non-zero. To this end, we continue assuming $f_p, S_{kp}\ne 0$ for all $p\in[N]$; we will deal with the remaining cases separately afterward. In particular, we will remove this assumption for the $f_p$'s and show that $S_{kp}\ne 0$ under our assumption $\tilde{\tau}\subset\Omega$.

Proceeding, we may now sum over all $p\in[N]$ to obtain

$$(Sf)_k = \frac{1}{N}\sum_{u=-\tilde{N}}^{\tilde{N}}\sum_{\ell=-\infty}^{\infty}c_\ell\,e(-u\tilde{t}_k)\sum_{p=1}^N e\big((u+\ell)t_p\big) = \sum_{u=-\tilde{N}}^{\tilde{N}}\sum_{j=-\infty}^{\infty}(-1)^{jN}c_{jN+u}\,e(u\tilde{t}_k) = \sum_{j=-\infty}^{\infty}(-1)^{\lfloor\frac{j+\tilde{N}}{N}\rfloor}c_j\,e\big(r(j)\tilde{t}_k\big).$$

The second equality is obtained by orthogonality of the exponential basis functions: $\sum_{p=1}^N e((u+\ell)t_p) = 0$ when $u+\ell\notin N\mathbb{Z}$, and otherwise equals $N(-1)^{jN}$ for the $j\in\mathbb{Z}$ with $u+\ell = jN$. The last equality results from a reordering of the absolutely convergent series, where the mapping r is defined as in the statement of Theorem 1.

To illustrate the reordering, we consider $j\ge 0$ (for simplicity) and first notice that $(-1)^{jN} = (-1)^j$, since N is assumed to be odd in Sect. 2.1. Expanding the previous sum gives

$$\sum_{u=-\tilde{N}}^{\tilde{N}}\sum_{j=0}^{\infty}(-1)^j c_{jN+u}\,e(u\tilde{t}_k) = e(-\tilde{N}\tilde{t}_k)\big[c_{-\tilde{N}} - c_{N-\tilde{N}} + c_{2N-\tilde{N}} - \cdots\big] + e\big((-\tilde{N}+1)\tilde{t}_k\big)\big[c_{-\tilde{N}+1} - c_{N-\tilde{N}+1} + c_{2N-\tilde{N}+1} - \cdots\big] + \cdots + e(0\cdot\tilde{t}_k)\big[c_0 - c_N + c_{2N} - \cdots\big] + \cdots + e(\tilde{N}\tilde{t}_k)\big[c_{\tilde{N}} - c_{N+\tilde{N}} + c_{2N+\tilde{N}} - \cdots\big].$$

Notice that in the first row, starting at the second coefficient, we have indices $N-\tilde{N} = \tilde{N}+1$ followed by $2N-\tilde{N} = N+\tilde{N}+1$ and so on, which are subsequent to the indices of the coefficients in the last row (one column prior). Therefore, if we start at the top left coefficient $c_{-\tilde{N}}$ and traverse this infinite array of Fourier coefficients "column-wise", we obtain the ordered sequence $\{(-1)^{\lfloor\frac{j+\tilde{N}}{N}\rfloor}c_j\}_{j=-\tilde{N}}^{\infty}$ (with no repetitions).

The coefficients in row $q\in[N]$ correspond to frequency value $-\tilde{N}+q-1$ and have indices of the form $pN-\tilde{N}+q-1$ for some $p\in\mathbb{N}$. To establish that the reordered series is equivalent, we finish by checking that, for a given index, the mapping r gives the correct frequency value, i.e., $r(pN-\tilde{N}+q-1) = -\tilde{N}+q-1$ for all $q\in[N]$:

$$r(pN-\tilde{N}+q-1) := \mathrm{rem}(pN-\tilde{N}+q-1+\tilde{N},\,N)-\tilde{N} = \mathrm{rem}(pN+q-1,\,N)-\tilde{N} = q-1-\tilde{N}.$$

We can therefore reorder the series as desired and incorporate the sum over j<0 via the same logic to establish the equality.
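The reindexing map r can be implemented and checked directly. A small sketch, where Python's `%` operator plays the role of rem for the ranges involved:

```python
def r(l, N):
    # r maps an arbitrary integer frequency l to its alias in {-Ntilde, ..., Ntilde}
    Nt = (N - 1) // 2
    return (l + Nt) % N - Nt

N = 15
Nt = (N - 1) // 2
# Every index of the form p*N - Ntilde + q - 1 lands on frequency q - 1 - Ntilde:
for p in range(-3, 4):
    for q in range(1, N + 1):
        assert r(p * N - Nt + q - 1, N) == q - 1 - Nt
# On the base band the map is the identity:
assert all(r(l, N) == l for l in range(-Nt, Nt + 1))
```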

Since for $\ell\in\{-\tilde{N}, -\tilde{N}+1, \ldots, \tilde{N}\}$ we have $r(\ell) = \ell$ and $(-1)^{\lfloor\frac{\ell+\tilde{N}}{N}\rfloor} = 1$, we finally obtain

$$f(\tilde{t}_k)-(Sf)_k = \sum_{|\ell|>\tilde{N}}c_\ell\left(e(\ell\tilde{t}_k)-(-1)^{\lfloor\frac{\ell+\tilde{N}}{N}\rfloor}e\big(r(\ell)\tilde{t}_k\big)\right).$$

The definition of the p-norms along with the triangle inequality gives the remaining claim. In particular,

$$\|\tilde{f}-Sf\|_p = \left(\sum_{k=1}^m\big|f(\tilde{t}_k)-(Sf)_k\big|^p\right)^{1/p} = \left(\sum_{k=1}^m\Big|\sum_{|\ell|>\tilde{N}}c_\ell\big(e(\ell\tilde{t}_k)-(-1)^{\lfloor\frac{\ell+\tilde{N}}{N}\rfloor}e(r(\ell)\tilde{t}_k)\big)\Big|^p\right)^{1/p} \le \left(\sum_{k=1}^m\Big(\sum_{|\ell|>\tilde{N}}2|c_\ell|\Big)^p\right)^{1/p} = \left(m\Big(\sum_{|\ell|>\tilde{N}}2|c_\ell|\Big)^p\right)^{1/p} = 2m^{1/p}\sum_{|\ell|>\tilde{N}}|c_\ell|.$$

This finishes the proof in the case $f_p, S_{kp}\ne 0$ for all $p\in[N]$. To remove this condition for the $f_p$'s, we may find a real number μ such that the function

$$g(x) := f(x)+\mu = \sum_{\ell\in\mathbb{Z}\setminus\{0\}}c_\ell\,e(\ell x) + c_0 + \mu$$

is non-zero when $x\in\{t_p\}_{p=1}^N$. In particular, notice that if we define $h = f+\mu\mathbf{1}_N\in\mathbb{C}^N$, then $h_p\ne 0$ for all $p\in[N]$. Therefore, assuming now only that $S_{kp}\ne 0$ for $p\in[N]$, the previous argument can be applied to conclude

$$g(\tilde{t}_k)-(Sh)_k = \sum_{|\ell|>\tilde{N}}c_\ell\left(e(\ell\tilde{t}_k)-(-1)^{\lfloor\frac{\ell+\tilde{N}}{N}\rfloor}e\big(r(\ell)\tilde{t}_k\big)\right).$$

However, if $\mathbf{1}_N\in\mathbb{C}^N$ denotes the all-ones vector and $e_{\tilde{N}+1}\in\mathbb{C}^N$ is the $(\tilde{N}+1)$-th standard basis vector, notice that

$$(Sh)_k = \langle S_k,h\rangle = \langle S_k,f\rangle + \mu\langle S_k,\mathbf{1}_N\rangle = \langle S_k,f\rangle + \mu\langle\mathbf{N}_k,F\mathbf{1}_N\rangle = \langle S_k,f\rangle + \mu\sqrt{N}\,\langle\mathbf{N}_k,e_{\tilde{N}+1}\rangle = \langle S_k,f\rangle + \mu = (Sf)_k + \mu.$$

The fourth equality holds by the orthogonality of F and since $F_{(\tilde{N}+1)} = \frac{1}{\sqrt{N}}\mathbf{1}_N$. The fifth equality holds since $\mathbf{N}_{k(\tilde{N}+1)} = \frac{1}{\sqrt{N}}$. Therefore

$$g(\tilde{t}_k)-(Sh)_k = f(\tilde{t}_k)+\mu-\big((Sf)_k+\mu\big) = f(\tilde{t}_k)-(Sf)_k,$$

and the claim holds in this case as well.

The assumption $S_{kp}\ne 0$ will always hold if $\tilde{\tau}\subset\Omega$, i.e., $\tilde{t}_k\in[-\frac{1}{2},\frac{1}{2})$ for all $k\in[m]$. We show this by deriving the conditions under which $S_{kp} = 0$ occurs. As noted before, we have

$$S_{kp} := \frac{1}{N}\sum_{u=-\tilde{N}}^{\tilde{N}}e\big(u(t_p-\tilde{t}_k)\big) = \frac{1}{N}\sum_{u=0}^{N-1}e\big(u(t_p-\tilde{t}_k)\big)\,e\big(-\tilde{N}(t_p-\tilde{t}_k)\big) = \frac{e\big(-\tilde{N}(t_p-\tilde{t}_k)\big)}{N}\cdot\frac{1-e\big(N(t_p-\tilde{t}_k)\big)}{1-e(t_p-\tilde{t}_k)},$$

and we see that $S_{kp} = 0$ iff $N(t_p-\tilde{t}_k)\in\mathbb{Z}\setminus\{0\}$ and $t_p-\tilde{t}_k\notin\mathbb{Z}$. However, notice that

$$N(t_p-\tilde{t}_k) = N\left(\frac{p-1}{N}-\frac{k-1}{m}-\Delta_k\right) = p-1-\frac{N(k-1)}{m}-N\Delta_k,$$

so that $N(t_p-\tilde{t}_k)\in\mathbb{Z}\setminus\{0\}$ iff $\frac{N(k-1)}{m}+N\Delta_k = N\tilde{t}_k+\frac{N}{2}\in\mathbb{Z}\setminus\{p-1\}$. This condition equivalently requires $\tilde{t}_k = \frac{j}{N}-\frac{1}{2}$ for some $j\in\mathbb{Z}\setminus\{p-1\}$. Since this must hold for all $p\in[N]$, we finally have that

$$N(t_p-\tilde{t}_k)\in\mathbb{Z}\setminus\{0\}\quad\text{iff}\quad \tilde{t}_k = \frac{j}{N}-\frac{1}{2}\ \ \text{for some}\ j\in\mathbb{Z}\setminus\{0,1,\ldots,N-1\}.$$

We see that such a condition would imply that $\tilde{t}_k\notin\Omega := [-\frac{1}{2},\frac{1}{2})$, which violates our assumption $\tilde{\tau}\subset\Omega$. This finishes the proof.

We end this section with the proof of Corollary 1.

Proof of Corollary 1

The proof will consist of applying Theorem 2 (under identical assumptions) and Theorem 1.

By Theorem 2, we have that

$$\|f-\Psi g^\sharp\|_2 \le \frac{8\,\epsilon_s(g)}{\sqrt{s}} + 4\lambda s + \frac{8\sqrt{2}}{\sqrt{1-\theta}}\sqrt{\frac{N}{m}}\,\|d\|_2 + 2\sqrt{N}\sum_{|\ell|>\frac{N-1}{2}}|c_\ell|$$

with probability exceeding $1-\frac{1}{N}$. As in the proof of Theorem 1, we can show that, for $x\in\Omega$,

$$f(x)-\langle h(x),Ff\rangle = \sum_{|\ell|>\tilde{N}}c_\ell\left(e(\ell x)-(-1)^{\lfloor\frac{\ell+\tilde{N}}{N}\rfloor}e\big(r(\ell)x\big)\right).$$

Therefore

$$\begin{aligned}|f(x)-f^\sharp(x)| &:= \big|f(x)-\langle h(x),F\Psi g^\sharp\rangle\big| \le \big|f(x)-\langle h(x),Ff\rangle\big| + \big|\langle h(x),Ff\rangle-\langle h(x),F\Psi g^\sharp\rangle\big|\\ &\le \Big|\sum_{|\ell|>\tilde{N}}c_\ell\big(e(\ell x)-(-1)^{\lfloor\frac{\ell+\tilde{N}}{N}\rfloor}e(r(\ell)x)\big)\Big| + \|h(x)\|_2\,\|F(f-\Psi g^\sharp)\|_2\\ &\le 2\sum_{|\ell|>\tilde{N}}|c_\ell| + \frac{8\,\epsilon_s(g)}{\sqrt{s}} + 4\lambda s + \frac{8\sqrt{2}}{\sqrt{1-\theta}}\sqrt{\frac{N}{m}}\,\|d\|_2 + 2\sqrt{N}\sum_{|\ell|>\frac{N-1}{2}}|c_\ell|.\end{aligned}$$

The last inequality holds since $\|h(x)\|_2 = 1$ (here x is considered fixed and $h(x)\in\mathbb{C}^N$). This finishes the proof.

Acknowledgements

This work was in part financially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) Collaborative Research and Development Grant DNOISE II (375142-08). This research was carried out as part of the SINBAD II project with support from the following organizations: BG Group, BGP, CGG, Chevron, ConocoPhillips, DownUnder GeoSolutions, Hess Corporation, Petrobras, PGS, Sub Salt Solutions, WesternGeco, and Woodside. Özgür Yılmaz also acknowledges an NSERC Discovery Grant (22R82411) and an NSERC Accelerator Award (22R68054).

Data Availability

Not applicable

References

  • 1.Shannon CE. Communication in the presence of noise. Proc. IRE. 1949;37(1):10–21. [Google Scholar]
  • 2.Nyquist H. Certain topics in telegraph transmission theory. AIEE Trans. 1928;47:617–644. [Google Scholar]
  • 3.Kotel’nikov, V.A.: On the transmission capacity of ether and wire in electrocommunications. Physics-Uspekhi. 49(7), 736 (2006)
  • 4.Ferrar WL. On the consistency of cardinal function interpolation. Proc. R. Soc. Edinb. 1928;47:230–242. [Google Scholar]
  • 5.Jerri AJ. The Shannon sampling theorem—its various extensions and applications: a tutorial review. Proc. IEEE. 1977;65(11):1565–1596. [Google Scholar]
  • 6.Ogura K. On a certain transcendental function in the theory of interpolation. Tôhoku Math. J. 1920;17:64–72. [Google Scholar]
  • 7.Whittaker ET. On the functions which are represented by the expansion of interpolating theory. Proc. R. Soc. Edinb. 1915;35:181–194. [Google Scholar]
  • 8.Whittaker JM. The Fourier theory of the cardinal functions. Proc. Math. Soc. Edinb. 1929;1:169–176. [Google Scholar]
  • 9.Zayed AI. Advances in Shannon’s Sampling Theory. Boca Raton: CRC Press; 1993. [Google Scholar]
  • 10.Oppenheim AV, Schafer RW. Discrete-Time Signal Processing. 3. Hoboken: Prentice Hall Press; 2009. [Google Scholar]
  • 11.Marvasti F. Nonuniform Sampling: Theory and Practice. Berlin: Springer; 2001. [Google Scholar]
  • 12.Landau H. Necessary density condition for sampling and interpolation of certain entire functions. Acta Math. 1967;117:37–52. [Google Scholar]
  • 13.Grochenig K, Razafinjatovo H. On Landau’s necessary density conditions for sampling and interpolation of band-limited functions. J. Lond. Math. Soc. 1996;54(3):557–565. [Google Scholar]
  • 14.Shapiro HS, Silverman RA. Alias-free sampling of random noise. J. Soc. Ind. Appl. Math. 1960;8(2):225–248. [Google Scholar]
  • 15.Beutler FJ. Error-free recovery of signals from irregularly spaced samples. Soc. Ind. Appl. Math. 1966;8(3):328–335. [Google Scholar]
  • 16.Beutler F. Alias-free randomly timed sampling of stochastic processes. IEEE Trans. Inf. Theory. 1970;16(2):147–152. [Google Scholar]
  • 17. Cook, R.L.: Stochastic sampling in computer graphics. ACM Trans. Graph. 5(1), 51–72 (1986)
  • 18. Venkataramani, R., Bresler, Y.: Optimal sub-Nyquist nonuniform sampling and reconstruction for multiband signals. IEEE Trans. Signal Process. 49(10), 2301–2313 (2001)
  • 19. Hajar, M., El Badaoui, M., Raad, A., Bonnardot, F.: Discrete random sampling: theory and practice in machine monitoring. Mech. Syst. Signal Process. 123, 386–402 (2019)
  • 20. Jia, M., Wang, C., Ting Chen, K., Baba, T.: A non-uniform sampling strategy for physiological signals component analysis. In: Digest of Technical Papers—IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, pp. 526–529 (2013)
  • 21. Maciejewski, M.W., Qui, H.Z., Mobli, M., Hoch, J.C.: Nonuniform sampling and spectral aliasing. J. Magn. Reson. 199(1), 88–93 (2009). 10.1016/j.jmr.2009.04.006
  • 22. Wu, T., Dey, S., Chen, M.S.-W.: A nonuniform sampling ADC architecture with reconfigurable digital anti-aliasing filter. IEEE Trans. Circuits Syst. I: Regul. Pap. 63(10), 1639–1651 (2016)
  • 23. Czyż, K.: Nonuniformly sampled active noise control system. IFAC Proc. Vol. 37(20), 351–355 (2004)
  • 24. Wang, D., Liu, X., Wu, X., Wang, Z.: Reconstruction of periodic band limited signals from non-uniform samples with sub-Nyquist sampling rate. Sensors 20(21) (2020)
  • 25. Zeevi, Y.Y., Shlomot, E.: Nonuniform sampling and antialiasing in image representation. IEEE Trans. Signal Process. 41(3), 1223–1236 (1993)
  • 26. Mitchell, D.P.: Generating antialiased images at low sampling densities. ACM SIGGRAPH Comput. Graph. 21(4), 65–72 (1987)
  • 27. Mitchell, D.P.: The antialiasing problem in ray tracing. In: SIGGRAPH 90 (1990)
  • 28. Maymon, S., Oppenheim, A.V.: Sinc interpolation of nonuniform samples. IEEE Trans. Signal Process. 59(10), 4745–4758 (2011)
  • 29. Hennenfent, G., Herrmann, F.J.: Seismic denoising with nonuniformly sampled curvelets. Comput. Sci. Eng. 8(3), 16–25 (2006)
  • 30. Christensen, P., Kensler, A., Kilpatrick, C.: Progressive multi-jittered sample sequences. Comput. Graph. Forum 37, 21–33 (2018)
  • 31. Bretthorst, G.L.: Nonuniform sampling: bandwidth and aliasing. AIP Conf. Proc. 567(1), 1–28 (2001)
  • 32. Gastpar, M., Bresler, Y.: On the necessary density for spectrum-blind nonuniform sampling subject to quantization. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. 1, 348–351 (2000)
  • 33. Shlomot, E., Zeevi, Y.Y.: A nonuniform sampling and representation scheme for images which are not bandlimited. In: The Sixteenth Conference of Electrical and Electronics Engineers in Israel, Tel-Aviv, Israel, pp. 1–4 (1989)
  • 34. Penev, P.S., Iordanov, L.G.: Optimal estimation of subband speech from nonuniform non-recurrent signal-driven sparse samples. In: IEEE International Conference on Acoustics, Speech and Signal Processing Proceedings, Salt Lake City, UT, USA, vol. 2, pp. 765–768 (2001)
  • 35. Wisecup, R.D.: Unambiguous signal recovery above the Nyquist using random-sample-interval imaging. Geophysics 63(2), 331–789 (1998)
  • 36. Cary, P.W.: 3D stacking of irregularly sampled data by wavefield reconstruction. In: SEG Technical Program Expanded Abstracts (1997)
  • 37. Han, K., Wei, Y., Ma, X.: An efficient non-uniform filtering method for level-crossing sampling. In: IEEE International Conference on Digital Signal Processing (2016)
  • 38. Koh, J., Lee, W., Sarkar, T.K., Salazar-Palma, M.: Calculation of far-field radiation pattern using nonuniformly spaced antennas by a least square method. IEEE Trans. Antennas Propag. 62(4), 1572–1578 (2013)
  • 39. Bechir, D.M., Ridha, B.: Non-uniform sampling schemes for RF bandpass sampling receiver. In: International Conference on Signal Processing Systems, Singapore, pp. 3–17 (2009)
  • 40. Lee, H., Bien, Z.: Sub-Nyquist nonuniform sampling and perfect reconstruction of speech signals. In: TENCON 2005—IEEE Region 10 Conference, Melbourne, VIC, Australia, pp. 1–6 (2005)
  • 41. Hennenfent, G., Herrmann, F.J.: Simply denoise: wavefield reconstruction via jittered undersampling. Geophysics 73(3), V19–V28 (2008)
  • 42. Bellhouse, D.R.: Area estimation by point-counting techniques. Biometrics 37(2), 303–312 (1981)
  • 43. Cook, R.L., Porter, T., Carpenter, L.: Distributed ray tracing. ACM SIGGRAPH Comput. Graph. 18(3), 137–145 (1984)
  • 44. Dobkin, D.P., Eppstein, D., Mitchell, D.P.: Computing the discrepancy with applications to supersampling patterns. ACM Trans. Graph. 15(4), 354–376 (1996)
  • 45. Katznelson, Y.: An Introduction to Harmonic Analysis, 3rd edn. Cambridge University Press, Cambridge (2004)
  • 46. Boche, H., Calderbank, R., Kutyniok, G., Vybíral, J.: Compressed Sensing and Its Applications. Birkhäuser, Basel (2013)
  • 47. Foucart, S., Rauhut, H.: A Mathematical Introduction to Compressive Sensing. Birkhäuser, Basel (2013)
  • 48. Pfander, G.E.: Sampling Theory, a Renaissance. Birkhäuser, Basel (2015)
  • 49. Keiner, J., Kunis, S., Potts, D.: Using NFFT 3—a software library for various non-equispaced fast Fourier transforms. ACM Trans. Math. Softw. 36, 19:1–19:30 (2008)
  • 50. Greengard, L., Lee, J.: Accelerating the nonuniform fast Fourier transform. Appl. Comput. Harmon. Anal. 35, 111–129 (2004)
  • 51. Strohmer, T.: Numerical analysis of the non-uniform sampling problem. J. Comput. Appl. Math. 122, 297–316 (2000)
  • 52. Margolis, E., Eldar, Y.C.: Nonuniform sampling of periodic bandlimited signals. IEEE Trans. Signal Process. 56(7), 2728–2745 (2008)
  • 53. Rauhut, H.: Stability results for random sampling of sparse trigonometric polynomials. IEEE Trans. Inf. Theory 54(12), 5661–5670 (2008)
  • 54. Rauhut, H.: Compressive sensing and structured random matrices. In: Fornasier, M. (ed.) Theoretical Foundations and Numerical Methods for Sparse Recovery, pp. 1–92. De Gruyter, Berlin (2010). 10.1515/9783110226157.1
  • 55. Oberhettinger, F.: Fourier Transforms of Distributions and Their Inverses: A Collection of Tables. Academic Press (1973)
  • 56. Krahmer, F., Ward, R.: Stable and robust sampling strategies for compressive imaging. IEEE Trans. Image Process. 23(2), 612–622 (2014). 10.1109/TIP.2013.2288004
  • 57. López, O.: Embracing Nonuniform Samples (T). University of British Columbia (2019). https://open.library.ubc.ca/collections/ubctheses/24/items/1.0380720. Accessed 1 Sep 2022
  • 58. Grant, M., Boyd, S.: CVX: Matlab software for disciplined convex programming, version 2.0 beta (2013). http://cvxr.com/cvx. Accessed 1 Mar 2023
  • 59. Grant, M., Boyd, S.: Graph implementations for nonsmooth convex programs. In: Recent Advances in Learning and Control (A Tribute to M. Vidyasagar). Lecture Notes in Control and Information Sciences, pp. 95–110. Springer, Berlin (2008). http://stanford.edu/~boyd/graph_dcp.html
  • 60. Pippig, M., Potts, D.: Parallel three-dimensional non-equispaced fast Fourier transforms and their applications to particle simulation. SIAM J. Sci. Comput. 35(4), C411–C437 (2013)
  • 61. Baraniuk, R., Choi, H., Fernandes, F., Hendricks, B., Neelamani, R., Ribeiro, V., Romberg, J., Gopinath, R., Guo, H., Lang, M., Odegard, J.E., Wei, D.: Rice Wavelet Toolbox (2001). https://www.ece.rice.edu/dsp/software/rwt.shtml. Accessed 13 June 2019
  • 62. López, O., Kumar, R., Yılmaz, Ö., Herrmann, F.J.: Off-the-grid low-rank matrix recovery and seismic data reconstruction. IEEE J. Sel. Top. Signal Process. 10(4), 658–671 (2016)
  • 63. Adcock, B., Hansen, A.: Compressive Imaging: Structure, Sampling, Learning. Cambridge University Press, Cambridge (2021)
  • 64. Tropp, J.: User-friendly tail bounds for sums of random matrices. Found. Comput. Math. 12, 389–434 (2012). 10.1007/s10208-011-9099-z


Data Availability Statement

Not applicable


Articles from Sampling Theory, Signal Processing, and Data Analysis are provided here courtesy of Springer