Author manuscript; available in PMC: 2022 Jun 1.
Published in final edited form as: Adv Comput Math. 2021 May 2;47(3):40. doi: 10.1007/s10444-021-09859-6

A hybrid collocation-perturbation approach for PDEs with random domains

Julio E Castrillón-Candás 1, Fabio Nobile 2, Raúl F Tempone 3
PMCID: PMC8294127  NIHMSID: NIHMS1712260  PMID: 34305359

Abstract

Consider a linear elliptic PDE defined over a stochastic geometry that is a function of N random variables. In many applications, quantifying the uncertainty propagated to a Quantity of Interest (QoI) is an important problem. The random domain is split into large and small variation contributions. The large variations are approximated by applying a sparse grid stochastic collocation method. The small variations are approximated with a stochastic collocation-perturbation method and added as a correction term to the large variation sparse grid component. Convergence rates for the variance of the QoI are derived and compared to those obtained in numerical experiments. Our approach significantly reduces the dimensionality of the stochastic problem, making it suitable for large dimensional problems. The computational cost of the correction term increases at most quadratically with respect to the number of dimensions of the small variations. Moreover, for the case that the small and large variations are independent, the cost increases only linearly.

Keywords: Uncertainty Quantification, Stochastic Collocation, Perturbation, Stochastic PDEs, Finite Elements, Complex Analysis, Smolyak Sparse Grids

1. Introduction

The problem of design under uncertainty of the underlying domain is encountered in many real-life applications. For example, in semiconductor fabrication the underlying geometry becomes increasingly uncertain as the physical scales are reduced [33]. This uncertainty is propagated to an important Quantity of Interest (QoI), such as the capacitance of the semiconductor circuit. If the variance of the capacitance is high, this can lead to low yields during the manufacturing process. Quantifying the uncertainty in a given QoI, such as the capacitance, is therefore important in order to maximize yields, which in turn has a direct impact in reducing the costly and time-consuming design cycle. Other examples include graphene nano-sheet fabrication [21]. In this paper we focus on the problem of efficiently computing the statistics of the QoI given uncertainty in the underlying geometry.

Uncertainty Quantification (UQ) methods applied to Partial Differential Equations (PDEs) with random geometries can mostly be divided into collocation and perturbation approaches. For large deviations of the geometry the collocation method [6,8,14,32] is well suited. In addition, in [6,18] the authors derive error estimates of the solution with respect to the number of stochastic variables in the geometry description. However, this approach is only effective for medium-size stochastic problems. In contrast, the perturbation approaches introduced in [20,33,17,9,13,11,12] are very efficient in high dimensions, but only for small perturbations of the domain. More recently, new approaches based on multi-level Monte Carlo have been developed [28] that are well suited for low regularity of the solution. Furthermore, the domain mapping approach has been extended to elliptic problems with random domains in [19].

We develop a hybrid collocation-perturbation method that is well suited for a combination of large and small variations. The main idea is to meld both approaches so that the accuracy achieved for a given problem dimension is significantly improved.

We represent the domain in terms of a series of random variables and then remap the corresponding PDE to a deterministic domain with random coefficients. The random geometry is split into small and large deviations. A sparse grid collocation method is used to approximate the contribution to the QoI from the first NL (large deviation) terms of the stochastic domain expansion. Conversely, the contribution of the small deviations (the tail) is cheaply computed with a combined collocation and perturbation method. This contribution is called the variance correction.

For the collocation method we apply an isotropic sparse grid; this simplifies the presentation in this work. However, we are free to use any collocation method, such as anisotropic sparse grids [26], quasi-optimal grids [25], or dimension-adaptive grids [15,23,22], to increase the efficiency of the collocation computation. The results in this paper show that the variance correction significantly reduces the overall dimensionality of the stochastic problem, while the computational cost of the correction term increases at most quadratically with respect to the number of dimensions of the small variations.

A rigorous convergence analysis of the statistics of the QoI in terms of the number of collocation knots and the perturbation approximation of the tail is derived. Analytic estimates show that the error of the QoI for the hybrid collocation-perturbation method (or the hybrid perturbation method for short) decays quadratically with respect to the sum of the coefficients of the expansion of the tail. This is in contrast to the linear decay of the error estimates derived in [6] for the pure stochastic collocation approach. Furthermore, numerical experiments show a faster convergence rate than the stochastic collocation approach. Moreover, the variance correction is computed at a fraction of the cost of the low dimensional large variations.

In Section 2 mathematical background material is introduced. In Section 3 the stochastic domain problem is introduced. We assume that there exists a bijective map such that the elliptic PDE with a stochastic domain is remapped to a deterministic reference domain with a random diffusion matrix. The random boundary is assumed to be parameterized by N random variables. In Section 4 the hybrid collocation-perturbation approach is derived. This approach reduces to computing mean and variance correction terms that quantify the perturbation contribution from the tail of the random domain expansion. In Section 5 we show that an analytic extension in $\mathbb{C}^{N_L}$ exists for the variance correction term. In Section 6 mean and variance error estimates are derived in terms of the finite element, sparse grid and perturbation approximations. In Section 7 a complexity and tolerance analysis is derived. In Section 8 we test our approach on numerical examples, which are consistent with the theoretically derived convergence rates.

2. Background

In this section we introduce the general notation and mathematical background that will be used in this paper. Let (Ω, F, P) be a complete probability space, where Ω is the set of outcomes, F is a sigma algebra of events and P is a probability measure. Define $L^q_P(\Omega)$, $q \in \mathbb{N}$, as the following Banach spaces:

\[
L^q_P(\Omega) := \Big\{ v : \int_\Omega |v(\omega)|^q \, dP(\omega) < \infty \Big\}
\quad\text{and}\quad
L^\infty_P(\Omega) := \Big\{ v : P\text{-}\mathop{\mathrm{ess\,sup}}_{\omega\in\Omega} |v(\omega)| < \infty \Big\},
\]

where v:ΩR is a measurable random variable.

Consider the random variables Y1, … , YN measurable in (Ω, F, P). Form the $\mathbb{R}^N$-valued random vector Y := [Y1, … , YN], Y : Ω → Γ, and let Γ := Γ1 × ⋯ × ΓN ⊂ $\mathbb{R}^N$. Without loss of generality denote Γn := [−1, 1] as the image of Yn for n = 1, … , N and let B(Γ) be the Borel σ-algebra.

For all A ∈ B(Γ) consider the induced measure $\mu_Y(A) := P(Y^{-1}(A))$. Suppose that $\mu_Y$ is absolutely continuous with respect to the Lebesgue measure defined on Γ. From the Radon-Nikodym theorem [4] we conclude that there exists a density function ρ(y) : Γ → [0, +∞) such that for any event A ∈ B(Γ) we have that $P(Y \in A) := P(Y^{-1}(A)) = \int_A \rho(y)\,dy$. For any measurable $Y \in [L^1_P(\Omega)]^N$ define the expected value as $\mathbb{E}[Y] = \int_\Gamma y\,\rho(y)\,dy$. Finally, the following Banach spaces will be useful for the stochastic collocation sparse grid error estimates. For $q \in \mathbb{N}$ let

\[
L^q_\rho(\Gamma) := \Big\{ v : \int_\Gamma |v(y)|^q \rho(y)\, dy < \infty \Big\}
\quad\text{and}\quad
L^\infty_\rho(\Gamma) := \Big\{ v : \rho\text{-}\mathop{\mathrm{ess\,sup}}_{y\in\Gamma} |v(y)| < \infty \Big\}.
\]

In the next section we discuss an approach for approximating a function with sufficient regularity by multivariate polynomials and sparse grid interpolation.

2.1. Sparse Grids

Our goal is to find a compact and accurate approximation of a multivariate function $\tilde f : \Gamma \to V$ with sufficient regularity. It is assumed that $\tilde f \in C^0(\Gamma; V)$, where

\[
C^0(\Gamma; V) := \Big\{ v : \Gamma \to V \ \text{is continuous on } \Gamma \ \text{and} \ \max_{y\in\Gamma} \|v(y)\|_V < \infty \Big\}
\]

and V is a Banach space. Consider the univariate Lagrange interpolant along the nth dimension of Γ

\[
I_n^{m(i)} : C^0(\Gamma_n) \to \mathbb{P}_{m(i)-1}(\Gamma_n),
\]

where i ⩾ 1 denotes the level of approximation and m(i) the number of collocation knots used to build the interpolation at level i such that m(0) = 0, m(1) = 1 and m(i) < m(i + 1) for i ⩾ 1. Furthermore let Inm(0)=0. The space Pm(i)1(Γn) is the set of polynomials of degree at most m(i) − 1.

We can construct an interpolant by taking tensor products of Inm(i) along each dimension for n = 1, … , N. However, the number of collocation knots explodes exponentially with respect to the number of dimensions, thus limiting feasibility to small dimensions. Alternately, consider the difference operator along the nth dimension

\[
\Delta_n^{m(i)} := I_n^{m(i)} - I_n^{m(i-1)}.
\]

The sparse grid approximation of f~C0(Γ) is defined as

\[
S_w^{m,g}[\tilde f] = \sum_{\mathbf{i}\in\mathbb{N}_+^N :\, g(\mathbf{i}) \le w}\ \bigotimes_{n=1}^N \Delta_n^{m(i_n)}(\tilde f), \tag{1}
\]

where w ⩾ 0 is the approximation level (with $\mathbb{N}_+ := \mathbb{N}\setminus\{0\}$), $\mathbf{i} = (i_1, \dots, i_N) \in \mathbb{N}_+^N$, and $g : \mathbb{N}_+^N \to \mathbb{N}$ is strictly increasing in each argument. The sparse grid can also be re-written as

\[
S_w^{m,g}[\tilde f] = \sum_{\mathbf{i}\in\mathbb{N}_+^N :\, g(\mathbf{i}) \le w} c(\mathbf{i}) \bigotimes_{n=1}^N I_n^{m(i_n)}(\tilde f),
\quad\text{with}\quad
c(\mathbf{i}) = \sum_{\mathbf{j}\in\{0,1\}^N :\, g(\mathbf{i}+\mathbf{j}) \le w} (-1)^{|\mathbf{j}|}. \tag{2}
\]

From the previous expression, we see that the sparse grid approximation is obtained as a linear combination of full tensor product interpolations. However, the constraint g(i) ⩽ w in (2) restricts the growth of tensor grids of high degree.
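As a concrete illustration of the combination formula (2), the admissible index set $\{\mathbf{i} : g(\mathbf{i}) \le w\}$ and the coefficients $c(\mathbf{i})$ can be enumerated directly. The sketch below assumes the Smolyak choice $g(\mathbf{i}) = \sum_n (i_n - 1)$ introduced later in this section; the function names are ours, not the authors' implementation.

```python
from itertools import product

def smolyak_indices(N, w):
    # Multi-indices i >= 1 with g(i) = sum_n (i_n - 1) <= w.
    return [i for i in product(range(1, w + 2), repeat=N)
            if sum(i) - N <= w]

def combination_coeff(i, N, w):
    # c(i) = sum over j in {0,1}^N with g(i + j) <= w of (-1)^|j|, as in (2).
    return sum((-1) ** sum(j)
               for j in product((0, 1), repeat=N)
               if sum(i[n] + j[n] - 1 for n in range(N)) <= w)

# Example: N = 2 dimensions, level w = 2.
idx = smolyak_indices(2, 2)
coeffs = {i: combination_coeff(i, 2, 2) for i in idx}
```

Note that many interior indices receive a zero coefficient, so only a thin layer of tensor grids near the constraint boundary contributes; the coefficients of the contributing grids sum to 1, as they must for the formula to reproduce constants.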

Consider the multi-indexed vector m(i) = (m(i1), … , m(iN)) and the associated polynomial set

\[
\Lambda^{m,g}(w) = \{ \mathbf{p}\in\mathbb{N}^N : g(m^{-1}(\mathbf{p}+\mathbf{1})) \le w \}.
\]

Let PΛm,g(w)(Γ) be the multivariate polynomial space

\[
\mathbb{P}_{\Lambda^{m,g}(w)}(\Gamma) = \mathrm{span}\Big\{ \prod_{n=1}^N y_n^{p_n} : \mathbf{p}\in\Lambda^{m,g}(w) \Big\}.
\]

It can be shown that $S_w^{m,g}[\tilde f] \in \mathbb{P}_{\Lambda^{m,g}(w)}(\Gamma)$ (see e.g. [2]). One of the most popular choices for m and g is given by the Smolyak (SM) formulas (see [29,3,2])

\[
m(i) = \begin{cases} 1, & i = 1, \\ 2^{i-1}+1, & i > 1, \end{cases}
\qquad\text{and}\qquad
g(\mathbf{i}) = \sum_{n=1}^N (i_n - 1),
\]

in conjunction with Clenshaw-Curtis (CC) abscissas as interpolation points. This choice gives rise to a sequence of nested one-dimensional interpolation formulas. The number of interpolation knots of the Smolyak sparse grid grows significantly more slowly than that of Tensor Product (TP) (see [2]) and Total Degree (TD) grids. Other popular choices include Hyperbolic Cross (HC) sparse grids.
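The doubling rule m(i) and the Clenshaw-Curtis abscissas can be generated in a few lines; the following sketch (function names are ours) checks the nestedness that the text refers to, i.e. that the level-i knots are contained in the level-(i+1) knots.

```python
import numpy as np

def m(i):
    # Smolyak doubling rule: m(1) = 1, m(i) = 2**(i-1) + 1 for i > 1.
    return 1 if i == 1 else 2 ** (i - 1) + 1

def cc_knots(i):
    # Clenshaw-Curtis abscissas on [-1, 1] at level i: cos(pi*k/(m(i)-1)).
    if i == 1:
        return np.array([0.0])
    n = m(i)
    return np.cos(np.pi * np.arange(n) / (n - 1))

# Nestedness: every level-2 knot reappears among the level-3 knots.
k2, k3 = cc_knots(2), cc_knots(3)
nested = all(np.isclose(k3, x).any() for x in k2)
```

Because the grids are nested, function evaluations at lower levels are reused at higher levels, which is one reason the CC/Smolyak combination is economical.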

It can also be shown that TD, SM and HC anisotropic sparse approximation formulas can be readily constructed with improved convergence rates (see [26]). Moreover, in [10], the authors show convergence of anisotropic sparse grid approximations in infinite dimensions (N → ∞). In [25] quasi-optimal grids are constructed and shown to exhibit exponential convergence.

As pointed out in the introduction, we have the option of using any collocation method such as anisotropic [26], quasi-optimal [25] or dimension adaptive [15,23,22] sparse grids to increase the efficiency of the collocation computation. The important result in this paper is that the overall dimensionality of the stochastic problem is significantly reduced with the addition of the perturbation component.

3. Problem setup and formulation

Let $D(\omega) \subset \mathbb{R}^d$, $d \in \mathbb{N}$, be an open bounded domain with Lipschitz boundary ∂D(ω) whose shape depends on the stochastic parameter ω ∈ Ω.

Suppose there exists a reference domain $U \subset \mathbb{R}^d$, which is open and bounded with Lipschitz boundary ∂U. In addition, assume that almost surely in Ω there exists a bijective map $F(\omega) : \bar U \to \overline{D(\omega)}$. The map η ↦ x, $\bar U \to \overline{D(\omega)}$, is written as

\[
x = F(\eta, \omega),
\]

where η are the coordinates of the reference domain U and x are the coordinates in $\mathbb{R}^d$. See the cartoon example in Figure 1. Denote by J(ω) the Jacobian of F(ω) and suppose that F satisfies the following assumption.

Fig. 1. Example of reference domain deformation through the bijective map F(ω) for some realization ω ∈ Ω. The image was created with TikZ code [30].

Assumption 1 Given a bijective map F(ω):U¯D(ω)¯ there exist constants Fmin and Fmax such that

\[
0 < F_{\min} \le \sigma_{\min}(J(\omega)) \quad\text{and}\quad \sigma_{\max}(J(\omega)) \le F_{\max} < \infty
\]

almost everywhere in U and almost surely in Ω. We have denoted by σmin(J(ω)) (and σmax(J(ω))) the minimum (respectively maximum) singular value of the Jacobian J(ω). In Figure 1 a cartoon example of the deformation of the reference domain U is shown.

By applying the chain rule for Sobolev spaces [1], for any $v \in H^1(D(\omega))$ we have that $\nabla v = J^{-T}\nabla(v \circ F)$, where $J^{-T} := (J^{-1})^T$, i.e. the transpose of the matrix $J^{-1}$, and $v \circ F \in H^1(U)$. Therefore we can prove the following result.

Lemma 1 Under Assumption 1 it is immediate to prove the following results:

  1. L2(D(ω)) and L2(U) are isomorphic almost surely.

  2. H1(D(ω)) and H1(U) are isomorphic almost surely.

Proof For 1 and 2 see [6] or [18]. □

Let $G := \bigcup_{\omega\in\Omega} D(\omega) \subset \mathbb{R}^d$, i.e. the region in $\mathbb{R}^d$ defined by the union of all the perturbations of the stochastic domain. Consider functions $a : G \to \mathbb{R}$ and $f : G \to \mathbb{R}$ defined over the region of all the stochastic perturbations of the domain D(ω) in $\mathbb{R}^d$. Similarly, let $\partial G := \bigcup_{\omega\in\Omega} \partial D(\omega) \subset \mathbb{R}^d$ be the region formed by the union of all the stochastic perturbations of the boundary. For a.s. ω ∈ Ω let $g \in H^{1/2}(\partial D(\omega))$ be defined as the trace of a deterministic function $\nu \in H^1(G)$.

Consider the following boundary value problem: given $f(x,\omega), a(x,\omega) : D(\omega) \to \mathbb{R}$ and $g(x,\omega) : \partial D(\omega) \to \mathbb{R}$, find $u(x,\omega) : D(\omega) \to \mathbb{R}$ such that almost surely

\[
-\nabla\cdot\big(a(x,\omega)\nabla u(x,\omega)\big) = f(x,\omega), \quad x \in D(\omega), \qquad u = g \ \text{on}\ \partial D(\omega). \tag{3}
\]

We now make the following assumption:

Assumption 2 Let $a_{\max} := \mathop{\mathrm{ess\,sup}}_{x\in D(\omega),\,\omega\in\Omega} a(x,\omega)$ and $a_{\min} := \mathop{\mathrm{ess\,inf}}_{x\in D(\omega),\,\omega\in\Omega} a(x,\omega)$. Assume that the constants $a_{\min}$ and $a_{\max}$ satisfy the following inequality: $0 < a_{\min} \le a_{\max} < \infty$.

Recall that D(ω) is open and bounded with Lipschitz boundary D(ω). By applying a change of variables the weak form of (3) can be formulated on the reference domain (see [6] for details) as:

Problem 1 Given that $(f \circ F)(\eta, \omega) \in L^2(U)$, find $\hat u(\eta,\omega) \in H^1_0(U)$ s.t.

\[
B(\omega; \hat u, v) = \tilde l(\omega; v) \quad \forall v \in H^1_0(U) \tag{4}
\]

almost surely, where $\tilde l(\omega; v) := \int_U (f\circ F)(\eta,\omega)\,|J(\eta,\omega)|\,v - L(\hat\nu(\eta,\omega), v)$, $\hat g := g\circ F$, $\hat\nu := \nu\circ F$, and for any $w, s \in H^1_0(U)$

\[
B(\omega; s, w) := \int_U (a\circ F)(\eta,\omega)\, (\nabla s)^T C^{-1}(\eta,\omega)\, \nabla w\, |J(\eta,\omega)|,
\qquad
L(\hat\nu(\eta,\omega), v) := \int_U (a\circ F)(\eta,\omega)\, (\nabla\hat\nu(\eta,\omega))^T C^{-1}(\eta,\omega)\, \nabla v\, |J(\eta,\omega)|,
\]

$C(\eta, \omega) := J(\omega)^T J(\omega)$, and $\hat\nu(\eta,\omega)|_{\partial U} = \hat g(\eta,\omega)$. This homogeneous boundary value problem can be remapped to D(ω) via $\tilde u(x,\omega) := (\hat u\circ F^{-1})(x,\omega)$; thus we can write $\hat u(\eta,\omega) = (\tilde u\circ F)(\eta,\omega)$. The solution $u(x,\omega) \in H^1(D(\omega))$ of the Dirichlet boundary value problem is obtained as $u(x,\omega) = \tilde u(x,\omega) + (\hat\nu\circ F^{-1})(x,\omega)$.

The solution of (4) is numerically computed with a semi-discrete approximation. Suppose that we have a set of regular triangulations $T_h$ with maximum mesh spacing parameter h > 0. Furthermore, let $H_h(U) \subset H^1_0(U)$ be the space of continuous piecewise polynomials defined on $T_h$, with cardinality $N_h$. Let $\hat u_h : \Gamma \to H_h(U)$ be the semi-discrete approximation of the solution of Problem 1 satisfying the following problem: find $\hat u_h \in H_h(U)$ such that

\[
\int_U [\nabla \hat u_h(\cdot,y)]^T G(y)\, \nabla v_h \, d\eta = \int_U (f\circ F)(\cdot,y)\, v_h\, |J(y)|\, d\eta - L(\hat\nu, v_h) \tag{5}
\]

for all $v_h \in H_h(U)$ and a.s. $y \in \Gamma$. Note that $G(y) := (a\circ F)(y)\,|J(y)|\,J(y)^{-1}J(y)^{-T}$ and $Q_h(y) := Q(\tilde u_h\circ F) = Q(\hat u_h(y))$.
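At each collocation knot y, the semi-discrete problem (5) is a deterministic finite element solve. A minimal one-dimensional analogue (unit interval, a ≡ 1, f ≡ 1, homogeneous Dirichlet data; a simplification of the paper's setting, not its implementation) illustrates the structure of such a solve and of evaluating a QoI from it; the exact solution here is u(x) = x(1−x)/2, so the QoI ∫u = 1/12.

```python
import numpy as np

def fem_poisson_1d(n, a=1.0, f=1.0):
    # Piecewise-linear FEM for -(a u')' = f on (0,1), u(0) = u(1) = 0.
    h = 1.0 / n
    nodes = np.linspace(0.0, 1.0, n + 1)
    K = np.zeros((n - 1, n - 1))      # stiffness matrix (interior nodes)
    F = np.full(n - 1, f * h)         # load vector: int f * hat_j = f*h exactly
    for j in range(n - 1):
        K[j, j] = 2.0 * a / h
        if j > 0:
            K[j, j - 1] = K[j - 1, j] = -a / h
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(K, F)   # Dirichlet values u(0) = u(1) = 0 kept
    return nodes, u

nodes, u = fem_poisson_1d(64)
# QoI: average of the solution over the whole interval (trapezoidal rule).
Q = (np.diff(nodes) * (u[:-1] + u[1:]) / 2.0).sum()
```

In the hybrid method this solve is repeated once per sparse grid knot $y_L$, with G(y) replacing the constant coefficient.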

3.1. Quantity of Interest

For many practical problems the QoI is not necessarily the solution of the elliptic PDE, but instead a bounded linear functional Q:L2(U)R of the solution. For example, this could be the average of the solution on a specific region of the domain, i.e.

\[
Q(\hat u) := \int_{\tilde U} q(\eta)\, \hat u(\eta,\omega)\, d\eta \tag{6}
\]

with $q \in L^2(\tilde U)$ over the region $\tilde U \subset U$. It is assumed that there exists δ > 0 such that $\mathrm{dist}(\tilde U, \partial U) > \delta$.

In the next section, the perturbation approximation is derived for Q(u) and not directly for the solution u. It is thus necessary to introduce the influence function $\varphi \in H^1_0(U)$, which can be easily computed from the following adjoint problem:

Problem 2 Find φH01(U) such that for all vH01(U)

B(ω;v,φ)=Q(v) (7)

a.s. in Ω. After computing the influence function φ, the QoI can be computed as Q(u^)=B(u^,φ).

Remark 1 We can pick a particular operator T such that $\hat\nu = T(\hat g)$ vanishes in the region $\tilde U$. Thus we have that $Q(\hat\nu) = 0$ and $Q(\hat u + \hat\nu) = Q(\hat u)$.

3.2. Domain parameterization

To simplify the analysis of the elliptic PDE with a random domain from equation (3), we remap the solution onto a fixed deterministic reference domain. This approach has also been applied in [6,14,18,17] and is reminiscent of Karhunen-Loève (KL) expansions of random fields (see [18]).

Suppose that b1, … , bN are a collection of vector-valued Sobolev functions, where each of the entries of $b_n : U \to \mathbb{R}^d$, for n = 1, … , N, belongs to the space $W^{1,\infty}(U)$. We further make the following assumptions.

Assumption 3 Assume that F(η, ω) has the finite noise model

\[
F(\eta,\omega) := \eta + \sum_{n=1}^N \mu_n\, b_n(\eta)\, Y_n(\omega).
\]

Assumption 4

1. $\|b_n\|_{L^\infty(U)} = 1$ for n = 1, … , N.

2. ∞ > μ1 ⩾ ⋯ ⩾ μN ⩾ 0.

The stochastic domain perturbation is now split as

F(η,ω)FL(η,ω)+FS(η,ω),

where $F_L(\eta, \omega)$ denotes the large deviation modes and $F_S(\eta, \omega)$ the small deviation modes, with the following parameterization:

\[
F_L(\eta,\omega) := \sum_{n=1}^{N_L} \mu_{L,n}\, b_{L,n}(\eta)\, Y_n(\omega)
\quad\text{and}\quad
F_S(\eta,\omega) := \sum_{n=1}^{N_S} \mu_{S,n}\, b_{S,n}(\eta)\, Y_{n+N_L}(\omega),
\]

where NL + NS = N. Furthermore, for n = 1, … , NL let μL,n := μn, bL,n(η) := bn(η), and for n = 1, … , NS let μS,n := μn+NL and bS,n(η) := bn+NL (η).

Denote yL := [y1, … ,yNL], ΓLn=1NLΓn, and ρ(yL):ΓLR+ as the joint probability density of yL. Similarly denote yS := [yNL+1, … , yN], ΓSn=NL+1NΓn, and ρ(yS):ΓSR+ as the joint probability density of yS. From the stochastic model the Jacobian J is written as

J(η,ω)=I+n=1NμnBn(η)Yn(ω) (8)

where for n = 1, … N, Bn(η) is the Jacobian of bn(η).
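The splitting of the expansion into $F_L$ and $F_S$ amounts to partitioning the ordered coefficients $\mu_1 \ge \dots \ge \mu_N$ into a leading block and a tail. The sketch below is illustrative only: the "energy fraction" threshold theta is our own criterion for choosing $N_L$, not one prescribed by the paper.

```python
import numpy as np

def split_modes(mu, theta=0.9):
    # Split coefficients mu_1 >= ... >= mu_N into large (first N_L) and
    # small (tail) contributions; N_L is chosen so that the leading modes
    # carry a fraction theta of the total sum of the coefficients.
    mu = np.asarray(mu, dtype=float)
    frac = np.cumsum(mu) / mu.sum()
    N_L = int(np.searchsorted(frac, theta) + 1)
    return mu[:N_L], mu[N_L:]

mu = 2.0 ** -np.arange(1, 11)          # example decay mu_n = 2^-n
mu_L, mu_S = split_modes(mu, theta=0.9)
```

The faster the $\mu_n$ decay, the smaller $N_L$ becomes, and the more of the problem is handled by the cheap perturbation correction rather than the sparse grid.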

4. Perturbation approach

In this section a perturbation method is presented to approximate Q(y) with respect to the domain perturbation. In Section 4.1, the perturbation approach is applied with respect to the tail field FS(η, ω). A stochastic collocation approach is then used to approximate the contribution with respect to FL(η, ω). We follow a similar approach as in [20] by using shape calculus. To this end we introduce the following definition.

Definition 1 Let ψ be a regular function of the parameters yWRN, the Gâteaux derivative (shape derivative) evaluated at y on the space of perturbations WRN is defined as

\[
\langle D_y \psi(y), \delta y\rangle = \lim_{s\to 0^+} \frac{\psi(y+s\,\delta y) - \psi(y)}{s}, \qquad \forall \delta y \in W.
\]

Similarly, the second order derivative Dy2 (shape Hessian) as a bilinear form on W is defined as

\[
D_y^2 \psi(y)(\delta y_1, \delta y_2) = \lim_{s\to 0^+} \Big\langle \frac{D_y\psi(y+s\,\delta y_2) - D_y\psi(y)}{s}, \delta y_1 \Big\rangle, \qquad \forall \delta y_1, \delta y_2 \in W.
\]

Suppose that Q is a regular function with respect to the parameters y, then for all y = y0 + δyW the following expansion holds:

\[
Q(y) = Q(y_0) + \langle D_y Q(y_0), \delta y\rangle + \tfrac12 D_y^2 Q(y_0 + \theta\,\delta y)(\delta y, \delta y) \tag{9}
\]

for some θ ∈ (0, 1). Thus we have a procedure to approximate the QoI Q(y) by the first order term and to bound the error with the second order term. To explicitly formulate the first and second order terms we make the following assumption:
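The quadratic size of the remainder in (9) can be checked numerically on a smooth surrogate QoI (the function Q below is a stand-in for illustration, not the PDE-based QoI): halving the perturbation size should reduce the linearization error by roughly a factor of four.

```python
import numpy as np

def Q(y):
    # Smooth stand-in for the QoI as a function of the parameters y.
    return np.sin(y[0]) * np.exp(0.5 * y[1])

def dQ(y):
    # Gradient of Q, i.e. the first-order term <D_y Q(y), dy>.
    return np.array([np.cos(y[0]) * np.exp(0.5 * y[1]),
                     0.5 * np.sin(y[0]) * np.exp(0.5 * y[1])])

y0 = np.array([0.3, -0.2])
dy = np.array([1.0, -2.0])
# Remainder of the linearization Q(y0) + s * <dQ(y0), dy> for two values of s.
rem = [abs(Q(y0 + s * dy) - Q(y0) - s * dQ(y0) @ dy) for s in (1e-2, 5e-3)]
ratio = rem[0] / rem[1]   # close to 4, confirming the O(|dy|^2) remainder
```

This is exactly the mechanism the hybrid method exploits: for the small tail coefficients the neglected second-order term is quadratically small.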

Assumption 5 For all $v, w \in H^1_0(U)$, let $G(y; v, w) := \nabla v^T G(y) \nabla w$, where $G(y) := (a\circ F)(\eta, y)\,J^{-1}(y)J^{-T}(y)\,|J(y)|$. We assume that for all $y \in W$:

  1. $\nabla_y G(y) \in [L^1(U)]^N$.

  2. For n = 1, … , N there exists a uniformly bounded constant $C_G(y) > 0$ on W s.t.
    $\big|\int_U \partial_{y_n} G(y; v, w)\big| \le C_G(y)\, \|v\|_{H^1_0(U)}\, \|w\|_{H^1_0(U)}$.

  3. Furthermore, for all $y \in W$ we assume that $\nabla_y (f\circ F)(y),\ \nabla_y \hat\nu(y) \in [L^1(U)]^N$.

Remark 2 Although parts 1 and 2 are stated as assumptions for now, under Assumptions 1-4, $a(\eta,\omega) \in W^{1,\infty}(\mathbb{R}^d)$, and Lemma 9 in Section 6, it can be shown that Assumption 5, parts 1 and 2, hold for all $y \in \Gamma$.

Definition 2 For all v, wH01(U), and yW let

\[
\langle D_y B(y; v, w), \delta y\rangle := \lim_{s\to 0^+} \frac{1}{s}\big[B(y+s\,\delta y; v, w) - B(y; v, w)\big], \qquad \forall \delta y \in \Gamma.
\]

Remark 3 Under Assumption 5 for any v, wH01(U) we have that for all yW

\[
\langle D_y B(y; v, w), \delta y\rangle = \int_U \nabla_y G(y; v, w)\cdot\delta y = \sum_{n=1}^N \int_U \big(\nabla v^T\, \partial_{y_n} G(y)\, \nabla w\big)\,\delta y_n
\]

where $G(y) := (a\circ F)(\eta, y)\,J^{-1}(y)J^{-T}(y)\,|J(y)|$. Furthermore, under Assumption 5 we have that

\[
\langle D_y (f\circ F)(\cdot, y), \delta y\rangle = \int_U \nabla_y (f\circ F)(\cdot, y)\cdot\delta y.
\]

We can also introduce the derivative of any function $(v\circ F)(\eta, y) \in L^2(U)$ with respect to y: for all $y \in W$ we have that

\[
D_y (v\circ F)(\eta, y)(\delta y) := \lim_{s\to 0^+} \frac{1}{s}\big[(v\circ F)(\eta, y+s\,\delta y) - (v\circ F)(\eta, y)\big].
\]

Lemma 2 Suppose that Assumptions 1 to 5 are satisfied. Then for any y, δyW and for all vH01(U) we have that

\[
B\big(y; D_y(\tilde u\circ F)(\eta,y)(\delta y), v\big) = \sum_{n=1}^N \delta y_n \Big( -\int_U (\nabla(\tilde u\circ F)(\cdot,y))^T \partial_{y_n} G(y)\, \nabla v + \int_U \partial_{y_n}(f\circ F)(\cdot,y)\,|J(y)|\,v + \int_U (f\circ F)(\cdot,y)\,\partial_{y_n}|J(y)|\,v - \int_U (\nabla\hat\nu)^T \partial_{y_n} G(y)\, \nabla v - \int_U (\partial_{y_n}\nabla\hat\nu(y))^T G(y)\, \nabla v \Big).
\]

Proof

\[
\begin{aligned}
B\big(y; D_y(\tilde u\circ F)(\cdot,y)(\delta y), v\big)
&= \lim_{s\to 0^+} \frac1s \int_U \big(\nabla(\tilde u\circ F)(\cdot,y+s\,\delta y)^T - \nabla(\tilde u\circ F)(\cdot,y)^T\big)\, G(y)\, \nabla v \\
&= \lim_{s\to 0^+} \frac1s \int_U \nabla(\tilde u\circ F)(\cdot,y+s\,\delta y)^T G(y)\, \nabla v - \nabla(\tilde u\circ F)(\cdot,y+s\,\delta y)^T G(y+s\,\delta y)\, \nabla v \\
&\quad + \lim_{s\to 0^+} \frac1s \int_U \nabla(\tilde u\circ F)(\cdot,y+s\,\delta y)^T G(y+s\,\delta y)\, \nabla v - \nabla(\tilde u\circ F)(\cdot,y)^T G(y)\, \nabla v \\
&= -\sum_{n=1}^N \int_U \nabla(\tilde u\circ F)(\cdot,y)^T\, \partial_{y_n} G(y)\, \delta y_n\, \nabla v + \lim_{s\to 0^+} \frac1s\big(\tilde l(y+s\,\delta y; v) - \tilde l(y; v)\big)
\end{aligned}
\]

then

\[
\begin{aligned}
B\big(y; D_y(\tilde u\circ F)(\cdot,y)(\delta y), v\big)
&= -\sum_{n=1}^N \int_U \nabla(\tilde u\circ F)(\cdot,y)^T\, \partial_{y_n} G(y)\, \delta y_n\, \nabla v
+ \int_U \partial_{y_n}(f\circ F)(\cdot,y)\,\delta y_n\,|J(y)|\,v
+ \int_U (f\circ F)(\cdot,y)\,\partial_{y_n}|J(y)|\,\delta y_n\,v \\
&\quad - \lim_{s\to 0^+} \frac1s \int_U (\nabla\hat\nu(y+s\,\delta y))^T G(y+s\,\delta y)\, \nabla v - (\nabla\hat\nu(y))^T G(y)\, \nabla v.
\end{aligned}
\]

The result follows. □

Lemma 3 Suppose that Assumptions 1 to 5 are satisfied. Then for any y, δyW and for all vH01(U) we have that

\[
B\big(y; v, D_y\varphi(y)(\delta y)\big) = -\sum_{n=1}^N \int_U (\nabla v)^T\, \partial_{y_n} G(y)\, \delta y_n\, \nabla\varphi(y).
\]

Proof We follow the same procedure as in Lemma 2. □

A consequence of Lemma 2 and Lemma 3 is that Dy(u~F)(,y)(δy) and Dyφ(y)(δy) belong in H01(U) for any yW and δyW.

Lemma 4 Under the same assumptions as Lemma 3 we have that

\[
\lim_{s\to 0^+} \frac{Q(y+s\,\delta y) - Q(y)}{s} = \sum_{n=1}^N \delta y_n \int_U \Big( -(\nabla(\tilde u\circ F)(\cdot,y))^T \partial_{y_n} G(y)\, \nabla\varphi(y) + \partial_{y_n}(f\circ F)(\cdot,y)\,|J(y)|\,\varphi(y) - (\partial_{y_n}\nabla\hat\nu(y))^T G(y)\, \nabla\varphi(y) - (\nabla\hat\nu(y))^T \partial_{y_n} G(y)\, \nabla\varphi(y) + (f\circ F)(\cdot,y)\,\partial_{y_n}|J(y)|\,\varphi(y) \Big). \tag{10}
\]

where the influence function φ(y) satisfies equation (7).

Proof

lims0+Q(y+sδy)Q(y)s=lims0+U1s((u~F)(,y+sδy))TG(y+sδy)φ(y+sδy)((u~F)(,y))TG(y)φ(y))=n=1NδynU((u~F)(,y))TynG(y)φ(y)+U(Dy(u~F)(,y))T(δy)G(y)φ(y)+((u~F)(,y))TG(y)Dyφ(y)(δy).

From Lemma 2 with v = φ(y) and Lemma 3 with v=(u~F)(,y) we obtain the result. □

Lemma 5 Suppose that Assumptions 1 to 5 are satisfied. Then for any y, δyW and for all vH01(U) we have that

Dy2Q(y)(δy,δy)=n,m=1Nδynδym(U((u~F)(,y))T(ymynG(y))φ(y))+(ymynν^(y))TG(y)φ(y)+(ynν^(y))TymG(y)φ(y)+(ymν^(y))TynG(y)φ(y)+(ν^(y))TymynG(y)φ(y)ymyn(fF)(,y)J(y)φ(y)yn(fF)(,y)ymJ(y)φ(y)(ym(fF)(,y)ynJ(y)φ(y)(fF)(,y)ymynJ(y)φ(y))n=1Nδyn(U(Dy(u~F)(,y))T(δy)(ynG(y))φ(y))+((u~F)(,y))T(ynG(y))Dyφ(y)(δy)+(ynν^(y))TG(y)Dyφ(y)(δy)+(ν^(y))TynG(y)Dyφ(y)(δy)yn(fF)(,y)J(y)Dyφ(y)(δy)((fF)(,y)ynJ(y)Dyφ(y)(δ(y)).

Proof Let $E(y, \delta y) := \lim_{s\to 0^+} \frac{Q(y+s\,\delta y) - Q(y)}{s}$ (see Lemma 4). Taking the first variation of $E(y, \delta y)$ we have that

lims0+E(y+sδy,δy)E(y,δy)s=n=1Nδynlims0+1s[U((u~(y+sδy)T)]ynG(y+sδy)φ(y+sδy)+yn(fF)(,y+sδy)J(y+sδy)φ(y+sδy)(ynν^(y+sδy))TG(y+sδy)φ(y+sδy)(ν^(y+sδy))TynG(y+sδy)φ(y+sδy)+((fF)(,y+sδy)ynJ(y+sδy)φ(y+sδy))U(((u~F)(,y))TynG(y)φ(y)+yn(fF)(,y)J(y)φ(y))(ynν^(y))TG(y)φ(y)[((ν^(y))TynG(y)φ(y)+(fF)(,y)ynJ(y)φ(y))].

Following the same approach as in Lemmas 3 and 4 we obtain the result. □

4.1. Hybrid collocation-perturbation approach

We now consider a linear approximation of the QoI Q(y) with respect to yΓ. For any y = y0 + δy, y0Γ, yΓ, the linear approximation has the form

Q(y0)+<DyQ(y0),δy>

where δy = yy0Γ. Recall that Γ = ΓL × ΓS and make the following definitions and assumptions

  1. y := [yL, yS], δy := [δyL, δyS], and y0[y0L,y0S].

  2. y0L takes values on ΓL and δyL := 0ΓL.

  3. y0S0ΓS and δyS = yS takes values on ΓS.

We can now construct a linear approximation of the QoI with respect to the allowable perturbation set Γ. Consider the following linear approximation of Q(yL, yS)

Q^(yL,yS)Q(yL,y0S)+Q~(yL,y0S,δyS), (11)

and from Lemma 4 we have that

Q~(yL,y0S,δyS)<DyQ(yL,y0S),δy0S>=n=1NSδynSUαn(η,yL,y0S)dη,

where

\[
\alpha_n(\eta, y_L, y_0^S) := -(\nabla(\tilde u\circ F)(\eta, y_L, y_0^S))^T\, \partial_{y_n^S} G(y_L, y_0^S)\, \nabla\varphi(y_L, y_0^S) + \partial_{y_n^S}(f\circ F)(\eta, y_L, y_0^S)\,|J(y_L, y_0^S)|\,\varphi(y_L, y_0^S) - (\partial_{y_n^S}\nabla\hat\nu(y_L, y_0^S))^T G(y_L, y_0^S)\, \nabla\varphi(y_L, y_0^S) - (\nabla\hat\nu(y_L, y_0^S))^T\, \partial_{y_n^S} G(y_L, y_0^S)\, \nabla\varphi(y_L, y_0^S) + (f\circ F)(\eta, y_L, y_0^S)\,\partial_{y_n^S}|J(y_L, y_0^S)|\,\varphi(y_L, y_0^S).
\]

This linear approximation only shows the explicit dependence on the variable $y_S$, not the decay of the coefficients $\mu_n^S$ for n = 1, … , $N_S$. To directly capture the effect of the coefficients, let $\tilde y_n := \mu_n^{1/2} y_n$ for n = 1, … , N and $\delta\tilde y_n := \mu_n^{1/2}\,\delta y_n$. It is not hard to see that $\langle D_y Q(y_L, y_0^S), \delta y_0^S\rangle$ can be reformulated with respect to the variables $\tilde y := [\tilde y_1, \dots, \tilde y_N]$ and $\delta\tilde y := [\delta\tilde y_1, \dots, \delta\tilde y_N]$ by applying Lemma 4 as

<DyQ(yL,y0S),δy0S>=n=1NSμS,nδynSUα~n(η,yL,y0S)dη (12)

where

\[
\tilde\alpha_n(\eta, y_L, y_0^S) := -(\nabla(\tilde u\circ F)(\eta, y_L, y_0^S))^T\, \partial_{\tilde y_n^S} G(y_L, y_0^S)\, \nabla\varphi(y_L, y_0^S) + \partial_{\tilde y_n^S}(f\circ F)(\eta, y_L, y_0^S)\,|J(y_L, y_0^S)|\,\varphi(y_L, y_0^S) - (\partial_{\tilde y_n^S}\nabla\hat\nu(y_L, y_0^S))^T G(y_L, y_0^S)\, \nabla\varphi(y_L, y_0^S) - (\nabla\hat\nu(y_L, y_0^S))^T\, \partial_{\tilde y_n^S} G(y_L, y_0^S)\, \nabla\varphi(y_L, y_0^S) + (f\circ F)(\eta, y_L, y_0^S)\,\partial_{\tilde y_n^S}|J(y_L, y_0^S)|\,\varphi(y_L, y_0^S).
\]

This allows an explicit dependence of the mean and variance error in terms of the coefficients $\mu_{S,n}$, n = 1, … , $N_S$, as shown in Section 6.

The mean of Q^(yL,yS) can be obtained as

E[Q^(yL,yS)]=E[Q(yL,y0S)]+E[Q~(yL,y0S,δyS)].

From Fubini’s theorem we have

E[Q(yL,y0S)]=ΓLQ(yL,0)ρL(yL)dyL. (13)

and from equation (12)

\[
\mathbb{E}[\tilde Q(y_L, y_0^S, \delta y_S)] = \sum_{n=1}^{N_S} \mu_{S,n} \int_{\Gamma_L}\int_{[-1,1]} y_n^S\, \gamma_n(y_L, 0)\, \rho(y_L, y_n^S)\, dy_L\, dy_n^S, \tag{14}
\]

where $\gamma_n(y_L, 0) := \int_U \tilde\alpha_n(\eta, y_L, 0)\, d\eta$, $\rho(y_L)$ is the marginal density of ρ(y) with respect to the variables $y_L$, and similarly for $\rho(y_L, y_n^S)$, n = 1, … , $N_S$. The term $\mathbb{E}[\tilde Q(y_L, y_0^S, \delta y_S)]$ is referred to as the mean correction. The variance of $\hat Q(y_L, y_S)$ can be computed as

\[
\begin{aligned}
\mathrm{var}[\hat Q(y_L,y_S)] &= \mathbb{E}[\hat Q(y_L,y_S)^2] - \mathbb{E}[\hat Q(y_L,y_S)]^2 \\
&= \mathrm{var}[Q(y_L,y_0^S)] + \underbrace{\mathbb{E}[\tilde Q(y_L,y_0^S,\delta y_S)^2] + 2\,\mathbb{E}[Q(y_L,y_0^S)\,\tilde Q(y_L,y_0^S,\delta y_S)] - \mathbb{E}[\tilde Q(y_L,y_0^S,\delta y_S)]^2 - 2\,\mathbb{E}[Q(y_L,y_0^S)]\,\mathbb{E}[\tilde Q(y_L,y_0^S,\delta y_S)]}_{(\mathrm{I})}.
\end{aligned}
\]

The term (I) is referred to as the variance correction of $\mathrm{var}[Q(y_L, y_0^S)]$. From Fubini's theorem and equation (12) we have that

\[
\mathbb{E}[\tilde Q(y_L,y_0^S,\delta y_S)^2] = \sum_{k=1}^{N_S}\sum_{n=1}^{N_S} \mu_{S,k}\,\mu_{S,n} \int_{\Gamma_L}\int_{[-1,1]}\int_{[-1,1]} y_k^S\, y_n^S\, \gamma_k(y_L,0)\,\gamma_n(y_L,0)\, \rho(y_L, y_k^S, y_n^S)\, dy_L\, dy_k^S\, dy_n^S, \tag{15}
\]

and E[Q(yL,y0S)Q~(yL,y0S,δyS)] is equal to

\[
\sum_{k=1}^{N_S} \mu_{S,k} \int_{\Gamma_L}\int_{[-1,1]} Q(y_L,0)\,\gamma_k(y_L,0)\, y_k^S\, \rho(y_L, y_k^S)\, dy_L\, dy_k^S. \tag{16}
\]

Note that the mean $\mathbb{E}[Q(y_L, y_0^S)]$ and variance $\mathrm{var}[Q(y_L, y_0^S)]$ depend only on the large variation variables $y_L$. If the region of analyticity of the QoI with respect to the stochastic variables $y_L$ is sufficiently large, it is reasonable to approximate $Q(y_L, y_0^S)$ with a Smolyak sparse grid $S_w^{m,g}[Q(y_L, y_0^S)]$ with respect to the variable $y_L$ (see [6] for details). Thus in equations (13) - (16), $Q(y_L, y_0^S)$ is replaced with the sparse grid approximation $S_w^{m,g}[Q(y_L, y_0^S)]$ and, for n = 1, … , $N_S$, $\gamma_n(y_L, 0)$ is replaced with $S_w^{m,g}[\gamma_n(y_L, 0)]$.
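For a single tail variable, the mean correction (14) is a low-dimensional integral and can be approximated by tensor Gauss-Legendre quadrature. In the sketch below the density rho, the surrogate gamma, and the coefficient mu_S are our own illustrative stand-ins; the density is chosen non-factorizing so that the mean correction does not vanish.

```python
import numpy as np

# Gauss-Legendre nodes/weights on [-1, 1] for the y_L and y_S directions.
yL, wL = np.polynomial.legendre.leggauss(8)
yS, wS = np.polynomial.legendre.leggauss(8)

def rho(a, b):
    # Toy correlated joint density on [-1,1]^2, normalized to integrate to 1.
    return (1.0 + 0.5 * a * b) / 4.0

def gamma(a):
    # Stand-in for gamma_n(y_L, 0) (the integrated perturbation density).
    return 1.0 + a

mu_S = 0.1   # small-variation coefficient mu_{S,n}
# Mean correction (14): mu_S * int int yS * gamma(yL) * rho(yL, yS) dyL dyS.
corr = mu_S * sum(wL[i] * wS[j] * yS[j] * gamma(yL[i]) * rho(yL[i], yS[j])
                  for i in range(8) for j in range(8))
```

Because the integrand above is polynomial, an 8-point rule per direction is exact here; for the true $\gamma_n$, the analyticity results of Section 5 are what justify replacing it by a sparse grid interpolant before quadrature.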

Remark 4 For the special case that $\mathbb{E}[Y_n(\omega)Y_m(\omega)] = \delta[n-m]$ for n, m = 1, … , N and $\rho(y) = \rho(y_L)\rho(y_S)$ for all $y_L \in \Gamma_L$ and $y_S \in \Gamma_S$ (i.e. the joint probability density $\rho(y_L, y_S)$ factorizes), the mean and variance corrections simplify. Applying Fubini's theorem and equation (13), the mean of $\hat Q(y_L, y_S)$ becomes

\[
\mathbb{E}[\hat Q(y_L,y_S)] = \mathbb{E}[Q(y_L,y_0^S)] + \underbrace{\mathbb{E}[\tilde Q(y_L,y_0^S,\delta y_S)]}_{=0} = \mathbb{E}[Q(y_L,y_0^S)] = \int_{\Gamma_L} Q(y_L,0)\,\rho_L(y_L)\, dy_L,
\]

i.e. there is no contribution from the small variations. Applying a similar argument we have that

\[
\begin{aligned}
\mathrm{var}[\hat Q(y_L,y_S)] &= \mathbb{E}[\hat Q(y_L,y_S)^2] - \mathbb{E}[\hat Q(y_L,y_S)]^2 \\
&= \mathrm{var}[Q(y_L,y_0^S)] + \mathbb{E}[\tilde Q(y_L,y_0^S,\delta y_S)^2] + \underbrace{2\,\mathbb{E}[Q(y_L,y_0^S)\,\tilde Q(y_L,y_0^S,\delta y_S)]}_{=0} - \underbrace{\mathbb{E}[\tilde Q(y_L,y_0^S,\delta y_S)]^2}_{=0} - \underbrace{2\,\mathbb{E}[Q(y_L,y_0^S)]\,\mathbb{E}[\tilde Q(y_L,y_0^S,\delta y_S)]}_{=0} \\
&= \mathrm{var}[Q(y_L,y_0^S)] + \underbrace{\sum_{n=1}^{N_S} \mu_n^S \int_U \alpha_n(x, y_L, y_0^S)\, dx \int_U \alpha_n(y, y_L, y_0^S)\, dy}_{\text{Variance correction}}.
\end{aligned}
\]

Notice that in this case the variance correction consists of $N_S$ terms; thus the computational cost depends linearly on $N_S$.

5. Analytic correction

In this section we show that the mean and variance corrections are analytic in a well-defined region in $\mathbb{C}^{N_L}$ with respect to the variables $y_L \in \Gamma_L$. The size of the region of analyticity directly correlates with the convergence rate of the Smolyak sparse grid. To this end, let us establish the following definition: for any $0 < \beta < \tilde\delta$, for some constant $\tilde\delta > 0$, define the following region in $\mathbb{C}^{N_L}$:

\[
\Theta_{\beta,N_L} := \Big\{ z \in \mathbb{C}^{N_L} : z = y + w,\ y \in [-1,1]^{N_L},\ \sum_{l=1}^{N_L} \sup_{\eta\in U} \|B_l(\eta)\|_2\, \mu_l\, |w_l| \le \beta \Big\}. \tag{17}
\]

Observe that the size of the region Θβ,NL is mostly controlled by the decay of the coefficients μl and the size of ∥Bl(η)∥2. Thus the faster the coefficient μl decays the larger the region Θβ,NL will be.

Furthermore, rewrite J(η, yL) as J(yL) = I+R(yL), with R(yL)n=1NLμnBn(η)yn. We now state the first analyticity theorem for the solution (u~F)(yL) with respect to the random variables yLΓL.

To simplify the analyticity proof the following assumptions are made.

Assumption 6

  1. (aF)(η, ω) is only a function of ηU and independent of ωΩ.

  2. (ν^F)(η,ω) is only a function of ηU and independent of ωΩ.

  3. f:GR can be analytically extended in Cd. Furthermore assume that the analytic extension Re(fF)(η, z), Im(fF)(η, z) ∈ L2(U).

  4. There exists $0 < \tilde\delta < 1$ such that $\sum_{n=1}^N \|B_n(\eta)\|_2\, \mu_n \le 1 - \tilde\delta$ for all $\eta \in U$.

Remark 5 Note that Assumption 6 (i) is not strictly necessary, and Theorems 1 and 2 and the error analysis in Section 6 can be easily adapted to a less restrictive hypothesis (see Assumption 7 and Lemma 2 in [7]). Nonetheless, this assumption is still practical for layered materials such as those arising in semiconductor design. For such problems $(\hat\nu\circ F)(\eta,\omega)$ can be non-constant along the non-stochastic directions.

Remark 6 Under Assumption 6 (iv) we have that $\mathrm{Re}\,\det(J(z)) \ge \tilde\delta^d \alpha$ for all $z \in \Theta_{\beta,N_L}$, where $\alpha = 2 - \exp\!\big(d\beta/(\tilde\delta - \beta)\big) > 0$. This implies that $\mathrm{Re}\,\det(J(z))$ never changes sign on $\Theta_{\beta,N_L}$.

The following theorem can be proven with a slight modification of Theorem 7 in [7], taking into account Assumption 6 (ii) and the mapping model of equation (8).

Theorem 1 Let 0<δ~<1 then the solution (u~F)(η,yL):ΓLH01(U) of Problem 1 can be extended holomorphically on Θβ,NL if

β<min{δ~log(2γ)d+log(2γ),1+δ~221},

where γ2δ~2+(2δ~)dδ~d+(2δ~)d.

Remark 7 By following a similar argument, the influence function φ(y) can be extended holomorphically in Θβ,NL if

β<min{δ~log(2γ)d+log(2γ),1+δ~221}.

Remark 8 To prove the following theorem we will use the following matrix calculus identity [5]: suppose that the matrix $A \in \mathbb{R}^{n \times n}$ is a function of a real variable α; then

\[
\frac{\partial A^{-1}}{\partial \alpha} = -A^{-1}\,\frac{\partial A}{\partial \alpha}\,A^{-1}.
\]
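The identity in Remark 8 is easy to verify numerically with a central finite difference; the matrix family A(a) = I + aB below is our own illustrative choice.

```python
import numpy as np

# Finite-difference check of d(A^{-1})/da = -A^{-1} (dA/da) A^{-1}
# for A(a) = I + a*B with a fixed matrix B, so dA/da = B.
rng = np.random.default_rng(0)
B = 0.1 * rng.standard_normal((4, 4))
A = lambda a: np.eye(4) + a * B

a0, eps = 0.3, 1e-6
fd = (np.linalg.inv(A(a0 + eps)) - np.linalg.inv(A(a0 - eps))) / (2 * eps)
exact = -np.linalg.inv(A(a0)) @ B @ np.linalg.inv(A(a0))
err = np.abs(fd - exact).max()
```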

We are now ready to show that the linear approximation Q^(yL,yS) can be analytically extended on Θβ,NL. Note that it is sufficient to show that Uα~n(η,yL,0) can be analytically extended on Θβ,NL.

Theorem 2 Let 0<δ~<1, if β<min{δ~log(2γ)d+log(2γ), 1+δ~221} then there exists an extension of Uα~n(η,yL,0), for n = 1, … , NS, which is holomorphic in Θβ,NL.

Proof Consider the extension of yLzL, where zLCNL. First, we have that

\[
\int_U (\nabla(\tilde u\circ F)(y_L,y_S))^T\, \partial_{\tilde y_n^S} G(y_L,y_S)\, \nabla\varphi(y_L,y_S) \tag{18}
\]

for n = 1, … , NS can be extended on Θβ,NL. Note the for the sake of reducing notation clutter we dropped the dependence of the variable ηU and it is understood from context unless clarification is needed.

We now show that each entry of the matrix y~nSG(zL,yS) is holomorphic on Θβ,NL for all yΓS. First, we have that

\[
\partial_{\tilde y_n^S} G(z_L,y_S) = \big(\partial_{\tilde y_n^S}(a\circ F)(z_L,y_S)\big)\, C^{-1}(z_L,y_S)\,|J(z_L,y_S)| + (a\circ F)(z_L,y_S)\Big( C^{-1}(z_L,y_S)\,\partial_{\tilde y_n^S}|J(z_L,y_S)| + |J(z_L,y_S)|\,\partial_{\tilde y_n^S} C^{-1}(z_L,y_S) \Big).
\]

From Assumption 6 (aF)(·, zL) and y~nS(aF)(,zL,yS)=0 are holomorphic on Θβ,NL for all ySΓS. From Remark 8 we have that

\[
\partial_{\tilde y_n^S} C^{-1}(z_L,y_S) = -C^{-1}(z_L,y_S)\,\big(\partial_{\tilde y_n^S} C(z_L,y_S)\big)\, C^{-1}(z_L,y_S).
\]

Since β<δ~ the series

\[
J^{-1}(z_L,y_S) = (I + R(z_L,y_S))^{-1} = I + \sum_{k=1}^\infty (-1)^k R(z_L,y_S)^k
\]

is convergent for all $z_L \in \Theta_\beta$ and for all $y_S \in \Gamma_S$. It follows that each entry of $J(z_L, y_S)^{-1}$, and therefore of $C(z_L, y_S)^{-1}$, is holomorphic for all $z_L \in \Theta_{\beta,N_L}$ and for all $y_S \in \Gamma_S$. We have that $|J(z_L, y_S)|$ and $\partial_{\tilde y_l^S} C(z_L, y_S)$ are entries of finite polynomials and are therefore holomorphic for all $z_L \in \Theta_{\beta,N_L}$ and $y_S \in \Gamma_S$.

From Jacobi’s formula we have that for all zLΘβ,NL, ySΓS and l = 1, … , NS

\[
\partial_{\tilde y_n^S}|J(z_L,y_S)| = \mathrm{tr}\big(\mathrm{Adj}(J(z_L,y_S))\,\partial_{\tilde y_n^S} J(z_L,y_S)\big) = |J(z_L,y_S)|\,\mathrm{tr}\big(J(z_L,y_S)^{-1} B_n^S(\eta)\big).
\]

It follows that for all $z_L \in \Theta_{\beta,N_L}$ and $y_S \in \Gamma_S$, the entries of $\partial_{\tilde y_n^S} G(z_L, y_S)$ are holomorphic for n = 1, … , $N_S$.
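Jacobi's formula used above can also be sanity-checked numerically; the one-parameter family J(t) = I + tB below is our own illustrative choice of a perturbed Jacobian.

```python
import numpy as np

# Finite-difference check of Jacobi's formula:
# d/dt det(J(t)) = det(J(t)) * tr(J(t)^{-1} dJ/dt), here J(t) = I + t*B.
rng = np.random.default_rng(1)
B = 0.2 * rng.standard_normal((3, 3))
J = lambda t: np.eye(3) + t * B

t0, eps = 0.4, 1e-6
fd = (np.linalg.det(J(t0 + eps)) - np.linalg.det(J(t0 - eps))) / (2 * eps)
exact = np.linalg.det(J(t0)) * np.trace(np.linalg.inv(J(t0)) @ B)
err = abs(fd - exact)
```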

We shall now prove the main result. First, extend yL along the nth dimension as ynzn, znC and let z~n=[z1,,zn1,zn+1,,zNL]. From Theorem 1 we have that (u~F)(zL,yS) and φ(zL, yS) are holomorphic for zLΘβ,NL and ySΓS if

β<min{δ~log(2γ)d+log(2γ),1+δ~221}.

Thus from Theorem 1.9.1 in [16] the series

\[
(\tilde u\circ F)(z_L,y_S) = \sum_{l=0}^\infty \tilde u_l(\tilde z_n, y_S)\, z_n^l
\qquad\text{and}\qquad
\varphi(z_L,y_S) = \sum_{l=0}^\infty \bar\varphi_l(\tilde z_n, y_S)\, z_n^l,
\]

are absolutely convergent in H01(U) for all zC, where u~l(z~n,yS), φ~l(z~n,yS)H01(U) for l = 0, … , ∞. Furthermore,

\[
\|\nabla(\tilde u\circ F)(z_L,y_S)\|_{L^2(U)} \le \sum_{l=0}^\infty \|\nabla\tilde u_l(\tilde z_n,y_S)\|_{L^2(U)}\, |z_n|^l \le \sum_{l=0}^\infty \|\tilde u_l(\tilde z_n,y_S)\|_{H^1_0(U)}\, |z_n|^l
\]

i.e. (u~F)(zL,yS) is holomorphic on Θβ,NL along the nth dimension. A similar argument is made for ∇φ(zL, yS).

Since the matrix y~nSG(zL,yS) is holomorphic for all zLΘβ,NL and ySΓS then we can rewrite the (i, j) entry as k=0gki,j(z~n,yS)znk where gki,j(z~n,yS)L(U). For each i, j = 1, … , d consider the map

\[
T_{i,j} := \int_U \partial_{\tilde y_n^S} G(z_L,y_S)(i,j)\, \partial_{x_i}\tilde u(z_L,y_S)\, \partial_{x_j}\varphi(z_L,y_S) = \sum_{k,l,p=0}^\infty z_n^{k+l+p} \int_U g_k^{i,j}(\tilde z_n,y_S)\, \partial_{x_i}\tilde u_l(\tilde z_n,y_S)\, \partial_{x_j}\bar\varphi_p(\tilde z_n,y_S).
\]

For i, j = 1, … , d, for all zLΘβ,NL and ySΓS

\[
\begin{aligned}
|T_{i,j}| &\le \sum_{k,l,p=0}^\infty |z_n|^{k+l+p} \Big|\int_U g_k^{i,j}(\tilde z_n,y_S)\, \partial_{x_i}\tilde u_l(\tilde z_n,y_S)\, \partial_{x_j}\bar\varphi_p(\tilde z_n,y_S)\Big| \\
&\le \sum_{k,l,p=0}^\infty |z_n|^{k+l+p}\, \|g_k^{i,j}(\tilde z_n,y_S)\|_{L^\infty(U)}\, \|\partial_{x_i}\tilde u_l(\tilde z_n,y_S)\|_{L^2(U)}\, \|\partial_{x_j}\bar\varphi_p(\tilde z_n,y_S)\|_{L^2(U)} \qquad\text{(Cauchy-Schwarz)} \\
&\le \sum_{k,l,p=0}^\infty |z_n|^{k+l+p}\, \|g_k^{i,j}(\tilde z_n,y_S)\|_{L^\infty(U)}\, \|\tilde u_l(\tilde z_n,y_S)\|_{H^1_0(U)}\, \|\bar\varphi_p(\tilde z_n,y_S)\|_{H^1_0(U)} < \infty.
\end{aligned}
\]

Thus equation (18) can be analytically extended along the nth dimension for all $y_S \in \Gamma_S$, and hence on the entire domain $\Theta_{\beta,N_L}$. Repeat the analytic extension of (18) for n = 1, … , $N_L$. Hartogs' theorem implies that (18) is continuous in $\Theta_{\beta,N_L}$; Osgood's lemma then implies that (18) is holomorphic on $\Theta_{\beta,N_L}$. Following a similar argument as for (18) we can analytically extend the rest of the terms of $\tilde\alpha_n(y_L, y_S)$ on $\Theta_{\beta,N_L}$ for n = 1, … , $N_S$. □

6. Error analysis

In this section we analyze the perturbation error between the exact QoI Q(yL, yS) and the sparse grid hybrid perturbation approximation Swm,g[Q^h(yL,yS)]. With a slight abuse of notation, by Swm,g[Q^h(yL,yS)] we mean the combination of the two sparse grid approximations:

Swm,g[Q^h(yL,yS)]Swm,g[Qh(yL,0)]+n=1NSμS,nynSSwm,g[Uα~n,h(η,yL,0)],

where αn,h(·, yL, 0), for n = 1, … , NS, and Qh(yL, yS) are the finite element approximations of αn(·, yL, 0) and Q(yL, yS) respectively. It is easy to show that var(Q(yL,yS)) − var(Swm,g[Q^h(yL,yS)]) is equal to

E[Q2(yL,yS)Swm,g[Q^h(yL,yS)]2](I)(E[Q(yL,yS)]2E[Swm,g[Q^h(yL,yS)]2])(II).

(I) Applying Jensen’s inequality we have that

E[Q2(y)Swm,g[Q^h(y)]2]Q(y)+Swm,g[Q^h(y)]Lρ(Γ)(Q(y)Q^(y)Lρ2(Γ)+Q^(y)Q^h(y)Lρ1(Γ)+Q^h(y)Swm,g[Q^h(y)]]Lρ2(Γ)). (19)

(II) Similarly, we have that

E[Q(yL,yS)]2E[Swm,g[Q^h(yL,yS)]]2Q(y)+Swm,gQ^h(y)Lρ1(Γ)Q(y)Swm,gQ^h(yL)Lρ1(Γ)Q(y)+Swm,gQ^h(y)Lρ1(Γ)(Q(y)Q^(y)Lρ1(Γ)+Q^(y)Q^h(y)Lρ1(Γ)+Q^h(y)Swm,gQ^h(y)Lρ1(Γ))

Applying Jensen's inequality,

E[Q(yL,yS)]2E[Swm,g[Q^h(yL,yS)]]2Q(y)+Swm,gQ^h(y)Lρ2(Γ)(Q(y)Q^(y)Lρ2(Γ)+Q^(y)Q^h(y)Lρ1(Γ)+Q^h(y)Swm,gQ^h(y)Lρ2(Γ)). (20)

Combining equations (19) and (20) we have that

var(Q(y))var(Swm,g[Q^h(y)])CPQ(y)Q^(y)Lρ2(Γ)Perturbation+CPFEQ^(y)Q^h(y)Lρ1(Γ)Finite Element+CPSGQ^h(y)Swm,g[Q^h(y)]Lρ2(Γ)Sparse Grid.

Similarly we have that the mean error satisfies the following bound:

E[Q(yL,yS)Swm,g[Q^h(yL,yS)]Q(y)Q^(y)Lρ2(Γ)Perturbation (I)+Q^(y)Q^h(y)Lρ1(Γ)Finite Element (II)+Q^h(y)Swm,g[Q^h(y)]Lρ2(Γ)Sparse Grid (III).

Remark 9 For the case that the probability distributions ρ(yL) and ρ(yS) are independent, the mean correction is exactly zero; thus the mean error is bounded by the following terms

E[Q(yL,yS)]E[Swm,g[Qh(yL)]]CTQ(yL,yS)Q(yL)Lρ2(Γ)Truncation+CFEQ(yL)Qh(yL)Lρ1(ΓL)Finite Element+CSGQh(yL)Swm,g[Qh(yL)]Lρ2(ΓL)Sparse Grid

for some positive constants CT, CFE and CSG. We refer the reader to Section 5 in [6] for the definition of the constants and the bounds of these errors.
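The structure of the hybrid approximation above, a surrogate evaluated at yS = 0 plus a first-order correction in the small variables, can be illustrated on a toy problem. The sketch below is purely illustrative: the function Q, the amplitudes mu, and the Monte Carlo comparison are hypothetical stand-ins (dense sampling replaces the sparse grid), not the paper's method.

```python
import numpy as np

# Toy QoI Q(yL, yS): smooth in one "large" variable yL, nearly linear in
# two "small" variables with amplitudes mu -- a stand-in for the PDE QoI.
mu = np.array([0.05, 0.02])
def Q(yL, yS):
    s = mu @ yS
    return np.exp(0.5 * yL) * (1.0 + s + 0.5 * s ** 2)

# Hybrid surrogate: Q(yL, 0) plus the first-order correction in yS, with
# the yS-derivatives at yS = 0 approximated by central differences.
def Q_hybrid(yL, yS, h=1e-5):
    base = Q(yL, np.zeros(2))
    corr = 0.0
    for n in range(2):
        e = np.zeros(2); e[n] = h
        corr += (Q(yL, e) - Q(yL, -e)) / (2 * h) * yS[n]
    return base + corr

# Monte Carlo comparison of the variance (yL, yS ~ U(-1,1), independent).
rng = np.random.default_rng(1)
yL = rng.uniform(-1, 1, 20000)
yS = rng.uniform(-1, 1, (20000, 2))
exact = np.array([Q(a, b) for a, b in zip(yL, yS)])
hybrid = np.array([Q_hybrid(a, b) for a, b in zip(yL, yS)])
print(np.var(exact), np.var(hybrid))  # close: the correction captures the yS variance
```

Because the yS-dependence is nearly linear, the variance of the hybrid surrogate matches the exact variance up to the (small) second-order remainder, mirroring the perturbation error term above.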

6.1. Perturbation error

In this section we analyze the perturbation approximation error

Q(yL,yS)Q^(yL,yS)Lρ2(Γ)=R(y~L,δy~S)Lρ2(Γ) (21)

where the remainder is equal to

R(y~L,δy~S)12Dy~S2Q(y~L+θδy~S)(δy~S,δy~S)

for some θ ∈ (0, 1). Since the perturbation approach involves two derivatives, the following assumptions are made to obtain a bounded error estimate:

Assumption 7 Assume that fH2(G). Furthermore, assume that F:UD(ω) is also 2-smooth almost surely.

From this assumption the following lemma can be proven.

Lemma 6

H2(D(ω)) and H2(U) are isomorphic almost surely.

Proof See Theorem 3.35 in [1]. □

Remark 10 Note that the previous Sobolev norm equivalence will depend on the parameter ωΩ. The constant depends on the transformation F and the determinant of the Jacobian ∣J∣. See the proof of Theorem 3.35 in [1] for more details.

To estimate the perturbation error the next step is to bound the remainder. To this end the following series of lemmas is useful.

Lemma 7 For all n = 1, … , NS and for all yΓ

supxUσmax(y~nSJ1(y))supxUBS,n(η)2Fmin2.

Proof From Remark 8 we have that

y~nSJ1(y)=F1(y)(y~nSJ(y))J1(y)

and thus

σmax(y~nSF(y))σmax(BS,n(η))Fmin2.

From Assumption 1 the result follows. □

Lemma 8 For all yΓ

supxUy~nSJ(y)supxUFmaxdFmin1BS,n(η)2d

Proof Using Jacobi’s formula we have that for all yΓ

y~nSJ(y)=tr(Adj(J(y))y~nSJ(y))=J(y)i=1dλi(J(y)1BS,n(η)))

Lemma 9 For all n, m = 1, … , NS and for all yΓ

supxUσmax(y~nSy~mSJ1(y))supxU2Fmin3BS,n(η)2BS,m(η)2.

Proof From Remark 8 we have that

y~nSy~mSJ1(y)=J1(y)[(y~mSJ(y))J1(y)(y~nSJ(y))][y~nSy~mSJ(y)+(y~nSF(y))J1(y)(y~mSJ(y))]J1(y)

Applying the triangle and multiplicative inequalities, and following the same approach as in Lemma 7, we obtain the desired result. □

Lemma 10 For n, m = 1, … , NS for all yΓ

supxUy~mSy~nSJ(y)supxUd(d+1)FmaxdFmin2BS,n(η)2BS,m(η)2.

Proof Using Jacobi’s formula we have

y~mSy~nSJ(y)=y~mS(J(y)tr(J(y)1BS,n(η)))=y~mSJ(y)tr(J(y)1BS,n(η))+J(y)tr(y~mSJ(y)1BS,n(η))=J(y)tr(J(y)1BS,m(η))tr(J(y1BS,n(η))J(y)tr(J(y)1BS,m(η)J(y)1BS,n(η)).

The result follows. □

Lemma 11 For all v, wH01(U) and yΓ we have that

U(aF)(,yL,0)(v)Ty~nSG~(y)wvH01(U)wH01(U)amaxB(d,Fmin,Fmax,BS,n).

where G~(y)=J(y)1J(y)TF(y) for all yΓ and

B(d,Fmin,Fmax,BS,n)supxU(d+2)FmaxdFmin3BS,n(η)2.

Proof First we expand the partial derivative of G~(y) with respect to y~nS:

y~nSG~(y)=y~nSJT(y)J1(y)F(y)+JT(y)y~nSF1(y)J(y)+JT(y)J1(y)y~nSJ(y),

From Lemmas 7 and 8 and the triangle inequality we have that

supxU,yΓσmax(ynSG~(y))(d+2)FmaxdFmin3BS,n(η)2.

Lemma 12 For all v, wH01(U) and yΓ we have that

U(aF)(,yL)(w)Ty~nSy~mSG~(y)v

is less or equal to

vH01(U)wH01(U)amax2(d+3)Fmax4FmindBS,n(η)2BS,m(η)2.

Proof From Remark 8 we have that

y~mSy~nSG~(y)=y~mSy~nSJT(y)J1(y)F(y)+y~nSJT(y)y~mSF1(y)J(y)+y~nSJT(y)J1(y)y~mSJ(y)+y~mSJT(y)y~nSF1(y)J(y)+JT(y)y~mSy~nSJ1(y)F(y)+JT(y)y~nSJ1(y)y~mSJ(y)+y~mSJT(y)F1(y)y~nSJ(y)+J1(y)y~mSJ1(y)y~nSF(y)+JT(y)J1(y)y~mSy~nSJ(y).

From Lemmas 7 - 10 and the triangle inequality we have that for all yΓ

y~mSy~nSG~(y)2supxU(7+3d+2Fmin1Fmaxd)Fmin4FmaxdBS,n(η)2BS,m(η)2

The next step is to bound Dy~S(u~F)(η,yL,0)(δyS)L2(U) and Dy~Sφ(yL,0)(δyS)L2(U).

Lemma 13 For all yLΓL and δySΓ we have that:

(a)

Dy~S(u~F)(,yL,0)(δyS)L2(U)n=1NSμS,naminFmindFmax2supxU(((u~F)(,yL,0)H01(U)+ν^H1(U))amax(d+2)FmaxdFmin3BS,n(η)2)+Cd12FmaxdCP(U)fH1(G)+dCP(U)FmaxdFmin(d+1)fL2(G)BS,n(η)2).

where C is a uniformly bounded constant.

(b)

Dy~Sφ(yL,0)(δyS)L2(U)supηUn=1NSμS,nφ(yL,0)H01(U)amax(d+2)BS,n2aminFmin(d+3)Fmax(d+2).

Proof (a) For any vH01(U), for all yLΓL and δySΓ, from Lemma 2

aminFmindFmax2Dy~u~(,yL,0)(δyS)L2(U)vL2(U)n=1NSδy~nSU(u~F)(,yL,0)Ty~nSG(yL,0)v+(y~nS(fF)(,yL,0))J(yL,0)v+(fF)(,yL,0)y~nSJ(yL,0)v(ν^)Ty~nSG(yL,0)v.

With the choice of v=Dy~S(u~F)(,yL,0)(δyS) and from Lemma 11

Dy~S(u~F)(,yL,0)(δyS)L2(U)1aminFmindFmax2vL2(U)()n=1NSδynSU((u~F)(,yL,0))Ty~nSG(yL,0)v+y~nS(fF)(,yL,0)J(yL,0)v+(fF)(,yL,0)y~nSJ(yL,0)v((ν^)Ty~nSG(yL,0)v).

Now,

Uy~nS(fF)(,yL,0)J(yL,0)vFmaxdUy~nS(fF)(,yL,0)vFmaxdCP(U)vL2(U)k=1dFkfy¯nFkL2(U)FmaxdCP(U)vL2(U)bn(η)[L(U)]df1L2(U)(Applying Lemma 1 ii), whereC>0is a constant.)d12CFmaxdCP(U)vL2(U)fH1(G).

Finally, from Lemma 8 the result follows.

(b) Apply Lemma 3 with v=Dy~Sφ(yL,0)(δyS) and Lemma 11. □

Lemma 14 For all yΓ and n, m = 1, … , NS we have that

(a) y~nS(fF)(,y)L2(U)d12Fmin1fH1(G).
(b) y~nSy~mS(fF)(,y)L2(U)FmindfH2(G).

Proof (a) Follow the proof of Lemma 13. (b) By applying the chain rule for Sobolev spaces and Lemma 6 we obtain that for all yΓ

y~nSy~mS(fF)(,y)L2(U)=i=1dj=1dFiFjfymSFiynSFjL2(U)d2i=1dj=1dFiFjfL2(U)Cd2i=1dj=1dFiFjfD(ω)Cd2fH2(G).

for some constant C > 0. This completes the proof. □

From Lemmas 4 - 5 and 7 - 14 we have that for all yLΓL and for all δySΓS

Dy~S2Q(y)(δyS,δyS)n,m=1NSδy~nSδy~mSsupyΓ,xUGn,m,

where

Gn,m(C,CP(U),amax,amin,Fmax,Fmin,d,v^[L(U)]d,ν^H1(U),fH2(G),(u~F)(yL,0)H1(U),φ(yL,0)H1(U),BS,n2,BS,m2,μS,1,,μS,NS)

is a bounded constant that depends on the indicated parameters. We have now proven the following result.

Theorem 3 Let (u~F)(yL,0) be the solution to the bilinear Problem 1 that satisfies Assumptions 1-7. Then for all yLΓL and ySΓS

Q(yL,yS)Q^(yL,yS)Lρ2(Γ)12n,m=1NSμS,nμS,mGn,mG(k=1NSμS,k)2,

where G12maxn,msupyΓ,ηUGn,m.

6.2. Finite element error

The finite element convergence rates for the solution (u~F) and the influence function φ depend directly on the regularity of these functions, the polynomial order of the finite element space Hh(U)H01(U), and the mesh size h. By applying the triangle inequality,

Q^(yL,yS)Q^h(yL,yS)Lρ1(Γ)Q(yL,0)Qh(yL,0)Lρ1(ΓL)+n=1NSμS,nUα~n(x,yL,0)α~i,h(x,yL,0)Lρ2(ΓL).

Following a duality argument we obtain

Q(yL,0)Qh(yL,0)Lρ1(ΓL)amaxFmaxdFmin2CΓL(r)DΓL(r)h2r.

for some constant rN, where CΓL(r) := ∫ΓL C(r, u(yL, 0)) ρ(yL) dyL and DΓL(r) := ∫ΓL C(r, φ(yL, 0)) ρ(yL) dyL.

The constant r is a function of i) the regularity properties of the influence function and the solution (u~F)(·, yL, 0), and ii) the polynomial degree of the finite element basis.
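In practice, the order 2r appearing in the duality estimate can be verified from QoI errors at two mesh sizes via the standard log-ratio formula. The sketch below checks this on synthetic errors manufactured to follow e(h) = C h^{2r}; the constants are hypothetical.

```python
import math

def observed_order(h1, e1, h2, e2):
    """Estimate p in e(h) ~ C h^p from errors at two mesh sizes."""
    return math.log(e1 / e2) / math.log(h1 / h2)

# Synthetic check: errors manufactured to follow e(h) = 3 h^{2r} with r = 2,
# at mesh sizes matching the grids used in Section 8.
r, C = 2, 3.0
h1, h2 = 1.0 / 2048, 1.0 / 4096
p = observed_order(h1, C * h1 ** (2 * r), h2, C * h2 ** (2 * r))
print(p)  # recovers 2r = 4
```

With real data the recovered order saturates once other error sources (perturbation, sparse grid) dominate, as seen in Figure 6(b).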

It follows that

Q^(yL,0)Q^h(yL,0)Lρ2(Γ)S0h2r+hrn=1NSSnμS,n (22)

where S0amaxFmaxdFmin2CΓL(r)DΓL(r) and

Sn(amax,Fmax,Fmin,d,v^[L(U)]d,CΓL(r),DΓL(r),fL2(G)ν^(yL,0)H1(U),BS,n2)

are bounded constants for n = 1, … , NS.

6.3. Sparse grid error

For the sake of simplicity only convergence rates for the isotropic Smolyak sparse grid are shown. This analysis can be extended to the anisotropic case without much difficulty.

Q^h(yL,yS)Swm,gQ^h(yL,yS)Lρ2(Γ)amaxFmaxdFmin2e0Lρ2(ΓL;H01(U))+n=1NSμS,nenLρ2(ΓL),

where e0u^h(,yL,0)Swm,gu^h(,yL,0)] and

enUα~n,h(,yL,0)Swm,g[Uα~n,h(,yL,0)]

for n = 1, … , NS, and

Lρq(ΓL;V){v:ΓL×UVis strongly measurable,ΓvVqρ(y)dy<}.

for any Banach space V defined on U.

It can be shown that e0Lρ2(ΓL;H01(U)) (and enLρ2(ΓL) for n = 1, … , NS) have an algebraic or sub-exponential convergence rate as a function of the number of collocation knots η (see [26,27]). A necessary condition is that the semi-discrete solutions u~0,h(z)u^h(,yL,0) and u~n,h(z)Uα^k,n(,yL,0), n = 1, … , NS, admit an analytic extension in the same region Θβ,NL. This is a reasonable assumption to make.

Consider the polyellipse Eσ1,,σNLΠn=1NLEn,σnCNL where

En,σn{zC;σn>0;σnκn0;Re(z)=eκn+eκn2cos(θ)},{Im(z)=eκneκn2sin(θ),θ[0,2π)},

and let

Σn{znC;yn=y+wn,y[1,1],wnτnβ1δ~}

for n = 1, … , NL. For the sparse grid error estimates to be valid the solution u~h(,yL,0) and Uα~n,h(,yL,0), n = 1, … , NS, have to admit an extension on the polyellipse Eσ1,,σNL. The coefficients σn, for n = 1, … , NL, control the overall decay σ^ of the sparse grid error estimate. Since we restrict our attention to isotropic sparse grids the decay will be dictated by the smallest σn i.e. σ^minn=1,,NLσn.

The next step is to find a suitable embedding of Eσ1,,σNL in Θβ,NL. Thus we need to pick the largest σn n = 1, … , NL such that Eσ1,,σNLΘβ,NL. This is achieved by forming the set Σ := Σ1 × ⋯ × ΣNL and letting σ1=σ2==σNL=σ^=log(τNL2+1+τNL)>0 as shown in Figure 2.

Fig. 2.

Fig. 2

Embedding of En,σ^n in ΣnΘβ,NL.
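The embedding can be sketched numerically. Assuming the standard parametrization of En,σ above (semi-axes cosh σ and sinh σ), the largest ellipse contained in a strip of half-width τ around [−1, 1] has parameter σ = log(τ + √(1 + τ²)) = asinh(τ), since its semi-minor axis is sinh σ; the check below is a sketch under that reading of the display.

```python
import numpy as np

def max_ellipse_param(tau):
    """Largest sigma with E_sigma (semi-axes cosh/sinh sigma) inside |Im z| <= tau."""
    return np.log(tau + np.sqrt(1.0 + tau * tau))  # = asinh(tau)

tau = 0.4                                 # illustrative strip half-width
sigma = max_ellipse_param(tau)
theta = np.linspace(0.0, 2.0 * np.pi, 1000)
# Boundary of E_sigma: x = cosh(sigma) cos(theta), y = sinh(sigma) sin(theta).
imag_part = np.sinh(sigma) * np.sin(theta)
print(sigma, np.abs(imag_part).max())     # max |Im| approaches tau (grid resolution)
```

A larger analyticity strip (slower decay of the deformation gradients) thus directly yields a larger σ̂ and a faster sparse grid rate.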

We now have almost everything we need to state the sparse grid error estimates. However, in [27], to simplify the estimate it is assumed that if vC0(Γ;H01(U)) then the term M(v) (see page 2322) is equal to one. We reintroduce the term M(v), note that it can be bounded by maxzΘβ,NL v(z)H01(U), and update the sparse grid error estimate. To this end let M~max{maxn=1NLmaxzΘNL,βu~n,h(z)H01(U), maxzΘNL,βu~n,h(z)}.

Remark 11 In [6], Corollary 8, a bound for (u~F)(,z)H01(U), zΘβ,NL, can be obtained by applying the Poincaré inequality. Following a similar argument, a bound for φ(,z)H01(U) for all zΘβ,NL can also be obtained. Thus bounds for u~n,h(z)H01(U) for n = 0, … , NL and for all zΘβ,NL can be obtained.

Modifying Theorem 3.11 in [27], it can be shown that for a sufficiently large number of knots η (w > NL/log 2) and a Smolyak sparse grid with nested Clenshaw-Curtis abscissas we obtain the following estimate

e0Lρ2(ΓL;H01(U))Q(σ,δ(σ),NL,M~)ημ3(σ,δ,NL)exp(NLσ21NLημ2(NL)) (23)

and

enLρ2(ΓL)Q(σ,δ(σ),NL,M~)ημ3(σ,δ,NL)exp(NLσ21NLημ2(NL)) (24)

for n = 1, … , NS, where σ=σ^2, δ(σ)(elog(2)1)C~2(σ),

Q(σ,δ(σ),NL,M~)C1(σ,δ(σ),M~)exp(σδ(σ)C~2(σ))max{1,C1(σ,δ(σ),M~)}NL1C1(σ,δ(σ),M~),

μ2(NL)=log(2)NL(1+log(2NL)) and μ3(σ,δ(σ),Ns)=σδ(σ)C~2(σ)1+log(2NL). Furthermore, C(σ)=4e2σ1,

C~2(σ)=1+1log2π2σ,δ(σ)=elog(2)1C~2(σ),C1(σ,δ,M~)=4M~C(σ)a(δ,σ)eδσ,

and

a(δ,σ)exp(δσ{1σlog2(2)+1log(2)2σ+2(1+1log(2)π2σ)}).

7. Complexity analysis

We now perform an accuracy versus total work analysis. The objective is to derive the total work W as a function of a tolerance TOL > 0, such that var[Q(yL,yS)]var[Swm,g[Q^h(yL,yS)]]TOL and E[Q(yL,yS)]E[Swm,g[Qh(yL,yS)]]TOL. We restrict our attention to the isotropic sparse grid with Clenshaw-Curtis abscissas. It is assumed that each realization of the semi-discrete approximation uh requires O(Nhq) work to compute, where Nh is the cardinality of the finite element space Hh(U)H01(U), and the constant q is a function of the regularity of uh and the efficiency of the solver. The cost for solving the approximation of the influence function φhHh(U) is also assumed to be O(Nhq). Thus for any yLΓL, the cost for computing Qh(yL, 0) := B(yL, 0; uh(yL, 0),φh(yL, 0)) is bounded by O(Nhd2+Nhq). Similarly, for any yLΓL the cost for evaluating Uα~n,h(,yL,0) is O(Nhd2+Nhq).

Remark 12 To compute the expectation integrals for the mean and variance correction a Gauss quadrature scheme can be used coupled with an auxiliary probability distribution ρ^(y) such that

ρ^(y)=Πn=1Nρn(yn)andρρ^<C<.

for some C > 0 (see [6] for details). However, for the sake of simplifying the analysis it is assumed that the quadrature is exact and of cost O(1).

Let η0(NL, m, g, w, Θβ,NL) be the number of sparse grid knots used for constructing Swm,g[Qh(yL,0)] and ηn(NL, m, g, w, Θβ,NL) those used for constructing Swm,g[α~n,h(,yL,0)], for n = 1, … , NS. The cost for computing E[Swm,g[Qh(yL,0)]] is O((Nhd2+Nhq)η0) and the cost for computing n=1NSμS,nE[ynSSwm,g[Uα~n(,yL,0)]] is bounded by O((Nhd2+Nhq)NSη), where

ηmaxn=0,,NSηn.

The total cost for computing the mean correction is bounded by

WTotalmean(TOL)=O((Nh(TOL)d2+Nhq(TOL))NS(TOL)η(TOL)). (25)

Following a similar argument the cost for computing the variance correction is bounded by

WTotalvar(TOL)=O((Nh(TOL)d2+Nhq(TOL))NS2(TOL)η(TOL)). (26)

We now obtain the estimates for Nh(TOL), NS(TOL) and η(TOL) for the Perturbation, Finite Element and Sparse Grids respectively:

  1. Perturbation: From the perturbation estimate derived in Section 6.1 we seek Q(yL,yS)Q^(yL,yS)Lρ2(Γ)TOL3CP with respect to the decay of the coefficients μS,n, n = 1, … NS. First, make the assumption that BTn=1NSμS,nCDNLl for some uniformly bounded CD > 0 and l > 0. It follows that Q(yL,yS)Q^(yL,yS)Lρ2(Γ)TOL3CP if
    BT2GCD2NL2lGTOL3CP.
    Finally, we have that
    NS(TOL)(TOL3CPCD2G)1(2l).
  2. Finite Element: From Section 6.2 if
    S0h2r+BTT0hrTOL3CPFE,
    T0maxn=1NSSn, then Q^(yL,0)Q^h(yL,0)Lρ2(Γ;H01(U))TOL3CPFE. Solving the quadratic inequality we obtain that
    h(TOL)(BTT02S0+((BTT04S0)2+4TOL12S0CPFE)12)1r
    Assuming that Nh grows as O(hd) then
    Nh(TOL)D3(BTT02S0+((BTT04S0)2+4TOL12S0CFE)12)dr

    for some constant D3 > 0.

  3. Sparse Grid: We seek Q^h(yL,0)Swm,gQ^h(yL,0)Lρ2(Γ)TOL3CPSG. This is satisfied if e0Lρ2(ΓL;H01(U))TOL6amaxFmaxdFmin2CPSG and
    enLρ2(ΓL)TOL6BTCPSG
    for n = 1, … , NS. Now, following a similar approach as in [27] let δ=(elog(2)1)C~2(σ). Thus Q^h(y)Swm,gQ^h(y)Lρ2(Γ)TOL3CPSG if
    η0(TOL)(6amaxFmaxdFmin2CPSGCSFNLexp(σ(β))TOL)1+log(2NL)σ
    for a sufficiently large NL, where CFC1(σ,δ,M~)1C1(σ,δ,M~), and Fmax{1,C1(σ,δ,M~)}. Similarly, for a sufficiently large NL we have that
    ηn(TOL)C(6BTCPSGCFFNLexp(σ(β))TOL)1+log(2NL)σ

    for n = 1, … , NS.

Combining estimates 1, 2 and 3 into equations (25) and (26) we obtain the total work WTotalmean(TOL) and WTotalvar(TOL) as a function of a given user error tolerance TOL.
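The tolerance-splitting logic of items 1-3 can be sketched as follows. All multiplicative constants are set to one and the sparse grid rate is simplified to an algebraic one, so the numbers are purely illustrative of how NS, Nh and η feed into the work model (26).

```python
# Illustrative work model for the variance correction, following eq. (26),
# with all multiplicative constants set to 1 (hypothetical rates l, r, q, c).
def budget(TOL, l=3.0, r=2.0, d=2, q=1.5, c=1.0):
    NS  = TOL ** (-1.0 / (2.0 * l))   # perturbation: tail sum of mu_{S,n} squared <= TOL
    h   = TOL ** (1.0 / (2.0 * r))    # finite element: h^{2r} <= TOL
    Nh  = h ** (-d)                   # mesh degrees of freedom, Nh ~ h^{-d}
    eta = TOL ** (-c)                 # sparse-grid knots (simplified algebraic rate)
    work = (Nh * d ** 2 + Nh ** q) * NS ** 2 * eta
    return NS, Nh, eta, work

for TOL in (1e-1, 1e-2, 1e-3):
    NS, Nh, eta, work = budget(TOL)
    print(f"TOL={TOL:.0e}  NS={NS:6.1f}  Nh={Nh:9.1f}  eta={eta:8.1f}  work={work:.3e}")
```

The quadratic factor NS² reflects the variance correction cost; for independent small and large variations the mean correction replaces it with NS, as stated in the abstract.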

8. Numerical results

In this section the hybrid collocation-perturbation method is tested on an elliptic PDE with a stochastic deformation of the unit square domain i.e. U = (0, 1) × (0, 1). The deformation map F:UD(ω) is given by

F(η1,η2,ω)=(η1,(η20.5)(e(η1,ω))+0.5)ifη2>0.5F(η1,η2,ω)=(η1,η2)if0η20.5.

According to this map only the upper half of the square is deformed while the lower half is left unchanged. A cartoon example of the deformation of the unit square U is shown in Figure 3.

Fig. 3.

Fig. 3

Stochastic deformation of unit square U according to the rule given by F:UD(ω). The region U~ is not deformed and given by (0, 1) × (0, 0.5).

The Dirichlet boundary conditions are set according to the following rule:

u(η1,η2,ω)D(ω)={ϑ(η1)upper border0otherwise}

where ϑ(η1) ≔ exp(−1∕(1 − 4(η1 − 0.5)2)). Note that the boundary condition on the upper border does not change even after the stochastic perturbation.

For the stochastic model e(η1, ω) we use a variant of the Karhunen-Loève expansion of an exponentially oscillating kernel encountered in optical problems [24]. This model is given by

eL(ω,η1)1+cY1(ω)(πL2)12+cn=2NLμnφn(η1)Yn(ω);eS(ω,η1)cn=1NSμn+NLφn(η1)Yn(ω)

with decay μn(πL)12nk, nN, kR+ and

φn(η1)sin(nπη12Lp)cos(nπη12Lp)+cosh(η1)+sinh(η1)n.

It is assumed that {Yn}n=1N are independent and uniformly distributed in (−√3, √3); thus E[Yn] = 0 and E[YnYm] = δ[n − m] for n, m = 1, … , N, where δ[·] is the Kronecker delta function.

It can be shown that for n > 1 we have that

Bn=[00c(η20.5)η1φn(η1)0].

Thus for all l ∈ N we have that supxU σmax(Bl(η1)) < C for some constant C, and for k = 1 we obtain linear decay of the gradient of the deformation. In Figure 4(a) a mesh example of the reference domain is shown with Dirichlet boundary conditions. In Figure 4(b) and (c) two realizations D(ω) of the reference domain U under the deformation model F(η1, η2, ω) are also shown. These realizations correspond to the 15-dimensional example (N = 15) with k = 3, c = 1/15 and L = 1/2.
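A realization of eL + eS can be sampled directly from this model. In the sketch below, the eigenfunction φn and the square-root prefactors follow one plausible reading of the (garbled) displays above and should be checked against the original formulas; the dimension split NL = 4, NS = 11 is an arbitrary choice.

```python
import numpy as np

# Sketch of sampling e_L + e_S.  phi_n and the (pi L)-type prefactors are
# one reading of the displays above (assumption), not a verified transcription.
L, Lp, c, k = 0.5, 1.0, 1.0 / 15.0, 3.0
NL, NS = 4, 11                          # split of the N = 15 dimensions

def phi(n, eta):                        # assumed eigenfunction (see caveat above)
    s = n * np.pi * eta / (2.0 * Lp)
    return np.sin(s) * np.cos(s) + (np.cosh(eta) + np.sinh(eta)) / n

def mu(n):                              # decay mu_n ~ n^{-k}
    return np.sqrt(np.pi * L) * float(n) ** (-k)

def sample_e(eta, rng):
    # Y_n ~ U(-sqrt(3), sqrt(3)): mean 0, unit variance.
    Y = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), NL + NS)
    eL = 1.0 + c * Y[0] * np.sqrt(np.pi * L / 2.0) \
         + c * sum(mu(n) * phi(n, eta) * Y[n - 1] for n in range(2, NL + 1))
    eS = c * sum(mu(n + NL) * phi(n, eta) * Y[n + NL - 1] for n in range(1, NS + 1))
    return eL + eS

rng = np.random.default_rng(0)
eta = np.linspace(0.0, 1.0, 5)
print(sample_e(eta, rng))               # one realization of the deformation profile
```

By construction the field has mean one, so realizations fluctuate around the identity deformation with amplitude controlled by c and the decay rate k.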

Fig. 4.

Fig. 4

Random deformation of a reference square domain U. (a) U reference domain with Dirichlet boundary conditions. (b) Realization of the deformed reference square U. (c) Second realization of the deformed reference square U.

The QoI is defined on the bottom half of the reference domain (U~), which is not deformed, as

Q(u^)(0,1)(0,12)ϑ(η1)ϑ(2η2)u^(η1,η2,ω)dη1dη2.
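As a sketch, this QoI functional can be evaluated with tensor-product trapezoid quadrature on the lower half of the reference domain. The stand-in for u^ below is hypothetical, and the bump weight ϑ follows one reading of its (garbled) definition above.

```python
import numpy as np

def theta(s):
    # Assumed bump weight theta(s) = exp(-1/(1 - 4(s - 0.5)^2)),
    # vanishing at s = 0 and s = 1 (one reading of the definition above).
    s = np.asarray(s, dtype=float)
    d = 1.0 - 4.0 * (s - 0.5) ** 2
    out = np.zeros_like(s)
    mask = d > 1e-12
    out[mask] = np.exp(-1.0 / d[mask])
    return out

# Tensor-product trapezoid quadrature over (0,1) x (0,1/2).
n = 201
e1 = np.linspace(0.0, 1.0, n)
e2 = np.linspace(0.0, 0.5, n)
w1 = np.full(n, e1[1] - e1[0]); w1[[0, -1]] *= 0.5    # trapezoid weights
w2 = np.full(n, e2[1] - e2[0]); w2[[0, -1]] *= 0.5
E1, E2 = np.meshgrid(e1, e2, indexing="ij")
u_hat = np.sin(np.pi * E1) * E2                        # hypothetical stand-in solution
Q = w1 @ (theta(E1) * theta(2.0 * E2) * u_hat) @ w2
print(Q)  # a small positive number
```

Because the weights vanish on the boundary of the integration region, the QoI is insensitive to the deformed upper half, which is why it can be computed on the fixed reference domain (cf. Remark 13).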

In addition, we have the following:

  1. a(x) = 1 for all xU, L = 1/2, LP = 1, N = 15.

  2. The domain is discretized with a 2049 × 2049 or a 4097 × 4097 triangular mesh.

  3. E[Qh], E[Qh2], and n=1NSμS,n E[Uα~n,h]2 are computed with the Clenshaw-Curtis isotropic sparse grid from the Sparse Grids Matlab Kit [31,2].

  4. The reference solutions var[Qh(uref)] and E[Qh(uref)] for N = 15 dimensions are computed with a dimension adaptive sparse grid from the Sparse Grid Toolbox V5.1 [15,23,22]. The choice of abscissas is set to Chebyshev-Gauss-Lobatto.

  5. The QoI is normalized by dividing by Q(U), i.e. the QoI of the solution u^ on the reference domain U.

  6. The reference computed mean value is 1.054 and variance is 0.1122 (0.3349 std) for c = 1/15 and cubic decay (k = 3). This shows a significant aleatory deformation of the solution with respect to the random domain D(ω).

Remark 13 The correction variance term is computed on the fixed reference domain U, as described by Problem 1, instead of the perturbed domain. The pure collocation approach (without the variance correction) and the reference solution are also computed on U. Numerical experiments confirm that computing the pure collocation approach on U, as described by Problem 1, or on the perturbed domain D(ω) leads to the same answer up to the finite element error. This is consistent with the theory.

For the first numerical example we assume cubic decay of the deformation, i.e. the gradient terms μnsupxUBn(x) decay as n−3. The domain is discretized with a 2049 × 2049 triangular mesh. The reference solution is computed with 30,000 knots (dimension adaptive sparse grid). In Figure 5(a) we show the results for the hybrid collocation-perturbation method for c = 1/15, k = 3 (cubic decay) and NL = 2, 3, 4 dimensions, and compare them to the reference solution. For the collocation method the level of accuracy is set up to w = 5. For the variance correction we increase the level until w = 3 is reached, since there is no benefit to increasing w further as the sparse grid error is smaller than the perturbation error. The observed computational cost of the variance correction is about 10% of that of the collocation method.

Fig. 5.

Fig. 5

Hybrid Collocation-Perturbation results with k = 3 (cubic decay) and c = 1/15. (a) Variance error for the hybrid collocation-perturbation method as a function of the number of collocation samples with an isotropic sparse grid and Clenshaw-Curtis abscissas. The maximum level is set to w = 3. (b) Comparison between the pure collocation (Col) and the hybrid collocation-perturbation (Pert) approaches. As we observe, the error decays significantly with the addition of the variance correction. However, the graphs saturate once the perturbation/truncation error is reached. Note that the number of knots of the sparse grid is computed up to w = 5 for the pure collocation method. For the variance correction the sparse grid level is set to w = 3, since at this point the error is smaller than the perturbation error and there is no benefit to increasing w. The sparse grid knots needed for the variance correction are almost negligible compared to the pure collocation approach.

In Figure 5(b) we compare the results between the pure collocation [6] and hybrid collocation-perturbation methods. Notice that the hybrid collocation-perturbation method shows a marked improvement in accuracy over the pure collocation approach.

In Figure 6 the variance error decay plots for k = 3 (cubic decay) with (a) c = 1/15 and (b) c = 1/120 are shown for the collocation (dashed line) and hybrid methods (solid line). The reference solutions are computed with a dimension adaptive sparse grid with 20,000 knots for (a) and (b). The mesh size is set to 4097 × 4097 for (a) and 2049 × 2049 for (b). The collocation and hybrid estimates are computed with an isotropic sparse grid with Clenshaw-Curtis abscissas.

Fig. 6.

Fig. 6

Variance error comparison of the pure collocation and hybrid collocation-perturbation methods as a function of the number of dimensions and different decay rates. (a) Variance error for the pure collocation (dashed line) and hybrid collocation-perturbation (solid line) methods for c = 1/15 and k = 3. (b) Variance error ratio between the collocation and hybrid methods for c = 1/120 (i.e. small perturbation) and k = 3. Note that the finite element error is reached at NS = 2, which saturates the overall accuracy.

It is observed that the error for the hybrid collocation-perturbation method decays faster than for the pure collocation method. Moreover, as the number of dimensions is increased the accuracy gain of the perturbation method accelerates significantly (cf. Figure 6(a)): the accuracy improves from around NL−8 to about NL−16. Note that the computational costs of the hybrid and collocation methods are roughly equal. The hybrid method thus achieves a significant reduction in computational cost for the same accuracy, making it suitable for large dimensional problems.

In Figure 6(b) the perturbation of the geometry is significantly reduced (c = 1/120). Due to the small perturbation, the perturbation approximation is significantly more accurate and the error of the variance decreases substantially for NS = 2. This is expected since perturbation methods work well under small variations of the geometry. Notice that for NS = 2, 3, … the accuracy of the hybrid method appears not to improve; however, the limiting factor at this point is the finite element error.

9. Conclusions

In this paper we propose a new hybrid collocation-perturbation scheme for computing the statistics of the QoI with respect to random domain deformations that are split into large and small deviations. The large deviations are approximated with a stochastic collocation scheme, whereas the small-deviation components of the QoI are approximated with a perturbation approach. A rigorous convergence analysis of the hybrid approach is developed.

It is shown that for a linear elliptic partial differential equation with a random domain the variance correction term can be analytically extended to a well-defined region Θβ,NL embedded in CNL with respect to the random variables. This analysis leads to a provable sub-exponential convergence rate of the QoI computed with an isotropic Clenshaw-Curtis sparse grid. The size of the region Θβ,NL, and therefore the rate of convergence of an isotropic sparse grid, is a function of the gradient decay of the random deformation.

Error estimates and numerical experiments show that the exponent of the error decay with respect to the number of dimensions is effectively doubled. This shows a marked reduction in the effective dimensionality of the problem. Moreover, in practice, the variance correction term can be computed at a fraction of the cost of the low-dimensional large-variation component.

The hybrid approach is essentially a dimensionality reduction technique. We demonstrate both theoretically and numerically that the variance error with respect to the collocation dimensions NL decays quadratically faster than for the pure stochastic collocation approach. Moreover, the hybrid method is compatible with other stochastic collocation approaches such as anisotropic sparse grids [26]. This makes the method well suited for a large number of stochastic variables.

Acknowledgments

This material is based upon work supported by the National Science Foundation under Grant No. 1736392. Research reported in this technical report was supported in part by the National Institute of General Medical Sciences (NIGMS) of the National Institutes of Health under award number 1R01GM131409-01.

Footnotes

Publisher's Disclaimer: This Author Accepted Manuscript is a PDF file of an unedited peer-reviewed manuscript that has been accepted for publication but has not been copyedited or corrected. The official version of record published in the journal is kept up to date and may therefore differ from this version.

Contributor Information

Julio E. Castrillón-Candás, Boston University, Department of Mathematics and Statistics, 111 Cummington Mall, Boston, MA 02215

Fabio Nobile, École Polytechnique Fédérale de Lausanne, Station 8, CH-1015 Lausanne, Switzerland.

Raúl F. Tempone, Applied Mathematics and Computational Science, 4700 King Abdullah University of Science and Technology, Thuwal, 23955-6900, Saudi Arabia

References

1. Adams RA: Sobolev Spaces. Academic Press (1975)
2. Bäck J, Nobile F, Tamellini L, Tempone R: Stochastic spectral Galerkin and collocation methods for PDEs with random coefficients: A numerical comparison. In: Hesthaven JS, Rønquist EM (eds.) Spectral and High Order Methods for Partial Differential Equations, Lecture Notes in Computational Science and Engineering, vol. 76, pp. 43–62. Springer, Berlin Heidelberg (2011)
3. Barthelmann V, Novak E, Ritter K: High dimensional polynomial interpolation on sparse grids. Advances in Computational Mathematics 12, 273–288 (2000)
4. Billingsley P: Probability and Measure, third edn. John Wiley and Sons (1995)
5. Brookes M: The matrix reference manual (2017). URL http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/calculus.html
6. Castrillón-Candás J, Nobile F, Tempone R: Analytic regularity and collocation approximation for PDEs with random domain deformations. Computers and Mathematics with Applications 71(6), 1173–1197 (2016)
7. Castrillón-Candás J, Xu J: A stochastic collocation approach for parabolic PDEs with random domain deformations. ArXiv e-prints (2019)
8. Chauviere C, Hesthaven J, Lurati L: Computational modeling of uncertainty in time-domain electromagnetics. SIAM J. Sci. Comput. 28, 751–775 (2006)
9. Chernov A, Schwab C: First order k-th moment finite element analysis of nonlinear operator equations with stochastic data. Math. Comput. 82(284), 1859–1888 (2013)
10. Chkifa A, Cohen A, Schwab C: High-dimensional adaptive sparse polynomial interpolation and applications to parametric PDEs. Foundations of Computational Mathematics 14(4), 601–633 (2014). DOI 10.1007/s10208-013-9154-z
11. Dambrine M, Greff I, Harbrecht H, Puig B: Numerical solution of the Poisson equation on domains with a thin layer of random thickness. SIAM Journal on Numerical Analysis 54(2), 921–941 (2016)
12. Dambrine M, Greff I, Harbrecht H, Puig B: Numerical solution of the homogeneous Neumann boundary value problem on domains with a thin layer of random thickness. Journal of Computational Physics 330, 943–959 (2017). DOI 10.1016/j.jcp.2016.10.044
13. Dambrine M, Harbrecht H, Puig B: Computing quantities of interest for random domains with second order shape sensitivity analysis. ESAIM: M2AN 49(5), 1285–1302 (2015)
14. Fransos D: Stochastic numerical methods for wind engineering. Ph.D. thesis, Politecnico di Torino (2008)
15. Gerstner T, Griebel M: Dimension-adaptive tensor-product quadrature. Computing 71(1), 65–87 (2003)
16. Gohberg I: Holomorphic operator functions of one variable and applications: methods from complex analysis in several variables. Operator Theory: Advances and Applications. Birkhäuser, Basel (2009)
17. Guignard D, Nobile F, Picasso M: A posteriori error estimation for the steady Navier-Stokes equations in random domains. Computer Methods in Applied Mechanics and Engineering 313, 483–511 (2017). DOI 10.1016/j.cma.2016.10.008
18. Harbrecht H, Peters M, Siebenmorgen M: Analysis of the domain mapping method for elliptic diffusion problems on random domains. Numerische Mathematik 134(4), 823–856 (2016)
19. Harbrecht H, Schmidlin M: Multilevel quadrature for elliptic problems on random domains by the coupling of FEM and BEM (2018)
20. Harbrecht H, Schneider R, Schwab C: Sparse second moment analysis for elliptic problems in stochastic domains. Numerische Mathematik 109, 385–414 (2008)
21. van den Hout M, Hall A, Wu MY, Zandbergen H, Dekker C, Dekker N: Controlling nanopore size, shape and stability. Nanotechnology 21 (2010)
22. Klimke A: Sparse Grid Interpolation Toolbox – user's guide. Tech. Rep. IANS report 2007/017, University of Stuttgart (2007)
23. Klimke A, Wohlmuth B: Algorithm 847: spinterp: Piecewise multilinear hierarchical sparse grid interpolation in MATLAB. ACM Transactions on Mathematical Software 31(4) (2005)
24. Kober V, Alvarez-Borrego J: Karhunen-Loève expansion of stationary random signals with exponentially oscillating covariance function. Optical Engineering (2003)
25. Nobile F, Tamellini L, Tempone R: Convergence of quasi-optimal sparse-grid approximation of Hilbert-space-valued functions: application to random elliptic PDEs. Numerische Mathematik 134(2), 343–388 (2016). DOI 10.1007/s00211-015-0773-y
26. Nobile F, Tempone R, Webster C: An anisotropic sparse grid stochastic collocation method for partial differential equations with random input data. SIAM Journal on Numerical Analysis 46(5), 2411–2442 (2008)
27. Nobile F, Tempone R, Webster C: A sparse grid stochastic collocation method for partial differential equations with random input data. SIAM Journal on Numerical Analysis 46(5), 2309–2345 (2008)
28. Scarabosio L: Multilevel Monte Carlo on a high-dimensional parameter space for transmission problems with geometric uncertainties. ArXiv e-prints (2017)
29. Smolyak S: Quadrature and interpolation formulas for tensor products of certain classes of functions. Soviet Mathematics, Doklady 4, 240–243 (1963)
30. Stacey A: Smooth map of manifolds and smooth spaces. http://www.texample.net/tikz/examples/smooth-maps/
31. Tamellini L, Nobile F: Sparse grids matlab kit (2009–2015). http://csqi.epfl.ch/page-107231-en.html
32. Tartakovsky D, Xiu D: Stochastic analysis of transport in tubes with rough walls. Journal of Computational Physics 217(1), 248–259 (2006). Uncertainty Quantification in Simulation Science
33. Zhenhai Z, White J: A fast stochastic integral equation solver for modeling the rough surface effect for computer-aided design. In: IEEE/ACM International Conference ICCAD-2005, pp. 675–682 (2005)
