Discrete Mathematics. 2012 Dec 28;312(24):3553–3560. doi: 10.1016/j.disc.2012.08.009

Multivariate linear recurrences and power series division

Herwig Hauser a,b, Christoph Koutschan c
PMCID: PMC3587377  PMID: 23482936

Abstract

Bousquet-Mélou and Petkovšek investigated the generating functions of multivariate linear recurrences with constant coefficients. We will give a reinterpretation of their results by means of division theorems for formal power series, which clarifies the structural background and provides short, conceptual proofs. In addition, extending the division to the context of differential operators, the case of recurrences with polynomial coefficients can be treated in an analogous way.

Keywords: Formal power series, Power series division, Linear recurrence equation, Multivariate sequence, C-finite recurrence, P-finite recurrence, Perfect operator


We study multivariate linear recurrences in $d$ variables which define a $d$-dimensional sequence with values in a field $K$ of characteristic zero. Given such a sequence, we would like to obtain information about the nature of its generating function, a $d$-variate power series. In the simplest case, where $d=1$ and the recurrence has constant coefficients, it is well known that the generating function is always a rational function. But already in two variables, still restricting the coefficients of the recurrence to be constants, the generating function can be rational, algebraic, D-finite, or even non-D-finite, as shown in the remarkable work by Bousquet-Mélou and Petkovšek [3]. We will reinterpret the functional equation for the generating function as a division with remainder. This allows us to formulate the proofs in a uniform and elegant way using division theorems, and thus to shed new light on Bousquet-Mélou and Petkovšek’s results.

In the second part we turn to multivariate linear recurrences with polynomial coefficients (P-finite recurrences). The framework introduced in the first two sections is extended by considering division by a differential operator. We will demonstrate that P-finite recurrences are closely related to the concept of perfect operators, which enables us to state a result about the convergence of the generating function. However, the situation is now much more involved, and we therefore do not expect results as clean as those for recurrences with constant coefficients. Nevertheless, we believe that our approach gives new insight and a better understanding of such multivariate generating functions.

We use bold letters to denote vectors $\mathbf{x}=(x_1,\dots,x_d)$; power products (monomials) are written as $\mathbf{x}^{\mathbf{n}}=x_1^{n_1}\cdots x_d^{n_d}$. The scalar product is denoted by $\mathbf{u}\cdot\mathbf{v}=u_1v_1+\dots+u_dv_d$, and by $|\mathbf{n}|$ we refer to the sum of the entries $n_1+\dots+n_d$. The support $\operatorname{supp}(F(\mathbf{x}))$ of a formal power series $F(\mathbf{x})=\sum_{\mathbf{n}\in\mathbb{N}^d}f_{\mathbf{n}}\mathbf{x}^{\mathbf{n}}\in K[\![\mathbf{x}]\!]$ is the set of all monomials $\mathbf{x}^{\mathbf{n}}$ whose coefficients $f_{\mathbf{n}}$ are nonzero. By $K[\![\mathbf{x}]\!]_{\mathbf{p}}$ we denote the set of all power series with support in $\{\mathbf{x}^{\mathbf{n}}\mid\mathbf{n}\in\mathbb{N}^d\setminus(\mathbf{p}+\mathbb{N}^d)\}$. When we speak of a weight vector, we refer to an element of $\mathbb{R}^d$ with positive components. A weight vector $\mathbf{w}$ with $\mathbb{Q}$-linearly independent components induces a total order $\preceq_{\mathbf{w}}$ on $\mathbb{Z}^d$ as well as on the set of monomials $\mathbf{x}^{\mathbf{n}}$ in $K[\![\mathbf{x}]\!]$: $\mathbf{a}\prec_{\mathbf{w}}\mathbf{b}$ and $\mathbf{x}^{\mathbf{a}}\prec_{\mathbf{w}}\mathbf{x}^{\mathbf{b}}$ if $\mathbf{w}\cdot\mathbf{a}<\mathbf{w}\cdot\mathbf{b}$. The initial monomial $\operatorname{in}_{\mathbf{w}}(F)$ of a power series $F$ with respect to a weight vector $\mathbf{w}$ is defined to be the $\preceq_{\mathbf{w}}$-minimal element of $\operatorname{supp}(F)$.
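For truncated series the initial monomial is easy to compute directly. A minimal sketch (the dict representation of a truncated series and the helper names are ours, not from the paper):

```python
# A truncated power series is represented as {exponent tuple: coefficient}.
def wdot(w, e):
    """Scalar product w . e."""
    return sum(wi * ei for wi, ei in zip(w, e))

def initial_monomial(F, w):
    """in_w(F): the w-minimal exponent in supp(F); ties are broken
    lexicographically (irrelevant when the components of w are
    Q-linearly independent)."""
    return min((e for e, c in F.items() if c != 0),
               key=lambda e: (wdot(w, e), e))

# F = x^4 + 2xy + 7y^3 with weight vector w = (1, 2):
# weights are 4, 3, 6, so the initial monomial is xy.
F = {(4, 0): 1, (1, 1): 2, (0, 3): 7}
print(initial_monomial(F, (1, 2)))   # (1, 1)
```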

1. Recurrences with constant coefficients

We study the sequence $(f_{\mathbf{n}})_{\mathbf{n}\in\mathbb{N}^d}$, which is a $d$-dimensional sequence with values in $K$ and which is defined by the recurrence

$$f_{\mathbf{n}}=\begin{cases}\varphi(\mathbf{n}), & \mathbf{n}\in\mathbb{N}^d\setminus(\mathbf{s}+\mathbb{N}^d),\\ \sum_{\mathbf{t}\in H}c_{\mathbf{t}}\,f_{\mathbf{n}+\mathbf{t}}, & \mathbf{n}\in\mathbf{s}+\mathbb{N}^d,\end{cases}\qquad(1)$$

where $\mathbf{s}\in\mathbb{N}^d$ is the starting point of the recurrence and $H\subset\mathbb{Z}^d$ is a finite set of shifts that occur in the recurrence; we require that $\mathbf{s}+H\subset\mathbb{N}^d$. In other words, the values of the sequence in the shifted positive quadrant $\mathbf{s}+\mathbb{N}^d$ are computed via the recurrence relation, whereas all other values are given as initial conditions specified by the function $\varphi\colon\mathbb{N}^d\setminus(\mathbf{s}+\mathbb{N}^d)\to K$. For this section we restrict the coefficients $c_{\mathbf{t}}$ to be constants in $K$. To ensure that this way of defining a sequence makes sense, we have to impose an additional condition, formulated in the following theorem.

Theorem 1

If there exists a weight vector $\mathbf{w}\in\mathbb{R}^d$ with positive components such that $\mathbf{w}\cdot\mathbf{t}<0$ for all $\mathbf{t}\in H$, then the recurrence (1) has a unique solution.

Proof

The proof can be found in [3]. □

Next we define the apex $\mathbf{p}$ of the recurrence (1) to be the vector $\mathbf{p}=(p_1,\dots,p_d)$ with $p_i=\max\{t_i\mid\mathbf{t}\in H\cup\{\mathbf{0}\}\}$. A small example might serve to illustrate the above definitions.

Example 1

Fig. 1 illustrates the 2-dimensional situation for the shifts $H=\{(-3,0),(-2,-1),(0,-2),(1,-1)\}$ with starting point $\mathbf{s}=(3,2)$ and apex $\mathbf{p}=(1,0)$. The area $\mathbf{s}+\mathbb{N}^d$ is shaded; in the L-shaped area outside of it the values of the sequence must be given as initial conditions. The weight vector $\mathbf{w}=(1,2)$ matches the conditions of Theorem 1 since all points $\mathbf{s}+\mathbf{t}$, $\mathbf{t}\in H$, lie below the line that is perpendicular to $\mathbf{w}$ going through the point $\mathbf{s}$.

Fig. 1. Example of a bivariate recurrence.
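A recurrence of type (1) can be evaluated by plain recursion: the condition of Theorem 1 guarantees that every chain of recursive calls leaves the region $\mathbf{s}+\mathbb{N}^d$ after finitely many steps. A sketch for a bivariate instance in the spirit of Example 1; the shift signs, the coefficients $c_{\mathbf{t}}=1$, and the initial values $\varphi\equiv 1$ are hypothetical choices made only for this illustration:

```python
from functools import lru_cache

S = (3, 2)                                   # starting point s
H = [(-3, 0), (-2, -1), (0, -2), (1, -1)]    # shifts; w.t < 0 for w = (1, 2)

@lru_cache(maxsize=None)
def f(n1, n2):
    if n1 < S[0] or n2 < S[1]:               # outside s + N^2: initial condition
        return 1                             # phi(n) = 1 (hypothetical)
    return sum(f(n1 + t1, n2 + t2) for t1, t2 in H)

# f(s) sums four initial values, each equal to 1:
print(f(3, 2))   # 4
```

The recursion terminates because each shift strictly decreases the weight $\mathbf{w}\cdot\mathbf{n}$, so eventually every argument falls into the initial-condition region.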

The objective now is to determine properties of the generating function $F(\mathbf{x})=\sum_{\mathbf{n}\in\mathbb{N}^d}f_{\mathbf{n}}\mathbf{x}^{\mathbf{n}}$ in terms of the given initial conditions and the recurrence relation. We will concentrate on the “most interesting part” of the generating function, namely on $F_{\mathbf{s}}(\mathbf{x})=\sum_{\mathbf{n}\in\mathbf{s}+\mathbb{N}^d}f_{\mathbf{n}}\mathbf{x}^{\mathbf{n}-\mathbf{s}}$. We can easily relate these two functions by $F(\mathbf{x})=\mathbf{x}^{\mathbf{s}}F_{\mathbf{s}}(\mathbf{x})+\sum_{\mathbf{n}\in\mathbb{N}^d\setminus(\mathbf{s}+\mathbb{N}^d)}\varphi(\mathbf{n})\mathbf{x}^{\mathbf{n}}$. A functional equation for $F_{\mathbf{s}}(\mathbf{x})$ can be deduced in a rather straightforward manner (for details see [3]):

$$Q(\mathbf{x})\,F_{\mathbf{s}}(\mathbf{x})=K(\mathbf{x})-U(\mathbf{x}),\qquad(2)$$

where

$$Q(\mathbf{x})=\mathbf{x}^{\mathbf{p}}-\sum_{\mathbf{t}\in H}c_{\mathbf{t}}\,\mathbf{x}^{\mathbf{p}-\mathbf{t}},$$
$$K(\mathbf{x})=\sum_{\mathbf{t}\in H}\ \sum_{\mathbf{n}\in(\mathbf{s}+\mathbf{t}+\mathbb{N}^d)\setminus(\mathbf{s}+\mathbb{N}^d)}c_{\mathbf{t}}\,\varphi(\mathbf{n})\,\mathbf{x}^{\mathbf{n}-\mathbf{s}+\mathbf{p}-\mathbf{t}},$$
$$U(\mathbf{x})=\sum_{\mathbf{t}\in H}\ \sum_{\mathbf{n}\in(\mathbf{s}+\mathbb{N}^d)\setminus(\mathbf{s}+\mathbf{t}+\mathbb{N}^d)}c_{\mathbf{t}}\,f_{\mathbf{n}}\,\mathbf{x}^{\mathbf{n}-\mathbf{s}+\mathbf{p}-\mathbf{t}}.$$

Having a closer look at these quantities we observe:

  • $Q(\mathbf{x})$ is a polynomial that is given by the recurrence relation (the characteristic polynomial of the recurrence).

  • $K(\mathbf{x})$ is known since it contains only coefficients which are given by the initial value function $\varphi(\mathbf{n})$. Note that $K(\mathbf{x})$ is in fact a formal power series, i.e., no negative exponents occur: the exponents of $K(\mathbf{x})$ have the form $\mathbf{n}-\mathbf{s}+\mathbf{p}-\mathbf{t}$ with $\mathbf{n}\in(\mathbf{s}+\mathbf{t}+\mathbb{N}^d)\setminus(\mathbf{s}+\mathbb{N}^d)$, hence $\mathbf{n}-\mathbf{t}-\mathbf{s}\in\mathbb{N}^d$. Recall that $\mathbf{p}$ has only nonnegative components.

  • $U(\mathbf{x})$ is also a formal power series but is unknown. Its exponents have the form $(\mathbf{n}-\mathbf{s})+(\mathbf{p}-\mathbf{t})$ with $\mathbf{n}\in(\mathbf{s}+\mathbb{N}^d)\setminus(\mathbf{s}+\mathbf{t}+\mathbb{N}^d)$. Hence $\mathbf{n}-\mathbf{s}\in\mathbb{N}^d$, and also $\mathbf{p}-\mathbf{t}$ has only nonnegative entries. Thus no negative exponents occur. Furthermore, from $\mathbf{n}\notin\mathbf{s}+\mathbf{t}+\mathbb{N}^d$ we can conclude that $\mathbf{n}-\mathbf{t}-\mathbf{s}\notin\mathbb{N}^d$, and therefore $U(\mathbf{x})\in K[\![\mathbf{x}]\!]_{\mathbf{p}}$.

Eq. (2) involves two unknown series, namely $F_{\mathbf{s}}(\mathbf{x})$ and $U(\mathbf{x})$, and two given ones, $Q(\mathbf{x})$ and $K(\mathbf{x})$. It is now immediate to write (2) in a slightly different way:

$$K(\mathbf{x})=Q(\mathbf{x})\,F_{\mathbf{s}}(\mathbf{x})+U(\mathbf{x}).\qquad(3)$$

This is nothing else but a Euclidean division of power series with remainder: the formal power series $K(\mathbf{x})$ is divided by the polynomial $Q(\mathbf{x})$, yielding the quotient $F_{\mathbf{s}}(\mathbf{x})$ and the remainder $U(\mathbf{x})$. Since we are dealing with multivariate power series, we have to fix a monomial order. We choose the one induced by the weight vector $\mathbf{w}$ from Theorem 1 in order to make $\mathbf{x}^{\mathbf{p}}$ the initial monomial of $Q(\mathbf{x})$: from $\mathbf{w}\cdot\mathbf{t}<0$ it follows that $\mathbf{x}^{\mathbf{p}}\prec_{\mathbf{w}}\mathbf{x}^{\mathbf{p}-\mathbf{t}}$ for all $\mathbf{t}\in H$, and therefore $\mathbf{x}^{\mathbf{p}}$ is $\preceq_{\mathbf{w}}$-minimal in $\operatorname{supp}(Q)$. Note that the division works as soon as the initial monomial is fixed, no matter whether $\mathbf{w}$ has $\mathbb{Q}$-linearly independent entries or not. We have observed that $U(\mathbf{x})\in K[\![\mathbf{x}]\!]_{\mathbf{p}}$, and this matches exactly the support condition that is imposed on the remainder of the Euclidean division. In the next section we will demonstrate in detail how the power series division works and how it applies to reproduce the results of Bousquet-Mélou and Petkovšek.

2. Division of formal power series

The division (3) can be carried out explicitly by generalizing the usual Euclidean division with remainder in $K[\mathbf{x}]$ to the ring of multivariate power series $K[\![\mathbf{x}]\!]$ (Weierstraß division). We interpret the division by a power series as a perturbation of the division by its initial monomial. Let us have a short look at a special case:

Example 2

The division of a power series $P(\mathbf{x})$ by a monomial $\mathbf{x}^{\mathbf{n}}$, $\mathbf{n}\in\mathbb{N}^d$, is equivalent to the direct sum decomposition $K[\![\mathbf{x}]\!]=\mathbf{x}^{\mathbf{n}}K[\![\mathbf{x}]\!]\oplus K[\![\mathbf{x}]\!]_{\mathbf{n}}$ (viewed as vector spaces). For the division we get $P(\mathbf{x})=\mathbf{x}^{\mathbf{n}}F(\mathbf{x})+R(\mathbf{x})$, where the remainder $R(\mathbf{x})$ has to fulfill the condition $R(\mathbf{x})\in K[\![\mathbf{x}]\!]_{\mathbf{n}}$. Note that $K[\![\mathbf{x}]\!]_{\mathbf{n}}$ is isomorphic to $K[\![\mathbf{x}]\!]/\mathbf{x}^{\mathbf{n}}K[\![\mathbf{x}]\!]$, again when viewed as vector spaces.

In a straightforward manner this example can be extended to the division by a power series $A(\mathbf{x})\in K[\![\mathbf{x}]\!]$ with initial monomial $\mathbf{x}^{\mathbf{n}}$ (w.r.t. some monomial order $\preceq_{\mathbf{w}}$ on $\mathbb{N}^d$), and one gets $K[\![\mathbf{x}]\!]=A(\mathbf{x})K[\![\mathbf{x}]\!]\oplus K[\![\mathbf{x}]\!]_{\mathbf{n}}$. This can be seen as follows: we consider the map $u\colon K[\![\mathbf{x}]\!]\times K[\![\mathbf{x}]\!]_{\mathbf{n}}\to K[\![\mathbf{x}]\!]$, $(B,C)\mapsto AB+C$, and show that it is a $K$-linear isomorphism. We split this map into $u=v+w$ such that $v(B,C)=\mathbf{x}^{\mathbf{n}}B+C$ and $w(B,C)=(A-\mathbf{x}^{\mathbf{n}})B$. Clearly $v$ is a $K$-linear isomorphism due to the definition of $K[\![\mathbf{x}]\!]_{\mathbf{n}}$ (see Example 2). Thus $u$ is an isomorphism if and only if $v^{-1}u=\operatorname{Id}+v^{-1}w$ is an isomorphism. This is the case if and only if the geometric series $\sum_{k=0}^{\infty}(-v^{-1}w)^k$ “converges”, in other words, if the limit $\sum_{k=0}^{\infty}(-v^{-1}w)^k(B,C)$ exists in the formal power series sense. This is indeed the case because the orders of the summands tend to infinity: let $(B_{k+1},C_{k+1})=(v^{-1}w)(B_k,C_k)$ with $B_{k+1}\neq 0$; then the initial monomial of $B_{k+1}$ is strictly $\preceq_{\mathbf{w}}$-larger than the initial monomial of $B_k$. Therefore the initial monomials of the sequence $C_k,C_{k+1},\dots$ grow larger and larger, too.
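This argument is constructive: on truncated series one can carry out the division by repeatedly cancelling the $\preceq_{\mathbf{w}}$-minimal term whose exponent lies in $\mathbf{n}+\mathbb{N}^d$. A sketch under our own conventions (series as dicts of exponent tuples, truncated by a weight bound; not the authors' code):

```python
from fractions import Fraction

def wdot(w, e):
    return sum(wi * ei for wi, ei in zip(w, e))

def divide(K, Q, p, w, bound):
    """Divide the truncated series K by Q (initial exponent p w.r.t. w):
    K = Q*F + U, where no exponent of U lies in p + N^d.  Exponents of
    w-weight > bound are discarded, so the identity holds modulo truncation."""
    qp = Q[p]                                  # coefficient of initial monomial
    work = {e: Fraction(c) for e, c in K.items() if wdot(w, e) <= bound}
    F = {}
    while True:
        cand = [e for e in work
                if all(ei >= pi for ei, pi in zip(e, p))]
        if not cand:
            return F, work                     # leftover terms form U
        e = min(cand, key=lambda t: (wdot(w, t), t))
        c = work[e] / qp
        fe = tuple(ei - pi for ei, pi in zip(e, p))
        F[fe] = F.get(fe, 0) + c
        for qe, qc in Q.items():               # subtract c*x^fe*Q; cancels term at e
            ne = tuple(fi + qi for fi, qi in zip(fe, qe))
            if wdot(w, ne) <= bound:
                work[ne] = work.get(ne, 0) - c * qc
                if work[ne] == 0:
                    del work[ne]

# Univariate demo: divide 1/(1-x) (truncated) by Q = x - x^2.
K = {(n,): 1 for n in range(5)}                # 1 + x + ... + x^4
Q = {(1,): 1, (2,): -1}                        # initial monomial x
F, U = divide(K, Q, (1,), (1,), 4)
# F represents 1 + 2x + 3x^2 + 4x^3 (= 1/(1-x)^2 truncated), U represents 1,
# in accordance with 1/(1-x) = (x - x^2)/(1-x)^2 + 1.
```

Each round strictly increases the weight of the minimal remaining candidate, which mirrors the convergence argument in the proof above.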

In our setting where the division (3) arises from a recurrence, we are in fact not interested in performing the division explicitly, because we can obtain its result (i.e., a power series representation of the generating function) by just applying the recurrence relation. Instead we are interested in deducing properties of the generating function. The first result of this flavor (Theorem 12 in [3]) is straightforward.

Theorem 2

Assume that the recurrence (1) has apex $\mathbf{0}$. The generating function $F_{\mathbf{s}}(\mathbf{x})$ is rational if and only if the initial condition function $K(\mathbf{x})$ is a rational function.

Proof

From the support condition $U(\mathbf{x})\in K[\![\mathbf{x}]\!]_{\mathbf{p}}$ with $\mathbf{p}=\mathbf{0}$ it follows immediately that $U=0$, and Eq. (3) simplifies to $F_{\mathbf{s}}(\mathbf{x})=K(\mathbf{x})/Q(\mathbf{x})$. □

Assume that $K$ is a complete valued field, e.g. $\mathbb{R}$, $\mathbb{C}$, or $\mathbb{Q}_p$. We call a power series over $K$ convergent if it defines an analytic function in a neighborhood of the origin $(0,\dots,0)$, i.e., if it converges for all points of such a neighborhood. The ring of convergent power series is denoted by $K\{\mathbf{x}\}$. The Weierstraß division theorem and its extension by Grauert–Hironaka–Galligo to ideals of convergent power series [13,10,11,4,5] then provide sufficient conditions for the generating function $F_{\mathbf{s}}(\mathbf{x})$ to be convergent. We only formulate the theorem in the case of the division by one series:

Theorem 3

Let $K$ be any complete valued field, and let $A(\mathbf{x})\in K\{\mathbf{x}\}$ be a convergent power series. Let $\mathbf{x}^{\mathbf{n}}$ be the initial monomial of $A(\mathbf{x})$ with respect to some monomial order on $\mathbb{N}^d$. Then

$$K\{\mathbf{x}\}=A(\mathbf{x})K\{\mathbf{x}\}\oplus K\{\mathbf{x}\}_{\mathbf{n}}.$$

Proof

For a power series $F\in K[\![\mathbf{x}]\!]$ and a real number $r>0$, we define $|F(\mathbf{x})|_r=\sum_{\mathbf{n}\in\mathbb{N}^d}|f_{\mathbf{n}}|\,r^{|\mathbf{n}|}$. It is clear that $F(\mathbf{x})\in K\{\mathbf{x}\}$ if and only if there exists an $r$ such that $|F(\mathbf{x})|_r<\infty$. The space $K\{\mathbf{x}\}_r$ of all $F$ with $|F(\mathbf{x})|_r<\infty$ forms a Banach space.

As before we consider the map $u=v+w\colon K\{\mathbf{x}\}\times K\{\mathbf{x}\}_{\mathbf{n}}\to K\{\mathbf{x}\}$. For sufficiently small $r$, this map restricts to the respective Banach spaces $K\{\mathbf{x}\}_r\times(K\{\mathbf{x}\}_{\mathbf{n}})_r$ and $K\{\mathbf{x}\}_r$. Then the convergence of the geometric series $\sum_{k=0}^{\infty}(-v^{-1}w)^k$ follows if we show that the restrictions of $v^{-1}w$ have operator norm $<1$ for sufficiently small $r$. This can be shown in a few lines using the fact that the norm of the initial monomial of a series is the largest one among the monomials of its expansion [11]. □

We conclude that the solution $F_{\mathbf{s}}(\mathbf{x})$ of (3) is a convergent power series if the initial conditions constitute a convergent series $K(\mathbf{x})\in K\{\mathbf{x}\}$. This has been proven in Theorem 7 of [3].

A power series $A(\mathbf{x})\in K[\![\mathbf{x}]\!]$ is called algebraic if there exists a nonzero polynomial $P(\mathbf{x},t)\in K[\mathbf{x}][t]$ such that $P(\mathbf{x},A(\mathbf{x}))=0$, or, more explicitly, if there are polynomials $p_0,\dots,p_m\in K[\mathbf{x}]$, $p_m\neq 0$, such that

$$p_m(\mathbf{x})A(\mathbf{x})^m+\dots+p_1(\mathbf{x})A(\mathbf{x})+p_0(\mathbf{x})=0.$$
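A standard illustration of this definition (our example, not from the paper): the generating function of the Catalan numbers, $C(x)=(1-\sqrt{1-4x})/(2x)$, is algebraic, since $xC(x)^2-C(x)+1=0$. This can be checked with sympy:

```python
import sympy as sp

x = sp.symbols('x')
C = (1 - sp.sqrt(1 - 4*x)) / (2*x)        # Catalan generating function
# p_2*A^2 + p_1*A + p_0 with p_2 = x, p_1 = -1, p_0 = 1:
relation = sp.simplify(x*C**2 - C + 1)
print(relation)                           # 0
```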

Let $K[\![\mathbf{x}]\!]_{\mathrm{alg}}\subset K[\![\mathbf{x}]\!]$ denote the subalgebra of algebraic power series. In order to deal with this case one can employ the Lafon–Hironaka division theorem:

Theorem 4

Let $A(\mathbf{x})\in K[\![\mathbf{x}]\!]_{\mathrm{alg}}$ and let $\mathbf{x}^{\mathbf{n}}$ be the initial monomial of $A(\mathbf{x})$ with respect to some monomial order, where $\mathbf{n}=(0,\dots,0,n_k,0,\dots,0)$. Then

$$K[\![\mathbf{x}]\!]_{\mathrm{alg}}=A(\mathbf{x})K[\![\mathbf{x}]\!]_{\mathrm{alg}}\oplus\bigl(K[\![\mathbf{x}]\!]_{\mathrm{alg}}\bigr)_{\mathbf{n}}.$$

Proof

This is somewhat more delicate to show; see [16,12] for details. The condition on $\mathbf{n}$ cannot be omitted, as an example of Gabber and Kashiwara shows: dividing $xy$ by $xy-x^3y^3+x^2y^2$ with initial monomial $xy$ yields a transcendental remainder series. This example reappears in a different disguise in [3]. □

A constructive version of the algebraic division theorem using polynomial codes of algebraic power series has been developed in [1].

In particular, the theorem implies that in the division (3) the quotient $F_{\mathbf{s}}(\mathbf{x})$ and the remainder $U(\mathbf{x})$ are algebraic, provided that $K(\mathbf{x})$ is algebraic and the initial monomial of $Q(\mathbf{x})$ involves only one variable. Hence, if the apex $\mathbf{p}$ of (1) has exactly one nonzero component and if the initial conditions constitute an algebraic power series $K(\mathbf{x})$, then the generating function $F_{\mathbf{s}}(\mathbf{x})$ is algebraic. This has been proven in Theorem 13 of [3].

Example 3

Recently, the study of Gessel walks has drawn a lot of interest [15,2]. In short, these are walks in $\mathbb{N}^2$ starting at the origin and using only steps from the step set $\{(1,0),(-1,0),(1,1),(-1,-1)\}$. One defines $f(i,j,n)$ to be the number of such walks that end at the point $(i,j)$ after $n$ steps. The step set immediately gives rise to the recurrence

$$f(i,j,n)=f(i+1,j,n-1)+f(i-1,j,n-1)+f(i+1,j+1,n-1)+f(i-1,j-1,n-1).$$

Its characteristic polynomial is $Q(x,y,z)=xy-(yz+x^2yz+z+x^2y^2z)=xy-z(1+y)(1+x^2y)$. Note that the recurrence, as given above, cannot be used to define the sequence $f(i,j,n)$ properly, since the required initial values $f(i,0,n)$ and $f(0,j,n)$ are not known a priori (this issue, however, can easily be avoided by shifting the whole array, i.e., by considering a new sequence $g(i,j,n)$ that equals $f(i-1,j-1,n)$ for $i,j\geq 1$ and that is defined to be $0$ otherwise). On the other hand, this recurrence has apex $(1,1,0)$, and therefore Theorem 4 is not applicable. For these reasons the recurrence is rewritten as follows:

$$f(i,j,n)=f(i-1,j-1,n+1)-f(i,j-1,n)-f(i-2,j-1,n)-f(i-2,j-2,n).$$

Now the set of shifts is $H=\{(-1,-1,1),(0,-1,0),(-2,-1,0),(-2,-2,0)\}$, the apex is $(0,0,1)$, and the characteristic polynomial has changed sign. The starting point is canonically chosen to be $\mathbf{s}=(2,2,0)$, and a weight vector fitting the conditions of Theorem 1 is $(1,1,1)$. An easy calculation yields

$$F_{\mathbf{s}}(x,y,z)=z^2+3xz^3+12z^4+xyz^3+3yz^4+6x^2z^4+4x^2yz^4+\cdots$$
$$K(x,y,z)=z^3-xyz^2+yz^3+3xz^4-2x^2yz^3-8xyz^4-2xy^2z^4+\cdots$$
$$U(x,y,z)=0.$$

If we succeeded in proving that the initial condition function $K(x,y,z)$ is algebraic, we could conclude by Theorem 4 that the full generating function for the Gessel walks is algebraic. After it was shown in [15] that the generating function of $f(0,0,n)$ is holonomic, Bostan and Kauers [2] proved this remarkable algebraicity result.
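The numbers $f(i,j,n)$ can be generated by a direct dynamic program over the step set; this reproduces the first coefficients of $F_{\mathbf{s}}$ listed above (a brute-force check, not the method of [15] or [2]):

```python
STEPS = [(1, 0), (-1, 0), (1, 1), (-1, -1)]

def gessel_counts(nmax):
    """layers[n][(i, j)] = f(i, j, n): Gessel walks from the origin
    to (i, j) in n steps, staying in the quarter plane N^2."""
    layers = [{(0, 0): 1}]
    for _ in range(nmax):
        new = {}
        for (i, j), c in layers[-1].items():
            for di, dj in STEPS:
                ni, nj = i + di, j + dj
                if ni >= 0 and nj >= 0:
                    new[(ni, nj)] = new.get((ni, nj), 0) + c
        layers.append(new)
    return layers

layers = gessel_counts(4)
print(layers[2].get((0, 0)))   # 2: walks returning to the origin in 2 steps
print(layers[2].get((2, 2)))   # 1: the coefficient of z^2 in F_s
print(layers[3].get((3, 2)))   # 3: the coefficient of xz^3
print(layers[4].get((2, 2)))   # 12: the coefficient of z^4
```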

3. Recurrences with polynomial coefficients

We now turn to P-finite recurrences, i.e., recurrences with polynomial coefficients $c_{\mathbf{t}}(\mathbf{n})\in K[\mathbf{n}]$, which can be written in the following form:

$$\begin{cases}f_{\mathbf{n}}=\varphi(\mathbf{n}), & \mathbf{n}\in\mathbb{N}^d\setminus(\mathbf{s}+\mathbb{N}^d),\\ c_{\mathbf{0}}(\mathbf{n})\,f_{\mathbf{n}}=\sum_{\mathbf{t}\in H}c_{\mathbf{t}}(\mathbf{n})\,f_{\mathbf{n}+\mathbf{t}}, & \mathbf{n}\in\mathbf{s}+\mathbb{N}^d.\end{cases}\qquad(4)$$

The existence of a unique solution for P-finite recurrences can be stated in a similar way as in Theorem 1 for constant coefficient recurrences:

Corollary 5

If there exists a weight vector $\mathbf{w}\in\mathbb{R}^d$ with positive components such that $\mathbf{w}\cdot\mathbf{t}<0$ for all $\mathbf{t}\in H$, and if additionally the polynomial $c_{\mathbf{0}}(\mathbf{n})$ has no integer root in $\mathbf{s}+\mathbb{N}^d$, then the P-finite recurrence (4) has a unique solution.

In contrast to Theorem 1, we additionally require that the polynomial $c_{\mathbf{0}}(\mathbf{n})$ does not have integer roots in the region $\mathbf{s}+\mathbb{N}^d$ where the recurrence relation is applied (this condition is trivially fulfilled for nonzero constant coefficients). If $c_{\mathbf{0}}(\mathbf{n})$ does have an integer root there, the whole recursion breaks down. This situation can often be avoided by an adequate choice of the starting point $\mathbf{s}$. In the case $d=1$ this is always possible, whereas for $d>1$ there are instances for which no such $\mathbf{s}$ exists. In the following we will always assume that the recurrence fulfills the conditions of the corollary. It is a well-known fact that a P-finite recurrence translates into a differential equation for the generating function. We briefly recall how this is done.

Let $x^{\underline{k}}$ denote the falling factorial $x(x-1)\cdots(x-k+1)$, where $x^{\underline{0}}$ is defined to be $1$. The falling factorials constitute a basis for the polynomial ring $K[x]$ via the formula $x^n=\sum_k S(n,k)\,x^{\underline{k}}$, where the $S(n,k)$ denote the Stirling numbers of the second kind. For several variables the falling factorial is defined by $\mathbf{x}^{\underline{\mathbf{k}}}=\prod_{i=1}^d x_i^{\underline{k_i}}$, and obviously any multivariate polynomial can also be written in terms of falling factorials $\mathbf{x}^{\underline{\mathbf{k}}}$. We first rewrite the polynomial coefficients $c_{\mathbf{t}}(\mathbf{n})$ using (shifted) falling factorials:

$$c_{\mathbf{t}}(\mathbf{n})=\tilde{c}_{\mathbf{t}}(\mathbf{n}-\mathbf{s}+\mathbf{p})=\sum_{\mathbf{k}\in S_{\mathbf{t}}}c_{\mathbf{t}\mathbf{k}}\,(\mathbf{n}-\mathbf{s}+\mathbf{p})^{\underline{\mathbf{k}}}$$

with certain constants $c_{\mathbf{t}\mathbf{k}}\in K$ and a finite index set $S_{\mathbf{t}}\subset\mathbb{N}^d$.
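The change of basis into falling factorials is a finite linear computation with Stirling numbers. A small univariate sketch (helper names are ours):

```python
def stirling2(n, k):
    """Stirling numbers of the second kind, S(n, k)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def falling(x, k):
    """Falling factorial x(x-1)...(x-k+1)."""
    r = 1
    for i in range(k):
        r *= x - i
    return r

# x^n = sum_k S(n, k) x^(k falling); check n = 3 at the point x = 5:
lhs = 5**3
rhs = sum(stirling2(3, k) * falling(5, k) for k in range(4))
print(lhs == rhs)   # True, since 125 = 1*5 + 3*20 + 1*60
```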

Let $F_{\mathbf{s}}(\mathbf{x})$ again denote the generating function $\sum_{\mathbf{n}\in\mathbf{s}+\mathbb{N}^d}f_{\mathbf{n}}\mathbf{x}^{\mathbf{n}-\mathbf{s}}$ and let $H_0$ denote the set $H\cup\{\mathbf{0}\}$ (where, for $\mathbf{t}\in H$, the coefficient $c_{\mathbf{t}}$ is taken with a minus sign). Then the recurrence (4) rewrites as follows:

$$\begin{aligned}0&=\sum_{\mathbf{n}\in\mathbf{s}+\mathbb{N}^d}\sum_{\mathbf{t}\in H_0}c_{\mathbf{t}}(\mathbf{n})\,f_{\mathbf{n}+\mathbf{t}}\,\mathbf{x}^{\mathbf{n}-\mathbf{s}+\mathbf{p}}\\&=\sum_{\mathbf{t}\in H_0}\ \sum_{\mathbf{n}\in\mathbf{s}+\mathbf{t}+\mathbb{N}^d}c_{\mathbf{t}}(\mathbf{n}-\mathbf{t})\,f_{\mathbf{n}}\,\mathbf{x}^{\mathbf{n}-\mathbf{s}+\mathbf{p}-\mathbf{t}}\\&=\sum_{\mathbf{t}\in H_0}\ \sum_{\mathbf{n}\in\mathbf{s}+\mathbb{N}^d}\ \sum_{\mathbf{k}\in S_{\mathbf{t}}}c_{\mathbf{t}\mathbf{k}}\,(\mathbf{n}-\mathbf{s}+\mathbf{p}-\mathbf{t})^{\underline{\mathbf{k}}}\,f_{\mathbf{n}}\,\mathbf{x}^{\mathbf{n}-\mathbf{s}+\mathbf{p}-\mathbf{t}}-K(\mathbf{x})+U(\mathbf{x})\\&=\sum_{\mathbf{t}\in H_0}\ \sum_{\mathbf{k}\in S_{\mathbf{t}}}\ \sum_{\mathbf{n}\in\mathbf{s}+\mathbb{N}^d}\bigl(c_{\mathbf{t}\mathbf{k}}\,\mathbf{x}^{\mathbf{k}}\partial^{\mathbf{k}}\mathbf{x}^{\mathbf{p}-\mathbf{t}}\bigr)\bigl[f_{\mathbf{n}}\mathbf{x}^{\mathbf{n}-\mathbf{s}}\bigr]-K(\mathbf{x})+U(\mathbf{x})\\&=\sum_{\mathbf{t}\in H_0}\ \sum_{\mathbf{k}\in S_{\mathbf{t}}}\bigl(c_{\mathbf{t}\mathbf{k}}\,\mathbf{x}^{\mathbf{k}}\partial^{\mathbf{k}}\mathbf{x}^{\mathbf{p}-\mathbf{t}}\bigr)\bigl(F_{\mathbf{s}}(\mathbf{x})\bigr)-K(\mathbf{x})+U(\mathbf{x}).\end{aligned}$$

The partial differential operator $D=\sum_{\mathbf{t}\in H_0}\sum_{\mathbf{k}\in S_{\mathbf{t}}}c_{\mathbf{t}\mathbf{k}}\,\mathbf{x}^{\mathbf{k}}\partial^{\mathbf{k}}\mathbf{x}^{\mathbf{p}-\mathbf{t}}$ now plays the rôle of the polynomial $Q$ from before. The power series

$$K(\mathbf{x})=\sum_{\mathbf{t}\in H}\ \sum_{\mathbf{n}\in(\mathbf{s}+\mathbf{t}+\mathbb{N}^d)\setminus(\mathbf{s}+\mathbb{N}^d)}c_{\mathbf{t}}(\mathbf{n}-\mathbf{t})\,\varphi(\mathbf{n})\,\mathbf{x}^{\mathbf{n}-\mathbf{s}+\mathbf{p}-\mathbf{t}}$$

is known since it is determined by the given initial conditions. The series

$$U(\mathbf{x})=\sum_{\mathbf{t}\in H}\ \sum_{\mathbf{n}\in(\mathbf{s}+\mathbb{N}^d)\setminus(\mathbf{s}+\mathbf{t}+\mathbb{N}^d)}c_{\mathbf{t}}(\mathbf{n}-\mathbf{t})\,f_{\mathbf{n}}\,\mathbf{x}^{\mathbf{n}-\mathbf{s}+\mathbf{p}-\mathbf{t}}$$

is unknown and satisfies the support condition $U(\mathbf{x})\in K[\![\mathbf{x}]\!]_{\mathbf{p}}$ as before. Analogously to Eq. (2) we get

$$K(\mathbf{x})=D\bigl(F_{\mathbf{s}}(\mathbf{x})\bigr)+U(\mathbf{x}).\qquad(5)$$

A closer inspection of the differential operator D will reveal an important property: perfectness. Before doing so, we want to review briefly the theory of perfect differential operators and their division (cf. [7]).

4. Perfect differential operators

We consider linear partial differential operators with polynomial coefficients of the form $D=\sum_{\mathbf{a},\mathbf{b}\in\mathbb{N}^d}c_{\mathbf{a}\mathbf{b}}\,\mathbf{x}^{\mathbf{a}}\partial^{\mathbf{b}}\in A_d$, where $A_d$ denotes the $d$-th Weyl algebra, i.e., the noncommutative polynomial algebra in $x_1,\dots,x_d$ and $\partial_1,\dots,\partial_d$ with respect to the commutation rule $\partial_i x_i=x_i\partial_i+1$. Such an operator defines a $K$-linear map $D\colon K[\![\mathbf{x}]\!]\to K[\![\mathbf{x}]\!]$, $A\mapsto D(A)$. The differences $\mathbf{r}=\mathbf{a}-\mathbf{b}\in\mathbb{Z}^d$ with $c_{\mathbf{a}\mathbf{b}}\neq 0$ are called the shifts of $D$. A differential operator is called a monomial operator if all its summands have the same shift $\mathbf{r}$; this is equivalent to saying that the operator maps monomials to monomials. A monomial operator can be represented in the form $(\kappa(\mathbf{n}),\mathbf{r})$, where $\kappa\colon\mathbb{N}^d\to K$ is called the coefficient function. For example, the monomial operator $x^a\partial^a+1$ (in one variable) has the coefficient function $\kappa(n)=n^{\underline{a}}+1$ and the shift $r=0$:

$$(x^a\partial^a+1)(x^n)=n^{\underline{a}}x^n+x^n=\kappa(n)\,x^n.$$
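This identity can be checked mechanically with sympy, e.g. for $a=2$ and $n=5$, where $\kappa(5)=5\cdot 4+1=21$:

```python
import sympy as sp

x = sp.symbols('x')
a, n = 2, 5
# (x^a d^a + 1) applied to x^n:
applied = sp.expand(x**a * sp.diff(x**n, x, a) + x**n)
kappa = n * (n - 1) + 1      # falling factorial 5^(underline 2) + 1 = 21
print(applied)               # 21*x**5
```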

A monomial subspace $M$ is a vector subspace of $K[\![\mathbf{x}]\!]$ for which there is a set $\Sigma\subseteq\mathbb{N}^d$ such that $M$ is formed by all power series with support in $\{\mathbf{x}^{\mathbf{n}}\mid\mathbf{n}\in\Sigma\}$. The canonical monomial direct complement of $M$ is the vector subspace of power series with support in the complement $\{\mathbf{x}^{\mathbf{n}}\mid\mathbf{n}\in\mathbb{N}^d\setminus\Sigma\}$.

The initial form of $D$ with respect to a weight vector $\mathbf{w}$, denoted by $D^{\bullet}$, is defined by $D^{\bullet}=\sum_{\mathbf{a}-\mathbf{b}=\mathbf{r}}c_{\mathbf{a}\mathbf{b}}\,\mathbf{x}^{\mathbf{a}}\partial^{\mathbf{b}}$, where $\mathbf{r}$ is the minimal shift of $D$ (i.e., $\mathbf{w}\cdot\mathbf{r}$ is minimal). Clearly $D^{\bullet}$ is a monomial operator; we denote its coefficient function by $\kappa(\mathbf{n})$. Let $\bar{D}$ denote the tail of the operator, i.e., $D=D^{\bullet}+\bar{D}$. We say that the initial form $D^{\bullet}$ dominates $D$ if there exists a constant $C>0$ such that for all $\mathbf{b}\in\mathbb{N}^d$ with $c_{\mathbf{a}\mathbf{b}}\neq 0$ for some $\mathbf{a}$, and all $\mathbf{n}\in\mathbb{N}^d$ with $\kappa(\mathbf{n})\neq 0$, we have $\mathbf{n}^{\underline{\mathbf{b}}}\leq C\,|\kappa(\mathbf{n})|$.

A differential operator $D$ is called perfect if for any $A\in K[\![\mathbf{x}]\!]$ there exists an $\mathbf{n}\in\mathbb{N}^d$ such that $\operatorname{in}_{\mathbf{w}}(D(A))=D^{\bullet}(\mathbf{x}^{\mathbf{n}})/\kappa(\mathbf{n})$. In other words, for all power series $A$ the initial monomial of $D(A)$ lies in the image $\operatorname{Im}(D^{\bullet})$ of $D^{\bullet}$. The image $\operatorname{Im}(D^{\bullet})$ is spanned by the monomials $\{\mathbf{x}^{\mathbf{n}+\mathbf{r}}\mid\mathbf{n}\in\mathbb{N}^d,\ \kappa(\mathbf{n})\neq 0\}$, where $\mathbf{r}$ is the shift of $D^{\bullet}$.

Example 4

Let $D=xy^2\partial_x\partial_y-4y+x^2$. The involved shifts are $(0,1)$ and $(2,0)$. We choose a weight vector $\mathbf{w}$ such that $(0,1)\prec_{\mathbf{w}}(2,0)$ and get the initial form $D^{\bullet}=xy^2\partial_x\partial_y-4y$ with coefficient function $\kappa(n_1,n_2)=n_1n_2-4$. We see that $\kappa(\mathbf{n})=0$ for $\mathbf{n}\in Z=\{(1,4),(2,2),(4,1)\}$, hence the image $\operatorname{Im}(D^{\bullet})$ is spanned (as a vector space) by the monomials $\{x^{n_1}y^{n_2+1}:(n_1,n_2)\notin Z\}$. This operator is not perfect since, e.g., $D$ applied to $x^2y^2$ gives $x^4y^2\notin\operatorname{Im}(D^{\bullet})$.

The example also illustrates that in general it can be impossible to decide whether an operator is perfect or not: the computation of $\operatorname{Im}(D^{\bullet})$ requires solving a Diophantine equation.

Example 5

Consider now the operator $D=xy^2\partial_x\partial_y-4y+x^2y^4$ with $D^{\bullet}$ being the same as in Example 4, but now $\bar{D}=x^2y^4$. Clearly we have $\operatorname{Im}(\bar{D})=x^2y^4K[\![x,y]\!]\subseteq\operatorname{Im}(D^{\bullet})$, which implies that in this case $D$ is perfect.

Note that the concept of perfect operators is more subtle than these two examples suggest. For more details we refer to [6,7] from where we cite a division theorem for differential operators (in fact a specialized version that is sufficient for our setting):

Theorem 6

Let $\mathcal{K}$ be either $K[\![\mathbf{x}]\!]$ or $K\{\mathbf{x}\}$. Let $D\in K[\mathbf{x}][\partial]$ be a perfect differential operator and let $D^{\bullet}$ be its initial form with respect to some weight vector $\mathbf{w}$. Choose the canonical direct monomial complements $L$ of $\operatorname{Ker}(D^{\bullet})$ and $J$ of $\operatorname{Im}(D^{\bullet})$ in $\mathcal{K}$. In the case of convergent power series, assume in addition that $D$ is dominated by $D^{\bullet}$. Then we have the direct sum decompositions

$$\operatorname{Im}(D)\oplus J=\mathcal{K}\quad\text{and}\quad\operatorname{Ker}(D)\oplus L=\mathcal{K}.$$

5. Back to P-finite recurrences

From Theorem 6 we learned that for a perfect differential operator $D$ the division $K=D(F_{\mathbf{s}})+U$ exists and is unique; furthermore, we have $\operatorname{supp}(U)\subseteq J$, the direct monomial complement of $\operatorname{Im}(D^{\bullet})$. The next proposition relates this fact to the statement of Corollary 5.

Proposition 7

A differential operator

$$D=\sum_{\mathbf{t}\in H_0}\sum_{\mathbf{k}\in S_{\mathbf{t}}}c_{\mathbf{t}\mathbf{k}}\,\mathbf{x}^{\mathbf{k}}\partial^{\mathbf{k}}\mathbf{x}^{\mathbf{p}-\mathbf{t}}=\sum_{\mathbf{t}\in H_0}D_{\mathbf{t}}$$

that arises from a recurrence which is of type (4) and satisfies the conditions of Corollary 5 is perfect.

Proof

All summands in any of the operators $D_{\mathbf{t}}$ have the single shift $\mathbf{p}-\mathbf{t}$. This does not change when $D_{\mathbf{t}}$ is converted to the standard form $\sum c_{\mathbf{a}\mathbf{b}}\mathbf{x}^{\mathbf{a}}\partial^{\mathbf{b}}$ by means of the commutation rule $\partial x=x\partial+1$. Thus all the $D_{\mathbf{t}}$'s are monomial operators. Let $\mathbf{w}$ be a weight vector such that $\mathbf{w}\cdot\mathbf{t}<0$ for all $\mathbf{t}\in H$. Then $D_{\mathbf{0}}$ is the monomial operator with the minimal shift, hence we have $D^{\bullet}=D_{\mathbf{0}}$ and $\bar{D}=\sum_{\mathbf{t}\in H}D_{\mathbf{t}}$. The coefficient function of the initial form turns out to be $\kappa(\mathbf{n})=\sum_{\mathbf{k}\in S_{\mathbf{0}}}c_{\mathbf{0}\mathbf{k}}(\mathbf{n}+\mathbf{p})^{\underline{\mathbf{k}}}=c_{\mathbf{0}}(\mathbf{n}+\mathbf{s})$. Since the polynomial $c_{\mathbf{0}}(\mathbf{n})$ does not have any zeros in $\mathbf{s}+\mathbb{N}^d$ by assumption, we see that $\kappa(\mathbf{n})\neq 0$ for all $\mathbf{n}\in\mathbb{N}^d$. Consequently $\operatorname{Im}(D^{\bullet})=\mathbf{x}^{\mathbf{p}}K[\![\mathbf{x}]\!]$, which matches the support condition for $U(\mathbf{x})$. The kernel of $D^{\bullet}$ is $0$, hence

$$\operatorname{in}_{\mathbf{w}}(D(A))=D^{\bullet}(\mathbf{x}^{\mathbf{n}})/\kappa(\mathbf{n})\quad\text{for all }A\in K[\![\mathbf{x}]\!],\text{ where }\mathbf{x}^{\mathbf{n}}=\operatorname{in}_{\mathbf{w}}(A),$$

and this proves that $D$ is perfect. □

We conclude that the division (5) always has a unique solution. This corresponds exactly to the statement of Corollary 5, which asserts that the recurrence has a unique solution.

Using Theorem 6 we can state a result concerning the convergence in the P-finite case:

Corollary 8

The generating function $F_{\mathbf{s}}(\mathbf{x})$ is a convergent power series if the operator $D$ corresponding to its recurrence relation is dominated by its initial form. This is the case when the polynomial $c_{\mathbf{0}}(\mathbf{n})$ dominates all the polynomials $c_{\mathbf{t}}(\mathbf{n})$, $\mathbf{t}\in H$, i.e., there is a constant $C>0$ such that for all $\mathbf{n}\in\mathbb{N}^d$ we have

$$|c_{\mathbf{t}}(\mathbf{n})|\leq C\,|c_{\mathbf{0}}(\mathbf{n}+\mathbf{s})|.$$

Proof

Recall that $D$ is dominated by its initial form if $\mathbf{n}^{\underline{\mathbf{b}}}\leq\tilde{C}\,|\kappa(\mathbf{n})|$ for all $\mathbf{n}\in\mathbb{N}^d$ and some $\tilde{C}>0$, where $\mathbf{b}$ occurs as an exponent of $\partial$ in $D$. The correspondence $\kappa(\mathbf{n})=c_{\mathbf{0}}(\mathbf{n}+\mathbf{s})$ has been established in the proof of Proposition 7, and the translation of the coefficients $c_{\mathbf{t}}(\mathbf{n})$ into falling factorials has been demonstrated in Section 3. □

Note that the condition in Corollary 8 is only sufficient but not necessary for the convergence of the generating function.

6. Examples and outlook

In order to illustrate the statements of the previous sections we choose the Eulerian numbers (see, e.g., [9, Chapter 6.2]):

Example 6

The recurrence

$$a_{n,k}=(k+1)\,a_{n-1,k}+(n-k)\,a_{n-1,k-1}\qquad(6)$$

defines the Eulerian numbers, together with the initial conditions $a_{n,0}=1$ for $n\geq 0$ and $a_{0,k}=0$ for $k\geq 1$. Since we have $H=\{(-1,0),(-1,-1)\}$, it is natural to choose the starting point $\mathbf{s}=(1,1)$. The generating function in question is $F_{\mathbf{s}}(x,y)=\sum_{n=0}^{\infty}\sum_{k=0}^{\infty}a_{n+1,k+1}x^ny^k$. The known part, the initial condition function, is $K(x,y)=x/(x-1)^2$. From the apex being $(0,0)$ it follows that $U(x,y)=0$. The differential operator corresponding to recurrence (6) is

$$D=1-2x-x^2y\,\partial_x+(xy^2-xy)\,\partial_y.$$

By plugging in a truncated power series expansion of $F_{\mathbf{s}}(x,y)$ we convince ourselves that indeed $K(x,y)=D(F_{\mathbf{s}}(x,y))$ holds.
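This check can be carried out with sympy: build the Eulerian triangle from (6), apply the operator $D=1-2x-x^2y\partial_x+(xy^2-xy)\partial_y$ (our reconstruction of the operator displayed above), and compare with $x/(x-1)^2=\sum_{n\geq 1}nx^n$. Truncating $F_{\mathbf{s}}$ at $x$-degree $N$ only pollutes $x$-degrees beyond $N$ (the truncation order and helper names are ours):

```python
import sympy as sp

x, y = sp.symbols('x y')
N = 6

# Eulerian numbers from recurrence (6), with a_{n,0} = 1 and a_{0,k} = 0 (k >= 1)
a = {}
for n in range(N + 2):
    for k in range(N + 2):
        if k == 0:
            a[n, k] = 1
        elif n == 0:
            a[n, k] = 0
        else:
            a[n, k] = (k + 1) * a[n - 1, k] + (n - k) * a[n - 1, k - 1]

Fs = sum(a[n + 1, k + 1] * x**n * y**k for n in range(N + 1) for k in range(N + 1))
DFs = sp.expand(Fs - 2*x*Fs - x**2*y*sp.diff(Fs, x) + (x*y**2 - x*y)*sp.diff(Fs, y))
target = sum(n * x**n for n in range(1, N + 1))     # x/(x-1)^2, truncated

# every monomial of the difference must have x-degree > N
ok = all(m[0] > N for m in sp.Poly(sp.expand(DFs - target), x, y).monoms())
print(ok)   # True
```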

Gnedin and Olshanski [8] studied nonnegative solutions of the dual recurrence; the dual (or backwards) recurrence is obtained by changing the signs of all shifts. The problem is now to describe initial conditions for the dual recurrence such that its solution does not involve negative values.

Example 7

By inverting the shifts of (6) we produce the dual recurrence $V_{n,k}=(k+1)V_{n+1,k}+(n-k)V_{n+1,k+1}$. Gnedin and Olshanski are interested in the solutions $V_{n,k}$ for $0\leq k\leq n-1$ and $n\geq 1$, with respect to the normalization $V_{1,0}=1$. The remaining initial conditions $V_{n,0}$ for $n\geq 2$ have to be determined in such a way that the solution consists of nonnegative entries only.

Applying our method to the dual recurrence $(n-k)V_{n,k}=V_{n-1,k-1}-kV_{n,k-1}$, with $\mathbf{s}=(1,1)$ and apex $\mathbf{p}=(0,0)$, delivers the differential operator

$$D=xy-2y-x\,\partial_x+(y-y^2)\,\partial_y,$$

from which we must not expect that it is perfect, because the dual recurrence does not match the conditions of Corollary 5. Indeed, $D$ is not perfect! We choose the weight vector $(1,2)$; the initial form of $D$ is $D^{\bullet}=y\partial_y-x\partial_x$, and its image is spanned by $\{x^my^n\mid m\neq n\}$. But $D(1+2y)=xy-6y^2+2xy^2$, whose initial monomial is $xy\notin\operatorname{Im}(D^{\bullet})$.
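The failing instance can be verified directly with sympy, using the operator $D=xy-2y-x\partial_x+(y-y^2)\partial_y$ as we reconstructed it from the dual recurrence:

```python
import sympy as sp

x, y = sp.symbols('x y')
D = lambda F: sp.expand(x*y*F - 2*y*F - x*sp.diff(F, x) + (y - y**2)*sp.diff(F, y))
out = D(1 + 2*y)
# out equals x*y - 6*y**2 + 2*x*y**2; w.r.t. w = (1, 2) its initial
# monomial is x*y, which lies on the diagonal m = n, i.e. outside Im(D-initial).
print(out)
```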

Instead, we perform the linear transformation $n\mapsto n+k+1$, so that the solution lies in the whole first quadrant, and obtain the recurrence

$$(n+1)\,V_{n,k}=V_{n,k-1}-k\,V_{n+1,k-1}\qquad(7)$$

together with the normalization condition $V_{0,0}=1$. The natural starting point is $\mathbf{s}=(0,1)$, the apex is $\mathbf{p}=(1,0)$, and we observe that the leading coefficient does not vanish at any point of $\mathbf{s}+\mathbb{N}^2$, so all conditions of Corollary 5 are fulfilled. Hence the corresponding differential operator for (7),

$$D=x^2\partial_x+y^2\partial_y-xy+x+2y,$$

is perfect. The power series that encodes the initial conditions is

$$K(x,y)=\sum_{n=1}^{\infty}\bigl(\varphi(n-1,0)-\varphi(n,0)\bigr)\,x^n.$$

The task is now to determine all possible K(x,y) for which the division (5) yields a power series solution Fs(x,y) with nonnegative coefficients. This seems to be an interesting research problem.

It would be nice if we could also state some results about the algebraicity of the generating function. But here even the univariate case is still open: the famous Grothendieck–Katz $p$-curvature conjecture [14,17] asserts that a linear differential equation in the variable $x$ with coefficients in $\mathbb{Q}(x)$ admits a complete system of algebraic solutions if and only if the differential equation reduced modulo $p$ also has a complete system of algebraic solutions over $\mathbb{F}_p(x)$ for almost all $p$.

Acknowledgments

The authors would like to thank Marko Petkovšek and the anonymous referees for their valuable suggestions and corrections. The first author was supported by the Austrian Science Fund (FWF): P21461. The second author was supported by the Austrian Science Fund (FWF): P20162-N18.

Contributor Information

Herwig Hauser, Email: herwig.hauser@uibk.ac.at.

Christoph Koutschan, Email: Koutschan@risc.jku.at.

References

  • 1.M.E. Alonso, F.C. Jimenez, H. Hauser, Effective algebraic power series, 2005, Preprint. http://www.hh.hauser.cc.
  • 2.Bostan A., Kauers M. The complete generating function for Gessel walks is algebraic. Proceedings of the American Mathematical Society. 2010;138(9):3063–3078.
  • 3.Bousquet-Mélou M., Petkovšek M. Linear recurrences with constant coefficients: the multivariate case. Discrete Mathematics. 2000;225(1):51–75.
  • 4.Galligo A. À propos du théorème de préparation de Weierstrass. Lecture Notes in Mathematics. 1974;409:543–579. Séminaire François Norguet, à la mémoire d’André Martineau.
  • 5.Galligo A. Théorème de division et stabilité en géométrie analytique locale. Annales de l’Institut Fourier. 1979;29(2):107–184.
  • 6.S. Gann, Polynomiale und formale Lösungen linearer partieller Differentialgleichungen, Ph.D. Thesis, Universität Innsbruck, 2006.
  • 7.Gann S., Hauser H. Perfect bases for differential equations. Journal of Symbolic Computation. 2005;40:979–997.
  • 8.Gnedin A., Olshanski G. The boundary of the Eulerian number triangle. Moscow Mathematical Journal. 2006;6(3):461–475.
  • 9.Graham R.L., Knuth D.E., Patashnik O. Concrete Mathematics. Addison-Wesley; Reading, Massachusetts: 1989.
  • 10.Grauert H. Über die Deformation isolierter Singularitäten analytischer Mengen. Inventiones Mathematicae. 1972;15:171–198.
  • 11.Hauser H., Müller G. A rank theorem for analytic maps between power series spaces. Institut des Hautes Études Scientifiques. Publications Mathématiques. 1994;80:95–115.
  • 12.Hironaka H. Idealistic exponents of singularity. Algebraic Geometry, The Johns Hopkins Centennial Lectures. 1977. pp. 52–125.
  • 13.Hironaka H. Stratification and flatness. Real and Complex Singularities; Proc. Ninth Nordic Summer School/NAVF Sympos. Math., Oslo, 1976; Alphen aan den Rijn: Sijthoff and Noordhoff; 1977. pp. 199–265.
  • 14.Katz N. A conjecture in the arithmetic theory of differential equations. Bulletin de la Société Mathématique de France. 1982;110:203–239.
  • 15.Kauers M., Koutschan C., Zeilberger D. Proof of Ira Gessel’s lattice path conjecture. Proceedings of the National Academy of Sciences USA. 2009;106(28):11502–11505.
  • 16.Lafon J.-P. Séries formelles algébriques. Comptes Rendus de l’Académie des Sciences Paris. 1965;260:3238–3241.
  • 17.Di Vizio L., Ramis J.-P., Sauloy J., Zhang C. Équations aux q-différences. Gazette des mathématiciens. 2003;96:20–49.
