Theoretical Computer Science. 2015 Jun 20; 585:3–24. doi: 10.1016/j.tcs.2015.03.003

A new order-theoretic characterisation of the polytime computable functions

Martin Avanzini 1,, Naohi Eguchi 1, Georg Moser 1

Abstract

We propose a new order-theoretic characterisation of the class of polytime computable functions. To this end we define the small polynomial path order (sPOP for short). This termination order entails a new syntactic method to analyse the innermost runtime complexity of term rewrite systems fully automatically: for any rewrite system compatible with sPOP that employs recursion up to depth d, the (innermost) runtime complexity is bounded by a polynomial of degree d. This bound is tight. Thus we obtain a direct correspondence between a syntactic (and easily verifiable) condition on a program and the asymptotic worst-case complexity of the program.

Keywords: Term rewriting, Complexity analysis, Implicit computational complexity, Automation

1. Introduction

In this paper we are concerned with the complexity analysis of term rewrite systems (TRSs for short). Based on a careful investigation into the principle of predicative recursion as proposed by Bellantoni and Cook [1] we introduce a new termination order, the small polynomial path order (sPOP for short). The order sPOP provides a new characterisation of the class FP of polytime computable functions. Any function f computable by a TRS R compatible with sPOP is polytime computable. On the other hand, for any polytime computable function f there exists a TRS Rf computing f such that Rf is compatible with sPOP. Moreover sPOP directly relates the depth of recursion of a given TRS to the polynomial degree of its runtime complexity. More precisely, we call a rewrite system R predicative recursive of degree d if R is compatible with sPOP and the depth of recursion of all function symbols in R is bounded by d (see Section 3 for the formal definition). We establish that any predicative recursive rewrite system of degree d admits runtime complexity in O(n^d). Here n refers to the sum of the sizes of the inputs. Furthermore we obtain a novel, order-theoretic characterisation of DTIME(n^d), the class of functions computed on register machines in O(n^d) steps.

Thus we obtain a direct correspondence between a syntactic (and easily verifiable) condition on a program and the asymptotic worst-case complexity of the program. In this sense our work is closely related to similar studies in the field of implicit computational complexity (ICC for short). On the other hand, the order sPOP entails a new syntactic criterion to automatically establish polynomial runtime complexity of a given TRS. This criterion extends the state of the art in runtime complexity analysis, as it is more precise or more efficient than related techniques. Note that the proposed syntactic method to analyse the (innermost) runtime complexity of rewrite systems is fully automatic. For any given TRS, compatibility with sPOP can be efficiently checked by a machine. Should this check succeed, we get an asymptotic bound on the runtime complexity directly from the parameters of the order. It should perhaps be emphasised that compatibility of a TRS with sPOP implies termination; thus our complexity analysis technique does not presuppose termination.

In sum, in this work we make the following contributions:

  • We propose a new recursion-theoretic characterisation Bwsc over binary words of the class FP. We establish that those Bwsc functions that are definable with d nestings of predicative recursion can be computed by predicative recursive TRSs of degree d (cf. Theorem 4). Note that these functions belong to DTIME(nd).

  • We propose the new termination order sPOP; sPOP captures the recursion-theoretic principles of the class Bwsc. Thus we obtain a new order-theoretic characterisation of the class FP. Moreover, the runtime complexity of any predicative recursive TRS of degree d lies in O(n^d) (cf. Theorem 1). Furthermore this bound is tight, that is, we provide a family of TRSs, compatible with sPOP, whose runtime complexity is bounded from below by Ω(n^d), cf. Example 5.

  • We extend sPOP by proposing a generalisation, denoted sPOPPS, that admits the same properties as above. This generalisation incorporates a more general recursion scheme that makes use of parameter substitution (cf. Theorem 2).

  • We establish a novel, order-theoretic characterisation of DTIME(n^d). We show that DTIME(n^d) corresponds to the class of functions computable by predicative tail-recursive TRSs of degree d. This characterisation is based on the generalised small polynomial path order sPOPPS (cf. Theorem 6).

  • sPOP gives rise to a new, fully automatic syntactic method for polynomial runtime complexity analysis. We have implemented the order sPOP in the Tyrolean Complexity Tool (TcT), version 2.0, an open source complexity analyser [2]. The experimental evidence indicates the efficiency of the method and the increase in precision obtained.

1.1. Related work

There are several accounts of predicative analysis of recursion in the (ICC) literature. We mention only those related works which are directly comparable to our work. See [3] for an overview on ICC.

The class Bwsc is a syntactic restriction of the recursion-theoretic characterisation N of the class FEXP of exponential time computable functions, given by Arai and the second author in [4]. To account for the fact that FEXP is not closed under composition in general, the definition of N relies on a syntactically restricted form of composition. The same composition scheme allows a fine-grained control in our class Bwsc through the degree of recursion. In [5] the authors use the class N as a sufficient basis for an order-theoretic account of FEXP, the exponential path order (EPO for short). Due to the close relationship of Bwsc and N, our order is both conceptually and technically close to EPO.

Notably the clearest connection of our work is to Marion's light multiset path order (LMPO for short) [6] and the polynomial path order (POP for short) [7–9]. Both orders form a strict extension of sPOP, but lack the precision of the latter. Although LMPO characterises FP, the runtime complexity of compatible TRSs is not polynomially bounded in general. POP induces polynomial runtime complexities, but the obtained complexity certificate is usually very imprecise. In particular, due to the multiset status underlying POP, for each dN one can form a TRS compatible with POP  that defines only a single function, but whose runtime is bounded from below by a polynomial of degree d, in the sizes of the inputs.

In Bonfante et al. [10] restricted classes of polynomial interpretations are studied that can be employed to obtain polynomial upper bounds on the runtime complexity of TRSs. Polynomial interpretations are complemented with quasi-interpretations in [11], giving rise to alternative characterisations of complexity classes. None of the above results are applicable to relate the depth of recursion to the runtime complexity, in the sense mentioned above. Furthermore it is unknown how the body of work on quasi-interpretations can be employed in the context of runtime complexity analysis. We have also drawn motivation from Leivant's and Marion's characterisations of DTIME(nd) [12,13], that provide related fine-grained classification of the polytime computable functions. Again, these results lack applicability in the context of runtime complexity analysis.

Polynomial complexity analysis is an active research area in rewriting. Starting from [14] interest in this field greatly increased over the last years, see for example [15–18] and [19] for an overview. This is partly due to the incorporation of a dedicated category for complexity into the annual termination competition (TERMCOMP).1 However, it is worth emphasising that the most powerful techniques for runtime complexity analysis currently available, basically employ semantic considerations on the rewrite systems, which are notoriously inefficient.

We also want to mention ongoing approaches for the automated analysis of resource usage in programs. Notably, Hoffmann et al. [20] provide an automatic multivariate amortised cost analysis exploiting typing, which extends earlier results on amortised cost analysis. Finally Albert et al. [21] present an automated complexity tool for Java Bytecode programs, Alias et al. [22] give a complexity and termination analysis for flowchart programs, and Gulwani et al. [23] as well as Zuleger et al. [24] provide an automated complexity tool for C programs.

1.2. Outline

We present the main intuition behind sPOP and provide an informal account of the obtained technical results.

The order sPOP essentially embodies the predicative analysis of recursion set forth by Bellantoni and Cook [1]. In [1] a recursion-theoretic characterisation B of the class of polytime computable functions is proposed. This analysis is connected to the important principle of tiering introduced by Simmons [25] and Leivant [26,27,12]. The essential idea is that the arguments of a function are separated into normal and safe arguments (or correspondingly into arguments of different tiers). Building on this work we present a subclass Bwsc of B. Crucially the class Bwsc admits only a weak form of composition. Inspired by a result of Handley and Wainer [28], we show that Bwsc captures the polytime functions. This establishes our first main result.

We formulate the class Bwsc over the set {0,1} of binary words; the empty word is denoted by ϵ. Arguments of functions are partitioned into normal and safe ones. In notation, we write f(t1,…,tk; tk+1,…,tk+l) where the normal arguments are written to the left, and the safe arguments to the right of the semicolon. We abbreviate argument vectors as x = x1,…,xk and y = y1,…,yl. The class Bwsc, depicted in Fig. 1, is the smallest class containing certain initial functions and closed under safe recursion on notation (SRN) and weak safe composition (WSC). By the weak form of composition, only values are ever substituted into normal argument positions.

Fig. 1. Defining initial functions and operations for Bwsc.

Suppose the definition of a TRS R is based on the equations of Bwsc. It is not difficult to deduce a precise bound on the runtime complexity of R by measuring the number of nested applications of safe recursion, the so-called depth of recursion. In contrast, Bellantoni and Cook's definition [1] of B is obtained from Fig. 1 by replacing weak safe composition with the more liberal scheme of safe composition (SC): f(x;y) = h(i(x;); j(x;y)), where i and j denote vectors of previously defined functions. As soon as one of the functions in i is size increasing, a tight correspondence between the runtime complexity and the depth of recursion is lost.
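To make the normal/safe discipline concrete, the following Haskell sketch (our own illustration, not part of the formal development; srn and cat are hypothetical names) encodes binary words as bit lists and gives a combinator for safe recursion on notation in which the recursive result is only ever passed at a safe position.

```haskell
-- A minimal sketch of the normal/safe argument discipline, assuming binary
-- words are represented as lists of bits (the empty word is []).
type W = [Bool]

-- Safe recursion on notation:
--   f(eps, xs; ys)  = g(xs; ys)
--   f(z i, xs; ys)  = h_i(z, xs; ys, f(z, xs; ys))
-- The first list collects the normal arguments, the second the safe ones;
-- recursion is on the first normal argument.
srn :: ([W] -> [W] -> W)                      -- g
    -> (Bool -> W -> [W] -> [W] -> W -> W)    -- h_i (i is the lowest bit)
    -> ([W] -> [W] -> W)                      -- the resulting function f
srn g h = f
  where
    f ([]      : xs) ys = g xs ys
    f ((i : z) : xs) ys = h i z xs ys (f (z : xs) ys)
    f []             _  = error "missing recursion argument"

-- Example: concatenation w ++ y defined by SRN; the recursive result r only
-- appears below a constructor (:), i.e. at a safe position.
cat :: W -> W -> W
cat w y = srn (\_ ys -> head ys) (\i _ _ _ r -> i : r) [w] [y]
```

Weak safe composition then corresponds to the restriction that only the normal parameters themselves, never a recursive result, may be placed to the left of the semicolon.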

Our central observation is that from the function algebra Bwsc one can distill a termination argument for the TRS R. With sPOP, this implicit termination argument is formalised as a termination order. In order to employ the separation of normal and safe arguments, we fix for each defined symbol in R a partitioning of argument positions into normal and safe positions. For constructors we fix (as in Bwsc) that all argument positions are safe. Moreover sPOP restricts recursion to normal arguments. Dually, only safe argument positions allow the substitution of recursive calls. Via the order constraints we can also guarantee that only normal arguments are substituted at normal argument positions. We emphasise that our notion of predicative recursive TRS is more liberal than the class Bwsc. Notably, values are not restricted to words, but can be formed from arbitrary constructors. We allow arbitrarily deep right-hand sides, and implicit casting from normal to safe arguments. Still, the main principle underlying Bwsc remains reflected.

The remainder of the paper is organised as follows. After giving some preliminaries, Section 3 introduces the order sPOP. Here we also prove correctness of sPOP  with respect to runtime complexity analysis. In Section 4 we incorporate parameter substitution into the order sPOP. In Section 5 we then show that these orders are complete for FP, in particular we precisely relate sPOP to the class Bwsc. In total we obtain an order-theoretic characterisation of FP. Exploiting the fine-grained control given by the degree of recursion, in Section 6 we provide an order-theoretic characterisation of DTIME(nd). Finally in Sections 7 and 8 we clarify the expressiveness of the established small polynomial path orders and conclude.

2. Preliminaries

We denote by N the set of natural numbers {0,1,2,…}. For a finite alphabet A of characters, we denote by W(A) the set of words over A; the empty word is denoted by ε. Let R be a binary relation. We denote by R+ the transitive closure, by R* the transitive and reflexive closure, and by R^n (n ∈ N) the n-fold composition of R. We write a R b for (a,b) ∈ R; the relation R is well-founded if there exists no infinite sequence a1 R a2 R a3 R ⋯. The relation R is a preorder if it is transitive and reflexive, it is a strict partial order if it is irreflexive, antisymmetric and transitive, and R is an equivalence relation if it is reflexive, symmetric and transitive. Note that the transitive and reflexive closure of an order R (on a set S) always gives a preorder. Consider a preorder ⩾. Define a ∼ b if a ⩾ b and b ⩾ a, and a > b if a ⩾ b but not b ⩾ a. Then ⩾ partitions into the equivalence ∼ and the strict partial order >.

2.1. Term rewriting

We assume at least nodding acquaintance with the basics of term rewriting [29]. We fix a countably infinite set of variables V and a finite set of function symbols F, the signature. For each f ∈ F, the arity of f is fixed. The set of terms formed from F and V is denoted by T(F,V). A term t ∈ T(F,V) is called ground if it contains no variables. The set of ground terms is denoted by T(F). The signature F contains a distinguished set of constructors C ⊆ F; elements of T(C) ⊆ T(F) are called values. Elements of F that are not constructors are called defined symbols and are collected in D. If not mentioned otherwise, x,y,z denote variables and f,g,h denote defined symbols. Terms are denoted by l,r or s,t, and values by u,v,w. All denotations are possibly followed by subscripts. We write s as an abbreviation for a finite sequence of terms s1,…,sn.

The root symbol of a term t is denoted by rt(t). The size of t is denoted by |t| and refers to the number of occurrences of symbols in t; the depth dp(t) is given recursively by dp(t) = 1 if t ∈ V, and dp(f(t1,…,tn)) = 1 + max{dp(ti) | i = 1,…,n}. Here we employ the convention that the maximum of an empty set equals 0. A rewrite rule is a pair (l,r) of terms, in notation l → r, such that the left-hand side l = f(l1,…,ln) is not a variable, the root f is defined, and all variables appearing in the right-hand side r also occur in l. A term rewrite system (TRS for short) R is a set of rewrite rules.
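As running illustrations we use small Haskell sketches. The following datatype for terms and the functions size and depth (hypothetical names, under an assumed representation) directly transcribe the two measures just defined.

```haskell
-- A minimal sketch (assumed representation, not from the paper) of terms
-- together with the size and depth measures defined above.
data Term = Var String | Fun String [Term] deriving (Eq, Show)

-- |t|: number of occurrences of symbols (variables and function symbols) in t.
size :: Term -> Int
size (Var _)    = 1
size (Fun _ ts) = 1 + sum (map size ts)

-- dp(t): 1 for variables, 1 + maximal depth of the arguments otherwise,
-- with the convention that the maximum of the empty set is 0.
depth :: Term -> Int
depth (Var _)    = 1
depth (Fun _ ts) = 1 + maximum (0 : map depth ts)
```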

We adopt call-by-value semantics and define the rewrite relation →R as follows.

(i) if f(l1,…,ln) → r ∈ R and σ : V → T(C), then f(l1σ,…,lnσ) →R rσ;
(ii) if s →R t, then f(…,s,…) →R f(…,t,…).

If s →R t we say that s reduces to t in one step. For (i) we make various assumptions on R: we suppose that there is exactly one matching rule f(l1,…,ln) → r ∈ R; the arguments li (i = 1,…,n) contain no defined symbols; and variables occur only once in f(l1,…,ln). That is, throughout this paper we fix R to denote a completely defined,2 orthogonal constructor TRS [29]. Furthermore we are only concerned with innermost rewriting. Note that orthogonality enforces that our model of computation is deterministic.3 If a term t has a normal form, then this normal form is unique and denoted by t↓. For every n-ary defined symbol f ∈ D, R defines a partial function ⟦f⟧ : T(C)^n → T(C) where

⟦f⟧(u1,…,un) := f(u1,…,un)↓   if f(u1,…,un)↓ ∈ T(C),

and ⟦f⟧(u1,…,un) is undefined otherwise. Note that when R is terminating, i.e. when →R is well-founded, the function ⟦f⟧ is total.
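A minimal operational reading of this call-by-value rewrite relation, reusing the Term type sketched above and assuming rules are given as pairs of terms, could look as follows; match, apply and eval are hypothetical helpers, and eval is only well-defined on terminating systems.

```haskell
import qualified Data.Map as M

type Subst = M.Map String Term
data Rule  = Rule Term Term            -- left-hand side and right-hand side

-- Syntactic matching of a (linear, constructor-based) left-hand side against a term.
match :: Term -> Term -> Maybe Subst
match (Var x)    t = Just (M.singleton x t)
match (Fun f ps) (Fun g ts)
  | f == g && length ps == length ts
      = M.unions <$> sequence (zipWith match ps ts)
match _ _ = Nothing

apply :: Subst -> Term -> Term
apply s (Var x)    = M.findWithDefault (Var x) x s
apply s (Fun f ts) = Fun f (map (apply s) ts)

-- Call-by-value evaluation: normalise the arguments first, then contract the
-- matching root rule (unique by orthogonality), if any.
eval :: [Rule] -> Term -> Term
eval rules = go
  where
    go (Var x)    = Var x
    go (Fun f ts) =
      let ts' = map go ts
      in case [ apply s r | Rule l r <- rules, Just s <- [match l (Fun f ts')] ] of
           (reduct : _) -> go reduct
           []           -> Fun f ts'
```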

Following [30] we adopt a unitary cost model. Bounds are of course expressed with respect to the size of terms. Let Tb(F) denote the set of basic (also called constructor based) terms f(u1,…,un) where f ∈ D and u1,…,un ∈ T(C). We define the (innermost) runtime complexity function rcR : N → N as

rcR(n) := max{ℓ | s ∈ Tb(F), |s| ⩽ n and s = t0 →R t1 →R ⋯ →R tℓ}.

Hence rcR(n) maximises over the derivation height of terms s of size up to n, regarding only basic terms. The latter restriction accounts for the fact that computations start only from basic terms. The runtime complexity function is well-defined if R is terminating. If rcR is asymptotically bounded from above by a polynomial, we simply say that the runtime of R is polynomially bounded. This unitary cost model is reasonable:

Proposition 1 Adequacy theorem —

(See [31–33].) All functions ⟦f⟧ computed by R are computable on a conventional model of computation, viz Turing machines, such that the time complexity on the latter is polynomially related to rcR.

In particular, if the runtime of R is polynomially bounded then f is polytime computable on a Turing machine for all fD.

We say that a function symbol f is defined based on g if there exists a rewrite rule f(l1,…,ln) → r ∈ R where g occurs in r. We call f recursive if f is defined based on itself via the transitive closure of this relation. Recursive function symbols are collected in Drec ⊆ D. Note that this notion also captures mutual recursion. We denote by ⩾ the least preorder on F containing the defined-based-on relation and in which constructors are equivalent, i.e. c ⩾ d and d ⩾ c for all constructors c,d ∈ C. The preorder ⩾ is called the precedence of R. We denote by > and ∼ the separation of ⩾ into the strict partial order > and the equivalence ∼. Note that for f ∼ g, if f ∈ C then also g ∈ C; similarly f ∈ Drec implies g ∈ Drec. The rank of f ∈ F with respect to ⩾ is inductively defined by rk(f) = 1 + max{rk(g) | f > g}. The depth of recursion rd(f) of f ∈ F is defined in correspondence to the rank, but only takes recursive symbols into account: let d = max{rd(g) | f > g} be the maximal recursion depth of the function symbols g underlying the definition of f; then rd(f) := 1 + d if f is recursive, and otherwise rd(f) := d.
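Both quantities are easily computed from the precedence; the following sketch (a hypothetical helper, not from the paper) assumes the strict precedence is given as a well-founded list of pairs (f,g) with f > g, and that equivalence classes are singletons.

```haskell
import Data.List (nub)

type Sym = String

-- rk(f) = 1 + max{rk(g) | f > g}, with max of the empty set taken as 0.
rank :: [(Sym, Sym)] -> Sym -> Int
rank gt f =
  case [ g | (f', g) <- gt, f' == f ] of
    [] -> 1
    gs -> 1 + maximum (map (rank gt) (nub gs))

-- rd(f): as rk(f), but the "+1" is only added for recursive symbols.
recDepth :: [(Sym, Sym)] -> [Sym] -> Sym -> Int
recDepth gt recs f =
  let d = maximum (0 : map (recDepth gt recs) (nub [ g | (f', g) <- gt, f' == f ]))
  in if f `elem` recs then 1 + d else d

-- Example 1 below: with gt = [("f","times"),("times","plus"),("plus","s")]
-- and recs = ["plus","times"] we get recDepth gt recs "plus"  == 1,
-- recDepth gt recs "times" == 2 and recDepth gt recs "f" == 2.
```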

Example 1

Consider the following TRS Rarith, written in predicative notation.

1: +(0; y) → y
2: ×(0, y;) → 0
3: +(s(;x); y) → s(; +(x; y))
4: ×(s(;x), y;) → +(y; ×(x, y;))
5: f(x, y;) → +(x; ×(y, y;))

The TRS Rarith follows along the lines of Bwsc from Fig. 1. The functions + and × denote addition and multiplication on natural numbers; in particular ⟦f⟧(s^m(0), s^n(0)) = s^r(0) where r = m + n². The precedence is given by f > × > + > s ∼ 0, where addition (+) and multiplication (×) are recursive, but f is not. We have rd(+) = 1 since addition is recursive; as f is not recursive but multiplication is, we have rd(f) = rd(×) = 2.
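Read functionally over unary natural numbers, Rarith computes the following (a sketch; plus, times and fArith are our names for +, × and f):

```haskell
-- Functional reading of R_arith on natural numbers; the argument left of the
-- semicolon in the rules is the recursion (normal) parameter.
plus :: Int -> Int -> Int
plus 0 y = y
plus x y = 1 + plus (x - 1) y          -- +(s(;x); y) -> s(; +(x; y))

times :: Int -> Int -> Int
times 0 _ = 0
times x y = plus y (times (x - 1) y)   -- x(s(;x), y;) -> +(y; x(x, y;))

fArith :: Int -> Int -> Int
fArith x y = plus x (times y y)        -- f(x, y;) -> +(x; x(y, y;))
-- fArith m n == m + n * n, matching the function computed by f.
```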

2.2. Register machines

In this paper we consider register machines (RMs for short) over words W(A), as initially proposed by Shepherdson and Sturgis in [34]. We employ the following notations and conventions. An RM M consists of a finite set of registers that store words from W(A). Like values, i.e. constructor terms, words are denoted by u,v,w, and u,v,w also denote sequences of words; no confusion can arise from this. For a register r, we also use r to refer to the content of register r. The control of M consists of a finite sequence of (labelled) instructions I1; I2; …; Il which are executed sequentially by M. Here an instruction can be one of the following:

  • (i)

    Append instruction A(a)(r): place a ∈ A on the left-hand end of r;

  • (ii)

    Delete instruction D(r): remove the left-most character from r, if r ≠ ε;

  • (iii)

    Conditional jump instruction J(a)(r)[j]: jump to instruction Ij if the left-most character of r is a ∈ A, otherwise proceed with the next instruction;

  • (iv)

    Copy instruction C(r,r′): overwrite r by r′.

Our definition departs from [34] in the following minor respects. Unlike in [34], we suppose that the set of registers is finite. This simplification does not impose any restriction: due to the absence of memory indirection instructions, only a fixed number of registers can be accessed by a machine M anyway. The instructions (i)–(iii) correspond to the minimal instruction set given in [34, Section 6], with the difference that in [34] the instruction (i) appends to the right. The additional copy instruction (iv), taken from the extended instruction set of [34, Section 2], ensures that copying words has unitary cost. A configuration of the RM M is a tuple ⟨j, w1,…,wm⟩ where w1,…,wm ∈ W(A) are the contents of the m registers and j ranges over the labels 1,…,l of the instructions I1,…,Il of M and the dedicated halting label l+1. We denote by →M the one-step transition relation obtained in the obvious way from our informal account of the instruction set (i)–(iv). For the halting label l+1 we set ⟨l+1, u1,…,um⟩ →M ⟨l+1, u1,…,um⟩ for all words ui (i = 1,…,m). We say that the RM M computes the (partial) function fM : W(A)^k → W(A) with k ⩽ m defined as follows:

fM(u1,…,uk) := vm   if   ⟨1, u1,…,uk, ε,…,ε⟩ →M^ℓ ⟨l+1, v1,…,vm⟩ for some ℓ ∈ N.

We also say that on inputs u1,…,uk the computation halts in ℓ steps. Denote by |u| the length, or size, of the word u. Extend this to sequences u = u1,…,uk so that |u| = |u1| + ⋯ + |uk| denotes the sum of the sizes of the ui. Let d ∈ N. We denote by DTIME(n^d) the class of functions f : W(A)^k → W(A) computed by some RM M in the above way, where M halts on all inputs u in no more than O(|u|^d) steps.
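To make the model concrete, here is a small simulator sketch (our own encoding, not the paper's formalism): registers are numbered from 1, the halting label is any label beyond the program length, and the instruction set follows (i)–(iv) above.

```haskell
type Word' = String                -- a word over the alphabet of characters
data Instr = App Char Int          -- A(a)(r): prepend a to register r
           | Del Int               -- D(r): drop the left-most character of r
           | Jump Char Int Int     -- J(a)(r)[j]: jump to label j if r starts with a
           | Copy Int Int          -- C(r,r'): overwrite r by r'
type Config = (Int, [Word'])       -- (current label, register contents)

step :: [Instr] -> Config -> Config
step prog (j, regs)
  | j > length prog = (j, regs)                     -- halting label: stay put
  | otherwise = case prog !! (j - 1) of
      App a r     -> (j + 1, update r (a : get r) regs)
      Del r       -> (j + 1, update r (drop 1 (get r)) regs)
      Jump a r j' -> if take 1 (get r) == [a] then (j', regs) else (j + 1, regs)
      Copy r r'   -> (j + 1, update r (get r') regs)
  where
    get r         = regs !! (r - 1)
    update r w ws = take (r - 1) ws ++ [w] ++ drop r ws

-- Run until the halting label is reached, counting the number of steps.
run :: [Instr] -> [Word'] -> (Int, [Word'])
run prog regs = go 0 (1, regs)
  where
    go n (j, rs) | j > length prog = (n, rs)
                 | otherwise       = go (n + 1) (step prog (j, rs))
```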

3. The small polynomial path order

We arrive at the formal definition of the small polynomial path order (sPOP for short). Conceptually this order is a tamed recursive path order with product status, embodying predicative analysis of recursion set forth by Bellantoni and Cook [1].

Throughout this section, fix a TRS R. For each function symbol f, we assume an a priori separation of argument positions into normal and safe ones. Arguments under normal positions play the role of recursion parameters, whereas safe argument positions allow the substitution of recursive results, compare the definition of Bwsc drawn in Fig. 1 on page 5. For constructors c we fix that all argument positions are safe. As in Example 1, we indicate this separation directly in terms and write f(s;t) where the arguments s to the left of the semicolon are normal, the remaining arguments t are safe. This separation and the precedence ⩾ underlying the analysed TRS R induces an instance of sPOP, which is denoted by >spop below.

In order to define >spop, we introduce some auxiliary relations. First of all, we lift the equivalence ∼ underlying the precedence ⩾ of R to terms, disregarding the order on arguments: s and t are equivalent, in notation s ∼ t, if s = t, or s = f(s1,…,sn) and t = g(t1,…,tn) where f ∼ g and there exists a permutation π on the argument positions {1,…,n} such that si ∼ tπ(i) for all i = 1,…,n. Safe equivalence ∼s also takes the separation of argument positions into account: in the definition of s ∼s t, we additionally require that i is a normal argument position of f if and only if π(i) is a normal argument position of g. We emphasise that ∼ (and consequently ∼s) preserves values: if s ∼ t and s ∈ T(C) then t ∈ T(C). We extend the subterm relation to term equivalence. Consider s = f(s1,…,sk; sk+1,…,sk+l). Define s ⊵/∼ t if either s ∼ t or s ⊳/∼ t, where s ⊳/∼ t holds if si ⊵/∼ t for some argument si of s (i = 1,…,k+l). We denote by ⊳/∼^n the restriction of ⊳/∼ in which only normal arguments are considered: s ⊳/∼^n t if si ⊵/∼ t for some normal argument position i ∈ {1,…,k}.

Definition 1

Let s and t be terms such that s = f(s1,…,sk; sk+1,…,sk+l). Then s >spop t if one of the following alternatives holds.

  • 1.

    si ⩾spop t for some argument si of s (i ∈ {1,…,k+l}).

  • 2.
    f ∈ D, t = g(t1,…,tm; tm+1,…,tm+n) where f > g and the following holds:
    • s ⊳/∼^n tj for all normal arguments t1,…,tm;
    • s >spop tj for all safe arguments tm+1,…,tm+n;
    • t contains at most one (recursive) function symbol h with f ∼ h.
  • 3.
    f ∈ Drec, t = g(t1,…,tk; tk+1,…,tk+l) where f ∼ g and the following holds:
    • s1,…,sk >spop tπ(1),…,tπ(k) for some permutation π on {1,…,k};
    • sk+1,…,sk+l ⩾spop tτ(k+1),…,tτ(k+l) for some permutation τ on {k+1,…,k+l}.

Here s ⩾spop t denotes that either s ∼s t or s >spop t holds. In the last clause we use >spop also for the extension of >spop to products: s1,…,sn ⩾spop t1,…,tn means si ⩾spop ti for all i = 1,…,n, and s1,…,sn >spop t1,…,tn indicates that additionally si0 >spop ti0 holds for at least one i0 ∈ {1,…,n}.

Throughout the following, we write s >spop(i) t if s >spop t follows from the ith clause of Definition 1. A similar notation is employed for the orders introduced subsequently.

We say that the TRS R is compatible with >spop if all rules are oriented from left to right: l>spopr for all rules lrR. As sPOP forms a restriction of the recursive path order, compatibility with sPOP implies termination [29].

Definition 2

We call the TRS R predicative recursive (of degree d) if R is compatible with an instance of sPOP and the maximal recursion depth rd(f) of fF is d.

Consider the orientation of a rule f(l1,…,ln) → r ∈ R. The case >spop(2) is intended to capture functions f defined by weak safe composition (WSC), compare Fig. 1. In particular, the use of ⊳/∼^n allows only the substitution of normal arguments of f at normal argument positions of g. The last restriction put on >spop(2) is used to prohibit multiple recursive calls. If one drops this restriction, the TRS consisting of the rules f(0;) → 0 and f(s(;x);) → c(; f(x;), f(x;)) is compatible with sPOP, but its runtime complexity can only be bounded exponentially. Finally, >spop(3) accounts for recursive calls; in combination with >spop(2) we capture safe recursion (SRN). The next theorem provides our second main result.

Theorem 1

Suppose R is a predicative recursive TRS of degree d. Then the derivation height of any basic term f(u;v) is bounded by a polynomial of degree rd(f) in the sum of the depths of normal arguments u. In particular, the runtime complexity function rcR is bounded by a polynomial of degree d.

As a consequence of Theorem 1 and the adequacy theorem (c.f. Proposition 1), any predicative recursive (and orthogonal) TRS R computes a function from FP. We remark that Theorem 1 remains valid for the standard notion of innermost rewriting [29] on constructor TRSs. Neither orthogonality nor our fixed call-by-value reduction strategy is essential, compare [9].

We continue with an informal account of Definition 1 in our running example, the admittedly technical proof is shortly postponed.

Example 2 Example 1 continued —

We show that the TRS Rarith depicted in Example 1 is predicative recursive. Recall that the precedence underlying Rarith is given by f>×>+>s0, and that Drec={×,+}. The degree of recursion of Rarith is thus two.

The case >spop1 is standard in recursive path orders and allows the treatment of projections as in rules 1 and 2. We have +(0;y)>spop1y using ysy and likewise ×(0,y;)>spop10 using 0s0. Observe that the rule

5: f(x,y;)+(x;×(y,y;)),

is oriented by >spop2 only: using f>× and twice f(x,y;)n/y, i.e., y is a normal argument of f(x,y;), we have f(x,y;)>spop2×(y,y;). Using that also f>+ and f(x,y;)n/x holds, another application of >spop2 orients rule 5.

Finally, consider the recursive cases of addition (rule 3) and multiplication (rule 4). These can be oriented by a combination of >spop2 and >spop3. We exemplify this on the rule

4: ×(s(;x),y;)+(y;×(x,y;)).

Employing ×>+, case >spop2 is applicable. Thus orientation of this rule simplifies to ×(s(;x),y;)n/y and ×(s(;x),y;)>spop×(x,y;). The former constraint is satisfied by definition. Since × is recursive, using >spop3 the latter constraint reduces to s(;x),y>spopx,y and the trivial constraint spop. Clearly s(;x),y>spopx,y holds as s(;x)>spop1x and ysy. Hence we are done.

Note that any other partitioning of argument positions of multiplication invalidates the orientation of rule 4. The sub-constraint ×(s(;x),y;)>spop×(x,y;) requires that at least the first argument position of times is normal, the sub-constraint ×(s(;x),y;)n/y enforces that also the second parameter is normal. The order thus determines that multiplication performs recursion on its first arguments, and that the second parameter should be considered normal since it is used as recursion parameter in addition. Reconsidering the orientation of rule 5, the use of n/ propagates that f takes only normal arguments.

By Theorem 1 we obtain that addition admits linear, and multiplication as well as f admits quadratic runtime complexity. Overall the runtime complexity of Rarith is quadratic.

The following example clarifies the need for data tiering.

Example 3 Example 2 continued —

Consider the extension of Rarith by the two rules

6: exp(0, y) → s(;0)
7: exp(s(;x), y) → ×(y, exp(x, y);),

that express exponentiation y^x in an exponential number of steps. The definition of exp is not predicative recursive, since the recursive result exp(x,y) is substituted as a recursion parameter of multiplication. For this reason the orientation with >spop fails.

The next example is negative, in the sense that the considered TRS admits polynomial runtime complexity, but fails to be compatible with sPOP.

Example 4 Example 3 continued —

Consider now the TRS Rarith where the rule 4 is replaced by the rule

4a: ×(s(;x), y;) → +(×(x, y;); y).

The resulting system admits polynomial runtime complexity. On the other hand, Theorem 1 is inapplicable since the system is not predicative recursive.

We emphasise that the bound provided in Theorem 1 is tight, in the sense that for any d ∈ N we can define a predicative recursive TRS Rd of degree d admitting runtime complexity in Ω(n^d).

Example 5

We define a family of TRSs Rd (d ∈ N) inductively as follows: R0 := {f0(x;) → a} and Rd+1 extends Rd by the rules

fd+1(x;) → gd+1(x, x;)    gd+1(s(;x), y;) → b(; fd(y;), gd+1(x, y;)).

Let d ∈ N. It is easy to see that Rd is predicative recursive (with underlying precedence fd > gd > fd−1 > gd−1 > ⋯ > f0 > a ∼ b). As only the gi (i = 1,…,d) are recursive, the recursion depth of Rd is d.

But also the runtime complexity of Rd lies in Ω(n^d): for d = 0 this is immediate. Otherwise, consider the term fd+1(s^n(;a);) (n ∈ N), which reduces to gd+1(s^n(;a), s^n(;a);) in one step. As the latter iterates fd(s^n(;a);) n times, the lower bound is established by inductive reasoning.
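The lower bound can also be checked by a small back-of-the-envelope computation (ours, not from the paper): counting the rewrite steps needed to normalise fd(s^n(;a);) gives the recurrence below, which grows like n^d.

```haskell
-- steps d n: number of innermost rewrite steps to normalise f_d(s^n(;a);) in R_d.
-- One step unfolds f_{d+1}, and each of the n unfoldings of g_{d+1} costs one
-- step plus a complete evaluation of f_d(s^n(;a);).
steps :: Int -> Int -> Int
steps 0 _ = 1                        -- f_0(x;) -> a
steps d n = 1 + n * (1 + steps (d - 1) n)

-- e.g. map (steps 2) [1..5] == [5,13,25,41,61], i.e. quadratic growth for d = 2.
```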

We now show that sPOP is correct, i.e. we prove Theorem 1. Suppose R is a predicative recursive TRS of degree d. Our proof makes use of a variety of ingredients. In Definition 3 we define a predicative interpretation S that flattens terms to sequences of terms, separating safe from normal arguments. In Definition 4 we introduce a family of orders (≻ℓ)ℓ∈N on sequences of terms. The definition of ≻ℓ (for fixed ℓ) does not explicitly mention predicative notions and is conceptually simpler than >spop. In Lemma 4 we show that the predicative interpretation S embeds rewrite steps into ≻ℓ, as pictured in Fig. 2. Consequently the derivation height of s is bounded by the length of ≻ℓ-descending sequences, which in turn can be bounded suitably whenever s is basic (cf. Lemma 7).

Fig. 2. Predicative embedding of →R into ≻ℓ.

Consider a step C[f(uσ;vσ)] →R C[rσ] = t. Due to the limitations imposed by >spop, it is not difficult to see that whenever a term occurring in such a reduction is not a value itself, at least all of its normal arguments are values. We capture this observation in the set Tb, defined as the least set such that (i) T(C) ⊆ Tb, and (ii) if f ∈ F, v1,…,vk ∈ T(C) and t1,…,tl ∈ Tb then f(v1,…,vk; t1,…,tl) ∈ Tb. This set is closed under rewriting.

Lemma 1

Suppose R is compatible with >spop. If sTb and sRt then tTb.

Proof

The lemma follows by a straightforward inductive argument on Definition 1.  □

Observe that Tb includes all basic terms. For the runtime complexity analysis of R, it thus suffices to consider reductions on Tb only.

3.1. Predicative interpretation of terms as sequences

Predicative interpretations separate safe from normal arguments. To this avail, we define the normalised signature Fn to contain all symbols from F, with the sole difference that the arity of defined symbols f with k normal arguments is k in Fn. A term t is normalised, if tT(Fn). Below we retain the separation into constructors, recursive and non-recursive symbols. As a consequence, the rank and recursion depth coincide with respect to both signatures, and also T(C)T(Fn). Terms Tb(Fn) are also called basic, these are obtained from Tb(F) by dropping safe arguments.

To formalise sequences of (normalised) terms, we use an auxiliary variadic function symbol ∘. Here variadic means that the arity of ∘ is finite but arbitrary. We always write [t1 ⋯ tn] for ∘(t1,…,tn), and if we write f(t1,…,tn) then f ≠ ∘. We use a,b,… to denote terms or sequences of terms. In contrast, s,t,u,v,…, possibly followed by subscripts, denote terms which are not sequences. Abusing set notation, we write t ∈ [t1 ⋯ tn] if t = ti for some i ∈ {1,…,n}. We lift term equivalence to sequences by disregarding the order of elements: [s1 ⋯ sn] ∼ [t1 ⋯ tn] if si ∼ tπ(i) for all i = 1,…,n and some permutation π on {1,…,n}. We denote by a ⌢ b the concatenation of sequences. To avoid notational overhead we overload concatenation to both terms and sequences: for a sequence a define lift(a) := a, and for a term t define lift(t) := [t]. We set a ⌢ b := [s1 ⋯ sm t1 ⋯ tn] where lift(a) = [s1 ⋯ sm] and lift(b) = [t1 ⋯ tn].

Definition 3

We define the predicative interpretation S, mapping terms tTb to sequences of normalised terms as follows:

S(t) := [] if t is a value, and otherwise
S(t) := [f(u1,…,uk)] ⌢ S(tk+1) ⌢ ⋯ ⌢ S(tk+l), where t = f(u1,…,uk; tk+1,…,tk+l).

Note that the predicative interpretation S(t) is a sequence of (normalised) basic terms for any term tTb. To get the reader prepared for the definition of the order on sequences as defined below, we exemplify Definition 3 on a predicative recursive TRS.
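Before turning to the example, the following functional reading of Definition 3 may help; it is a sketch under an assumed encoding in which each term carries its normal and safe arguments separately (PTerm, isValue and interpS are hypothetical names).

```haskell
data PTerm = PVar String
           | PFun String [PTerm] [PTerm]   -- symbol, normal arguments, safe arguments
  deriving (Eq, Show)

-- A value is a constructor term; constructors take all arguments as safe arguments.
isValue :: [String] -> PTerm -> Bool
isValue _  (PVar _)       = False
isValue cs (PFun f ns ss) = f `elem` cs && null ns && all (isValue cs) ss

-- S(t) = []                                                 if t is a value,
-- S(t) = [f(u1,...,uk)] ++ S(t_{k+1}) ++ ... ++ S(t_{k+l})  otherwise;
-- the normalised term f(u1,...,uk) keeps only the normal arguments.
interpS :: [String] -> PTerm -> [PTerm]
interpS cs t
  | isValue cs t          = []
interpS cs (PFun f ns ss) = PFun f ns [] : concatMap (interpS cs) ss
interpS _  t              = [t]   -- variables; not reached on terms from T_b
```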

Example 6

Consider following predicative recursive TRS Rf where

1: f(0;y)y2: f(s(x);y)g(x;f(x;y)).

Consider a substitution σ:VT(C). The embedding S(lσ)S(rσ) of root steps (lrRf) results in the following order constraints.

S(f(0;yσ))=[f(0)][]=S(yσ)by rule1,S(f(s(xσ);yσ))=[f(s(xσ))]g(xσ)f(xσ)=S(g(xσ;f(xσ;yσ)))by rule2.

Kindly observe that in the first line we employed S(yσ)=[] because is a value. In the second line we tacitly employed the overloading of concatenation:

S(g(xσ;f(xσ;yσ)))=[g(xσ)]S(f(xσ;yσ))=[g(xσ)][f(xσ)][]=g(xσ)f(xσ).

Consider now a rewrite step sRt below the root for sTb. As sTb, without loss of generality the rewrite step happens below a safe argument position. Hence

s=h(v;s1,,si,,sl)Rh(v;s1,,ti,,sl)=t

for some values v, terms s1,,sl and siRti. To embed such rewrite steps we have to prove

[h(v)]S(s1)S(si)S(sl)[h(v)]S(s1)S(ti)S(sl).

We emphasise that for a root step lσRrσ of a predicative recursive TRS R, the length of S(rσ) does not depend on σ, since images of σ are removed by the predicative interpretation. As a consequence, each step in an R-derivation on Tb increases the length of predicative interpretations by a constant (depending on R) only. Below, we bind this constant by the maximal size of a right-hand side in R.

3.2. Small polynomial path order on sequences

We arrive at the definition of the order on sequences. This order is used to orient images of the predicative interpretation S. The parameter N in controls the width of terms and sequences, and is crucial for the analysis of the length of -descending sequences carried out below.

Definition 4

Let ⩾ denote a precedence. For all 1 we define on terms and sequences of terms inductively such that:

  • 1.
    f(s1,…,sn) ≻ℓ g(t1,…,tm) if f ∈ D, f > g and the following conditions hold:
    • f(s1,…,sn) ⊳/∼ tj for all j = 1,…,m;
    • m ⩽ ℓ.
  • 2.
    f(s1,…,sn) ≻ℓ g(t1,…,tn) if f ∈ Drec, f ∼ g and for some permutation π on {1,…,n}:
    • s1,…,sn ⊳/∼ tπ(1),…,tπ(n).
  • 3.
    f(s1,…,sn) ≻ℓ [t1 ⋯ tm] if the following conditions hold:
    • f(s1,…,sn) ≻ℓ tj for all j = 1,…,m;
    • at most one element tj0 (j0 ∈ {1,…,m}) contains a symbol g with f ∼ g;
    • m ⩽ ℓ.
  • 4.
    [s1 ⋯ sn] ≻ℓ [t1 ⋯ tm] if there exist terms or sequences bi (i = 1,…,n) such that:
    • [t1 ⋯ tm] ∼ b1 ⌢ ⋯ ⌢ bn;
    • s1,…,sn ≻ℓ b1,…,bn.

We denote by a ≽ℓ b that either a ∼ b or a ≻ℓ b holds. We use ⊳/∼ and ≻ℓ also for their extension to products: s1,…,sn ⊳/∼ t1,…,tn if si ⊵/∼ ti for all i = 1,…,n, and si0 ⊳/∼ ti0 for at least one i0 ∈ {1,…,n}; likewise s1,…,sn ≻ℓ t1,…,tn if si ≽ℓ ti for all i = 1,…,n, and si0 ≻ℓ ti0 for at least one i0 ∈ {1,…,n}.

We point out that misses the case: f(s1,,sn)t if Inline graphic for some argument si. Since predicative interpretations remove values, the clause is not needed, compare the embedding of rule 1 given in Example 6. This case would invalidate the central Lemma 7 given below, which estimates the length of descending sequences. Observe that on constructor based left-hand sides, the order constraints imposed by 1 and 2 translate to the order constraints imposed by >spop2 and >spop3 on normal arguments. The clauses 3 and 4 extend the order from terms to sequences. Noteworthy the second clause in 3 reflects that we do not allow multiple recursive calls, compare >spop2 and the definition of the predicative interpretation. We exercise Definition 4 on the constraints obtained in Example 6.

Example 7 Example 6 continued —

We show that the order constraints drawn in Example 6 can be resolved for =2. Let σ:VT(C) be a substitution. Consider first the root step f(0;yσ)Ryσ due to rule 1. Exploiting the shape of σ, we have S(f(0;yσ))=[f(0)]4[]=S(yσ). For the root step f(s(xσ);yσ)Rg(xσ;f(xσ;yσ)) caused by rule 2 we have

1:s(xσ)/xσ2:f(s(xσ))/xσby 1,3:f(s(xσ))21g(xσ)iff>g,using 2,4:f(s(xσ))22f(xσ)by 1,5:f(s(xσ))23g(xσ)f(xσ)using 3 and 4,6:S(f(s(xσ);yσ))=[f(s(xσ))]24g(xσ)f(xσ)=g(xσ;f(xσ;yσ))using 5.

Note that g(xσ)f(xσ)=[g(xσ)f(xσ)] and thus =2 is needed in the proof step 5.

The next lemma collects frequently used properties of .

Lemma 2

For all ℓ ⩾ 1 we have:

  • ≻ℓ ⊆ ≻ℓ+1,

  • ∼ · ≻ℓ ⊆ ≻ℓ and ≻ℓ · ∼ ⊆ ≻ℓ, and

  • a ≻ℓ b implies a ⌢ c ≻ℓ b ⌢ c.

Proof

All but the third property follow directly from definition. Suppose ab holds, and let lift(c)=[r1rl]. We show acbc. First suppose a=f(s1,,sn). Then we conclude

ac=[f(s1,,sn)r1rl]4br1rl=bc

employing the assumption ab and riri for all i=1,,l. Otherwise a=[s1sn], hence a4b by assumption. Then bb1bn with Inline graphic for all i=1,,n, where at least one orientation is strict. From this and again using riri (i=1,,l) we conclude

ac=[s1snr1rl]4b1bnr1rl=bc.

 □

We emphasise that as a consequence of Lemma 2 we have that c1ac2c1bc2 holds whenever ab holds. The order constraints on sequences are defined so that sequences purely act as containers. More precise, every -descending sequence starting from [s1sn] can be seen as a combination of possibly interleaved, but otherwise independent -descending sequences starting from the elements si (i=1,,n).

3.3. Predicative embedding

We now close the diagram outlined in Fig. 2 on page 9, that is we prove the predicative embedding exemplified in Example 7 on the TRS Rf. As a preparatory step, we consider root steps lσRrσ first. The complete embedding is then established in Lemma 4.

Lemma 3

Consider a rewrite rule lrR. Let σ:VT(C) be a substitution. If l>spopr holds then S(lσ)|r|S(rσ).

Proof

Let l=f(l1,,lm;lm+1,,tm+n). We first show

l>spoprf(l1σ,,lmσ)|r|S(t)for alltS(rσ), (⁎)

by induction on |r|. The non-trivial case is when is not a value, otherwise S(rσ)=[]. Suppose thus r=g(r1,,rm;rm+1,,rm+n) where r is not a value. By definition

S(rσ)=[g(r1σ,,rmσ)]S(rm+1σ)S(rm+nσ).

First consider the element g(r1σ,,rmσ)S(rσ). We either have l>spop2r or l>spop3r by the assumption that r is not a value. In the case l>spop2r, we have f>g and ln/rj for all normal arguments rj (j=1,,m). The latter reveals that the instances rjσ are equivalent to proper subterms of the left-hand side f(l1σ,,lmσ). Using this and that trivially m|r| holds we conclude f(l1σ,,lmσ)|r|1g(r1σ,,rmσ). In the remaining case l>spop3r, we have m=m, fg where f,gDrec and moreover l1,,lm>spoprπ(1),,rπ(m) for some permutation π. By reasoning as above we see l1σ,,lmσ/rπ(1)σ,,rπ(m)σ and conclude f(l1σ,,lmσ)|r|2g(r1σ,,rmσ). Hence overall we obtain f(l1σ,,lmσ)|r|g(r1σ,,rmσ).

Now consider the remaining elements tS(rσ), where tg(r1σ,,rmσ). Then each t occurs in the interpretation of a safe argument of , say tS(rjσ) for some j{m+1,,m+n}. One verifies that, l>spoprj holds: if l>spop2r then by definition, otherwise l>spop3r holds and we obtain l>spop1rj. By induction hypothesis we have f(l1σ,,lmσ)|rj|t. As |rj||r| we hence obtain f(l1σ,,lmσ)|r|t for all tS(rjσ) and safe positions j{m+1,,m+n} of g. This concludes (⁎).

We return to the proof of the lemma. A standard induction gives that the length of S(rσ) is bounded by |r|, compare the remark after Example 6. Using that σ maps to values, a second induction on l>spopr gives that S(rσ) contains at most one (defined) function symbol g equivalent to f. Summing up, using (⁎) we conclude f(l1σ,,lmσ)|r|3S(rσ). Observe that by assumption the direct subterms of are values, and thus S(lσ)=[f(l1σ,,lmσ)] by definition. The lemma thus follows by one application of |r|4.  □

Lemma 4

Let R denote a predicative recursive TRS and let ℓ := max{|r| | l → r ∈ R}. If s ∈ Tb and s →R t then S(s) ≻ℓ S(t).

Proof

Let sTb and consider a rewrite step sRt. We prove the lemma by induction on the rewrite position. In the base case we consider a root step s=lσRrσ=t for some rule lrR. Since R is predicative recursive, l>spopr holds. By Lemma 3 we have S(lσ)|r|S(rσ). Since |r| the result follows.

For the inductive step, consider a rewrite step below the root. Since sTb this step is of the form

s=f(v;s1,,si,,sn)Rf(v;s1,,ti,,sn)=t,

where siRti for some i{1,,n}. Wlog. we assume t is not a value. Using the induction hypothesis S(si)S(ti) and Lemma 2 we conclude

S(sσ)=f(v)S(s1)S(si)S(sn)f(v)S(s1)S(ti)S(sn)=S(tσ).

 □

3.4. Binding the length of -descending sequences

The following function G relates a term, or sequence of terms, to the length of its longest -descending sequence.

Definition 5

For all ℓ ⩾ 1, we define G(a) := max{m | a ≻ℓ a1 ≻ℓ ⋯ ≻ℓ am}.

This function is well-defined, as is well-founded. The latter can be seen as forms a restriction of the multiset path order [35]. We remark that due to Lemma 2, G(a)=G(b) whenever ab. The following lemma confirms that sequences act as containers only.

Lemma 5

For all sequences [s1 ⋯ sn], G([s1 ⋯ sn]) = G(s1) + ⋯ + G(sn).

Proof

Let a=[s1sn] be a sequence and observe G(a1a2)G(a1)+G(a2). This is a consequence of Lemma 2. Hence, in particular we obtain: G(a)=G(s1sn)i=1nG(si).

To complete the proof, we proceed by induction on G(a). The base case G(a)=0 follows trivially. For the induction step, we show that ab implies G(b)<i=1nG(si). From this, we obtain G([s1sn])i=1nG(si), which together with the above observation yields G([s1sn])=i=1nG(si). Suppose ab. Then this is only possible due to 4. Hence b is equivalent to b1bn, where Inline graphic for all i=1,,n and si0bi0 for at least one i0{1,,n}. In particular, G(bi)G(si) and G(bi0)<G(si0). As we have G(bi)G(b)<G(a) for all i=1,,n, induction hypothesis is applicable to b and all bi (i{1,,n}). It follows that

G(b)=tbG(t)=i=1ntbiG(t)=i=1nG(bi)<i=1nG(si).

 □

We now approach Lemma 7, where we show that G(f(u1,,uk))c(2+m)rd(f) for some constant cN and m=i=1kdp(ui). The proof of Lemma 7 is slightly involved, and requires induction on the rank r of f and side induction on m. The constant c is defined in terms of c(r,d) for natural numbers r,dN:

c(r,d) := 1 if r = 1, and c(r,d) := c(r−1,d)·ℓ^(d+1) + 1 otherwise.

Below the argument r will be instantiated by the rank, and d by the depth of recursion of a function symbol f. The next lemma is a technical lemma to ease the presentation of the proof of Lemma 7. The assumptions correspond exactly to the main induction hypothesis (IH) and side induction hypothesis (SIH) of Lemma 7.

Lemma 6

Consider f(u1,,uk)g(v1,,vl) and suppose that

f>gG(g(v1,,vl))c(rk(g),rd(g))(2+i=1ldp(vi))rd(g), (IH)
fg,i=1ldp(vi)<i=1kdp(ui)G(g(v1,,vl))c(rk(g),rd(g))(2+i=1ldp(vi))rd(g). (SIH)

Then

f(u1,,uk)1g(v1,,vl)G(g(v1,,vl))c(rk(f)1,rd(f))rd(f)(2+i=1kdp(ui))rd(g), (†)
f(u1,,uk)2g(v1,,vl)G(g(v1,,vl))c(rk(f),rd(f))(1+i=1kdp(ui))rd(f). (‡)

Proof

First consider the case f(u1,,uk)1g(v1,,vl). Then f>g and so rk(f)>rk(g) and rd(f)rd(g) hold. From the order constraints on arguments we can derive i=1ldp(vi)li=1kdp(ui). Observe that the assumption gives also l. Summing up, simple arithmetical reasoning gives the implication (†) from (IH). Similar, when f(u1,,uk)2g(v1,,vl) holds we have rk(f)=rk(g) and rd(f)=rd(g). The order constraints on arguments give i=1ldp(vi)<i=1kdp(ui). From this, the implication (‡) follows directly from (SIH).  □

Lemma 7

For all f ∈ D, G(f(u1,…,uk)) ∈ O((dp(u1) + ⋯ + dp(uk))^rd(f)).

Proof

Let be fixed. To show the theorem, we show

f(u1,,uk)bG(b)<c(rk(f),rd(f))(2+i=1kdp(ui))rd(f).

In proof we employ induction on rk(f) and side induction on m:=i=1kdp(ui). Abbreviate r:=rk(f) and d:=rd(f). Assume that f(u1,,uk)b holds. We prove G(b)<c(r,d)(2+m)d, where we show only the more involved inductive case r>1. The base case r=1 follows by similar reasoning. We analyse two cases.

If b=[t1tl] is a sequence, then by assumption f(v1,,vk)3b. Thus l and f(v1,,vk)tj, i.e. either f(v1,,vk)1tj or f(v1,,vk)2tj holds for all j=1,,l. Due to the second condition imposed on 3, we even have f(v1,,vk)1tj for all but one jj0{1,,l}. Suppose first that f is recursive. Then f(v1,,vk)1tj implies d>rd(rt(tj))0. Using induction and side induction hypothesis to satisfy the assumptions of Lemma 6, we obtain

G(tj)c(r1,d)d(2+m)d1(for alljj0)G(tj0)max{c(r1,d)d(2+m)d1,c(r,d)(1+m)d}.

Here the second inequality is obtained by combining the conclusions of the two implications provided by Lemma 6. We conclude the case as follows.

G(b)=j=1lG(tj)by Lemma 5,c(r,d)(1+m)d+l(c(r1,d)d(2+m)d1)above consequences of Lemma 6,<c(r,d)(1+m)d+c(r,d)(2+m)d1usingland unfoldingc(r,d),c(r,d)(2+m)d.

Suppose now that f is not recursive. Then also f(v1,,vk)1tj0. Employing that f>rt(tj) implies drd(rt(tj)), using Lemma 5 and Lemma 6 we see

G(b)=j=1lG(tj)l(c(r1,d)d(2+m)d)<c(r,d)(2+m)d.

This finishes the cases when b is a sequence.

Finally, when b=g(t1,,tl) is a term we conclude directly by Lemma 6, using c(r1,d)d<c(r,d) similar to above.  □

Putting things together, we arrive at the proof of our first theorem.

Proof of Theorem 1

Let R denote a predicative recursive TRS. We prove the existence of a constant cN such that for all values u,v the derivation height of f(u;v) is bounded by cnrd(f), where n is the sum of the depths of normal arguments u.

Consider a derivation f(u;v) →R t1 →R ⋯ →R tn. Let i ∈ {0,…,n−1}. By Lemma 1 it follows that ti ∈ Tb, and consequently S(ti) ≻ℓ S(ti+1) due to Lemma 4. So in particular the length n is bounded by the length of ≻ℓ-descending sequences starting from S(f(u;v)) = [f(u)]. By Lemma 5, G([f(u)]) = G(f(u)). Thus Lemma 7 gives the constant c ∈ N as desired.  □

4. Parameter substitution

Bellantoni already observed that his definition of FP is closed under safe recursion on notation with parameter substitution. Here a function f is defined from functions g,h0,h1 and p by

f(ϵ, x; y) = g(x; y)
f(zi, x; y) = hi(z, x; y, f(z, x; p(z, x; y)))   (i = 0,1).   (SRNPS)

We introduce the small polynomial path order with parameter substitution (sPOPPS for short), where clause >spop(3) is extended to account for the schema (SRNPS). Theorem 1 remains valid under this extension.

Definition 6

Let s and t be terms such that s=f(s1,,sk;sk+1,,sk+l). Then s>spoppst if one of the following alternatives holds.

  • 1.

    sispoppst for some argument si of s (i{1,,k+l}).

  • 2.
    f ∈ D, t = g(t1,…,tm; tm+1,…,tm+n) where f > g and the following holds:
    • s ⊳/∼^n tj for all normal arguments t1,…,tm;
    • s >spopps tj for all safe arguments tm+1,…,tm+n;
    • t contains at most one (recursive) function symbol h with f ∼ h.
  • 3.
    f ∈ Drec, t = g(t1,…,tk; tk+1,…,tk+l) where f ∼ g and the following holds:
    • s1,…,sk >spopps tπ(1),…,tπ(k) for some permutation π on {1,…,k};
    • s >spopps tj for all safe arguments tj;
    • the arguments t1,…,tk+l contain no (recursive) symbol h with f ∼ h.

Here s ⩾spopps t denotes that either s ∼s t or s >spopps t holds. In the last clause, we use >spopps also for the product extension of >spopps (modulo permutation).

We adapt the notion of predicative recursive TRS of degree d to sPOPPS  in the obvious way. Note that >spop>spopps does not hold in general, due to the third constraint put onto >spopps3. Still, the above order extends the analytic strength of small polynomial path orders.

Lemma 8

If a TRS R is compatible with >spop then R is also compatible with >spopps using the same precedence and separation of argument positions.

Proof

Consider the orientation l>spopr of a rule lrR. To prove the lemma, we show that l>spoppsr holds by replacing every application of >spopi by >spoppsi. We prove this claim by induction on >spop. We consider the only non-trivial case where s>spop3t appears in the proof of l>spoppsr. Compare the case >spop3 with the new case >spopps3. Using the induction hypothesis, the order constraints on normal arguments are immediately satisfied. Now fix a safe argument tj of t. From s>spop3t we obtain a safe argument si of s with si>spoptj. Hence si>spoppstj holds by induction hypothesis. Thus s>spopps1tj holds as desired. Observe that the safe arguments si of s are proper subterms of the left-hand side l, hence the terms si contain no defined symbols. Since >spop collapses to the subterm relation on constructor terms, it follows that the safe argument tj of t are constructor terms too. From this we see that the final constraint of >spopps3 is satisfied.  □

Parameter substitution extends the analytic power of sPOP significantly. Noteworthy, sPOPPS  can deal with tail-recursive rewrite systems.

Example 8

The TRS Rrev consists of the three rules

rev(xs;) → revtl(xs; nil)
revtl(nil; ys) → ys
revtl(cons(;x,xs); ys) → revtl(xs; cons(;x,ys)),

which reverses lists formed from the constructors nil and cons. Define the separation of argument positions as indicated in the rules. The underlying precedence is given as rev>revtl>cons. Since revtl is the only recursive symbol, the degree of recursion of Rrev is one.

Notice that the orientation of the final rule with the induced sPOP reduces to the unsatisfiable constraint ys >spop cons(;x,ys). In contrast, orientation with the induced sPOPPS reduces to the constraint revtl(cons(;x,xs); ys) >spopps cons(;x,ys), which can be resolved by one application of >spopps(2) followed by three applications of >spopps(1).
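In functional form, Rrev is just the usual accumulator-based list reversal (a sketch; rev and revtl as below):

```haskell
-- Functional reading of R_rev: tail recursion with an accumulator.
rev :: [a] -> [a]
rev xs = revtl xs []
  where
    -- revtl(nil; ys) -> ys ;  revtl(cons(;x,xs); ys) -> revtl(xs; cons(;x,ys))
    revtl []       ys = ys
    revtl (x : zs) ys = revtl zs (x : ys)
```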

As a consequence of the next theorem, the runtime of Rrev is inferred to be linear.

Theorem 2

Let R be a predicative recursive TRS of degree d (with respect to Definition 6). Then the derivation height of any basic term f(u;v) is bounded by a polynomial of degree rd(f) in the sum of the depths of normal arguments u. In particular, the runtime complexity function rcR is bounded by a polynomial of degree d.

Proof

We observe that under the new definition all proofs, in particular the predicative embedding shown in Section 3.3, go through unchanged.  □

5. Predicative recursive rewrite systems compute all polytime functions

In this section we show that sPOP is complete for FP. Indeed, we can even show a stronger result: if f is a function from Bwsc that makes use of only d nestings of safe recursion on notation, then there exists a predicative recursive TRS Rf of degree d that computes the function f.

By definition Bwsc ⊆ B, where B is Bellantoni and Cook's predicative recursive characterisation of FP given in [1]. Concerning the converse inclusion, the following theorem states that the class Bwsc is large enough to capture all polytime computable functions.

Theorem 3

Every polynomial time computable function belongs to Bwsc.

One can show this fact by following the proof of Theorem 3.7 in [28], where the unary variant of Bwsc is defined and the inclusion corresponding to Theorem 3 is shown. The completeness of sPOP for the polytime computable functions is an immediate consequence of Theorem 3 and the following result.

Theorem 4

For any Bwsc-function f there exists a predicative recursive TRS R computing f and of degree d, where d equals the maximal number of nested applications of (SRN) in the definition of f.

Proof

Let f be a function coming from Bwsc. A witnessing TRS R is obtained via a term rewriting characterisation of the class Bwsc depicted in Fig. 1 on page 5. The term rewriting characterisation expresses the definition of Bwsc as an infinite TRS RBwsc. We define a one-to-one correspondence between functions from Bwsc and the set of function symbols for RBwsc as follows. Constructor symbols ϵ, s0 and s1 are used to denote binary words. The function symbols Si, P, Ijk,l, C and Ok,l correspond respectively to the initial functions Si, P, Ijk,l, C and Ok,l of Bwsc. The symbol SUB[h,i1,,in,g] is used to denote the function obtained by composing functions h and g according to the schema of (WSC). Finally, the function symbol SRN[g,h0,h1] corresponds to the function defined by safe recursion on notation from g, h0 and h1 in accordance to the schema (SRN). With this correspondence, RBwsc is obtained by orienting the equations in Fig. 1 from left to right.

By induction according to the definition of f in Bwsc we show the existence of a TRS Rf and a precedence f such that:

  • 1.

    Rf is a finite restriction of RBwsc,

  • 2.

    Rf contains the rule(s) defining the function symbol f corresponding to f,

  • 3.

    Rf is compatible with >spop induced by f,

  • 4.

    f is maximal in the precedence f underlying Rf, and

  • 5.

    the depth of recursion rd(f) equals the maximal number of nested applications of (SRN) in the definition of f in Bwsc.

It can be seen from conditions (1), (3) and (5) that the theorem is witnessed by Rf. To exemplify the construction we consider the step case that f is defined from some functions g,h0,h1Bwsc by the schema (SRN). By induction hypothesis we can find witnessing TRSs Rg,Rh0,Rh1 with underlying precedences g,h0,h1 respectively for g,h0,h1. Extend the set of function symbols by a new recursive symbol f:=SRN[g,h0,h1]. Let Rf be the TRS consisting of Rg, Rh0, Rh1 and the following three rules:

f(ϵ, x; y) → g(x; y)
f(si(;z), x; y) → hi(z, x; y, f(z, x; y))   (i = 0,1).

It is not difficult to see that the precedence f of Rf extends the precedences g, h0 and h1 by ff and f>g for g{g,h0,h1}.

Let >spop be the sPOP induced by f. Then it is easy to check that Rf enjoys conditions (1) and (2). In order to show condition (3), it suffices to orient the three new rules by >spop. For the first rule, f(ϵ,x;y)>spop2g(x;y) holds by the definition of f. For the remaining two rules we only orient the case i=0. Since f is a recursive symbol and s0(;z)>spop1z holds, f(s0(;z),x;y)>spop3f(z,x;y) holds. This together with the definition of the precedence f allows us to conclude f(s0(;z),x;y)>spop2h0(z,x;y,f(z,x;y)).

Consider condition (4). For each function g{g,h0,h1} from Bwsc, the corresponding function symbol g is maximal in the precedence g by induction hypothesis for g. Hence by the definition of f, f is maximal in f.

It remains to show condition (5). Notice that rd(f) = 1 + max{rd(g), rd(h0), rd(h1)}, since f is a recursive symbol. Without loss of generality let us suppose rd(g) = max{rd(g), rd(h0), rd(h1)}. Then by the induction hypothesis for g, rd(g) equals the maximal number of nested applications of (SRN) in the definition of g in Bwsc. Hence rd(f) = 1 + rd(g) equals the corresponding number for f in Bwsc.  □

We obtain that predicative recursive TRSs give a sound and complete characterisation of the polytime computable functions.

Theorem 5

The following classes of functions are equivalent:

  • 1.

    The class of functions computed by predicative recursive TRSs.

  • 2.

    The class of functions computed by predicative recursive TRSs allowing parameter substitution.

  • 3.

    The class FP of functions computable in polynomial time.

Proof

Let PR1 and PR2 refer to the classes defined in clauses (1) and (2) respectively. We have

PR1 ⊆ PR2 ⊆ FP ⊆ Bwsc ⊆ PR1, where the four inclusions hold by definition, by Theorem 2, by Theorem 3 and by Theorem 4, respectively.

For the second inclusion we tacitly employed the adequacy theorem.  □

We remark that from our standing restriction on TRSs, orthogonality is essentially used to ensure that semantics of TRSs are well-defined. Orthogonality could be replaced by the less restrictive, although undecidable, notion of confluence.

As a corollary to Theorem 5 we obtain that the class FP, viz Bwsc, is closed under parameter substitution.

Corollary 1

For any functions g,h0,h1,pBwsc, there exists a unique polytime computable function f such that f(ϵ,x;y)=g(x;y) and f(zi,x;y)=hi(z,x;y,f(z,x;p(z,x;y))) for each i=0,1.

6. Predicative recursion precisely captures register machine computations

Exploiting the fine-grained control given by the degree of recursion, we now provide an order-theoretic characterisation of DTIME(nd) via predicative tail-recursive TRSs.

Definition 7

A TRS R is tail-recursive if for every rule f(v) → r ∈ R, whenever a symbol g with g ∼ f occurs in r, it occurs at the root position of r. The TRS R is predicative tail-recursive (of degree d) if it is tail-recursive and predicative recursive (of degree d) with respect to Definition 6.

For instance, the TRS Rrev from Example 8 is a predicative tail-recursive TRS. The restriction to tail-recursion is unarguably severe. Still, predicative tail-recursive TRSs of degree d can compute polynomials cnd+e (in unary notation) for all c,eN.

Example 9

Let p(n) := c·n^d + e denote a polynomial with constants c, d, e ∈ ℕ. The TRS R_p is given by the following rules.

p_0(x, y; z) → s^c(;z)
p_r(0, y; z) → z                                  for r = 1, …, d,
p_r(s(;x), y; z) → p_r(x, y; p_{r−1}(y, y; z))    for r = 1, …, d,
p(x;) → p_d(x, x; s^e(;0))

This TRS is tail-recursive; moreover it is predicative recursive with recursive symbols p_1, …, p_d and precedence p > p_d > ⋯ > p_1 > p_0 > s, 0. In total, R_p is thus predicative tail-recursive, of degree d.

Let the value s^n(;0), built from the constructors s and 0, denote the number n ∈ ℕ. One verifies that for u, v, w ∈ ℕ, the term p_r(s^u(;0), s^v(;0); s^w(;0)) reduces to the value denoting c·u·v^{r−1} + w, for r = 1, …, d. Thus p(s^n(;0)) reduces to the value denoting c·n·n^{d−1} + e = c·n^d + e for all n ∈ ℕ.
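The computation described in Example 9 can be replayed as a small functional program; the sketch below uses ordinary integers in place of unary values, and the constants c, d, e as well as the names pR and p are illustrative.

```haskell
-- A sketch of R_p on Ints; c, d, e are fixed illustrative constants.
c, d, e :: Int
c = 2; d = 3; e = 5

-- pR r x y z mirrors p_r(x, y; z): it adds c * x * y^(r-1) to the accumulator z.
pR :: Int -> Int -> Int -> Int -> Int
pR 0 _ _ z = z + c                               -- p_0(x, y; z)      -> s^c(;z)
pR _ 0 _ z = z                                   -- p_r(0, y; z)      -> z
pR r x y z = pR r (x - 1) y (pR (r - 1) y y z)   -- p_r(s(;x), y; z)  -> p_r(x, y; p_{r-1}(y, y; z))

p :: Int -> Int
p x = pR d x x e                                 -- p(x;) -> p_d(x, x; s^e(;0))

-- e.g. p 4 == c * 4^d + e == 2*64 + 5 == 133
```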

6.1. Predicative tail-recursive TRSs of degree d are complete for DTIME(n^d)

Fix a register machine M that computes a function f: W(A)^k → W(A) in time O(n^d). We show that this function is computable by a predicative tail-recursive TRS of degree d. Let C_A denote the set of constructors that contains a symbol ϵ, and for each a ∈ A a unary symbol a. Then the word w = a_1 ⋯ a_l ∈ W(A) can be represented as the value a_1(⋯(a_l(ϵ))⋯) over C_A. Having this correspondence in mind, we confuse words with such values below. Furthermore, we suppose for each instruction label j = 1, …, l+1 of M a designated constant j used to denote this label. The following lemma shows that the one-step transition relation of M is expressible by a predicative TRS R_M^0 of degree 0.

Lemma 9

Let M be an RM with m registers. There exists a predicative tail-recursive TRS R_M^0 of degree 0 defining the symbols M_0, M_1, …, M_m, such that

M_0(; j, u_1, …, u_m) →*_{R_M^0} j′    and    M_i(; j, u_1, …, u_m) →*_{R_M^0} v_i    (i = 1, …, m)

iff ⟨j, u_1, …, u_m⟩ →_M ⟨j′, v_1, …, v_m⟩.

Proof

Suppose ⟨j, u_1, …, u_m⟩ →_M ⟨j′, v_1, …, v_m⟩. For the definition of M_0, note that j′ ≠ j+1 only if the jth instruction in the control of M is a jump instruction. In this case, j′ can be determined by the left-most character of one of the values u_i (i ∈ {1, …, m}). And so j′ can be computed in one step using pattern matching on the inputs j, u_1, …, u_m only. Similarly, for the definition of M_i (i = 1, …, m), the word v_i is either a(u_i) for some a ∈ A, ϵ, one of u_1, …, u_m, or the direct subterm of u_i. Again the precise shape can be determined purely by pattern matching on the inputs j, u_1, …, u_m.  □
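The pattern-matching argument of Lemma 9 can be illustrated with a toy step function for a simple register machine; the instruction set below is an assumption made only for this sketch and is not the paper's exact RM model.

```haskell
-- A toy register machine step: each transition is pure pattern matching on
-- the label and the head letters of the registers. Names are illustrative.
data Instr = Add Char Int            -- prepend a letter to register i, goto next label
           | Del Int                 -- drop the first letter of register i, goto next label
           | Jmp Int Char Int        -- if register i starts with the given letter, goto label
           | Halt

type Config = (Int, [String])        -- (label j, contents of registers 1..m)

step :: [Instr] -> Config -> Config
step prog (j, regs) = case prog !! (j - 1) of
    Add a i    -> (j + 1, update i (a :) regs)
    Del i      -> (j + 1, update i (drop 1) regs)
    Jmp i a j' -> case regs !! (i - 1) of
                    (b : _) | b == a -> (j', regs)   -- jump target determined by the head letter
                    _                -> (j + 1, regs)
    Halt       -> (j, regs)
  where
    update i f rs = [ if k == i then f r else r | (k, r) <- zip [1 ..] rs ]
```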

Lemma 10

Let f ∈ DTIME(n^d). Then f is computed by a predicative tail-recursive TRS R_f of degree d.

Proof

Suppose the function f: W(A)^k → W(A) is computed by an RM M in time p(n) ∈ O(n^d). Let c, e ∈ ℕ denote constants such that p(n) ≤ c·n^d + e for all n ∈ ℕ. The construction of R_f is an adaptation of the TRS R_p given in Example 9, using the TRS R_M^0 provided by Lemma 9 to simulate one step of the RM M. Let m ≥ k be the number of registers of M. For the tuple of function symbols M := M_0, …, M_m as provided by Lemma 9, let M^ℓ(;t) be the ℓ-fold parallel composition of M on terms t, given by M^0(;t) := t and

M^{ℓ+1}(;t) := M_0(; M^ℓ(;t)), …, M_m(; M^ℓ(;t)).

Observe that iterated application of Lemma 9 yields:

M_j^ℓ(; l, u_1, …, u_m) →*_{R_M^0} v_j   whenever   ⟨l, u_1, …, u_m⟩ →_M^ℓ ⟨l′, v_1, …, v_m⟩,   for all ℓ ≥ 1 and j = 1, …, m. (1)

For each r = 1, …, d and i = 0, …, m, let f_{r,i} be a fresh function symbol with 2k normal and m+1 safe argument positions. Let x := x_1, …, x_k, y := y_1, …, y_k and z := z_0, …, z_m denote pairwise distinct variables. The TRS R_f extends R_M^0 by the following rules.

f_{0,i}(x, y; z) → M_i^c(; z)
f_{r,i}(ϵ, …, ϵ, y; z) → z_i
f_{r,i}(ϵ, …, ϵ, a(x_j), x_{j+1}, …, x_k, y; z) → f_{r,i}(ϵ, …, ϵ, x_j, x_{j+1}, …, x_k, y; f_{r−1,0}(y, y; z), …, f_{r−1,m}(y, y; z))
f(x;) → f_{d,m}(x, x; M^e(; 1, x, ϵ, …, ϵ)).

Here the index r ranges over 1, …, d, the index i ranges over 0, …, m, the index j ranges over 1, …, k and a ∈ A. Let u and v be vectors of words of length k. Observe that

f_{r,i}(u, v; t) →*_{R_f} M_i^{c·|u|·|v|^{r−1}}(; t)    for every tuple t of m+1 values.

This derivation can be shown by induction on r and |u|, in correspondence to Example 9. For words w = w_1, …, w_k, this thus yields

f(w;) →*_{R_f} M_m^{c·|w|·|w|^{d−1}}(; M^e(; 1, w, ϵ, …, ϵ)) = M_m^{c·|w|^d + e}(; 1, w, ϵ, …, ϵ). (2)

Putting the derivations (1) and (2) together, and using that the RM M runs in time p(n) ≤ c·|w|^d + e on input w_1, …, w_k, we conclude that the term f(w_1, …, w_k;) reduces to the value representing f(w_1, …, w_k), i.e. R_f computes f.

Observe that the precedence ⩾ of R_f on defined symbols is given by

f > f_{d,0}, …, f_{d,m} > ⋯ > f_{0,0}, …, f_{0,m} > M_0, …, M_m,

where only the symbols f_{r,i} for r > 0 are recursive in R_f. In particular, the recursion depth of R_f is thus d. It is also not difficult to see that R_f is predicative recursive. As R_f is tail-recursive, the lemma follows.  □
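Assuming the Instr, Config and step definitions from the sketch following Lemma 9, the overall simulation underlying Lemma 10 amounts to iterating the transition function p(n) = c·n^d + e times and reading off the output register; runPoly and the parameter m (the number of registers) are illustrative names.

```haskell
-- Continuing the earlier sketch: run the machine for c*n^d + e transitions,
-- where n is the total input size, and read off the last register.
runPoly :: [Instr] -> Int -> Int -> Int -> Int -> [String] -> String
runPoly prog m c d e inputs = last regs
  where
    n         = sum (map length inputs)                          -- size of the input
    start     = (1, inputs ++ replicate (m - length inputs) "")  -- label 1, remaining registers empty
    (_, regs) = iterate (step prog) start !! (c * n ^ d + e)     -- p(n) transitions suffice
```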

6.2. Predicative tail-recursive TRSs of degree d are sound for DTIME(n^d)

We now show the converse of Lemma 10. Fix a predicative tail-recursive TRS R of degree d. Call a function symbol monadic if its arity is at most one. Suppose all constructors of R are monadic and consider a defined symbol f in R. We show that the function f computed by R can be implemented on an RM M_f operating in time O(n^d). The restriction to monadic constructors allows us to identify values of R with words over the alphabet A_C, which contains a distinct letter a_i for every constructor a_i ∈ C. We use the word c_1 ⋯ c_l to denote the value c_1(c_2(⋯ c_{l−1}(c_l) ⋯)). Having this correspondence in mind, we again confuse words with values.

To ease presentation, we first consider the sub-case where R is simple. Here a rule f(u; v) → r is called simple if r is a constructor term, or r = g(w; h_1(u; v), …, h_k(u; v)) where g is either a defined or a constructor symbol and h_1, …, h_k ∈ D. Furthermore R is called simple if all its rules are simple.

Lemma 11

If R is simple, then f ∈ DTIME(n^{rd(f)}) for every defined symbol f from R.

Proof

For each defined symbol f in R with k normal and l safe arguments, we define a corresponding RM M_f with input registers x^f = x_1^f, …, x_{k+l}^f and output register z^f. On input u = u_1, …, u_k and v = v_1, …, v_l, the RM M_f runs in time O(|u|^{rd(f)}). To simplify the presentation, we first suppose that the precedence of R is strict on defined symbols, i.e. f ≈ g for f, g ∈ D implies f = g. The construction is by induction on the rank p of f; the bound is proven by induction on p and side induction on |u|. Suppose the input registers x^f hold the values u, v.

First observe that M_f is able to determine in constant time (depending only on R) the (unique) rewrite rule applicable to f(u; v). Since there are only a constant number of rules in R, it suffices to realise that the time required for pattern matching depends only on R. To this end, suppose we want to match f(u; v) against the left-hand side of a rule f(l_n; l_s) → r ∈ R. Due to linearity, M_f can match the arguments u, v against l_n, l_s individually. For this, the RM M_f just has to copy sequentially each argument w ∈ u, v to a temporary register; w can then be matched against the corresponding argument l_i ∈ l_n, l_s using a constant number of jump and delete instructions.

Once the applicable rewrite rule has been identified, the RM M_f can proceed according to its right-hand side as follows. If f(u; v) rewrites in one step to a value, say w, then w = C[xσ] for some constructor context C and substitution σ: V → T(C). Moreover, some input register x_i ∈ x^f holds a word C′[xσ]. Notice that the contexts C and C′ depend only on the applied rewrite rule. Hence M_f can provide the result w in register z^f in constant time. Thus suppose f(u; v) does not rewrite to a value in one step. Since R is simple,

f(u; v) →_R g(w_1, …, w_m; h_1(u; v), …, h_n(u; v))    where h_1, …, h_n ∈ D.

As R is predicative recursive, f > h_j holds for all j = 1, …, n. Furthermore, either f > g or f = g holds by our assumption that ⩾ is strict on defined symbols. In both cases, the order constraints on normal arguments give that each w_i (i = 1, …, m) is a subterm of some normal argument in u, i.e. some input register holds a superterm of w_i. The RM M_f can thus prepare the arguments w_i in dedicated registers x_i^g, for all i = 1, …, m, in constant time. By induction hypothesis, there exist RMs M_{h_j} (j = 1, …, n) that, on input registers x^{h_j} initialised by u, v, compute the value h_j(u; v) in time O(|u|^{rd(h_j)}). The RM M_f can use these machines as sub-procedures (cf. [34]) in order to compute h_j(u; v) (j = 1, …, n). Overall this requires at most O(|u|^{d′}) steps, where d′ := max{rd(h_1), …, rd(h_n)} ≤ rd(f) is the maximal recursion depth of the defined symbols h_j. The interesting case is now when g ∈ D. We analyse the cases f > g and f = g independently.

If f > g then, as before, we can use a machine M_g given by the induction hypothesis that computes g(w; h_1(u; v), …, h_n(u; v)) = f(u; v), where w := w_1, …, w_m, in time O(|w|^{rd(g)}) from the already initialised input registers x^g. As a consequence of the order constraints on R we see |w_i| ≤ max{|u_j| | u_j ∈ u} for all i = 1, …, m. Thus |w| ≤ m·|u|, and hence overall the procedure takes time O(|u|^{d′}) + O(|w|^{rd(g)}) ⊆ O(|u|^{rd(f)}). For the inclusion we employ d′ ≤ rd(f) and rd(g) ≤ rd(f), as given by the assumptions.

Otherwise f = g, hence f is recursive. Recall that >_spopps collapses to the subterm relation (modulo equivalence) on values. From the order constraint u >_spopps w on normal arguments it is thus not difficult to derive |w| < |u|. Recall that d′ = max{rd(h_1), …, rd(h_n)} < rd(f) since f is recursive. Thus it follows that |u|^{d′} + |w|^{rd(f)} ≤ |u|^{rd(f)} + 1. Using the side induction hypothesis we conclude that M_f operates in time O(|u|^{d′}) + O(|w|^{rd(f)}) = O(|u|^{rd(f)}) overall. This concludes the final case.

To lift the assumption on the precedence, suppose {f_1, …, f_ℓ} is the set of all function symbols equivalent to f ∈ D, i.e., f_1, …, f_ℓ are defined by mutual recursion. Since this class is finite, one can store the index i (for i = 1, …, ℓ) in a dedicated register of M_f, say r. Although more tedious, it is not difficult to see that the above construction can then be altered so that M_f computes f_i(u; v) on input u, v whenever register r holds i.  □
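The compilation in the proof above exploits tail-recursion in an essential way: every recursive call is the last action, so it can be executed as a loop over a fixed set of registers rather than via a call stack. A minimal sketch in the style of the reversal system R_rev (all names illustrative):

```haskell
-- Tail-recursion as a loop: one "register" for the remaining input and one
-- for the accumulator; no call stack is ever needed.
revLoop :: String -> String
revLoop input = go input ""
  where
    go []       acc = acc                -- empty word: return the accumulator
    go (x : xs) acc = go xs (x : acc)    -- move one letter into the safe accumulator
```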

We now remove the restriction that R is simple. For that we define the relation ⇝ on TRSs as follows. Let h_1, …, h_k be fresh symbols not appearing in R. Then

R ∪ {f(u_f; v_f) → g(u_g; t_1, …, t_k)} ⇝ R ∪ {f(u_f; v_f) → g(u_g; h_1(u_f; v_f), …, h_k(u_f; v_f))} ∪ {h_i(u_f; v_f) → t_i | i = 1, …, k},

provided the transformed rule f(u_f; v_f) → g(u_g; t_1, …, t_k) is not already simple. The relation ⇝ enjoys the following properties.

Lemma 12

  • 1.

    The relation ⇝ is well-founded.

  • 2.

    If R ⇝ S then →_R ⊆ →_S^+.

  • 3.

    Let R be a predicative tail-recursive TRS of degree d that uses only monadic constructors. If R ⇝ S then S enjoys the same properties.

Proof

Let ‖R‖ := Σ_{r ∈ R′} |r|, where R′ = {r | l → r ∈ R is not a simple rule}. Then an infinite chain R_1 ⇝ R_2 ⇝ ⋯ translates into an infinite descent ‖R_1‖ > ‖R_2‖ > ⋯. Hence property 1 follows.

Suppose now R ⇝ S. For property 2, consider a rewrite step C[f(u_fσ; v_fσ)] →_R C[g(u_gσ; t_1σ, …, t_kσ)] using a transformed rule f(u_f; v_f) → g(u_g; t_1, …, t_k) ∈ R. Then

C[f(u_fσ; v_fσ)] →_S C[g(u_gσ; h_1(u_fσ; v_fσ), …, h_k(u_fσ; v_fσ))] →_S^k C[g(u_gσ; t_1σ, …, t_kσ)]

simulates the considered step. So clearly →_R ⊆ →_S^+ follows.

Finally consider property 3, and consider a TRS S such that R ⇝ S. Let f(u_f; v_f) → g(u_g; t_1, …, t_k) denote the rule which is replaced by

f(u_f; v_f) → g(u_g; h_1(u_f; v_f), …, h_k(u_f; v_f))    and    h_i(u_f; v_f) → t_i   (i = 1, …, k).

Let ⩾ denote the precedence underlying R, and ⊒ the precedence underlying the simplified TRS S. Notice that ⊒ is an extension of ⩾ which collapses to ⩾ on the signature F of R. As the freshly introduced symbols h_i are not recursive, the recursion depth of every symbol h ∈ F is preserved by the transformation.

It is obvious that when R is tail-recursive, so is S. It thus remains to verify that S is oriented by the order ⊐_spopps induced by ⊒. Since ⩾ ⊆ ⊒, and as a consequence >_spopps ⊆ ⊐_spopps, it suffices to show that the orientation f(u_f; v_f) >_spopps g(u_g; t_1, …, t_k) of the replaced rule implies

f(u_f; v_f) ⊐_spopps g(u_g; h_1(u_f; v_f), …, h_k(u_f; v_f)) (3)
h_i(u_f; v_f) ⊐_spopps t_i   (i = 1, …, k). (4)

We perform case analysis on the assumption.

Suppose first that f(u_f; v_f) >_spopps^⟨1⟩ g(u_g; t_1, …, t_k) holds. Note that by the shape of left-hand sides in R and the definition of the precedence, g is a constructor in the considered case. In particular g admits only safe argument positions. Thus f(u_f; v_f) ⊐_spopps^⟨2⟩ g(u_g; h_1(u_f; v_f), …, h_k(u_f; v_f)) holds, using f(u_f; v_f) ⊐_spopps^⟨2⟩ h_i(u_f; v_f) (i = 1, …, k). This concludes (3). The assumption also gives that each t_i (i = 1, …, k) is a subterm of an argument of f(u_f; v_f); thus h_i(u_f; v_f) ⊐_spopps^⟨1⟩ t_i holds and we conclude (4).

Finally suppose that f(u_f; v_f) >_spopps g(u_g; t_1, …, t_k) follows either by >_spopps^⟨2⟩ or by >_spopps^⟨3⟩. Using the order constraint f(u_f; v_f) ⊐_spopps h_i(u_f; v_f) for all i = 1, …, k, we see that (3) follows by ⊐_spopps^⟨2⟩ or ⊐_spopps^⟨3⟩ respectively. For Eq. (4), observe that since R is tail-recursive, the assumption gives f(u_f; v_f) >_spopps t_i (i = 1, …, k) using only applications of >_spopps^⟨1⟩ and >_spopps^⟨2⟩. Repeating these proofs, but employing h_i ⊐ g instead of f > g, yields a proof of (4).  □

Lemma 13

Let R be a predicative tail-recursive TRS of degree d, and suppose all constructors are monadic. Then f ∈ DTIME(n^{rd(f)}) for every defined symbol f from R.

Proof

Let S be a ⇝-normal form of the analysed TRS R. Then S is simple, as otherwise it would be ⇝-reducible. Using the assumptions on R, Lemma 12 yields that S satisfies the preconditions of Lemma 11; moreover, it shows that S computes all functions computed by R. We conclude by Lemma 11.  □
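For illustration, one ⇝-step can be sketched as a small program transformation on rules; the term representation (with explicit normal and safe argument lists) and all names below are ours, and the proviso that the rule is not already simple is elided.

```haskell
-- One ⇝-step: abstract the safe right-hand-side arguments t_1,...,t_k into
-- fresh symbols h_1,...,h_k applied to the left-hand-side arguments.
data Term = Var String | App String [Term] [Term]   -- App f normals safes
  deriving Show
data Rule = Rule Term Term
  deriving Show

flattenStep :: Int -> Rule -> [Rule]
flattenStep fresh (Rule lhs@(App _ un vs) (App g ug ts)) =
    Rule lhs (App g ug calls) : zipWith Rule calls ts
  where
    -- h_i applied to exactly the left-hand-side arguments u_f; v_f
    calls = [ App ("h" ++ show (fresh + i)) un vs | i <- [1 .. length ts] ]
flattenStep _ rule = [rule]                          -- nothing to do otherwise
```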

6.3. Predicative tail-recursive TRSs of degree d characterise DTIME(nd)

By Lemma 10 and Lemma 13, we obtain the following correspondence.

Theorem 6

For each d ∈ ℕ, the following classes of functions are equivalent:

  • 1.

    The class of functions computed by predicative tail-recursive TRSs of degree d, using only monadic constructors.

  • 2.

    The class DTIME(n^d) of functions computed by register machines operating in time O(n^d).

This theorem is closely connected to the recursion-theoretic characterisation of the polytime computable functions provided by Leivant [12], and to the one of Marion [13]. Leivant uses ramified recurrence schemes to impose data tiering on functions defined by recursive means. Restricted to word algebras and two tiers, a function f in Leivant's class belongs to DTIME(n^d), where d corresponds to the number of nested recursive definitions in f. Vice versa, any function in DTIME(n^d) is expressible in Leivant's class using two tiers and at most d nested recursive definitions. Hence there is a precise correspondence between the functions f defined in Leivant's class based on d nested recursive definitions, and the functions definable by predicative recursive TRSs of degree d. Syntactically, the restriction to two tiers in Leivant's class results in a composition scheme conceptually similar to weak safe composition: substitution is only allowed on arguments not used for recursion.

Our result, as well as Leivant's characterisation, relies on recursion schemes that go beyond primitive recursion with data tiering. Leivant allows recursive definitions by simultaneous recursion. We note that in our context, simultaneous recursion cannot be permitted: in general, such an extension would invalidate Theorem 1 and Theorem 2 respectively. Instead, we resort to parameter substitution, which is essential for our completeness result. Still, simultaneous recursion can be reduced to primitive recursion, preserving the data-tiering principle underlying [12]. However, this program transformation relies on a form of tupling, and does not preserve the number of nestings of recursive definitions. In our context, parameter substitution can be eliminated from recursive definitions in a similar spirit, at the expense of the depth of recursion.

Resorting to strict ramified recurrence schemes, Marion [13] provides a fine-grained characterisation of DTIME(n^d) in the spirit of Leivant's characterisation and our result above. The underlying strict ramification principle requires that each recursive definition increases the tier of an input. As a consequence, the exponent d is reflected in the maximal level of an input tier. Crucial here, again, is the restriction to a composition scheme akin to our weak form of composition.

In Marion's characterisation, functions can return multiple values. As a consequence, the simulation of register machine computations requires neither simultaneous recursion nor similar concepts. It is not difficult to show that a modification of our computational model which accounts for multi-valued functions allows the completeness result given in Lemma 10 even if we disallow parameter substitution. We feel however that such a modification, tailored specifically to register machines, introduces a rather ad-hoc flavour to our formulation of computation by TRSs.

7. Examples and experimental evaluation

We briefly contrast the orders sPOP and sPOPPS with their predecessor POP [9], Marion's LMPO [6], as well as the interpretation methods found in state-of-the-art complexity provers. Furthermore, we present experimental results.

Lightweight multiset path order and polynomial path orders

The order sPOP forms a restriction of both POP and LMPO, whereas the latter two orders are incomparable in general. In contrast to the family of polynomial path orders, LMPO allows multiple recursive calls in right-hand sides. As the next example clarifies, extending our orders in this direction would invalidate the corresponding main results (Theorem 1 and Theorem 2 respectively).

Example 10

The TRS Rbin is given by the following rules:

bin(x, 0) → s(0)
bin(0, s(y)) → 0
bin(s(x), s(y)) → +(bin(x, s(y)), bin(s(x), y)).

For a precedence ⩾ that fulfills bin > s and bin > + we obtain that R_bin is compatible with LMPO. This TRS can however be handled neither by sPOP nor by sPOPPS. It is straightforward to verify that the family of terms bin(s^n(0), s^m(0)) admits derivations whose length grows exponentially in n. Still the underlying function can be proven polytime computable, essentially relying on memoisation techniques [6].
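To see why the computed function is nonetheless polytime, a memoised evaluation in the spirit of [6] can be sketched as follows: bin mirrors the rules of R_bin on ordinary integers, and binMemo is an illustrative memoised variant that performs only polynomially many distinct calls.

```haskell
import qualified Data.Map as M

bin :: Integer -> Integer -> Integer          -- naive: mirrors R_bin, exponentially many calls
bin _ 0 = 1
bin 0 _ = 0
bin x y = bin (x - 1) y + bin x (y - 1)

binMemo :: Integer -> Integer -> Integer      -- memoised: at most x*y distinct sub-calls
binMemo n m = fst (go n m M.empty)
  where
    go _ 0 tbl = (1, tbl)
    go 0 _ tbl = (0, tbl)
    go x y tbl = case M.lookup (x, y) tbl of
      Just v  -> (v, tbl)
      Nothing ->
        let (a, tbl1) = go (x - 1) y tbl
            (b, tbl2) = go x (y - 1) tbl1
            v         = a + b
        in (v, M.insert (x, y) v tbl2)
```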

On the other hand, POP integrates a multiset status. In contrast, both LMPO and sPOP  are restricted to product status.

Example 11

Consider the following one-ruled TRS R_levy originally stemming from Jean-Jacques Lévy:4

f(g(;x), y; y) → g(; f(x, x; y)).

Polynomial runtime complexity of this system can be shown by POP. The system is compatible neither with an instance of LMPO nor with an instance of sPOP, because the product of arguments to f cannot be ordered.

However, the system becomes orientable with an instance of sPOPPS if we also make the second argument of f safe. Observe that f(g(;x); y, y) >_spopps^⟨3⟩ f(x; x, y) holds, using g(;x) >_spopps x, f(g(;x); y, y) >_spopps^⟨1⟩ x as well as f(g(;x); y, y) >_spopps^⟨1⟩ y. From this, one application of >_spopps^⟨2⟩ orients the rule. Since f is the only recursive symbol, Theorem 2 shows that the runtime complexity of R_levy is at most linear.

Even though sPOP forms a restriction of POP and LMPO, its extension by parameter substitution is incomparable to LMPO and POP. Consider the following TRS.

Example 12

The TRS R_add consists of the following rules.5

+(0; y) → y
+(s(;x); y) → s(; +(x; y))
+(s(;x); y) → +(x; s(;y))

Due to the last rule, the TRS R_add is compatible with none of sPOP, POP and LMPO. The system is however compatible with the instance >_spopps of sPOPPS induced by the precedence underlying R_add and the separation of argument positions indicated in the rules. The degree of recursion of R_add is one, hence the runtime complexity of R_add is inferred to be linear by Theorem 2.

We remark that POP can be extended by parameter substitution [9]. Unless PSPACE = P, this extension does not, however, carry over to LMPO without sacrificing polytime computability. For instance, the natural extension of LMPO by parameter substitution can handle Example 36 from [11], which encodes the PSPACE-complete problem of quantifier elimination for quantified Boolean formulas.

Polynomial and matrix interpretations

Small polynomial path orders are in general incomparable to interpretation methods, notably matrix interpretations [37] and polynomial interpretations [38]. These are the most frequently used base techniques in complexity tools nowadays. A polynomial interpretation is an F-algebra [29] A with carrier ℕ, where the interpretation f_A: ℕ^k → ℕ of every k-ary function symbol f is a monotone polynomial. A TRS R is compatible with a polynomial interpretation, aka polynomially terminating, if for every rule l → r ∈ R and under any assignment, the interpretation of the left-hand side l in A is larger than the interpretation of the right-hand side r. We say that a polynomial interpretation A induces polynomial runtime complexity if the interpretation of every basic term s is bounded by a polynomial in the size of s. For such an interpretation A compatible with R, the runtime complexity of R is bounded by a polynomial. Additive polynomial interpretations [10], where all constructors c are interpreted by additive polynomials c_A(x_1, …, x_k) = δ + Σ_{i=1}^{k} x_i (δ ∈ ℕ), induce polynomial runtime complexity.
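As a small illustration (the TRS and the interpretation below are ours, not taken from the paper), consider the rules add(0, y) → y and add(s(x), y) → s(add(x, y)) together with the additive interpretation 0_A = 0, s_A(x) = x + 1 and add_A(x, y) = 2x + y + 1. Compatibility holds since, under every assignment, add_A(0_A, y) = y + 1 > y and add_A(s_A(x), y) = 2x + y + 3 > 2x + y + 2 = s_A(add_A(x, y)). Every basic term add(s^n(0), s^m(0)) is interpreted as 2n + m + 1, which is linear in its size, so this interpretation indeed induces polynomial runtime complexity.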

Not every polynomially terminating TRS is predicative recursive, even if only additive interpretations are employed. Vice versa, not every predicative recursive TRS is polynomially terminating via an interpretation that induces polynomial runtime complexity. This is clarified in the next two examples.

Example 13

The one-ruled TRS {f(c(x)) → f(d(x))} is polynomially terminating, using the interpretations c_A(x) = x + 1 and f_A(x) = d_A(x) = x, but it is not compatible with any of the above mentioned restrictions of recursive path orders.

Example 14

Consider the predicative tail-recursive TRS Rbtree, which consists of the following two rewrite rules:

f(0; y) → y
f(s(;x); y) → f(x; c(;y, y)).

Suppose this rewrite system is compatible with a polynomial interpretation A. Consider the reduction of the basic term s_n := f(s^n(;0); s(;0)) for n ∈ ℕ. This yields a binary tree v_n of height n, with leaves s(;0). Observe that by monotonicity, c_A(x, y) ≥ x + y holds. Note that orientation requires s_A(x) > x. As a consequence, the interpretation of the terms v_n grows exponentially in n. As by compatibility the interpretations of terms necessarily decrease during reduction, it follows that A does not induce polynomial runtime complexity.
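Spelled out: the interpretation of v_0 = s(;0) is at least 1, and the interpretation of v_{n+1} = c(;v_n, v_n) is at least twice that of v_n by c_A(x, y) ≥ x + y; hence the interpretation of v_n is at least 2^n. Since s_n reduces to v_n, compatibility forces the interpretation of s_n to be at least 2^n as well, whereas the size of the basic term s_n is only linear in n.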

A matrix interpretation A is similar to a polynomial interpretation, but the underlying carrier of the F-algebra is ℕ^d (d ≥ 1), and interpretation functions are of the shape f_A(x_1, …, x_k) = F_1·x_1 + ⋯ + F_k·x_k + f, where the F_i (i = 1, …, k) denote matrices of size d×d and f is a vector over ℕ. The notions of compatibility and induced polynomial complexity carry over naturally from polynomial interpretations. As for (additive) polynomial interpretations, it can be shown that matrix interpretations are incomparable to small polynomial path orders. This is clarified in the next example.

Example 15 Continued from Examples 2 and 13

Reconsider the predicative recursive TRS R_arith from Example 1. This system cannot be shown compatible with matrix interpretations. Intuitively this holds due to the linear shape of matrix interpretations: the interpretation of a basic term ×(x, y;) would have to be a non-linear expression in both x and y.

Vice versa, the (additive) polynomial interpretation given in Example 13 turns naturally into a matrix interpretation compatible with the one-ruled TRS depicted in Example 13. On the other hand, this TRS is not predicative recursive.

Our final example shows that even in cases where semantic methods apply, order-based techniques might deduce a tighter bound.

Example 16 Continued from Example 11

While the TRS R_levy given in Example 11 can be handled with semantic methods, polynomial interpretations can only verify a quadratic upper bound. To the contrary, sPOPPS verifies the (non-optimal) linear bound.

Experimental assessment

The small polynomial path order sPOP gives rise to a new, fully automatic, syntactic method for polynomial runtime complexity analysis. We have implemented this technique in our complexity tool TcT [2]. In particular, the complexity proofs above have been obtained automatically with TcT.

In order to further test the viability of small polynomial path orders, we performed experiments on the relative power of sPOP (respectively sPOPPS) with respect to LMPO [6], POP [8] and interpretations [10,37] suited to polynomial complexity analysis. Experiments were conducted with TcT version 2.0,6 on a machine with 8 Dual-Core Opteron 885 processors (2.6 GHz). We abort TcT if a complexity certificate could not be found within 10 seconds. We selected two data-sets: data-set TC consists of 597 terminating constructor TRSs, and data-set TCO, containing 290 examples, results from restricting test-suite TC to orthogonal systems.7

Table 1 summarises the results obtained on data-sets TC and TCO.8 On the larger benchmark TC, the total of 39 examples drawn in column sPOP is necessarily a subset of the 54 examples compatible with LMPO, and also of the 43 examples compatible with POP. Note that LMPO induces only an exponential bound on the runtime complexity. On three examples, including the TRS R_bin depicted in Example 10, this bound is indeed tight. Whereas POP can only give an unspecified polynomial bound, sPOP assesses the complexity of compatible systems between constant and cubic. Thus sPOP brings about a significant increase in precision, accompanied by only a minor decrease in power. This assessment remains true if we consider the smaller benchmark set TCO. Parameter substitution increases the analytic power of sPOP on test-suite TC from 39 to 54 examples. Of the 15 new examples, 13 are compatible with neither LMPO nor POP.

Table 1.

Number of oriented problems and average execution times (seconds) on data-sets TC and TCO.

TC            | LMPO     | POP      | sPOP     | sPOPPS   | SEM      | SEM+sPOPPS
O(1)          |          |          | 9/0.13   | 9/0.13   |          | 3/0.12
O(n)          |          |          | 23/0.16  | 37/0.21  | 83/0.73  | 89/0.70
O(n^2)        |          |          | 6/0.22   | 7/0.23   | 20/2.17  | 17/1.84
O(n^3)        |          |          | 1/0.58   | 1/0.62   |          | 1/6.66
∪_k O(n^k)    |          | 43/0.12  |          |          |          |
Compatible    | 54/0.14  | 43/0.12  | 39/0.17  | 54/0.21  | 103/1.01 | 110/0.91
Incompatible  | 543/0.25 | 554/0.25 | 558/0.24 | 543/0.25 | 25/4.48  | 25/4.54
Timeout       |          |          |          |          | 469/10.0 | 462/10.0

TCO           | LMPO     | POP      | sPOP     | sPOPPS   | SEM      | SEM+sPOPPS
O(1)          |          |          | 5/0.12   | 5/0.12   |          | 3/0.12
O(n)          |          |          | 14/0.15  | 19/0.18  | 44/0.84  | 45/0.78
O(n^2)        |          |          | 4/0.20   | 4/0.21   | 13/2.04  | 11/1.94
∪_k O(n^k)    |          | 24/0.11  |          |          |          |
Compatible    | 29/0.13  | 24/0.11  | 23/0.15  | 54/0.17  | 57/1.11  | 59/0.96
Incompatible  | 261/0.13 | 266/0.17 | 267/0.17 | 702/0.17 | 8/4.36   | 8/4.29
Timeout       |          |          |          |          | 225/10.0 | 223/10.0

The last two columns in Table 1 indicate the strength of semantic techniques and their combination with sPOPPS (column SEM+sPOPPS). In column SEM we employed matrix interpretations [37] (dimensions 1 to 3) as well as additive polynomial interpretations [10] (degrees 2 and 3), using the modular combination technique proposed by Zankl and Korp [39] to combine the interpretation techniques. Coefficients, respectively entries of coefficient matrices, range up to 7. To ensure that the matrix interpretations induce polynomial runtime complexity, we resort to the non-trivial criteria found in [17]. Column SEM+sPOPPS corresponds to column SEM, where sPOPPS is additionally integrated.

It is immediate that the syntactic techniques alone cannot compete with the expressive power of interpretations. If we consider only the total number of compatible systems, semantic techniques are roughly twice as powerful as the strongest syntactic technique (sPOPPS). Still, the syntactic techniques proposed in this work provide a fruitful addition to the interpretation method. Contrasting columns SEM and SEM+sPOPPS, not only the total number of certified systems but also the precision of the obtained certificates is increased by the addition of sPOPPS. Note also the slight decrease in execution time.

8. Conclusion

We propose a new order, the small polynomial path order sPOP, together with its extension sPOPPS by parameter substitution. Based on sPOP, we delineate a class of rewrite systems, dubbed systems of predicative recursion of degree d, such that for rewrite systems in this class the runtime complexity lies in O(n^d). Exploiting the control given by the degree of recursion, we establish a novel characterisation of the functions computable in time O(n^d) on register machines, via the small polynomial path order sPOPPS.

Thus small polynomial path orders induce a new order-theoretic characterisation of the class of polytime computable functions. This order-theoretic characterisation enables a fine-grained control of the complexity of functions in relation to the number of nested applications of recursion. On the other hand, small polynomial path orders provide a novel syntactic, and very fast, criterion to automatically establish polynomial runtime complexity of a given TRS. The latter criterion extends the state of the art in runtime complexity analysis as it can be more precise or more efficient than previously known techniques.

Acknowledgements

We would like to thank the anonymous reviewers for their valuable comments that greatly helped in improving the presentation.

Footnotes

This work has been partially supported by the Austrian Science Fund, project number I-603-N18, by the John Templeton Foundation, grant number 20442, and the Japan Society for the Promotion of Science, project number 25⋅726.

2

The restriction is not necessary, but simplifies our presentation, compare [9].

3

As in [9] it is possible to adopt nondeterministic semantics, dropping orthogonality.

4

This is Example 2.59 in Steinbach and Kühler's collection of TRSs [36].

5

This is Example 2.09 in Steinbach and Kühler's collection of TRSs [36].

7

The test-suites are taken from the Termination Problem Database (TPDB), version 8.0; http://termcomp.uibk.ac.at.

8

Full experimental evidence is provided under http://cl-informatik.uibk.ac.at/software/tct/experiments/spopstar-ICC.

Contributor Information

Martin Avanzini, Email: martin.avanzini@uibk.ac.at.

Naohi Eguchi, Email: naohi.eguchi@uibk.ac.at.

Georg Moser, Email: georg.moser@uibk.ac.at.

References

  • 1. Bellantoni S., Cook S. A new recursion-theoretic characterization of the polytime functions. Comput. Complexity. 1992;2(2):97–110.
  • 2. Avanzini M., Moser G. Proc. of 24th RTA. vol. 21. 2013. Tyrolean complexity tool: features and usage; pp. 71–80. (Leibniz International Proceedings in Informatics).
  • 3. Baillot P., Marion J.-Y., Rocca S.R.D. Guest editorial: special issue on implicit computational complexity. ACM Trans. Comput. Log. 2009;10(4).
  • 4. Arai T., Eguchi N. A new function algebra of EXPTIME functions by safe nested recursion. ACM Trans. Comput. Log. 2009;10(4):24:1–24:19.
  • 5. Avanzini M., Eguchi N., Moser G. Proc. of 22nd RTA. vol. 10. 2011. A path order for rewrite systems that compute exponential time functions; pp. 123–138. (Leibniz International Proceedings in Informatics).
  • 6. Marion J.-Y. Analysing the implicit complexity of programs. Inform. and Comput. 2003;183:2–18.
  • 7. Arai T., Moser G. Proc. of 15th FSTTCS. vol. 3821. 2005. Proofs of termination of rewrite systems for polytime functions; pp. 529–540. (Lecture Notes in Computer Science).
  • 8. Avanzini M., Moser G. Proc. of 9th FLOPS. vol. 4989. 2008. Complexity analysis by rewriting; pp. 130–146. (Lecture Notes in Computer Science).
  • 9. Avanzini M., Moser G. Polynomial path orders. Log. Methods Comput. Sci. 2013;9(4).
  • 10. Bonfante G., Cichon A., Marion J.-Y., Touzet H. Algorithms with polynomial interpretation termination proof. J. Funct. Programming. 2001;11(1):33–53.
  • 11. Bonfante G., Marion J.-Y., Moyen J.-Y. Quasi-interpretations: a way to control resources. Theoret. Comput. Sci. 2011;412(25):2776–2796.
  • 12. Leivant D. Feasible Mathematics II. vol. 13. Birkhäuser; Boston: 1995. Ramified recurrence and computational complexity I: word recurrence and poly-time; pp. 320–343. (Progress in Computer Science and Applied Logic).
  • 13. Marion J.-Y. On tiered small jump operators. Log. Methods Comput. Sci. 2009;5(1):1–19.
  • 14. Moser G., Schnabl A. Proc. of 19th RTA. vol. 5117. 2008. Proving quadratic derivational complexities using context dependent interpretations; pp. 276–290. (Lecture Notes in Computer Science).
  • 15. Hofmann M., Moser G. Proc. RTA–TLCA 2014. vol. 8560. 2014. Amortised resource analysis and typed polynomial interpretations; pp. 272–286. (Lecture Notes in Computer Science).
  • 16. Hirokawa N., Moser G. Proc. RTA–TLCA 2014. vol. 8560. 2014. Automated complexity analysis based on context-sensitive rewriting; pp. 257–271. (Lecture Notes in Computer Science).
  • 17. Middeldorp A., Moser G., Neurauter F., Waldmann J., Zankl H. Proc. of 4th CAI. vol. 6472. 2011. Joint spectral radius theory for automated complexity analysis of rewrite systems; pp. 1–20. (Lecture Notes in Computer Science).
  • 18. Avanzini M., Moser G. Proc. of 24th RTA. vol. 21. 2013. A combination framework for complexity; pp. 55–70. (Leibniz International Proceedings in Informatics).
  • 19. Moser G. Proof theory at work: complexity analysis of term rewrite systems. CoRR, arXiv:0907.5527. Habilitation Thesis.
  • 20. Hoffmann J., Aehlig K., Hofmann M. Multivariate amortized resource analysis. ACM Trans. Program. Lang. Syst. 2012;34(3):14.
  • 21. Albert E., Arenas P., Genaim S., Gómez-Zamalloa M., Puebla G., Ramírez D., Román G., Zanardini D. Termination and cost analysis with COSTA and its user interfaces. Electron. Notes Theor. Comput. Sci. 2009;258(1):109–121.
  • 22. Alias C., Darte A., Feautrier P., Gonnord L. Proc. 17th SAS. vol. 6337. 2010. Multi-dimensional rankings, program termination, and complexity bounds of flowchart programs; pp. 117–133. (Lecture Notes in Computer Science).
  • 23. Gulwani S., Mehra K., Chilimbi T. Proc. of 36th POPL. ACM; 2009. SPEED: precise and efficient static estimation of program computational complexity; pp. 127–139.
  • 24. Zuleger F., Gulwani S., Sinn M., Veith H. Proc. of 18th SAS. vol. 6887. 2011. Bound analysis of imperative programs with the size-change abstraction; pp. 280–297. (Lecture Notes in Computer Science).
  • 25. Simmons H. The realm of primitive recursion. Arch. Math. Logic. 1988;27:177–188.
  • 26. Leivant D. Proc. of 6th LICS. IEEE Computer Society; 1991. A foundational delineation of computational feasibility; pp. 2–11.
  • 27. Leivant D. Proc. of 20th POPL. 1993. Stratified functional programs and computational complexity; pp. 325–333.
  • 28. Handley W.G., Wainer S.S. Computational Logic. vol. 165. 1999. Complexity of primitive recursion; pp. 273–300. (NATO ASI Series F: Computer and Systems Science).
  • 29. Baader F., Nipkow T. Cambridge University Press; 1998. Term Rewriting and All That.
  • 30. Hirokawa N., Moser G. Proc. of 4th IJCAR. vol. 5195. 2008. Automated complexity analysis based on the dependency pair method; pp. 364–380. (Lecture Notes in Artificial Intelligence).
  • 31. Dal Lago U., Martini S. Proc. of 36th ICALP. vol. 5556. 2009. On constructor rewrite systems and the lambda-calculus; pp. 163–174. (Lecture Notes in Computer Science).
  • 32. Avanzini M., Moser G. Proc. of 10th FLOPS. vol. 6009. 2010. Complexity analysis by graph rewriting; pp. 257–271. (Lecture Notes in Computer Science).
  • 33. Avanzini M., Moser G. Proc. of 21st RTA. vol. 6. 2010. Closing the gap between runtime complexity and polytime computability; pp. 33–48. (Leibniz International Proceedings in Informatics).
  • 34. Shepherdson J.C., Sturgis H.E. Computability of recursive functions. J. ACM. 1963;10:217–255.
  • 35. Ferreira M.F. University of Utrecht, Faculty for Computer Science; 1995. Termination of term rewriting: well-foundedness, totality and transformations. Ph.D. thesis.
  • 36. Steinbach J., Kühler U. University of Kaiserslautern; 1990. Check your ordering – termination proofs and open problems. Tech. Rep. SEKI-Report SR-90-25.
  • 37. Endrullis J., Waldmann J., Zantema H. Matrix interpretations for proving termination of term rewriting. J. Automat. Reason. 2008;40(3):195–220.
  • 38. Lankford D. Louisiana Technical University; 1979. On proving term rewriting systems are Noetherian. Tech. Rep. MTP-3.
  • 39. Zankl H., Korp M. Proc. of 21st RTA. vol. 6. 2010. Modular complexity analysis via relative complexity; pp. 385–400. (Leibniz International Proceedings in Informatics).
