Abstract
In this paper we introduce the concept of preHamiltonian pairs of difference operators, demonstrate their connections with Nijenhuis operators and give a criterion for the existence of weakly nonlocal inverse recursion operators for differential–difference equations. We begin with a rigorous setup of the problem in terms of the skew field of rational (pseudo–difference) operators over a difference field with a zero characteristic subfield of constants and the principal ideal ring of matrix rational (pseudo–difference) operators. In particular, we give a criterion for a rational operator to be weakly nonlocal. A difference operator is called preHamiltonian if its image is a Lie subalgebra with respect to the Lie bracket on the difference field. Two preHamiltonian operators form a preHamiltonian pair if any linear combination of them is preHamiltonian. Then we show that a preHamiltonian pair naturally leads to a Nijenhuis operator, and a Nijenhuis operator can be represented in terms of a preHamiltonian pair. This provides a systematic method to check whether a rational operator is Nijenhuis. As an application, we construct a preHamiltonian pair and thus a Nijenhuis recursion operator for the differential–difference equation recently discovered by Adler and Postnikov. The Nijenhuis operator obtained is not weakly nonlocal. We prove that it generates an infinite hierarchy of local commuting symmetries. We also illustrate our theory on well-known examples including the Toda, the Ablowitz–Ladik, and the Kaup–Newell differential–difference equations.
Introduction
The existence of an infinite hierarchy of commuting symmetries is one of the characteristic properties of integrable systems. Symmetries can be generated by recursion operators [1, 2], which are often pseudo–differential and map a symmetry to a new symmetry. An important property of recursion operators, called the Nijenhuis property, is to generate an abelian Lie algebra of symmetries. This property has been independently studied by Fuchssteiner [3] and Magri [4]. To prove that a pseudo–differential operator is a Nijenhuis operator and that it generates an infinite hierarchy of local symmetries is a challenging problem. In the most common case of weakly nonlocal Nijenhuis operators this problem has been addressed in [5–7]. The relations between bi-Hamiltonian structures and Nijenhuis operators have been studied in papers of Gel’fand and Dorfman [8, 9] and Fuchssteiner and Fokas [10, 11]. Recently a rigorous approach to pseudo–differential Hamiltonian operators has been developed in the series of papers by Barakat, De Sole, Kac and Valeri [12–14].
The theory of integrable differential–difference equations is much less developed. The basic concepts of symmetries, conservation laws and Hamiltonian operators were formulated in the framework of a variational complex in [15]. The aim of this paper is to build up a rigorous setting for rational matrix (pseudo–difference) operators suitable for the study of integrable differential–difference systems. We introduce and study preHamiltonian pairs of difference operators, their connections with Nijenhuis operators and the existence of weakly nonlocal inverse recursion operators for differential–difference equations.
Let us consider the well-known Volterra chain
$u_t = u(u_1 - u_{-1})$ (1)
where u is a function of a lattice variable and time t. Here we use the notations
and is the shift operator. It possesses a recursion operator
where stands for the inverse of . Thus this operator is only defined on . It is a Nijenhuis operator and generates a commutative hierarchy of symmetries:
The concept of Hamiltonian pairs was introduced by Magri [16]. He found that some systems admitted two distinct but compatible Hamiltonian structures (a Hamiltonian pair) and named such systems twofold Hamiltonian systems, nowadays known as bi-Hamiltonian systems. The Volterra chain is a bi-Hamiltonian system and it can be written
where is variational derivative with respect to the dependent variable u and two difference operators
form a Hamiltonian pair. The Nijenhuis recursion operator of the Volterra chain can be obtained via the Hamiltonian pair, that is, . This decomposition is known as the Lenard scheme, used to construct hierarchies of infinitely many symmetries and cosymmetries.
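Several of the displayed formulas above were lost in extraction, so the following computational sketch relies on standard forms from the literature rather than on this paper's stripped equations: the Volterra chain $u_t = u(u_1 - u_{-1})$ and the recursion operator $R = u\mathcal{S} + (u + u_1) + u\mathcal{S}^{-1} + u(u_1 - u_{-1})(\mathcal{S}-1)^{-1}u^{-1}$. The nonlocal tail $(\mathcal{S}-1)^{-1}$ is handled by exhibiting an explicit difference potential and checking it; all helper names are ours.

```python
# Sketch: applying the (assumed, literature-standard) Volterra recursion
# operator to the first symmetry and recovering the known second flow.
import sympy as sp

N = 4
u = {k: sp.Symbol(f'u{k}') for k in range(-N, N + 1)}

def shift(e, n):
    """The shift automorphism S^n: u_k -> u_{k+n}."""
    e = sp.sympify(e)
    return e.subs({u[k]: u[k + n] for k in range(-N, N + 1) if -N <= k + n <= N},
                  simultaneous=True)

f1 = u[0] * (u[1] - u[-1])     # right-hand side of the Volterra chain
w = sp.cancel(f1 / u[0])       # argument of the nonlocal part (S-1)^{-1}
h = u[0] + u[-1]               # difference potential: (S-1)h = w
assert sp.expand(shift(h, 1) - h - w) == 0

# local part of R applied to f1, plus the resolved nonlocal tail
f2 = sp.expand(u[0] * shift(f1, 1) + (u[0] + u[1]) * f1
               + u[0] * shift(f1, -1) + u[0] * (u[1] - u[-1]) * h)

# agrees with the known second symmetry of the Volterra hierarchy
target = sp.expand(u[0] * (u[1] * (u[2] + u[1] + u[0])
                           - u[-1] * (u[0] + u[-1] + u[-2])))
assert f2 == target
```

The same resolution of $(\mathcal{S}-1)^{-1}$ by a difference potential is what makes each step of the Lenard scheme produce a local symmetry.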
Notice that the above difference operators have a right common factor:
This implies that
| 2 |
Here operators A and B are not skew-symmetric, and thus not Hamiltonian. However, like in the case of Hamiltonian pairs, the image of A and B, as well as the image of any linear combination of these two operators, forms a Lie subalgebra. We call such operators preHamiltonian. In this paper, we explore properties of such operators and their relations with Nijenhuis operators. For the differential case some of these results have been obtained in [17]. The main difference between differential operators and difference operators lies in the fact that the total derivative is a derivation while the shift operator is an automorphism. The set of invertible difference operators is much richer than in the differential case. In the scalar case all difference operators of the form , where a is a difference function and , are invertible, while in the differential case the only invertible operators are operators of multiplication by a function. The definitions of order for difference and differential operators are also essentially different.
The arrangement of this paper is as follows: In Sect. 2, we define a difference field , the Lie algebra of its evolutionary derivations (or evolutionary vector fields), which is a subalgebra of , and discuss algebraic properties of the noncommutative ring of difference operators. In particular, we show that it is a right and left Euclidean domain and satisfies the right (left) Ore property. Then we define the skew field of rational (pseudo–difference) operators, i.e. operators of the form , where A and B are difference operators. Next we discuss the relation between rational operators and weakly nonlocal operators, namely we formulate a criterion for a rational operator to be weakly nonlocal. Finally we adapt all these results to rational matrix difference operators by defining the order of the operator as the order of its Dieudonné determinant. In Sect. 3 we define preHamiltonian difference operators as operators on whose images define a Lie subalgebra in . We explore the interrelation between preHamiltonian pairs and Nijenhuis operators. We show that if operators A and B form a preHamiltonian pair, then is Nijenhuis. Conversely, if R is Nijenhuis and B is preHamiltonian, then A and B form a preHamiltonian pair. These two sections are the theoretical foundation of the paper. In Sect. 4, we give basic definitions such as symmetries, recursion operators and Hamiltonian operators for differential–difference equations. We also show how operators A and B are related to the equation if is its recursion operator. In the next two sections we apply the theoretical results of Sects. 2 and 3 to integrable differential–difference equations. In Sect. 5, we construct a recursion operator for a new integrable equation derived by Adler and Postnikov in [18]:
using its Lax representation presented in the same paper. The obtained recursion operator is no longer weakly nonlocal. We show that it is indeed Nijenhuis by rewriting it as a rational difference operator and that it generates infinitely many commuting local symmetries. To improve readability, we put some technical lemmas used in the proof of the main result on the locality of commuting symmetries in “Appendix B”. For some integrable differential–difference equations, such as the Ablowitz–Ladik lattice [19], the recursion operator and its inverse are both weakly nonlocal. In Sect. 6, we apply the theoretical results from Sect. 2 to check whether the inverse recursion operators are weakly nonlocal, and if so, we demonstrate how to cast them in the weakly nonlocal form. To illustrate the method we choose four typical examples. However, the method is general and can be applied to any integrable differential–difference system, including all systems listed in [20]. At the end of the paper we give a short conclusion and a discussion of our new results on the relation between preHamiltonian and Hamiltonian operators. To be self-contained, we also include “Appendix A”, containing some basic definitions for a unital non-commutative ring.
Algebraic Properties of Difference Operators
In this section, we give a definition of rational difference operators and explore their properties. The main objects of our study in this paper are systems of evolutionary differential–difference equations and hidden structures associated with them. We first consider the scalar case. A generalization to the multi-component case will be discussed at the end of this section.
Difference field and its derivations
Let be a zero characteristic base field, such as or . We define the polynomial ring
of the infinite set of variables and the corresponding field of fractions
It is assumed that every element of and depends on a finite number of variables only. We will denote the subset of nonzero elements of .
There is a natural automorphism of the field , which we call the shift operator, defined as
For we will often use notation
and omit index zero at or when there is no ambiguity. The field equipped with the automorphism is a difference field and the base field is its subfield of constants.
The reflection of the lattice defined by
is another obvious automorphism of and . The composition is the identity map. Thus the automorphisms generate the infinite dihedral group and the subgroup generated by is normal.
The automorphism defines a grading of the difference field (and ring ):
where .
Partial derivatives are commuting derivations of satisfying the conditions
| 3 |
A derivation of is said to be evolutionary if it commutes with the shift operator . Such a derivation is completely determined by one element of and is of the form
$X_f = \sum_{k \in \mathbb{Z}} \mathcal{S}^k(f)\, \partial_{u_k}$ (4)
An element f is called the characteristic of the evolutionary derivation . The action of for can also be represented in the form
where is the Fréchet derivative of in the direction f defined as
The Fréchet derivative of is a difference operator represented by a finite sum
$a_* = \sum_{k} \frac{\partial a}{\partial u_k}\, \mathcal{S}^k$ (5)
It is obvious that
Evolutionary derivations form a Lie subalgebra in the Lie algebra . Indeed,
where denotes the Lie bracket
| 6 |
Lie bracket (6) is –bilinear, anti-symmetric and satisfies the Jacobi identity. Thus , equipped with the bracket (6), has a structure of a Lie algebra over .
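This Lie algebra structure is easy to probe computationally. The sketch below (our own helper names) assumes the standard formulas for the Fréchet derivative and the bracket of characteristics, with the convention $[f,g] = X_f(g) - X_g(f) = g_*(f) - f_*(g)$ (sign conventions vary between papers); as an illustration it checks that the first two flows of the Volterra hierarchy commute, the second flow being taken from the literature rather than from the stripped formulas above.

```python
# Lie bracket on evolutionary derivations via Frechet derivatives.
import sympy as sp

N = 5
u = {k: sp.Symbol(f'u{k}') for k in range(-N, N + 1)}

def shift(e, n):
    """The shift automorphism S^n: u_k -> u_{k+n}."""
    e = sp.sympify(e)
    return e.subs({u[k]: u[k + n] for k in range(-N, N + 1) if -N <= k + n <= N},
                  simultaneous=True)

def frechet(f, g):
    """Frechet derivative of f in the direction g: sum_k (df/du_k) S^k(g)."""
    return sp.expand(sum(sp.diff(f, u[k]) * shift(g, k) for k in range(-N, N + 1)))

def bracket(f, g):
    """[f, g] = X_f(g) - X_g(f) = g_*(f) - f_*(g) (one common convention)."""
    return sp.expand(frechet(g, f) - frechet(f, g))

f1 = u[0] * (u[1] - u[-1])                     # Volterra flow
f2 = sp.expand(u[0] * (u[1] * (u[2] + u[1] + u[0])
                       - u[-1] * (u[0] + u[-1] + u[-2])))  # its second symmetry

assert bracket(f1, f1) == 0    # antisymmetry on the diagonal
assert bracket(f1, f2) == 0    # the two Volterra flows commute
```

Vanishing of the bracket is exactly the statement that f2 is a symmetry of the flow generated by f1.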
The reflection acts naturally on evolutionary derivations
Thus the is a graded Lie algebra
where .
Rational difference operators
In this section we give definitions of difference operators and rational pseudo–difference operators, which for simplicity we shall call rational operators. We refer to “Appendix A” for general results and definitions related to principal ideal domains. Although Corollary 1 and the first part of Proposition 2 follow directly from Proposition 1 in the abstract setting of Euclidean domains, we provide proofs for the sake of completeness.
Definition 1
A difference operator B of order with coefficients in is a finite sum of the form
$B = b^{(q)} \mathcal{S}^q + b^{(q-1)} \mathcal{S}^{q-1} + \cdots + b^{(p)} \mathcal{S}^p, \qquad b^{(k)} \in \mathcal{F}, \quad b^{(p)}, b^{(q)} \neq 0$ (7)
The total order of B is defined as . The total order of the zero operator is minus infinity by definition.
The Fréchet derivative (5) is an example of a difference operator of order (p, q) and total order . For an element the order and total order are defined as and respectively.
Difference operators form a unital ring of Laurent polynomials in with coefficients in , where multiplication is defined by
$a \mathcal{S}^n \cdot b \mathcal{S}^m = a\, \mathcal{S}^n(b)\, \mathcal{S}^{n+m}, \qquad a, b \in \mathcal{F}, \; n, m \in \mathbb{Z}$ (8)
This multiplication is associative, but non-commutative. The definitions of some basic concepts for a unital associative ring are presented in the “Appendix A”.
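The twisted product rule (8) is easy to model on a computer. The following minimal sketch (the names `DOp`, `shift` and the dictionary representation are ours, not the paper's) stores a difference operator as a map from shift exponents to coefficients, and checks noncommutativity and the invertibility of monomial operators.

```python
# A minimal model of the ring of scalar difference operators:
# finite sums  sum_k b_k S^k  stored as {k: b_k}, with the product
# (a S^n)(b S^m) = a S^n(b) S^{n+m}.
import sympy as sp

N = 6
u = {k: sp.Symbol(f'u{k}') for k in range(-N, N + 1)}

def shift(e, n):
    """The shift automorphism S^n: u_k -> u_{k+n}."""
    e = sp.sympify(e)
    return e.subs({u[k]: u[k + n] for k in range(-N, N + 1) if -N <= k + n <= N},
                  simultaneous=True)

class DOp:
    def __init__(self, coeffs):
        self.c = {k: sp.cancel(v) for k, v in coeffs.items() if sp.cancel(v) != 0}
    def __mul__(self, other):
        out = {}
        for n, a in self.c.items():
            for m, b in other.c.items():
                out[n + m] = out.get(n + m, 0) + a * shift(b, n)
        return DOp(out)
    def __call__(self, f):
        """Apply the operator to an element of the difference field."""
        return sp.expand(sum(a * shift(f, n) for n, a in self.c.items()))
    def total_order(self):
        return max(self.c) - min(self.c)

S, Sinv = DOp({1: 1}), DOp({-1: 1})
A = DOp({1: u[0]})                    # the operator u S

# the product is noncommutative: (u S) S^{-1} = u, but S^{-1} (u S) = u_{-1}
assert (A * Sinv).c == {0: u[0]}
assert (Sinv * A).c == {0: u[-1]}
# monomial operators have total order zero, and total order is additive
assert (A * Sinv).total_order() == A.total_order() + Sinv.total_order() == 0
```

The last assertion is a toy instance of the fact, stated just below, that the total order is a homomorphism of the multiplicative monoid.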
From the above definition it follows that if A is a difference operator of order , then and . For any we have . Thus the total order is a homomorphism of the multiplicative monoid to .
The reflection can be extended to an automorphism of given by
and defines a grading of as follows:
It is obvious that .
A difference operator which has only one term is called a monomial difference operator. Monomial difference operators are of the form . They have total order equal to zero and are invertible in . Monomial difference operators equipped with multiplication (8) form a nonabelian group
We will use the notation for a monomial difference operator representing the leading term of a difference operator which is the naturally ordered sum. For the operator B in (7), we have .
Proposition 1
The ring is a right and left Euclidean domain.
Proof
Let us show that is right Euclidean, that is, for any there exist unique such that and either or . First we prove the existence of Q, R. If , then we can take . If and , we can take . For we proceed by induction on , then for some and they are invertible. Thus and we can take . Finally, consider the case and assume that the statement is true for all operators A with total order less than n. Let the leading terms and . The difference operator has . Hence we can use the induction assumption and find , such that and either or . Thus
that is,
Therefore and . As for the uniqueness, if one has with , then . If we arrive at a contradiction since . Thus and . The proof of the left Euclidean property is similar.
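The division in this proof is effective: cancelling the leading term of A against the leading term of B strictly lowers the total order, so the process terminates. A sketch of right Euclidean division under these assumptions (dictionary representation and helper names are ours):

```python
# Right Euclidean division A = Q*B + R with totord(R) < totord(B),
# for operators sum_k c_k S^k stored as dicts {k: c_k}.
import sympy as sp

N = 8
u = {k: sp.Symbol(f'u{k}') for k in range(-N, N + 1)}

def shift(e, n):
    e = sp.sympify(e)
    return e.subs({u[k]: u[k + n] for k in range(-N, N + 1) if -N <= k + n <= N},
                  simultaneous=True)

def clean(op):      # drop zero coefficients
    return {k: sp.cancel(v) for k, v in op.items() if sp.cancel(v) != 0}

def mul(A, B):      # rule (8): (a S^n)(b S^m) = a S^n(b) S^{n+m}
    out = {}
    for n, a in A.items():
        for m, b in B.items():
            out[n + m] = out.get(n + m, 0) + a * shift(b, n)
    return clean(out)

def sub(A, B):
    out = dict(A)
    for k, v in B.items():
        out[k] = out.get(k, 0) - v
    return clean(out)

def tot(A):         # total order: top shift exponent minus bottom one
    return max(A) - min(A)

def divmod_right(A, B):
    Q, R = {}, clean(A)
    while R and tot(R) >= tot(B):
        M, q = max(R), max(B)
        t = {M - q: sp.cancel(R[M] / shift(B[q], M - q))}   # cancel leading term
        Q, R = sub(Q, {k: -v for k, v in t.items()}), sub(R, mul(t, B))
    return Q, R

A = {2: 1, 0: u[0]}           # S^2 + u
B = {1: 1, 0: -u[0]}          # S - u
Q, R = divmod_right(A, B)
assert sub(A, sub(mul(Q, B), {k: -v for k, v in R.items()})) == {}   # A = QB + R
assert tot(R) < tot(B)
```

Because the remainder's total order drops at every step, iterating this division also yields greatest common divisors, as in any Euclidean domain.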
Corollary 1
Every right (left) ideal of the ring is principal and generated by a unique element of minimal possible order with the leading term .
Proof
The zero ideal is obviously principal; it is generated by 0. Let be a right ideal and be an element of least possible total order. The element is of the same total order and with the leading term . Then for any other element we have with either or . Since , we conclude that , otherwise , which is in contradiction with the assumption that A has the least possible order. Such an element A is obviously unique. If we assume the existence of , then and . The latter is in contradiction with the assumption that A has the least possible order. In a similar way we show that is a left principal ideal ring.
Proposition 2
The ring satisfies the right (left) Ore property, that is, for any there exist , not both equal to zero, such that , (resp. ). In other words, the right (left) ideal (resp. ) is nontrivial. Its generator M has total order , where D is the greatest left (resp. right) common divisor of A and B.
Proof
Let us assume that (otherwise we swap and rename A, B). If , then . If , we prove the claim by induction on . We assume that the statement is true for any B with and we will show that it is also true for any . Since is right Euclidean, there exist Q, R such that and either or . If we take and we are done. Since , there exist , such that , and . Thus
and we can take , . Finally, we can see that and . The proof of the left Ore property is similar.
We proved that for any not both zero, the ideal is not trivial. Since is both a right and left principal ideal ring, is generated by a difference operator M, . In particular, for some difference operators and . From the first part of the proof, we know that . Let us assume that A and B are left coprime and that . The ideal is also nontrivial and generated by a difference operator N. We know that is at most . M is an element of and , hence there exists a difference operator C such that and . Let and be such that . Then and , which contradicts the hypothesis that A and B are left coprime.
The fact that is a principal ideal domain gives sense to the notions of greatest common divisors and least common multiples (see “Appendix A”). The following lemma, which will be used in Proposition 13, relates the images of two difference operators to the image of their right least common multiple.
Lemma 1
Let A and B be two nonzero left coprime difference operators with coefficients in . Suppose that for some . Let be their right least common multiple. Then, there exists such that and . In particular .
Proof
By definition of M, C and D are right coprime. It follows from Bezout’s Lemma that there exist two difference operators U and V such that
| 9 |
Multiplying (9) by D and by C from the left we obtain
| 10 |
| 11 |
By assumption A and B are left coprime therefore it follows from Lemma 5 (ii) that there exist two difference operators P and Q such that
| 12 |
Using the assumption and the first line of (12) we get
| 13 |
and similarly using the second line of (12) we get
| 14 |
Hence, the statement holds with .
The domain can be naturally embedded in the skew field of rational pseudo–difference operators, which we will call simply rational operators.
Definition 2
A rational (pseudo–difference) operator L is defined as for some and . The set of all rational operators is
Remark 1
The skew field is a minimal subfield of the skew field of the Laurent formal series
containing . Likewise, it is a minimal subfield of the skew field of the Taylor formal series
containing . The skew fields and are isomorphic. The isomorphism is given by the reflection map .
Proposition 3
Any rational operator can also be written in the form with and .
Proof
It follows from the Ore property that for any there exist and such that . Multiplying this expression on from the left and from the right we obtain .
Thus any statement for the representation can be easily reformulated to the representation . In particular,
Proposition 4
is the skew field of rational operators over .
Proof
We need to show that the set is closed under addition and multiplication. Let with . It follows from the Ore property that there exist nonzero such that . Hence
Also there exist nonzero such that . Hence
implying that is also closed under multiplication.
Proposition 5
The decomposition of an element is unique if we require that B has a minimal possible total order with leading term . For any other decomposition there exists such that . Moreover, if is a (left) minimal decomposition of L, then .
Proof
For a given the set
is a right ideal in . Indeed, if , then meaning that , and J is stable under right multiplication by any element of . The ideal J is principal, and according to Corollary 1 it is generated by a unique element B of the least possible order, if we require that the leading term . Any other can be represented as where , since B is a generator of the principal right ideal J. By Proposition 2, we know that a generator M of the left ideal generated by A and B has total order . By definition of M there exist left coprime difference operators D and E such that . Therefore is a left minimal decomposition of L and .
The definition of total order for difference operators (Definition 1) can be extended to rational operators:
| 15 |
Definition 3
A formal adjoint operator for any can be defined recursively:
for any ,
,
for any ,
for any ,
for any .
In particular, We say an operator is skew-symmetric if .
For example, we have
For any , if then . Obviously .
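Since the displayed example above was lost in extraction, the following sketch illustrates the adjoint with our own choice of operators. It assumes the standard rules ($\mathcal{S}^\dagger = \mathcal{S}^{-1}$, $a^\dagger = a$ for $a \in \mathcal{F}$, anti-multiplicativity), under which the adjoint of a monomial is $(a\mathcal{S}^k)^\dagger = \mathcal{S}^{-k} \circ a = \mathcal{S}^{-k}(a)\,\mathcal{S}^{-k}$; it checks the anti-homomorphism property and the skew-symmetry of $u(\mathcal{S} - \mathcal{S}^{-1})u$.

```python
# Formal adjoint of a difference operator, with (a S^k)^† = S^{-k}(a) S^{-k}.
import sympy as sp

N = 6
u = {k: sp.Symbol(f'u{k}') for k in range(-N, N + 1)}

def shift(e, n):
    e = sp.sympify(e)
    return e.subs({u[k]: u[k + n] for k in range(-N, N + 1) if -N <= k + n <= N},
                  simultaneous=True)

def mul(A, B):     # twisted product (8) on dicts {k: coefficient}
    out = {}
    for n, a in A.items():
        for m, b in B.items():
            out[n + m] = out.get(n + m, 0) + sp.expand(a * shift(b, n))
    return {k: v for k, v in out.items() if v != 0}

def adjoint(A):
    return {-k: sp.expand(shift(v, -k)) for k, v in A.items()}

A = {1: u[0]}      # u S
B = {-1: u[1]}     # u_1 S^{-1}

lhs = adjoint(mul(A, B))
rhs = mul(adjoint(B), adjoint(A))
assert lhs == rhs                     # (A B)^† = B^† A^†

# u(S - S^{-1})u is skew-symmetric: its adjoint equals its negative
H = mul(mul({0: u[0]}, {1: 1, -1: -1}), {0: u[0]})
assert adjoint(H) == {k: sp.expand(-v) for k, v in H.items()}
```

The operator H here is the familiar Hamiltonian operator of the Volterra chain, so its skew-symmetry is expected.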
Rational and weakly nonlocal difference operators
In the theory of integrable systems, the majority of -dimensional integrable equations possesses weakly nonlocal [21] Nijenhuis recursion operators. For integrable differential–difference equations, weakly nonlocal operators are often rational operators with only a finite number of nonlocal terms of the form , where . In this section, we show how to write a weakly nonlocal operator as a rational operator and provide a way to test whether a rational operator is indeed weakly nonlocal. For the differential case, the answers are given by Lemma 4.5 in [17].
First we give a definition of full kernel difference operators. We then prove that the inverses of such operators are weakly nonlocal.
For a difference operator it is obvious that
| 16 |
Indeed, if there is an element such that , then we can represent where . A difference operator of zero total order is invertible and thus has a trivial kernel. A difference operator of nonzero order may also have a trivial kernel in . For example, since the equation does not have a solution .
Definition 4
We say that a difference operator has a full kernel in (is a full kernel operator) if the dimension of its kernel over the field equals the total order of the operator.
In what follows, we show how to construct a full kernel operator given the generators of its kernel and prove an important property of such operators.
Proposition 6
Assume that are linearly independent over in . Then there exists a full kernel difference operator such that the span .
Proof
We prove the statement by induction on n. If , we define
It is clear that and its kernel is spanned by . Assume that Q is a full kernel operator with and its kernel is spanned by . Since are linearly independent, we have by construction of Q. We define
Clearly it is the required full kernel operator and its kernel is spanned by .
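The induction in this proof is constructive: each new kernel element g contributes a first-order left factor of the form $\mathcal{S} - \mathcal{S}(h)/h$ with $h = B(g)$, where B is the operator built so far. A sketch under these assumptions (helper names are ours):

```python
# Building a full kernel operator from prescribed kernel generators.
import sympy as sp

N = 6
u = {k: sp.Symbol(f'u{k}') for k in range(-N, N + 1)}

def shift(e, n):
    e = sp.sympify(e)
    return e.subs({u[k]: u[k + n] for k in range(-N, N + 1) if -N <= k + n <= N},
                  simultaneous=True)

def mul(A, B):     # twisted product (8) on dicts {k: coefficient}
    out = {}
    for n, a in A.items():
        for m, b in B.items():
            out[n + m] = out.get(n + m, 0) + a * shift(b, n)
    return {k: sp.cancel(v) for k, v in out.items() if sp.cancel(v) != 0}

def apply_op(A, f):
    return sp.cancel(sum(a * shift(f, n) for n, a in A.items()))

def full_kernel_op(gens):
    """Compose first-order factors S - S(h)/h annihilating each generator."""
    B = {0: 1}
    for g in gens:
        h = apply_op(B, g)     # nonzero, by linear independence over constants
        B = mul({1: 1, 0: -sp.cancel(shift(h, 1) / h)}, B)
    return B

B = full_kernel_op([1, u[0]])
assert apply_op(B, 1) == 0
assert apply_op(B, u[0]) == 0
assert max(B) - min(B) == 2    # total order equals the kernel dimension
```

Each factor kills the image of the next generator under the previously built operator, so the kernel grows by one dimension per step, matching the growth of the total order.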
Remark 2
A difference operator with full kernel spanned by the –linearly independent elements , can be obtained using the determinant expression
Proposition 7
The inverse operators of full kernel operators are weakly nonlocal.
Proof
We prove the statement by induction on the total order of the operator B. If B is a full kernel operator with , it can be written as for some . Thus
is weakly nonlocal.
Let B be a full kernel operator of total order n and . It follows from Proposition 6 that there is a full kernel operator C of total order such that
By the induction assumption, is weakly nonlocal, that is, there exist two sets of linearly independent functions and , such that
Multiplying by C on the left, we get
implying . Note that for any , there exists , which is in , such that . Therefore, we have
whose nonlocal terms are
where we used the identity
This leads to the conclusion that is weakly nonlocal.
We are now ready to prove the statement on the relation between the rational and weakly nonlocal difference operators.
Theorem 1
Let R be a rational operator with minimal right fractional decomposition and . Then the following three statements are equivalent:
-
(i)
The operator B has a full kernel in ;
-
(ii)
The operator R is weakly nonlocal, that is, , where , and and are two linearly independent sets over in ;
-
(iii)
The operator has a full kernel in .
Proof
The statement directly follows from Proposition 7 since the product of a difference operator and a weakly nonlocal operator is weakly nonlocal.
We now prove that . Knowing
we multiply it on the right by B and obtain its nonlocal terms
which implies that all ’s are in the kernel of and thus .
Let C be a common multiple of the difference operators , that is, a difference operator such that for all i there exists a difference operator satisfying . Thus we have
Since is a minimal right fractional decomposition for R, there exists a difference operator D such that
This leads to . Note that and . Therefore, we have
implying that has a full kernel spanned by all ’s.
Finally we prove that . It follows from Proposition 7 that the inverse of is weakly nonlocal. Using the proof of , we obtain the statement of .
From the proof of Theorem 1, we are able to specify the nonlocal terms of a weakly nonlocal operator.
Corollary 2
Under the conditions of Theorem 1, for , the linearly independent functions ’s span and the linearly independent functions ’s span , .
As an immediate consequence of this theorem, we obtain the corresponding statement for the inverse of a rational operator:
Corollary 3
Let with . Then is weakly nonlocal if and only if A has a full kernel in .
Corollary 2 combined with Proposition 6 provides us with a method to write a weakly nonlocal operator in the form of a rational operator : we first construct a full kernel operator using ’s; then we have . We use this construction for the examples in Sect. 6, where we also apply Corollary 3 to the recursion operators of integrable differential–difference equations to determine whether their inverse operators are weakly nonlocal. If so, we compute the seeds of the symmetry and cosymmetry hierarchies (the nonlocal terms), that is, the ’s and ’s for in the above theorem.
Matrix difference and rational pseudo–difference operators
We recall here some facts from linear algebra over non-commutative rings and skew fields, which is a specialisation of the general theory [22, 23] to the case of difference algebra (the ring and skew field ). We denote by and the rings of matrices over the ring and skew field respectively. Since is a principal ideal ring, the ring is also a principal ideal ring (see the proof in [24], as well as the short and useful review of non-commutative principal ideal rings [25]).
Let denote the i–th row of the matrix and denote the (i, j) entry of . For and arbitrary (or ) the –elementary (resp. –elementary) row operation changes the row and leaves the other rows unchanged. The transformation is invertible () and can be represented by a multiplication from the left by the matrix , where I is the unit matrix and is the matrix with the (i, j) entry equal to 1 and zero elsewhere. Note that the transformation replaces by and by , leaving other rows unchanged.
–elementary row operations generate a group , which is a subgroup of the group of invertible matrix difference operators. Similarly, –elementary row operations generate a group , a subgroup of the group of invertible matrix pseudo–difference operators.
Lemma 2
Let . Then there exist two invertible matrices and such that is diagonal.
Proof
Let be an element of the set such that for all , either or . We claim that all entries in the first column of are divisible on the right by . Otherwise, using elementary row operations, which amounts to multiplying on the left by an invertible matrix, one can find such that and , which contradicts the definition of . Similarly, must divide all the entries of the first row of on the left. Therefore, there exist invertible matrix difference operators and such that has only zero entries in its first row and first column, apart from the first coefficient which is . We conclude by induction on n.
Proposition 8
Let . Then it can be brought to an upper triangular form with for by –elementary row operations and
Proof
We prove the claim by induction on n. If , the matrix is already in the required form. Now we assume that any matrix from can be brought to an upper triangular form by –elementary row transformations. Therefore the first rows of the matrix can be brought to upper triangular form.
-
(i)
If , then by deleting the first row and the first column of we reduce the problem to the case and we are done due to the induction hypothesis.
-
(ii)
If , we use the transformation to reduce the problem to the case (i).
-
(iii)
The remaining case is . Suppose (otherwise, we can swap the rows by the transformation ). Then there exist such that and either or , and we apply the transformation replacing by . If , then the updated row has a zero entry and we are done by (ii), or and we use to swap the rows. Iterating this procedure we can make the entry (n, 1) vanish, reducing the problem to the first case (i).
The ring has zero divisors. We will denote by the multiplicative monoid of regular elements, i.e. the elements which are not zero divisors. A difference matrix operator is regular if and only if its upper triangular form is regular, i.e. if and only if
| 17 |
Definition 5
The total order of a matrix difference operator is defined as the sum of total orders of the diagonal entries of a corresponding upper triangular operator , i.e.
Proposition 9
A difference matrix operator is invertible in (i.e. and thus ), if and only if .
Proof
If , then all entries on the diagonal of have total order zero and thus are invertible. Multiplying on the left by the matrix we obtain an upper triangular matrix with the unit matrix on the diagonal. By induction on n it is easy to show that there is a composition of –elementary row transformations such that . If there is nothing to do. We assume the existence of the inverse matrix in . The entries of the last column can be set to zero by the transformation , which reduces the problem to the case in . The necessity is obvious from the consideration of a diagonal matrix .
Example 1
Let us consider the following matrix difference operator
| 18 |
where and if . The transformation brings to an upper triangular form and . Thus and the inverse matrix difference operator of exists. Indeed,
If we use a different sequence of elementary row transformations
which also brings the difference matrix operator to an upper triangular form, then , but the total order of does not depend on the choice of the sequence (see below).
The correctness of Definition 5, i.e. the independence of from the choice of row transformations, can be justified by the theory of Dieudonné determinants () (in the case of skew polynomial rings it has been discussed in [26]). The above definition of total order for matrix difference operators is a restriction of the map to . This observation results in a simpler way to compute the total order of matrix difference operators by treating them as elements of .
The Dieudonné determinant is defined for matrices with entries in an arbitrary skew field (see [22, 23, 27]). In our case the skew field is and we are dealing with matrix rational operators , but what is presented below is equally applicable to rational operators or any skew field of fractions of a left principal ideal domain. The Dieudonné determinant is a map from to or zero, where is the multiplicative group of nonzero elements of , and denotes the commutator subgroup , which is normal. The group is generated by elements of the form . The quotient group is commutative and its elements are cosets . There is a natural projection given by for any .
Dieudonné has shown that is a normal subgroup of and that there is a group isomorphism given by a map (Theorem 1 in [27]), which is now called the Dieudonné determinant. The function is:
multiplicative: ;
if , then ;
if is obtained from by multiplying one row of on the left by , then
if a matrix is degenerate (i.e. one row is a left –linear combination of other rows), then .
In order to find for one can use the algorithm given by Dieudonné [27] (see also §1, Ch. IV [22]), or use the Bruhat normal form approach (§20, Part III, [23]). A simple way to find the Dieudonné determinant of a matrix is to use a composition of –elementary row transformations in order to bring the matrix to an upper triangular form , then multiply the diagonal entries of (in an arbitrary order) and apply the projection to the result
It follows from [27] that does not depend on the choice of elementary row transformations, neither on the order in the product of diagonal elements of .
It follows from Definition 1 and (15) that for any , thus the function has a constant value on a coset and the map
is defined correctly.
Definition 6
The total order of a matrix rational operator is
In the case of difference operators we have defined a function (17). Although the value of this function depends on the choice of –elementary row transformations, its natural projection to does not, since it coincides with the Dieudonné determinant
This restriction of the total order definition to the ring of matrix difference operators together with Proposition 9 results in the exact sequence of monoid homomorphisms (similar to Theorem 1.1 in [26]):
Definition 5 is a way to define the total order of a matrix difference operator, bypassing the skew field of rational operators, its quotient group and the theory of Dieudonné determinants.
Note that the Dieudonné determinant and the total order of a matrix (rational) difference operator and the transposed matrix operator may not coincide. In the above Example (18):
A formally conjugated matrix (rational) difference operator has the usual definition, i.e. the corresponding matrix is transposed and each entry is formally conjugated: . For formally conjugated operators we have and therefore .
There are many ways to represent a matrix rational operator as a ratio of matrix difference operators. For example any can be represented as
Indeed, the entries and thus . Since the ring satisfies the Ore property (Proposition 2) there exists a least right common multiple of the elements and therefore there exist such that . Taking we obtain the first representation. Let M be the least right common multiple of . There exist such that , therefore .
Since the ring of difference operators is a principal ideal domain, the ring of matrices satisfies the left and right Ore property (see proof in [24]) and thus
A representation of matrix rational operators as right (left) fractions is not unique. However, once we clear the common right (resp. left) divisors, we get a minimal fraction, in the following sense:
Theorem 2
For any there is a minimal right (resp. left) decomposition (resp. ) with right (resp. left) coprime. Any other right decomposition (resp. left decomposition ) is of the form (resp. ), where . Moreover and is the minimal possible among all decompositions.
Proof
We will first prove by induction on n that if A and B are matrix difference operators of size with B regular, if M is a generator of the right ideal and N a greatest left common divisor of A and B, then .
It is true for by Proposition 2. Let us now consider A and B of size . Using invertible matrices we can assume that A and B are both upper triangular. Indeed, one can factorize them as and with upper triangular and , invertible. Hence if there exist C and D such that with , then we can write . Let us consider A and B in block matrix form:
where E and F are of size , P and Q are difference operators and X and Y have size . First, let be a generator of the right ideal in , be a generator of the right ideal in and K be a generator of the right ideal in (which is also called the greatest left common divisor of E and F). We have by the induction hypothesis . One can find a difference operator R with and a vector difference operator Z such that . Indeed, by Lemma 2 one can assume that K is a diagonal matrix . Let us denote the entries of the vector by . Then we can find for all difference operators and such that and . Let R be a generator of the right ideal . Then and there exists a vector Z such that . Finally, by definition of K there exist two matrix difference operators V and W such that . Let
Then and .
The proofs of the remaining parts of the statement are identical to the scalar case, see the proofs of Propositions 2 and 5.
The inequality (16) is also true for a regular matrix difference operator and we say that is a full kernel operator if . Theorem 1, Corollary 2 and Corollary 3 from the previous section are also true for matrix rational operators.
PreHamiltonian Pairs and Nijenhuis Operators
Zhiber and Sokolov, in their study of Liouville integrable hyperbolic equations [28], have discovered a family of special differential operators with the property that they define a new Lie bracket and are homomorphisms from the Lie algebra with the newly induced bracket to the original Lie algebra. These operators can be viewed as a generalization of Hamiltonian operators, although they are not necessarily skew–symmetric. Inspired by the work of Zhiber and Sokolov, infinite sequences of such scalar differential operators of arbitrary order were constructed in [29] using symbolic representation [30, 31]. Kiselev and van de Leur gave some examples of such matrix differential operators [32] and investigated the geometric meaning of such operators. They named them preHamiltonian operators in [33] and defined the compatibility of two such operators. Recently, Carpentier renamed them as integrable pairs and investigated the interrelations between such pairs and Nijenhuis operators [17]. In principle, many results for differential operators also work for difference operators since is a principal ideal domain. In this section, we develop further the theory of preHamiltonian operators and extend it to the difference case. Similarly to the previous section, we illustrate our results for the scalar case.
Definition 7
A difference operator A is called preHamiltonian if is a Lie subalgebra of , i.e. if
| 19 |
By a direct computation, it is easy to see ([29]) that an operator A is preHamiltonian if and only if there exists a 2-form on denoted by such that
| 20 |
For a given , both and are in , i.e. difference operators on .
For a Hamiltonian operator H, the Jacobi identity is equivalent to (cf. [9])
| 21 |
for all , where is the adjoint of the operator. Clearly, Hamiltonian operators are preHamiltonian with . We are going to explore the relation between preHamiltonian pairs and Hamiltonian pairs in the forthcoming paper [34]. Here we look at their relations with Nijenhuis operators.
Similarly to Hamiltonian operators, in general, the linear combination of two preHamiltonian operators is no longer preHamiltonian. This naturally leads to the following definition:
Definition 8
We say that two difference operators A and B form a preHamiltonian pair if is preHamiltonian for all constant .
A preHamiltonian pair A and B implies the existence of 2-forms , and . They satisfy
| 22 |
Gel’fand and Dorfman [8] and Fuchssteiner and Fokas [10, 11] discovered the relations between Hamiltonian pairs and Nijenhuis operators. These pairs naturally generate Nijenhuis operators. In what follows, we show that preHamiltonian pairs also give rise to Nijenhuis operators. This also explains why we chose the terminology ‘preHamiltonian’ instead of ‘integrable’ for such operators. These operators naturally appear in the description of the invariant evolutions of curvature flows [35].
Definition 9
A difference operator R is Nijenhuis if
| 23 |
Clearly, a Nijenhuis operator is also preHamiltonian with
For a rational operator , which is defined on , we define the Nijenhuis identity as
| 24 |
where the bracket denotes the commutator of two difference operators.
Theorem 3
If two difference operators A and B form a preHamiltonian pair, then is Nijenhuis.
Proof
Since A and B are preHamiltonian we can write for all
| 25 |
Hence, we see that, provided that A and B are preHamiltonian, (24) is equivalent to
| 26 |
where the expression inside the parentheses is nothing else than (22). Therefore, given two preHamiltonian difference operators A and B, the ratio is Nijenhuis if and only if A and B form a preHamiltonian pair.
Conversely, we have the following statement:
Theorem 4
Let R be a Nijenhuis rational difference operator with minimal decomposition such that B is preHamiltonian. Then A and B form a preHamiltonian pair.
Proof
Since B is preHamiltonian, we have for all
| 27 |
Therefore, we can transform (24) into the equivalent form
| 28 |
Let be the left least common multiple of the pair A and B. It is also the right least common multiple of the pair C and D since is minimal. By Lemma 5 (i) there exists a difference operator and thus a 2–form on such that
| 29 |
which implies that A and B form a preHamiltonian pair.
There is a simple algorithm to determine whether a given difference operator is preHamiltonian and to find the corresponding 2–form . Theorem 3 provides an efficient method to check the Nijenhuis property for rational operators, which is important in the theory of integrability.
Example 2
The operators A and B defined in (2) form a preHamiltonian pair. Thus the recursion operator for the Volterra chain (1) is Nijenhuis.
Proof
Let . According to Definition 8, we check the existence of a 2-form in (20). By direct computation, we have
where stands for anti-symmetrisation with respect to ’s and ’s. We can now compute its preimage by comparing the highest order terms in either a or b, and we get
It follows from Theorem 3 that the recursion operator for the Volterra chain (1) is Nijenhuis.
The previous two theorems provide the interrelations between preHamiltonian pairs and Nijenhuis operators. The following theorem (analogous to its differential counterpart in [17]) gives another motivation for the definition of a preHamiltonian pair: it is a necessary condition for a rational operator to ‘generate’ an infinite commuting hierarchy.
Theorem 5
Let R be a rational operator with minimal decomposition . Suppose that there exist spanning an infinite dimensional space over such that for all , and such that for all . Then A and B form a preHamiltonian pair.
Proof
Since for all by assumption, we have
| 30 |
Similarly, replacing B with A we get for all
| 31 |
Let be the left least common multiple of the pair A and B. A non-zero difference operator has a finite dimensional kernel over , therefore one must have for all that
| 32 |
By minimality of the fraction , we deduce that for all there exists a difference operator such that
| 33 |
For all we can write , where is defined by for all . is a bidifference operator, i.e., is a difference operator and its coefficients are difference operators applied to f. In other words for all f, where are difference operators. We can find a unique pair of bidifference operators Q and L such that for all f and
| 34 |
From (33) we see that for all . This implies that since the span an infinite dimensional space over . Therefore, for all f, g, we have
implying that B is preHamiltonian. Finally, since for all constant , operator
satisfies the same hypothesis as R, we conclude that is preHamiltonian.
Towards Applications to Differential–Difference Equations
In this section we introduce some basic concepts for differential–difference equations relevant to the contents of this paper. More details on the variational difference complex and Lie derivatives can be found in [15, 36].
Let be a vector function of a discrete variable and time variable t, where n and t are “independent variables” and will play the role of a “dependent” variable in an evolutionary differential–difference system
| 35 |
Eq. (35) is an abbreviated form encoding the infinite sequence of systems of ordinary differential equations
A vector function is assumed to be a locally holomorphic function in its arguments. In the majority of cases it will be a rational or polynomial function which does not depend explicitly on the variables n, t. The corresponding vector field coincides with (4). Thus there is a bijection between evolutionary derivations of and differential–difference systems with .
Definition 10
There are three equivalent definitions of symmetry of an evolutionary equation. We say that is a symmetry of (35) if
Symmetries of an equation form a Lie subalgebra in . The existence of an infinite dimensional commutative Lie algebra of symmetries is a characteristic property of an integrable equation and it can be taken as a definition of integrability.
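As an illustration, the commutativity condition can be verified symbolically for the Volterra chain: the second flow of the Volterra hierarchy commutes with the chain itself. The sympy sketch below computes the commutator of the two evolutionary vector fields via Fréchet derivatives; the formula for the second flow is the standard one from the literature (it is an assumption here, not taken from this text), and the helper names are hypothetical.

```python
import sympy as sp

# shifted variables u_{-4}, ..., u_4 of a scalar lattice field
u = {k: sp.Symbol(f'u{k}') for k in range(-4, 5)}

def shift(expr, j):
    """Shift operator S^j: u_k -> u_{k+j}."""
    return expr.subs({u[k]: u[k + j] for k in range(-4, 5)
                      if abs(k + j) <= 4}, simultaneous=True)

def frechet(h, v, support):
    """Frechet derivative h_*[v] = sum_k (dh/du_k) * S^k(v)."""
    return sum(sp.diff(h, u[k]) * shift(v, k) for k in support)

f = u[0] * (u[1] - u[-1])                 # Volterra chain u_t = u(u_1 - u_{-1})
g = u[0] * (u[1] * (u[0] + u[1] + u[2])   # second flow of the Volterra hierarchy
            - u[-1] * (u[0] + u[-1] + u[-2]))

# g is a symmetry of u_t = f iff the evolutionary vector fields commute,
# i.e. g_*[f] - f_*[g] = 0
commutator = frechet(g, f, range(-2, 3)) - frechet(f, g, range(-1, 2))
print(sp.expand(commutator))              # -> 0
```

The same machinery applies to any differential–difference equation with polynomial right-hand side, as long as enough shifted variables are declared.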
Often the symmetries of integrable equations can be generated by recursion operators [2]. Roughly speaking, a recursion operator is a linear operator mapping a symmetry to a new symmetry. For an evolutionary Eq. (35), it satisfies
| 36 |
Recursion operators for nonlinear integrable equations are often Nijenhuis operators. Therefore, if the Nijenhuis operator R is a recursion operator of (35), the operator R is also a recursion operator for each of the evolutionary equations in the hierarchy , where
Nijenhuis operators are closely related to Hamiltonian and symplectic operators. The general framework in the context of difference variational complex and Lie derivatives can be found in [15, 36]. Here we recall the basic definitions related to Hamiltonian systems.
For any element , we define an equivalence class (or a functional) by saying that two elements are equivalent if . The space of functionals is denoted by .
For any functional (simply written without confusion), we define its difference variational derivative (Euler operator) denoted by (here we identify the dual space with itself) as
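For a difference polynomial in the shifts $u_k$, this variational derivative is the sum over $k$ of $S^{-k}$ applied to the partial derivative with respect to $u_k$, and it annihilates total differences, consistent with the definition of functionals above. A small sympy sketch (illustrative only; helper names are hypothetical):

```python
import sympy as sp

u = {k: sp.Symbol(f'u{k}') for k in range(-3, 4)}

def shift(expr, j):
    """Shift operator S^j acting on difference polynomials."""
    return expr.subs({u[k]: u[k + j] for k in range(-3, 4)
                      if abs(k + j) <= 3}, simultaneous=True)

def variational_derivative(f, support):
    """Difference Euler operator: delta f/delta u = sum_k S^{-k}(df/du_k)."""
    return sp.expand(sum(shift(sp.diff(f, u[k]), -k) for k in support))

# the density u0*u1 has variational derivative u1 + u_{-1}
print(variational_derivative(u[0] * u[1], range(0, 2)))

# densities differing by a total difference (S - 1)h define the same
# functional and have the same variational derivative
h = u[0] * u[1]
assert variational_derivative(shift(h, 1) - h, range(0, 3)) == 0
```

The second check mirrors the fact that the Euler operator is well defined on the equivalence classes introduced above.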
Definition 11
An evolutionary Eq. (35) is said to be a Hamiltonian equation if there exists a Hamiltonian operator H and a Hamiltonian such that
This is the same as saying that the evolutionary vector field is a Hamiltonian vector field and thus the Hamiltonian operator is invariant along it, that is,
| 37 |
Nijenhuis recursion operators for some integrable difference equations, e.g., the Narita-Itoh-Bogoyavlensky lattice [37], are no longer weakly nonlocal, but rational difference operators of the form . The following statement tells us how operators A and B are related to a given equation.
Theorem 6
If a rational difference operator R with minimal decomposition is a recursion operator for Eq. (35), then there exists a difference operator P such that
| 38 |
Proof
To say that is a minimal decomposition of R means that A and B are right coprime. Let C and D be two left coprime matrix operators with C regular such that . Such a pair exists by Lemma 5. Since is a recursion operator of (35), substituting it into (36) we have
that is,
| 39 |
We rewrite (39) as
By Lemma 5 there exists an operator P such that
Thus the operators A and B satisfy the same relation (38).
Compared with (37), for Hamiltonian operators we have . Conversely, it is easy to show that
Proposition 10
For Eq. (35), if there exist two operators A and B satisfying (38), then is a recursion operator for the equation.
Proof
By direct computation, we have
satisfying (36). Thus is a recursion operator.
This proposition has been used in [38] in constructing recursion operators for integrable noncommutative ODEs.
Example 3
For the operators A and B defined in (2) of the Volterra chain (1), the difference operator P in Theorem 6 is .
In what follows, we give the conditions for a rational recursion operator to generate infinitely many local commuting symmetries. We first prove the following lemma:
Lemma 3
Assume that B is a preHamiltonian operator and that R, with minimal decomposition , is a recursion operator for , where . Then .
In particular, if there exists such that R is a recursion operator for and , then .
Proof
We know that B is preHamiltonian. So for any , we have
| 40 |
From Theorem 6, it follows, when or , that
| 41 |
Using (41) for , we get
If there exists such that R is a recursion operator for then from the former we deduce that
| 42 |
Hence .
Proposition 11
Assume that A and B form a preHamiltonian pair and is a recursion operator for , where . If there exists such that for all , then for all .
Proof
We can assume that is a minimal decomposition of R. Indeed, if not we write and where is minimal and replace by . By Theorem 3, we know that R is Nijenhuis and thus it is a recursion operator for all , . We proceed by induction on . If there is nothing to prove. If , we deduce as a direct application of Lemma 3 since for all . Suppose that for all such that , which implies . Hence by Lemma 3, we have .
Rational Recursion Operator for Adler–Postnikov Equation
In this section, we construct a recursion operator of system
| 43 |
from its Lax representation and show that it is Nijenhuis and generates local commuting symmetries. In general, it is not easy to construct a recursion operator for a given integrable equation, even when the equation is known explicitly. The difficulty lies in how to determine the starting terms of R, i.e., the order of the operator, and how to construct its nonlocal terms. Many papers are devoted to this subject, see [5, 39, 40]. If the Lax representation of the equation is known, there is an amazingly simple approach to construct a recursion operator proposed in [41]. The idea in [41] can be developed for the Lax pairs that are invariant under the reduction groups, which applies to both differential and differential–difference equations [7, 37].
The Eq. (43) first appeared in [18], where the authors presented its scalar Lax representation. We rewrite it in the matrix form as follows:
| 44 |
| 45 |
where is a spectral parameter. The commutativity of the above operators leads to the zero curvature condition
| 46 |
and subsequently it leads to the system (43). The system (43) defines a derivation of with . The representation (44), (45) is invariant with respect to the transformations:
| 47 |
and
| 48 |
where
The transformation (47) reflects the symmetry of the Eq. (43).
For a given matrix , we can build up a hierarchy of nonlinear systems by choosing different matrices with the degree of from to 2l. The way to construct a recursion operator directly from a Lax representation is to relate the different operators using the ansatz
and then to find the relation between the two flows corresponding to and . The multiplier is an automorphic function of the group generated by the transformations and . Here is the remainder and we assume that it has the same symmetry as :
| 49 |
where are matrices of the following form [invariant under (48)]
and since is invariant under (47), they satisfy
The zero curvature condition leads to
| 50 |
Substituting the ansatz (49) into (50) and collecting the coefficients of powers of , we obtain six matrix equations for . For example, the equation corresponding to the linear terms of is
| 51 |
Through them we are able to determine the entries of matrices and we finally get
| 52 |
Note that
We simplify the above expression of . It becomes
Substituting and into (52), we obtain the relation between two symmetry flows and . Thus we obtain the following statement:
Proposition 12
A recursion operator for Eq. (43) is
| 53 |
We represent R as
where
Note that is a recursion operator for [37] and that is the inverse recursion operator for the Volterra chain [20].
The recursion operator (53) is not weakly nonlocal. We now rewrite it as a rational difference operator. It is convenient to first write R as
| 54 |
where
| 55 |
| 56 |
| 57 |
where is a difference operator and .
Lemma 4
The recursion operator R given by (54) can be factorized as with
| 58 |
where
and
| 59 |
Proof
To find A and B for (54) we need to rewrite as a right fraction. It turns out that
from which we can find that and as stated form a solution. Then is defined as given in the statement.
The authors in [42] showed that the recursion operators derived from certain Lax representations under certain boundary conditions are Nijenhuis once every step is uniquely determined. Here we prove the Nijenhuis property using the results in Sect. 3.
Theorem 7
The operators A and B defined by (59) and (58) are compatible preHamiltonian operators. In particular, the recursion operator R for Eq. (43) given by (53) is Nijenhuis.
Proof
We know from Lemma 4 that . By Theorem 3, to prove that it is Nijenhuis we only need to show that the operators A and B form a preHamiltonian pair.
Let . For any and constant , we use the computer algebra package Maple to compute , which is linear in a and its shifts. We take the coefficient of the highest order term (here ) in and denote it by . Notice that the highest order term in I is . We set . We then compute and repeat the procedure. Finally we get after steps, implying that I is preHamiltonian.
Since the operator R is not weakly nonlocal, the results on the locality of symmetries generated by R in [7] are no longer valid. In the rest of this section, we are going to show that R generates infinitely many commuting symmetries of (43) starting from the equation itself.
Proposition 13
Let h be a difference polynomial such that R is a recursion operator for . Then h lies in the image of B. More precisely for some and A(x) is a difference polynomial. Moreover, R is a recursion operator for .
We will break the proof of this proposition into two parts using (54). First we will prove that for some difference polynomial g. Second we will show that for some difference polynomial k. We begin by proving a few lemmas. To improve the readability, we put them in “Appendix B”. We now write the proof for Proposition 13 using these lemmas.
Proof
By Lemma 8, we know that for some difference polynomial h. By Lemmas 9 and 10, for some constant we get that
Since g is a difference polynomial, the constant term in is . This constant term must be divisible on the left by , which implies . Moreover, we can divide the congruence relation by on the right since has a trivial kernel:
After applying Lemma 11 we deduce that for some difference polynomial k.
Let M be a generator of the right ideal in . This means that for some pair of right coprime difference operators D and E. By Lemma 1, there exists such that and . Since , we conclude that . Finally, , hence is a difference polynomial. Since R is Nijenhuis by Theorem 7, it is a recursion operator for .
Theorem 8
There exists a sequence in such that
(1) ;
(2) ;
(3) is a difference polynomial for all ;
(4) for all ;
(5) the order of is ;
(6) R is a recursion operator for all the .
Finally, let . If commutes with some element , then .
Proof
We already know that R is a recursion operator for (43), hence by Proposition 13 there exists such that statement (1) is satisfied and is a difference polynomial. Since R is Nijenhuis (following from Theorem 7) it must be a recursion operator for as well. Using Proposition 13 a second time we find such that and is a difference polynomial. Iterating this argument we prove the statements (2), (3) and (6). Statement (5) is obvious and statement (4) follows from Proposition 11 and Theorem 7. Finally, if commutes with , let us sketch the proof of how to show that . If (M, N) is the order of f and it is not hard to prove from the equation
| 60 |
Note that the leading term of is up to multiplication by a constant the leading term of for some . Similarly, if , one sees that the negative leading term of is up to multiplication by a constant the negative leading term of for some . We conclude by induction on the total order of f, after checking that the only f commuting with an element of V and depending either on or on for is .
Remark 3
Note that is in but is not a difference polynomial.
Remark 4
Let be the automorphism of defined in Sect. 2. Then we have and . This implies that and for all .
On Inverse Nijenhuis Recursion Operators
In [20], the authors listed integrable differential–difference equations with their algebraic properties. For some systems, they presented both recursion operators and their inverses in weakly nonlocal form. In this section, we will explain the (non)existence of weakly nonlocal inverse recursion operators and how to work out the nonlocal terms based on Theorem 1 and its corollaries in Sect. 2.3, using examples from [20].
We select four examples: in Sect. 6.1, we show the nonexistence of a weakly nonlocal inverse recursion operator for the Toda lattice; in Sect. 6.2, we show the existence of a weakly nonlocal inverse recursion operator with only one nonlocal term for a relativistic Toda system; in Sect. 6.3, we deal with a recursion operator with two nonlocal terms; in our last example, we demonstrate that the inverse of the recursion operator R itself is not weakly nonlocal, but that of is!
The Toda lattice
The Toda equation [43] is given by
In the Manakov-Flaschka coordinates [44, 45] defined by , it can be rewritten as a two-component evolution system:
| 61 |
which admits two compatible Hamiltonian local structures
It is easy to see that and that the kernel of is spanned by and . One can check that the kernel of is spanned by . In other words, and have a common right divisor C of total order 1 and can be written as and , where and , that is,
Thus B has full kernel and A has trivial kernel. Thus the recursion operator
is weakly nonlocal but is not. Indeed,
A relativistic Toda system
The relativistic Toda system [46] is given by
Introducing the dependent variables as follows [47]:
then the equation can be written as
It admits two compatible Hamiltonian local structures
It is easy to see that and that the kernel of is spanned by and . Similarly and the kernel of is spanned by and . In other words, and have a common right divisor and can be written as
where A and B are of total order 1 and their kernels are of dimension 1. Therefore both the recursion operator and its inverse are weakly nonlocal, and
Note that the kernel of A is spanned by , the kernel of is spanned by and . This explains the nonlocal term in the inverse of the recursion operator.
The Ablowitz–Ladik lattice
Consider the Ablowitz–Ladik lattice [19]
Its recursion operator [48]
| 62 |
can be written as where by letting , and we have
and
The operator A can be factorized as follows:
where
Note that and , which is spanned by
Thus the operator A is a full kernel operator and hence the inverse of is weakly nonlocal. Note that is spanned by
Thus is spanned by
and similarly . Moreover, we have
These give us the nonlocal term appearing in the inverse operator as stated in Theorem 1, and indeed
The Kaup–Newell lattice
Consider the Kaup–Newell lattice [49]:
Its recursion operator
can be written as where
and
The operator A does not have a full kernel since and its kernel is spanned by . Surprisingly, the operator can be factorized as follows:
where
Note that and , which is spanned by . Thus the operator C is a full kernel operator and hence the inverse of is weakly nonlocal as presented in [20], and it equals
| 63 |
Note that is spanned by and thus is spanned by
Moreover, we have
These give us the nonlocal term appearing in the inverse operator as shown in (63).
Conclusions
In this paper we have built a rigorous algebraic setting for difference and rational (pseudo–difference) operators with coefficients in a difference field and studied their properties. In particular, we formulated a criterion for a rational operator to be weakly nonlocal. We have defined and studied preHamiltonian pairs, which generalize the well known bi-Hamiltonian structures in the theory of integrable systems. By definition a preHamiltonian operator is an operator whose image forms a Lie subalgebra in the Lie algebra of evolutionary derivations of . The latter can be verified directly, and it is a relatively simple problem compared to the verification of the Jacobi identity for Hamiltonian operators. We have shown that a Nijenhuis recursion operator is a ratio of difference operators from a preHamiltonian pair. Thus, for a given rational operator, testing whether it is Nijenhuis can be done systematically. We applied our theoretical results to integrable differential–difference equations in two aspects:
We have constructed a rational recursion operator R (53) for Adler–Postnikov integrable Eq. (43) and shown that it can be written as the ratio of a preHamiltonian pair and thus it is Nijenhuis. Moreover, we proved that R produces infinitely many commuting local symmetries;
For a given recursion operator we can answer the question whether the inverse operator is weakly nonlocal and, if so, how to bring it to the standard weakly nonlocal form (examples in Section 6).
In Sect. 6.4 we show that for a weakly nonlocal recursion operator R which does not have a weakly nonlocal inverse, there may exist a constant such that is weakly nonlocal. In other words, the total order of the difference operator in the factorisation may be lower for a certain choice of . This observation requires further investigation.
The concept of preHamiltonian operators deserves further attention. These operators naturally appear in the description of the invariant evolutions of curvature flows in homogeneous spaces in both the continuous [50] and discrete [35] settings. In the future, we will look into the geometric implications of such operators.
In this paper, we mainly explored the relation between preHamiltonian operators and Nijenhuis operators. We are going to investigate how preHamiltonian pairs relate to bi-Hamiltonian pairs. In our forthcoming paper [34], we will present the following main result: if H is a Hamiltonian (a priori nonlocal, i.e. rational) operator, then to find a second Hamiltonian K compatible with H is the same as to find a preHamiltonian pair A and B such that is skew-symmetric.
We have discovered that Adler–Postnikov integrable equation (43) is indeed a Hamiltonian system. This equation can be written as , where H is the following skew-symmetric rational operator
In [34], we are going to show that H is a Hamiltonian operator for Eq. (43) and explain how it is related to the recursion operator (53).
Acknowledgements
The paper is supported by AVM’s EPSRC grant EP/P012655/1 and JPW’s EPSRC grant EP/P012698/1. Both authors gratefully acknowledge the financial support. JPW and SC were partially supported by Research in Pairs grant no. 41670 from the London Mathematical Society; SC also thanks the University of Kent for the hospitality received during his visit in July 2017. SC was supported by a Junior Fellow award from the Simons Foundation. AVM is grateful for a partial support by the Ministry of Education and Science of Russian Federation, project 1.13560.2019/13.1.
Appendix A. Basic Concepts for a Unital Associative Principal Ideal Ring
Recall the definitions of some basic concepts for a unital associative ring (see for example [25]).
A left (respectively right) ideal of is an additive subgroup such that (resp. ).
The left (resp. right) principal ideal generated by is, by definition, (resp. ).
A ring is called a principal ideal ring, if every left and right ideal of the ring is principal. In what follows we assume that the ring is both a left and a right principal ideal ring, meaning that every left ideal of and every right ideal of is principal.
Given an element , an element d is called a right (resp. left) divisor of a if (resp. ) for some . An element is called a left (resp. right) multiple of a if (resp. ) for some .
Given elements , their right (resp. left) greatest common divisor (gcd) is the generator d of the left (resp. right) ideal generated by a and b: (resp. ). It is uniquely defined up to multiplication by an invertible element. It follows that d is a right (resp. left) divisor of both a and b, and we have the Bezout identity (resp. ) for some .
Similarly, the left (resp. right) least common multiple (lcm) of a and b is an element defined uniquely, up to multiplication by an invertible element, as the generator of the intersection of the left (resp. right) principal ideals generated by a and by b: (resp. ).
We say that a and b are right (resp. left) coprime if their right (resp. left) greatest common divisor is 1 (or invertible), namely if the left (resp. right) ideal that they generate is the whole ring (resp. ). In particular there exist such that (resp. ).
An element is called a right zero divisor if there exists (called a left zero divisor) such that .
A non-zero element is called regular if it is neither a left nor a right zero divisor. The set of regular elements is a multiplicative monoid of .
A ring is called a domain, if it does not have zero divisors.
A domain is called right (left) Euclidean, if there exists a function
such that
,
- for any there exist unique (resp. ), such that
and or (resp. or ).
A principal ideal ring satisfies the right (and left) Ore property (Theorem 2.2 (c) in [25]). Namely, for any there exist (resp. ) such that (resp. ).
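In a commutative Euclidean domain such as the integers, these notions reduce to the familiar extended Euclidean algorithm, which produces both the greatest common divisor and the Bezout cofactors explicitly. A toy sketch over the integers (a commutative stand-in only; in the noncommutative ring of difference operators the left and right versions must be distinguished):

```python
def extended_euclid(a, b):
    """Return (d, s, t) with d = gcd(a, b) and d = s*a + t*b (Bezout identity)."""
    s0, t0, s1, t1 = 1, 0, 0, 1
    x, y = a, b
    while y != 0:
        q, r = divmod(x, y)     # Euclidean division step
        x, y = y, r
        s0, s1 = s1, s0 - q * s1  # update Bezout cofactors alongside
        t0, t1 = t1, t0 - q * t1
    return x, s0, t0

d, s, t = extended_euclid(240, 46)
print(d, s, t)                  # -> 2 -9 47
assert d == s * 240 + t * 46 == 2
```

When d is invertible (here, when d = 1) the two elements are coprime, exactly as in the definition above.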
Lemma 5
Let be a principal ideal ring. Let a and b be two right coprime elements in with b regular. Then there exist two left coprime elements with c regular such that . Moreover,
-
(i)
if for some then there exists such that and ;
-
(ii)
if for some then there exists such that and .
Proof
It follows from the left Ore property that for regular, there exist regular, such that . We can assume that c and d are left coprime. Otherwise, one can simplify on the left by their left greatest common divisor, which is regular since c is.
-
(i)
Let . is a right ideal in , hence it can be written as for some . Obviously , thus there exists such that and both g, h are regular. Element h itself lies in , therefore there exists such that . Multiplying the latter on the right by g we have , which implies that since c is regular. Recall that a and b are right coprime. Therefore the equalities and imply that g is invertible in .
Now let us assume that for some elements . By definition of there exists such that . We can rewrite q as where . Finally we note that which implies since c is regular.
Taking the left ideal we prove part (ii) of the Lemma in a similar way.
Appendix B. Lemmas Used for the Proof of Proposition 13
We denote by π the projection from the space of Laurent difference polynomials to the space of difference polynomials, defined by letting π(b) be the nonsingular part of b for every difference Laurent monomial b.
If is a Laurent series whose coefficients are Laurent difference polynomials, we denote by the series .
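As a toy computational model of this projection (a minimal sketch under our own assumptions: a Laurent difference polynomial is encoded as a dict from monomials to coefficients, and the nonsingular part of a monomial is the monomial itself when it contains no negative powers and 0 otherwise):

```python
# Toy model of the projection from Laurent difference polynomials to
# difference polynomials. A monomial in the shifted variables u_n is
# encoded as a tuple of (shift, exponent) pairs; negative exponents are
# the singular factors. The projection keeps a monomial when all of its
# exponents are non-negative and discards it otherwise.

def project(poly):
    """Return the nonsingular part of a Laurent difference polynomial."""
    return {mono: coeff for mono, coeff in poly.items()
            if all(exp >= 0 for _, exp in mono)}

# Example: u_0^2 * u_1^(-1) + 3 * u_0 * u_2  ->  3 * u_0 * u_2
p = {((0, 2), (1, -1)): 1, ((0, 1), (2, 1)): 3}
print(project(p))  # {((0, 1), (2, 1)): 3}
```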
Lemma 6
Let and . Then is a difference operator.
Proof
We have
(64)
It is clear that for n large enough and similarly .
Similarly,
Lemma 7
Let and . Then is a difference operator.
Proof
Let us expand as a Laurent series in :
(65)
After applying π to the coefficients of this Laurent series expansion of L, we get a difference operator. Let us show this for the first summand in the last line of (65), namely that
is a difference operator (the same argument applies to the remaining three summands). This follows from the claim that for large enough m, and for all ,
(66)
Indeed, if e can be written as a sum of Laurent monomials whose numerators have degrees, as polynomials in the ’s, bounded by , and if and denote the degrees of a and c as polynomials in the ’s, then (66) holds for .
Lemma 8
Let f be a difference polynomial such that R is a recursion operator for the equation . Then there exists a difference polynomial k such that .
Proof
The operator R given by (53) is a recursion operator for , which implies that is a conserved density of f, or in other words that there is a difference polynomial g such that .
To conclude, we need to prove that for some difference polynomial k, which is equivalent to saying that for some constant . We claim that this is the same as saying that
(67)
Indeed, it is clear that by (3) and that any constant satisfies (67). Conversely, if a difference polynomial g of order (M, N) satisfies (67), then there exist a difference polynomial k and a constant such that . To check this, we proceed by induction on the total order of g. If it is zero, meaning that g is a function of for a single N, then g must be a constant. If not, say if g has order (M, N) with , then does not depend on . Consequently, we can write g as a sum where k has order with and h has order with . Since g and both satisfy (67), it follows that must satisfy (67) as well, i.e. we have reduced the problem to a difference polynomial of smaller total order.
The difference polynomial in (67) is the remainder of the division of by on the left. Let us call it r:
(68)
where X is some difference operator. We want to prove that . It is equivalent to proving that the remainder of the division of by on the left is 0. Indeed, and r is a difference polynomial, therefore .
We are going to deduce that from the fact that R is a recursion operator for . Note that . Recall Eq. (54), where R was expressed as . By the definition (36) of a recursion operator we have
(69)
The idea is to expand (69) as a Laurent series in and to project the coefficients in front of for large N onto the space of difference polynomials. Let us start by rearranging (69) using two Euclidean divisions:
(70)
where Y and Z are two difference operators. Combining (69) with (70), we get:
(71)
By Lemmas 6 and 7, if M is the RHS of (71), is a difference operator. Therefore,
(72)
must be a difference operator as well. Let us write where and and let . Looking only at even powers of in the Laurent series expansion of (72) we obtain
(73)
where the Laurent difference polynomials , satisfy
(74)
It is clear that for all and for all , we have
In other words, there exists such that
(75)
If , then is either a constant or the order of must go to as N grows. In both cases we must have:
(76)
This quantity can be computed directly, and we obtain
(77)
which contradicts p as given in (57). Thus we have and hence . This proves the statement.
Lemma 9
Let be such that R is a recursion operator for . Then
is a difference operator.
Proof
We have and . From (69) we deduce that
(78)
is a difference operator. It remains to rewrite the first nonlocal term. We have modulo left multiplication by and we have
and
Therefore
(79)
Lemma 10
Let a, b, c, d, e, f, g, h be difference Laurent polynomials such that and
is a difference operator. Then there exists a constant such that
Proof
Recall the definition of the Laurent monomials for
(80)
We have
(81)
Therefore, we must have for large enough n
Here has poles at () and has poles at . Moreover, the Laurent polynomials inside the parentheses can only have a bounded number of poles, independently of n. Combining these two facts, we deduce that for large n the arguments inside the four pairs of parentheses must vanish:
(82)
Since , either , in which case we can take , or . In the latter case we conclude using the fact that if two Laurent difference polynomials x and y are such that for infinitely many , then x and y are both equal to the same constant.
Lemma 11
Let d be a difference polynomial. Then d is in the image of if and only if
(83)
where P is a difference operator. In this case, we have
(84)
Here for all , (resp. ) is the unique difference Laurent polynomial such that (resp. ) is divisible on the left by . Moreover,
is a difference polynomial.
Proof
Suppose that for a difference Laurent polynomial . Then the Fréchet derivative of d expands as:
Hence (we use to denote equality modulo left multiplication by ) we get
Conversely assume that
(85)
Recall that the ’s are defined so that and for all . The following identity can be easily checked by induction
(86)
Let us rewrite the LHS of (85):
(87)
Combining (85), (86) and (87) we obtain
(88)
from which it follows that
We have proved that there exists a Laurent difference polynomial such that . This implies that cannot have poles (its highest pole would have to be less than or equal to 0, while its lowest pole would have to be greater than 0), and therefore that it is a difference polynomial.
Footnotes
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1. Ablowitz MJ, Kaup DJ, Newell AC, Segur H. Inverse scattering transform-Fourier analysis for nonlinear problems. Stud. Appl. Math. 1974;53(4):249–315.
- 2. Olver PJ. Evolution equations possessing infinitely many symmetries. J. Math. Phys. 1977;18(6):1212–1215.
- 3. Fuchssteiner B. Application of hereditary symmetries to nonlinear evolution equations. Nonlinear Anal. Theory Methods Appl. 1979;3(11):849–862.
- 4. Magri, F.: A Geometrical Approach to the Nonlinear Solvable Equations. Volume 120 of Lecture Notes in Physics, pp. 233–263. Springer (1980)
- 5. Sanders JA, Wang JP. Integrable systems and their recursion operators. Nonlinear Anal. 2001;47:5213–5240.
- 6. Sergyeyev A. Why nonlocal recursion operators produce local symmetries: new results and applications. J. Phys. A: Math. Gen. 2005;38:3397–3407.
- 7. Wang, J.P.: Lenard scheme for two-dimensional periodic Volterra chain. J. Math. Phys. 50, 023506 (2009)
- 8. Gel’fand IM, Dorfman IY. Hamiltonian operators and algebraic structures related to them. Funct. Anal. Appl. 1979;13(4):248–262.
- 9. Dorfman I. Dirac Structures and Integrability of Nonlinear Evolution Equations. Chichester: Wiley; 1993.
- 10. Fokas AS, Fuchssteiner B. On the structure of symplectic operators and hereditary symmetries. Lett. Nuovo Cimento (2) 1980;28(8):299–303.
- 11. Fuchssteiner B, Fokas AS. Symplectic structures, their Bäcklund transformations and hereditary symmetries. Phys. D 1981;4(1):47–66.
- 12. Barakat A, De Sole A, Kac VG. Poisson vertex algebras in the theory of Hamiltonian equations. Jpn. J. Math. 2009;4:141–252.
- 13. De Sole A, Kac VG. Non-local Poisson structures and applications to the theory of integrable systems. Jpn. J. Math. 2013;8:233–347.
- 14. De Sole A, Kac VG, Valeri D. A new scheme of integrability for (bi)Hamiltonian PDE. Commun. Math. Phys. 2016;347:449–488.
- 15. Kupershmidt, B.A.: Discrete Lax Equations and Differential–Difference Calculus, Volume 123 of Astérisque. Société mathématique de France, Paris (1985)
- 16. Magri F. A simple model of the integrable Hamiltonian equation. J. Math. Phys. 1978;19(5):1156–1162.
- 17. Carpentier S. A sufficient condition for a rational differential operator to generate an integrable system. Jpn. J. Math. 2017;12:33–89.
- 18. Adler VE, Postnikov VV. Differential–difference equations associated with the fractional Lax operators. J. Phys. A: Math. Theor. 2011;44(41):415203.
- 19. Ablowitz MJ, Ladik JF. Nonlinear differential–difference equations and Fourier analysis. J. Math. Phys. 1976;17(6):1011–1018.
- 20. Khanizadeh F, Mikhailov AV, Wang JP. Darboux transformations and recursion operators for differential–difference equations. Theor. Math. Phys. 2013;177(3):1606–1654.
- 21. Maltsev, A.Ya., Novikov, S.P.: On the local systems Hamiltonian in the weakly nonlocal Poisson brackets. Physica D: Nonlinear Phenomena 156(1–2), 53–80 (2001)
- 22. Artin E. Geometric Algebra. New York: Interscience Publ.; 1957.
- 23. Draxl, P.K.: Skew Fields. London Mathematical Society Lecture Note Series. Cambridge University Press (1983)
- 24. McConnell JC, Robson JC, Small LW. Noncommutative Noetherian Rings. Graduate Studies in Mathematics. Providence: American Mathematical Society; 2001.
- 25. Carpentier S, De Sole A, Kac VG. Some remarks on non-commutative principal ideal rings. C. R. Math. 2013;351(1):5–8.
- 26. Taelman L. Dieudonné determinants for skew polynomial rings. J. Algebra Appl. 2006;5(1):89–93.
- 27. Dieudonné J. Les déterminants sur un corps non commutatif. Bull. Soc. Math. Fr. 1943;71:27–45.
- 28. Zhiber AV, Sokolov VV. Exactly integrable hyperbolic equations of Liouville type. Uspekhi Mat. Nauk 2001;56(1(337)):63–106.
- 29. Sanders JA, Wang JP. On a family of operators and their Lie algebras. J. Lie Theory 2002;12(2):503–514.
- 30. Sanders JA, Wang JP. On the integrability of homogeneous scalar evolution equations. J. Differ. Equ. 1998;147(2):410–434.
- 31. Mikhailov AV, Novikov VS, Wang JP. Symbolic representation and classification of integrable systems. In: MacCallum MAH, Mikhailov AV, editors. Algebraic Theory of Differential Equations. Cambridge: Cambridge University Press; 2009. pp. 156–216.
- 32. Kiselev AV, van de Leur JW. Symmetry algebras of Lagrangian Liouville-type systems. Theor. Math. Phys. 2010;162(2):149–162.
- 33. Kiselev, A.V., van de Leur, J.W.: Pre-Hamiltonian structures for integrable nonlinear systems. arXiv:math-ph/0703082v1
- 34. Mikhailov, A.V., Carpentier, S., Wang, J.P.: PreHamiltonian and Hamiltonian Operators for Integrable Differential–Difference Equations (2018). arXiv:1808.02957
- 35. Mansfield E, Marí Beffa G, Wang JP. Discrete moving frames and discrete integrable systems. Found. Comput. Math. 2013;13(4):545–582.
- 36. Mikhailov, A.V., Wang, J.P., Xenitidis, P.: Cosymmetries and Nijenhuis recursion operators for difference equations. Nonlinearity 24(7), 2079–2097 (2011). arXiv:1009.2403
- 37. Wang JP. Recursion operator of the Narita–Itoh–Bogoyavlensky lattice. Stud. Appl. Math. 2012;129(3):309–327.
- 38. Mikhailov AV, Sokolov VV. Integrable ODEs on associative algebras. Commun. Math. Phys. 2000;211(1):231–251.
- 39. Fuchssteiner B, Oevel W, Wiwianka W. Computer-algebra methods for investigation of hereditary operators of higher order soliton equations. Comput. Phys. Commun. 1987;44(1–2):47–55.
- 40. Hereman, W., Sanders, J.A., Sayers, J., Wang, J.P.: Symbolic computation of polynomial conserved densities, generalized symmetries, and recursion operators for nonlinear differential–difference equations. In: Group Theory and Numerical Analysis, CRM Proceedings and Lecture Notes, vol. 39, pp. 133–148. Amer. Math. Soc., Providence (2005)
- 41. Gürses M, Karasu A, Sokolov VV. On construction of recursion operators from Lax representation. J. Math. Phys. 1999;40(12):6473–6490.
- 42. Zhang D, Chen D. Hamiltonian structure of discrete soliton systems. J. Phys. A: Math. Gen. 2002;35(33):7225–7241.
- 43. Toda M. Wave propagation in anharmonic lattices. J. Phys. Soc. Jpn. 1967;23(3):501–506.
- 44. Flaschka H. The Toda lattice. II. Existence of integrals. Phys. Rev. B 1974;9:1924–1925.
- 45. Manakov SV. Complete integrability and stochastization in discrete dynamical systems. Sov. Phys. JETP 1975;40:269–274.
- 46. Ruijsenaars SNM. Relativistic Toda systems. Commun. Math. Phys. 1990;133(2):217–247.
- 47. Oevel W, Fuchssteiner B, Zhang H, Ragnisco O. Mastersymmetries, angle variables, and recursion operator of the relativistic Toda lattice. J. Math. Phys. 1989;30(11):2664–2670.
- 48. Zhang H, Tu G-Z, Oevel W, Fuchssteiner B. Symmetries, conserved quantities, and hierarchies for some lattice systems with soliton structure. J. Math. Phys. 1991;32(7):1908–1918.
- 49. Tsuchida T. Integrable discretizations of derivative nonlinear Schrödinger equations. J. Phys. A: Math. Gen. 2002;35(36):7827–7847.
- 50. Marí Beffa G, Sanders JA, Wang JP. Integrable systems in three-dimensional Riemannian geometry. J. Nonlinear Sci. 2002;12(2):143–167.

