Abstract
In this article, we consider decoding Grassmann codes, linear codes associated to the Grassmannian and its embedding in a projective space. We look at the orbit structure of the Grassmannian arising from the multiplicative group in . We project the corresponding Grassmann code onto these orbits to obtain a subcode of a –ary Reed–Solomon code. We prove that some of these projections contain an information set of the parent Grassmann code. By improving the decoding capacity of Peterson's decoding algorithm for the projected subcodes, we prove that one can correct up to errors for the Grassmann code , where is the minimum distance of the Grassmann code.
I. Introduction
Let be a prime power and be a finite field with elements. Let and be positive integers satisfying . The Grassmann variety is an algebraic variety over whose points are the -dimensional subspaces of an -dimensional vector space over . It is customary to assume , although can be any -linear vector space of dimension . To every such algebraic variety, one can associate a natural linear code by viewing the variety as a projective system [20]. The linear code associated in this way to the Grassmannian is known as the Grassmann code and is denoted by . Ryan [16], [17] initiated the study of Grassmann codes over the binary field. Later, Nogin [14] continued the study of Grassmann codes over a general finite field. They proved that the Grassmann code is an code, where
| (1) |
Here is the Gaussian binomial coefficient given by
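As a quick sanity check, the Gaussian binomial coefficient can be computed directly. The following Python sketch (the helper name is ours) reproduces the point counts of the Grassmannians appearing later in the article, e.g. 35 points for the binary Grassmannian of planes in a 4-dimensional space.

```python
from math import prod

def gaussian_binomial(m, l, q):
    """Number of l-dimensional subspaces of an m-dimensional
    vector space over the finite field with q elements."""
    numerator = prod(q**m - q**i for i in range(l))
    denominator = prod(q**l - q**i for i in range(l))
    return numerator // denominator

# Lengths of some binary Grassmann codes C(2, m):
print(gaussian_binomial(4, 2, 2))  # 35
print(gaussian_binomial(5, 2, 2))  # 155
print(gaussian_binomial(7, 2, 2))  # 2667
```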
Mathematicians have been studying different aspects of Grassmann codes since they were discovered. For example, the weight spectrum of the Grassmann codes was computed by Nogin [14], [15]. Kaipa–Pillai [11] computed the weight spectrum of the code . Some of the initial and terminal generalized Hamming weights of are also known [4], [6], [14]. The automorphism group of is quite large and fully determined [5]. Further, the structure of the minimum weight codewords of the dual Grassmann code is well understood [1]. There it was proved that the support of any minimum weight codeword of consists of three points of a line of the Grassmannian and, conversely, any three points of a line of the Grassmannian form the support of a minimum weight codeword of .
Apart from Grassmann codes, Schubert codes, the codes associated to Schubert varieties in Grassmannians, have been a topic of equal interest among researchers. The study of Schubert codes was initiated by Ghorpade–Lachaud [4], where a conjecture about the minimum distance of Schubert codes was proposed. The conjecture is known as the MDC for Schubert codes. After several attempts proving the MDC in many different cases [3], [9], [8], the MDC was settled in the affirmative [21], [7]. Different aspects of Grassmann and Schubert codes have been studied by several mathematicians, but the decoding problem for these codes has not been explored in much detail. So far, no effective decoding algorithm for Grassmann or Schubert codes is known. In a recent work, the second named author together with P. Beelen proposed a decoding algorithm for Grassmann codes [2]. In that work, paths in Grassmannians were used to construct certain parity checks for Grassmann codes, and a majority logic decoder was proposed using these parity checks. Unfortunately, the proposed majority voting algorithm is not effective, since the proposed decoder can correct approximately errors. In other words, even in the simplest cases, the decoder could not correct up to errors. Further, the second named author extended the majority voting decoder to Schubert codes corresponding to Schubert varieties in the Grassmannian [18]. Interestingly, in some cases the proposed decoder for Schubert codes is effective, but in most cases it is not. Therefore, the problem of proposing an effective decoder for Grassmann and Schubert codes is still open, even in the simplest cases such as codes associated to Grassmannians and Schubert varieties in .
In this article, we study the decoding problem for the Grassmann code . We consider the action of the cyclic group on , thinking of as ordered pairs of elements in , and study the orbits of this action. We see that the projection of the Grassmann code onto these orbits can be thought of as a subcode of a certain Reed–Solomon code. Moreover, most of these projected subcodes contain an information set of . We use such subcodes together with Peterson's decoding algorithm to correct errors. As a consequence, we are able to correct errors for the Grassmann code , where d is the minimum distance of the code.
II. Preliminaries
In this section, we recall the definition of Grassmann variety and the construction of Grassmann codes. As earlier, positive integers satisfying and a finite field with elements are fixed throughout the article. Define the set by
and fix a linear order on . Let be an -dimensional vector space over . The Grassmannian of all -planes of vector space is defined by
The Grassmannian can be embedded into a projective space via the Plücker map. More precisely, fix an ordered basis of . For , let be an matrix whose rows are the coordinates of some basis of with respect to . The Plücker map is
| (2) |
where denotes the minor of corresponding to the columns of indexed by the tuple . The image of the Plücker map is the zero set of a collection of quadratic polynomials and hence defines a projective algebraic variety, known as the Grassmann variety. For a detailed study of Grassmann varieties we refer to [10], [12]. It is known that if and are two vector spaces of dimension over , then there exists an automorphism of mapping to . Therefore, for the rest of the article, we denote by the Grassmannian of all -planes of . To every projective algebraic variety, one can associate a linear code using the language of projective systems [20, Ch. 1]. More precisely, each nondegenerate subset of a projective space over corresponds to a unique linear code. Further, the minimum distance and the generalized Hamming weights of the corresponding code can be studied from hyperplane sections of with linear subspaces of . Therefore, there is a linear code which corresponds to the Grassmannian . The code associated to in this way is known as the Grassmann code and is denoted by . To go into more detail, we now recall the construction of the Grassmann code.
Let be an matrix of indeterminates over . For every , let denote the minor of corresponding to the columns labeled by . Let be the linear space spanned by all minors . For each , let be a matrix corresponding to the point as in equation (2), and represent the Grassmannian as a set of such matrices, one for each point, in some fixed order, where . Consider the evaluation map
The image of the evaluation map Ev is known as the Grassmann code and is denoted by . The Grassmann code is an linear code where and are given by equation (1). Clearly, the codewords of Grassmann codes are indexed by points of . Therefore, we may use points as an indexing set for coordinates of codewords in .
In this article, we only study the Grassmann code . We write a generic matrix as
and write the first row of the indeterminate matrix as and the second row of as . In the next section, we will study the orbit structure of Grassmannian under the natural action induced by the cyclic group but before we get into the orbit structure, we recall the definition of the trace function of field extensions.
Let be the field extension of of degree .
Definition 1:
The trace function of over is defined and denoted by
Note that is an -dimensional vector space over and Tr is an -linear map. The trace function plays a key role in our decoder.
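For a concrete illustration, the trace from the field with 16 elements down to the binary field can be computed as the sum of the conjugates a + a² + a⁴ + a⁸. The sketch below is ours: it realizes GF(16) as binary polynomials modulo x⁴ + x + 1 (an assumption for illustration) and verifies that the trace is binary-valued and balanced.

```python
MOD = 0b10011  # x^4 + x + 1, irreducible over F_2

def gf16_mul(a, b):
    """Multiply two elements of GF(16) in polynomial representation."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x10:
            a ^= MOD  # reduce modulo x^4 + x + 1
        b >>= 1
    return r

def gf16_trace(a):
    """Tr(a) = a + a^2 + a^4 + a^8 for a in GF(16)."""
    t, s = 0, a
    for _ in range(4):
        t ^= s
        s = gf16_mul(s, s)  # next Frobenius conjugate
    return t

values = [gf16_trace(a) for a in range(16)]
print(sorted(set(values)))  # [0, 1]: the trace lands in F_2
print(values.count(0))      # 8: the kernel is a hyperplane
```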
III. Orbit Structure of Grassmannian
In this section, we study the natural action of the cyclic group of on the Grassmannian . Our goal is to understand the orbits of under this action and the behavior of the projection of the code onto these orbits. Before going into further details, we fix some notation that we will use throughout the article. As fixed in the last section, let be the field extension of of degree . We know that is an -dimensional -vector space, and hence and are isomorphic as -spaces. We fix an isomorphism between and . Keeping this in mind, we may think of as a subset of consisting of tuples , where span a two-dimensional subspace over . In other words, we think of as the set of matrices over of rank one or two, and of , up to equivalence, as the set of those matrices that are of rank two. For , we denote by the coordinates of , and vice versa. Furthermore, we treat the subspace spanned by the tuple as the same as the point of spanned by the coordinates and . Having said that, we recall the following trivial lemma.
Lemma 2:
For every , the map given by is a nonzero -linear functional on . Note that the trace map is -linear in both and . Therefore, for every and , there exists some such that . Furthermore, if is a coordinate of (via the isomorphism treating it in ), then there exists some such that . This plays a very important role. The next lemma is an immediate consequence of Lemma 2.
Lemma 3:
Let and let be the vector space. Then functions of the form
are determinantal functions on as a vector space over .
What we mean is that the function is an alternating bilinear map on . Therefore, the 2×2 minors of the matrix can be written as linear combinations of the functions for . In other words, one can think of the Grassmann code as the evaluation of such functions on the Grassmannian viewed as a subset of .
Now we are ready to look at the natural group action of on Grassmannian . Let be a generator of . The natural action of on is defined by
| (3) |
Let be the orbits of this action. Therefore, if is some element of the orbit , then . Since generates and for any , we have , and hence we may assume that each orbit has one element of the form . We denote the orbit containing by and we call the element an orbit representative of .
Example 4:
Assume and . Let be such that . In this case and . Note that if is an orbit representative of the orbit , then . By removing 0 we may write this set as . Further, as is a generator of the field , we have and for some and . As the subspace is of dimension 2, we get . This leaves possibilities for . Further, each of those and generates the same space. Therefore, there are 7 different spaces of the form . These are and . The action of maps to , and a direct computation shows that there are three orbits, namely the orbits and . The orbit has 15 elements and contains the orbit representatives and . The orbit also has 15 elements and contains the orbit representatives and . Finally, the orbit has 5 elements and has only one orbit representative, namely . Note that this gives 35 spaces in total, i.e., the full Grassmannian .
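Example 4 can be verified computationally. The Python sketch below is our own check: it realizes GF(16) as binary polynomials modulo x⁴ + x + 1 (with the class of x as a primitive element, an assumption of this particular model), enumerates the 35 two-dimensional binary subspaces, and splits them into orbits under multiplication by the primitive element.

```python
MOD = 0b10011  # x^4 + x + 1, primitive over F_2

def gf16_mul(a, b):
    """Multiply two elements of GF(16) in polynomial representation."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x10:
            a ^= MOD
        b >>= 1
    return r

# Every 2-dimensional F_2-subspace of GF(16) is {0, u, v, u+v}.
subspaces = {frozenset({0, u, v, u ^ v})
             for u in range(1, 16) for v in range(1, 16) if u != v}
print(len(subspaces))  # 35

g = 0b10  # the class of x, a primitive element
orbits = []
remaining = set(subspaces)
while remaining:
    U = remaining.pop()
    orbit = {U}
    V = frozenset(gf16_mul(g, w) for w in U)
    while V not in orbit:  # multiply by g until the cycle closes
        orbit.add(V)
        V = frozenset(gf16_mul(g, w) for w in V)
    orbits.append(orbit)
    remaining -= orbit
print(sorted(len(o) for o in orbits))  # [5, 15, 15]
```

The orbit of size 5 is the one through the subfield with four elements, matching the stabilizer argument of Lemma 7.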
The Grassmannian is represented by elements of . Here, the field has 31 nonzero elements. If is an orbit representative of the orbit , then there are 30 choices for . Further, since and generate the same space, we have only 15 choices for the subspace . Furthermore, if is a generator of , then the action of does not fix any element of . Thus, all orbits have size 31, and hence there are 5 orbits of size 31. Likewise, the Grassmannian , represented by elements of , has 21 orbits of size 127.
In the next two lemmas, we explain why, in the cases of and , the orbit structure of is quite uniform.
Lemma 5:
Let be a two dimensional -linear subspace of . Suppose that . Then if and only if , i.e. .
Proof:
Let be as in the hypothesis and let be such that . Then, as and , we have for some . Since , we have . Also, as , there exists such that . Substituting the value of , we get . Therefore, , and hence satisfies a polynomial equation of degree 2 over . It follows that . For the reverse implication, note that if , then , and in this case clearly . ■
In the next lemma, we consider when is odd and count the number of orbits in under the action defined in equation (3) and compute the size of each orbit.
Lemma 6:
If is an odd integer, then there are orbits and the size of each orbit is .
Proof:
The proof is a simple consequence of the orbit-stabilizer theorem and Lemma 5. Since is odd, there does not exist any such that . Therefore, from Lemma 5, we know that for any , the stabilizer of has size , namely the elements of . Now, from the orbit-stabilizer theorem, we get that the orbit of each is of size . Further, as is the disjoint union of orbits, the number of orbits is , where is an arbitrary orbit. As a result, the total number of orbits in is . ■
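Lemma 6 can be checked directly in the smallest odd case beyond the examples above. The sketch below is ours: it models GF(32) as binary polynomials modulo x⁵ + x² + 1 (irreducible, hence primitive since the multiplicative group has prime order 31) and confirms that the 155 planes fall into 5 orbits of size 31, as in Example 4's second part.

```python
MOD = 0b100101  # x^5 + x^2 + 1, irreducible (hence primitive) over F_2

def gf32_mul(a, b):
    """Multiply two elements of GF(32) in polynomial representation."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x20:
            a ^= MOD
        b >>= 1
    return r

# All 2-dimensional F_2-subspaces of GF(32).
subspaces = {frozenset({0, u, v, u ^ v})
             for u in range(1, 32) for v in range(1, 32) if u != v}
print(len(subspaces))  # 155, the Gaussian binomial for m = 5

g = 0b10  # the class of x, a primitive element
orbits = []
remaining = set(subspaces)
while remaining:
    U = remaining.pop()
    orbit = {U}
    V = frozenset(gf32_mul(g, w) for w in U)
    while V not in orbit:
        orbit.add(V)
        V = frozenset(gf32_mul(g, w) for w in V)
    orbits.append(orbit)
    remaining -= orbit
print(sorted(len(o) for o in orbits))  # [31, 31, 31, 31, 31]
```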
This lemma explains the nature of the orbits of in the cases discussed in Example 4. The next lemma counts the sizes of the orbits and the total number of orbits in when is even.
Lemma 7:
If is even, then there are orbits of size and exactly one orbit of size .
Proof:
Let be an arbitrary element of the Grassmannian. If , then we have if and only if . In other words, the stabilizer of in this case is of size , and hence from the orbit-stabilizer theorem we get that the orbit of is of size . On the other hand, if , then from Lemma 5 we know that if and only if . In other words, in this case the stabilizer of is of size , and hence the orbit of is of size . Now, as the cardinality of is , if there are orbits of size , then we have
Solving for shows that there are orbits of size . ■
Now we count the number of orbit representatives a particular orbit might have. For convenience, we use the notation to denote the -subspace spanned by .
Lemma 8:
Let . Then if and only if
where .
Proof:
Assume that . Note that this is equivalent to .
Suppose that . Then there exists a nonzero such that . Since , it follows that there exist such that and there exist such that . Therefore , which implies . Note that if , then , which contradicts the hypothesis.
Conversely, assume that . Consider the action of on . Note that if , then , which contradicts the hypothesis. Then
This completes the proof of the lemma. ■
Lemma 9:
Suppose that . If
where , then there exists a nonzero such that .
Proof:
Suppose
where . Then
The polynomial
is a polynomial of degree at most two with as a root. If is a root of then either or is the zero polynomial. But as , we get . This implies
Consequently, either or must be a nonzero multiple of . Now, if , then which contradicts the hypothesis of the Lemma. Therefore, we get and hence
Thus
which implies . This establishes the proof. ■
Now we are ready to count the exact number of orbit representatives of an orbit of of the form , provided .
Lemma 10:
Let be an orbit under the action of on and let . Then there are different elements such that .
Proof:
We shall count the number of distinct elements of the form where . Lemma 9 implies that is represented by distinct multiples of . There are different quadruples in such that . Therefore, there are distinct multiples defining different quotients. ■
IV. Evaluation of the determinant function on Orbits of
Our bound and decoding algorithm for Grassmann codes hinge on the fact that the Grassmann code is a subcode of the code of quadratic forms in variables, namely one variable for each entry of the generic matrix. In this section, however, we will view this code in a slightly different way. We know that reordering the points of the Grassmannian only gives an equivalent code. Therefore, we first fix an order on the orbits and then order the points in each orbit. We may think of as the evaluation of determinantal functions on representatives of the points in each orbit in this fixed order. Also, we have seen that the determinantal functions can be written as -linear combinations of the functions where . Recall that the orbit of is the set , where is a generator of the multiplicative cyclic group . Note that, when a determinantal function is evaluated on an arbitrary point of the orbit , it gives
Since and are fixed, we may think of as a polynomial in one variable when it is evaluated on the orbit . Hence, we consider the polynomials
where . It follows that the evaluation of determinantal functions on orbits is also given by the evaluation of polynomials on certain elements of . This evaluation also gives a linear code, which we denote by . We now try to understand the nature of the code for the orbit .
Let . We have seen that all orbits (except possibly one) in are of size . Let be such an orbit and let be representative points of . As we discussed, each of these is of the form for some . Once is fixed, we may think of the point as . Therefore, for the rest of the article, an orbit of size with representative points as above is fixed. Let be the linear span of the set and let Ev be the evaluation map obtained by evaluating functions of on the points . The image of Ev is an -linear code. Further, it is not hard to see that
Remark 11:
Note that the determinantal function depends on ; therefore, when we view a function in as a polynomial in , the coefficients are in the field . But since the determinantal functions are defined via the trace function, the evaluations of on the points lie in the field , and hence the code is a code over the field .
We have the following trivial lemma.
Lemma 12:
The code is a projection of the code onto the coordinates in the orbit .
Proof:
This is a simple consequence of the fact that the code is the evaluation of the space on the points of . ■
Lemma 13:
If is such that is a nonzero polynomial, then .
Proof:
We simply expand the determinantal function using the trace function . Note that,
where we used for . Clearly, the degree of this polynomial is at least and at most . ■
The next corollary is an immediate consequence of Lemma 13.
Corollary 14:
Suppose that the function is not identically zero over . Then has at most zeros in .
Note that the polynomial is divisible by . Next, we determine the dimension of the code . To do so, we first find a spanning set for the vector space in the next two lemmas.
Lemma 15:
Let be a fixed nonzero element of . Suppose that is contained in the field where . Then
Proof:
It is enough to prove that for each , the determinantal function can be written as a -linear combination of monomials in the set
Lemma 13 states that
The condition on implies that . For any term of the form where , we obtain that . Therefore, the expansion of has no terms of the form where . ■
In the next lemma, we prove that the two spaces discussed in the last lemma are in fact equal.
Lemma 16:
Let be a fixed nonzero element of . Suppose that is contained in the field where . Then
Proof:
In view of Lemma 15, we only have to show that for every satisfying , there exist such that can be written as an -linear combination of some . Let be a normal basis of over . This implies that the matrix given by is nonsingular. Thus, for any there exists such that
where is the standard basis vector with a 1 in position and zeros elsewhere. As the vector is the coefficient vector of (omitting monomials not of the form ), taking the dot product of both sides with the vector , we get
| (4) |
Thus, for each fixed , we have
Now, take as in equation (4) and consider the linear combination
In other words, for any with , the monomials can be written as linear combinations of for some . This completes the proof of the lemma. ■
We have now found a simple basis for the space . Note that the evaluation map Ev is injective; therefore, the code and the space have the same dimension. We have the following corollary.
Corollary 17:
Let . Let be a positive integer such that and . Then the code has dimension . Consequently, if is prime, then .
Proof:
The kernel of the evaluation map is an ideal of the form . The polynomial has degree . As the monomials in all have degree strictly smaller than , it follows that the evaluation map is injective on . Therefore,
■
V. Decoding Grassmann code
In this section, we propose a decoding algorithm for the Grassmann code . The minimum distance of is . Ideally, one strives for a decoding algorithm capable of correcting up to errors. In a recent joint work [2], the second named author proposed a decoding algorithm for Grassmann codes, but unfortunately the proposed algorithm's decoding capability is far from the optimal decoding capability of the code . For example, asymptotically, the proposed decoder can correct only up to errors for . In other words, for , the proposed algorithm can asymptotically correct around errors. Here we propose a list decoder for Grassmann codes that can correct up to errors for .
We propose an algorithm in this section that can correct up to errors for . We have seen that for most of the orbits in , the codes have the same dimension as the Grassmann code . Therefore, such a contains an information set of . Also, from Lemma 13 we know that the space contains polynomials of degree bounded between and , and the points of the orbits can be thought of as a subset of ; therefore, we may view the code as a subcode of a Reed–Solomon code, where . Our decoding algorithm is based on these projections of . Our decoder uses Peterson's decoding algorithm to obtain an information set of and, with this information set, obtains the correct codeword of . But before we dive into the details, we recall the following well known result for Reed–Solomon codes [13].
Proposition 18 (Peterson's decoding algorithm):
An Reed–Solomon code can decode errors with complexity .
The following remark could be useful in understanding the technical lemmas that will be given later in this section.
Remark 19:
As earlier, let be an orbit in with cardinality , where . We have seen that we may think of the points of as for some , as represents a point of . Under this identification, . Without loss of generality, we may assume that the coordinates of are indexed by the set in this fixed order. The restriction of to is the code obtained by evaluating the functions of on . We can decode it as a codeword of the Reed–Solomon code obtained by evaluating the monomials on . However, the sparsity of the monomials is useful for decoding. Note that if is spanned by the monomials , then
| (5) |
Since , we can decode on instead by extending the values from to . For example, let and let be the received word. We extend this word to as follows:
Note that we used equation (5) to extend to . The word consists of multiples of . The error positions in are replicated in each of the copies of . Identifying in this way, we may change the problem of decoding as a codeword of to the problem of decoding as a codeword of the Reed–Solomon code generated by evaluating the same monomials over . This certainly increases our decoding capacity.
Definition 20:
Let be a linear code. We say a procedure is a list decoder of size l correcting errors if, given a received word , the algorithm outputs a list of codewords such that contains all codewords at distance or less from .
The classical Guruswami–Sudan list decoder can correct more than errors for a Reed–Solomon code. It can correct up to errors with a polynomial list size. We propose a slightly different list decoder. Instead of performing polynomial interpolation to find codewords close to the received word, we decode for all possible values of the highest-degree term. For example, take the [15, 5, 11] Reed–Solomon code obtained by evaluating the polynomials of degree 4 or less on the 15 nonzero points of . The code can correct 5 errors. That is, given where , and , we can determine such that . We are going to decode a subcode of the [15, 5, 11] Reed–Solomon code. This subcode is the code obtained by the evaluations of . The monomial is not part of . Since is a subcode of a [15, 5, 11] code, we can correct 5 errors. We propose a slightly more expensive way of correcting 6 errors. Suppose that the evaluation of was sent and 6 errors occurred. To decode 6 errors, instead of decoding with a classical list decoder, we try to decode the 16 combinations for . We know that when , the evaluation corresponds to a codeword of the [15, 3, 13] Reed–Solomon code, where we can correct 6 errors instead of 5. Because the monomial is not part of , we know the coefficient corresponding to is zero. To decode in the [15, 3, 13] Reed–Solomon subcode spanned by and , we need only check all 16 values of the coefficient of instead of all 256 values of both the coefficient of and that of . Our code of interest, the code , is obtained by evaluating the monomials . Since we know our codeword is the evaluation of a polynomial with no term of the form , decoding all possibilities for the coefficient of yields a list containing the codeword sent, viewed as the evaluation of a polynomial of degree 10 or less. This leads to an improvement of the decoding radius at the expense of decoding each codeword 16 times.
By correcting 6 errors, we mean that if 6 or fewer errors occur, then we can always find a closest codeword to the received word. It may seem counterintuitive to correct 6 errors with a [15, 5, 11] Reed–Solomon code. However, if we compare our received word to all 16^5 possible codewords, then we can certainly find a closest codeword to our received word and decode 6 errors. In our case, we need not go as far as checking all possible coefficients. Since the projection is a subcode of a Reed–Solomon code generated by the evaluation of a sparse set of monomials, one needs to check only the possibilities for the coefficient of the highest term to obtain a significant improvement of the decoding radius. Having to check all possibilities for all coefficients would lead to a decoder of exponential complexity. In our case, we need only check one coefficient, which makes our decoder run in polynomial time.
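The coefficient-guessing idea above can be sketched in code. The following is our own illustration, not the paper's implementation: we take {1, x, x², x⁴} as a hypothetical sparse monomial set over GF(16), and we use a brute-force nearest-codeword search over the tiny [15, 3, 13] Reed–Solomon code as a stand-in for Peterson's algorithm. Sixteen guesses of the coefficient of x⁴ reduce each decoding attempt to the [15, 3, 13] code, which corrects 6 errors.

```python
MOD = 0b10011  # GF(16) = F_2[x]/(x^4 + x + 1)

def mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x10:
            a ^= MOD
        b >>= 1
    return r

points = list(range(1, 16))  # the 15 nonzero field elements
p4 = [mul(mul(x, x), mul(x, x)) for x in points]  # evaluations of x^4

def eval_low(c, x):
    """Evaluate c[0] + c[1]*x + c[2]*x^2 over GF(16)."""
    return c[0] ^ mul(c[1], x) ^ mul(c[2], mul(x, x))

# Precompute all 16^3 codewords of the [15, 3, 13] Reed-Solomon code.
inner = [([c0, c1, c2], [eval_low([c0, c1, c2], x) for x in points])
         for c0 in range(16) for c1 in range(16) for c2 in range(16)]

def decode_inner(r):
    """Brute-force nearest codeword (stand-in for Peterson decoding)."""
    return min(inner, key=lambda cw: sum(a != b for a, b in zip(cw[1], r)))

# Transmit the evaluation of 1 + 2x + 3x^2 + 5x^4 (no x^3 term),
# then introduce six errors.
sent = [eval_low([1, 2, 3], x) ^ mul(5, p4[i]) for i, x in enumerate(points)]
received = sent[:]
for i in range(6):
    received[i] ^= 7

candidates = []
for a in range(16):  # guess the coefficient of x^4
    residual = [received[i] ^ mul(a, p4[i]) for i in range(15)]
    _, cw = decode_inner(residual)
    candidates.append([cw[i] ^ mul(a, p4[i]) for i in range(15)])
best = min(candidates, key=lambda w: sum(a != b for a, b in zip(w, received)))
print(best == sent)  # True: six errors corrected
```

Only the guess matching the true leading coefficient leaves a residual within distance 6 of the inner code, so the closest candidate recovers the sent word.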
In the following algorithm, we denote by RSDecoder the output of decoding the received word as a received word of a Reed–Solomon code of dimension evaluated at . For the next lemma, we assume that is an orbit of with and as in Remark 19. We also fix the order.
Algorithm 1.
| Input: a received word where . |
| ListRS ← [ ]. |
| for do |
| . |
| . |
| Append to ListRS |
| end for |
Lemma 21:
We can list–decode up to errors for the code with complexity and list size .
Proof:
The proof is a little technical. Let be a codeword transmitted through a noisy channel and let be the received word with error satisfying . The codeword means there exists a polynomial
such that , where . For each , let be the word defined by . Note that the 's are the evaluations of the monomial on , i.e., on points of . For every , we extend to and decode as a codeword of the Reed–Solomon code spanned by for . This is possible due to equation (5) and the fact that the corresponding Reed–Solomon code can correct up to errors. Note that for the unique value of with and , the word decodes as
Consequently, we get a list of polynomials, namely, . Among this list, there will be exactly one , for which the polynomial is the sent codeword. ■
In the next lemma, we estimate the number of elements in a field extension not lying in any proper subfield of the extension field. This number counts some of the orbits of the Grassmannian which contain an information set. These orbits will be useful for decoding .
Proposition 22:
Let be a positive integer and let be a finite field with elements. If is a prime or , then there are at least elements such that is not in any proper subfield of .
Proof:
When is prime, the statement follows trivially. Now assume that are all the distinct primes (in descending order) dividing . Let . Let be any proper subfield of satisfying . Then is contained in a unique subfield of order for some . Therefore, to establish the proposition, we only need to show that
But this will follow as
Therefore, we now only need to show that . Since 2 is the smallest prime that can divide , we get . Thus, we have
This completes the proof of the Proposition. ■
The next lemma is an important step in proposing the decoder for the Grassmann code . In this lemma, we give the error distribution and prove that if the transmitted codeword is not too corrupted, there will always be an orbit of with an information set such that the projection of the received word onto this orbit is also not too corrupted, so we can apply a Reed–Solomon decoder. Now suppose is the transmitted codeword and is the received word with error
Lemma 23:
In the above setting, if less than errors occurred, i.e., , then there exists an orbit of of size which contains at most errors and is not contained in any proper subfield of .
Proof:
The proof is divided into two parts. First, we prove the lemma for the case that or is a prime, and then we deal with the cases separately. So let us assume that or is a prime. In that case, Proposition 22 guarantees that there are at least elements such that does not lie in any proper subfield of . In particular, there will be at least elements that lie in orbits of size . Also, from Lemma 10, we know that each such orbit contains of these elements. If fewer than errors occurred, the average error per orbit with an information set is
It is easy to verify now that
Next, if , then there are elements in that do not lie in . In particular, these points lie on orbits of size , and each such orbit contains exactly of these elements. Moreover, these orbits contain an information set. Note that
Thus, the average error per orbit with information set is less than the error correction capacity.
The case is similar to the case . There are elements such that is not contained in a proper subfield of . Further, each such orbit contains exactly of these elements. Thus, there are orbits of size , and these orbits contain an information set. If fewer than errors occurred, the average error per orbit with an information set is
Now it is straightforward to verify that
This completes the proof of the lemma. ■
Now we are ready to prove the main result of this article. In the next theorem, we give a decoding algorithm for the Grassmann code which decodes up to errors. It applies Algorithm 1 at each orbit with an information set and recovers a codeword from using the codeword recovered from . It returns the closest such codeword to the received word.
Algorithm 2 works by outputting the closest codeword from a long list of candidates. The algorithm takes the received word and looks at the subword given by the positions of on each orbit with an information set. At each orbit there are candidates (one for each possible leading term). If any of the candidates decodes successfully, we extend it to a codeword of . If this candidate codeword is the first codeword found, or is closer to than the previous candidate codeword, then it is stored as the current candidate codeword . This process continues until all orbits with an information set have been checked. At the end, the algorithm outputs the candidate codeword .
If fewer than errors occurred, the algorithm is guaranteed to work. Lemma 23 implies there is a clean orbit with few errors. When we decode the candidates on this clean orbit, we are guaranteed that one of the candidates will decode correctly as a codeword of the corresponding Reed–Solomon code. Since contains an information set of , that candidate codeword is at distance from . No other candidate codeword with the same information set as can be closer to the received word. Therefore, errors are corrected. We state the algorithm and prove its correctness as follows.
Theorem 24:
Let and let be the corresponding Grassmann code. Let . Suppose where . If , then Algorithm 2 returns .
Proof:
Let be the sent codeword of and , where . Algorithm 2 projects onto all orbits with an information set. Since , Lemma 23 implies there is an orbit where the projection has fewer than errors. Therefore, there is one codeword in the output of which contains the same information set as the sent codeword . Since , is closer to than any other codeword. Therefore, once the algorithm loops through , will be set to and will remain unchanged. Hence is the output of the algorithm. ■
Algorithm 2.
Input: a received word where .
for in the set of orbits containing an information set do
  Codewords ← Algorithm 1
  for Codewords do
    c′ ← extend the codeword to a codeword in with the same information bits as .
    if then
    end if
  end for
end for
Return .
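The selection step of Algorithm 2, keeping whichever extended candidate lies closest to the received word, can be sketched in a few lines of Python. This is only an illustration: `orbit_candidates` stands in for the output of Algorithm 1 on each orbit, already extended to full-length codewords, and the words are plain binary lists.

```python
def hamming(u, w):
    # Number of coordinates in which two equal-length words differ.
    return sum(a != b for a, b in zip(u, w))

def closest_candidate(received, orbit_candidates):
    # orbit_candidates holds, for each orbit with an information set, the
    # list of candidates already extended to full-length codewords.
    best = None
    for candidates in orbit_candidates:
        for cw in candidates:
            if best is None or hamming(cw, received) < hamming(best, received):
                best = cw
    return best

# Toy run: binary words of length 6, two orbits' worth of candidates.
received = [1, 0, 1, 1, 0, 0]
orbit_a = [[0, 0, 0, 0, 0, 0], [1, 0, 1, 0, 0, 0]]  # distances 3 and 1
orbit_b = [[1, 1, 1, 1, 1, 1]]                       # distance 3
print(closest_candidate(received, [orbit_a, orbit_b]))  # → [1, 0, 1, 0, 0, 0]
```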
Note that the length of the Grassmann code is which is of the order .
Theorem 25:
Algorithm 2 decodes a codeword of with complexity , where is the length of the code .
Proof:
The algorithm decodes a codeword of a Reed–Solomon code of length for each orbit with an information set. If is odd, there are orbits of size . If is even, there are orbits of size . In either case, the number of orbits on which we project and decode is . The Reed–Solomon decoding algorithm has complexity . Therefore the complexity of the decoding algorithm on each orbit is . As there are orbits, the overall complexity is . ■
VI. Decoding Example
In this section we decode errors for the binary Grassmann code.
A. Decoding
The code is a [35, 6, 16] code. We aim to correct 7 errors. As mentioned in Example 4, has two orbits of size 15 and one orbit of size 5. If where , then the two orbits of size 15 are the ones containing and respectively. The orbit of size 5 is the one containing the space . On each orbit of size 15, the code is a subcode of a Reed–Solomon code obtained by evaluating the monomials on the 15 nonzero points of . Note that the Reed–Solomon code spanned by has minimum distance . We shall correct 3 errors by decoding as a codeword of the Reed–Solomon code spanned by instead. This means that we need to decode 16 possibilities, one for each possible coefficient of .
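The "16 possibilities" step can be illustrated with a toy computation in Python. This is only a sketch under stated assumptions: we build GF(16) with the primitive polynomial x⁴ + x + 1, take the low-degree part of the projected code to be spanned by {1, x, x²} with one extra monomial x³ whose coefficient is guessed (the exact monomials of the example may differ), and replace Peterson's algorithm with a brute-force nearest-codeword search, which is feasible here because the low-degree code has only 16³ codewords.

```python
import itertools

# GF(16) via exp/log tables for the primitive polynomial x^4 + x + 1.
EXP, LOG = [0] * 30, [0] * 16
v = 1
for i in range(15):
    EXP[i], LOG[v] = v, i
    v <<= 1
    if v & 0b10000:
        v ^= 0b10011
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

PTS = EXP[:15]  # the 15 nonzero elements of GF(16)

def evaluate(coeffs):
    # Evaluate the polynomial (low-degree coefficients first) at every
    # nonzero point; addition in GF(2^4) is XOR.
    out = []
    for p in PTS:
        acc = 0
        for c in reversed(coeffs):
            acc = gf_mul(acc, p) ^ c
        out.append(acc)
    return out

def hamming(u, w):
    return sum(a != b for a, b in zip(u, w))

def nearest_low_degree(word, k=3):
    # Brute-force stand-in for Peterson decoding: nearest codeword in the
    # Reed-Solomon code spanned by 1, x, x^2 (only 16**3 codewords).
    best = min(itertools.product(range(16), repeat=k),
               key=lambda c: hamming(evaluate(list(c)), word))
    return list(best), hamming(evaluate(list(best)), word)

# The sent subword evaluates 5*x^3; flip three positions to add three errors.
sent = evaluate([0, 0, 0, 5])
received = list(sent)
for pos in (1, 6, 10):
    received[pos] ^= 1

candidates = []
for a in range(16):  # one decoding attempt per guessed leading coefficient
    shifted = [r ^ t for r, t in zip(received, evaluate([0, 0, 0, a]))]
    low, d = nearest_low_degree(shifted)
    candidates.append((d, low + [a]))
best_d, best_coeffs = min(candidates)
print(best_d, best_coeffs)  # the guess a = 5 wins with 3 residual errors
```

Only the correct guess a = 5 leaves a word within 3 errors of the low-degree code; every wrong guess leaves a residual x³-term, which is far from every low-degree codeword.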
Let be the orbit of , be the orbit of , and be the orbit of . Suppose that the zero codeword was sent and that seven errors occurred: three errors in and four errors in . The orbit does not contain an information set and is ignored by our algorithm. Let be the received word. We shall assume that restricted to each orbit is
and
Denote by the evaluation vector of the monomial .
Our decoding algorithm proceeds as follows. First we try to decode on . For each let and , then decode as a Reed–Solomon codeword spanned by the evaluation of . In this way we consider all possibilities for the coefficient of . We have implemented this decoder in SageMath [19]. We get 16 possible codewords.
It seems there are 16 possible codewords for the received word. However, only the zero codeword is in the span of . All other polynomials contain and , which implies they do not correspond to a codeword of nor .
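This filtering step, discarding Reed–Solomon candidates whose interpolating polynomial involves monomials outside the span of the projected code, can be sketched in Python. The exponent set below is purely hypothetical; it only illustrates the support check.

```python
# Hypothetical exponent set spanning the projected code (illustrative only).
ALLOWED = {0, 1, 2, 4}

def admissible(coeffs):
    # Keep a candidate only if every nonzero coefficient sits on a
    # monomial x^i with i in ALLOWED.
    return all(c == 0 or i in ALLOWED for i, c in enumerate(coeffs))

cands = [[3, 0, 1, 0, 0],   # supported on 1 and x^2: kept
         [0, 2, 0, 5, 0],   # uses x^3: rejected
         [0, 0, 0, 0, 0]]   # the zero polynomial: kept
print([admissible(c) for c in cands])  # → [True, False, True]
```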
Now we decode on . For each let . We decode as a Reed–Solomon codeword spanned by the evaluation of . We get 16 possible codewords:
It seems there are 16 more possible codewords for the received word. However all polynomials contain and , which implies they do not correspond to a codeword of . Therefore we recover 0 as the sent codeword.
B. Decoding a nontrivial codeword
Now we shall decode the codeword given by . We recall that where . We take the powers as a basis of over . Its dual basis is given by . If the vector space where and then the function
On the orbit and
On the orbit and
Suppose is the intended codeword. The codeword restricted to is equal to (1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0). The codeword restricted to equals: (0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1). As in the previous case, we shall assume there are three errors on and four errors in .
When we attempt to decode (0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0) on the orbit we get the following candidate codewords:
We remark that there is a candidate codeword for each possible leading monomial, since we attempt to decode for all possibilities of that leading term. Note that the only polynomial spanned by is the second entry of the list, corresponding to . In this way, the original codeword is part of the list of possible codewords given by the decoding algorithm. We may recover the codeword by evaluating on and extending the entries on an information set to the remaining positions of the codeword. One can also recover the function from the coefficients of the polynomial. Now we consider what happens with , which has 4 errors. When we decode (1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1) on the orbit we get the following possible codewords:
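The "extend from an information set" step above can be illustrated over a small prime field. This is a hedged sketch: GF(17) stands in for the field of the example, the code is a dimension-3 Reed–Solomon code evaluated at the 16 nonzero residues, and any 3 positions form an information set; Lagrange interpolation then recovers the full codeword from those positions.

```python
P = 17  # a small prime field standing in for the field of the example

def poly_eval(coeffs, x):
    # Horner evaluation of a polynomial (low-degree coefficients first) mod P.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def lagrange_eval(known, x):
    # Evaluate at x the unique polynomial of degree < len(known) passing
    # through the points in known = [(x_i, y_i)], working mod P.
    total = 0
    for i, (xi, yi) in enumerate(known):
        num, den = 1, 1
        for j, (xj, _) in enumerate(known):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

# A dimension-3 Reed-Solomon codeword evaluated at the 16 nonzero residues.
xs = list(range(1, 17))
codeword = [poly_eval([2, 5, 7], x) for x in xs]

# Any 3 positions form an information set; extend from them to all positions.
info = [(xs[i], codeword[i]) for i in (0, 4, 9)]
extended = [lagrange_eval(info, x) for x in xs]
print(extended == codeword)  # → True: the full codeword is recovered
```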
None of the possible codewords given by can correspond to a codeword of the projection code nor a codeword of the Grassmann code since their evaluation polynomials contain the monomial . Out of the 32 possible codewords, only one is a possible codeword of the Grassmann code. In this way the original codeword is recovered.
C. Several codeword candidates
It may happen that different orbits offer different candidates for the sent codeword. As before, let us assume that we are decoding 7 errors when decoding . Suppose is the intended codeword. However, we shall assume all seven errors occur on . In particular the received word restricted to is . The received word restricted to is . Note that the restriction of the received word on is only one position away from the restriction of on . In this case we get a single candidate codeword from each orbit. The candidate codeword from is the zero codeword. The candidate codeword from is the evaluation of , which corresponds to the codeword given by . The distance of Ev(0) to the received word is 7. The distance of the received word to is 9. Since the closest codeword is Ev(0), the algorithm outputs Ev(0), thereby correcting the 7 errors.
VII. Acknowledgements
During the work reported in this article, the first named author was supported by the National Institute of General Medical Sciences of the National Institutes of Health under Award Number R25GM121270. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The second named author would like to express his gratitude to the Indo-Norwegian project supported by the Research Council of Norway (Project number 280731) and the DST of the Govt. of India. Part of this research was carried out while the second named author was at UiT: The Arctic University of Norway.
Contributor Information
Fernando L. Piñero, Department of Mathematics, University of Puerto Rico in Ponce, 2151 Ave. Santiago de los Caballeros, Ponce, P.R. 00716-7186.
Prasant Singh, Department of Mathematics, Indian Institute of Technology Jammu, NH-44, PO Nagrota, Jagti, Jammu and Kashmir 181221.
References
- [1].Beelen P and Piñero F, The structure of dual Grassmann codes, Des. Codes Cryptogr. 79 (2016), 451–470. [Google Scholar]
- [2].Beelen P and Singh P, Point-line incidence on Grassmannians and majority logic decoding of Grassmann codes, Finite Fields Appl. 73 (2021), 101843. [Google Scholar]
- [3].Chen H, On the minimum distance of Schubert codes, IEEE Trans. Inform. Theory 46 (2000), 1535–1538. [Google Scholar]
- [4].Ghorpade SR and Lachaud G, Higher weights of Grassmann codes, Coding Theory, Cryptography and Related Areas (Guanajuato, 1998), Buchmann J, Hoeholdt T, Stichtenoth H and Tapia-Recillas H Eds., Springer-Verlag, Berlin, (2000), 122–131. [Google Scholar]
- [5].Ghorpade SR and Kaipa KV, Automorphism groups of Grassmann codes, Finite Fields Appl. 23 (2013), 80–102. [Google Scholar]
- [6].Ghorpade SR, Patil AR and Pillai HK, Decomposable subspaces, linear sections of Grassmann varieties, and higher weights of Grassmann codes, Finite Fields Appl. 15 (2009), 54–68. [Google Scholar]
- [7].Ghorpade SR and Singh P, Minimum Distance and the Minimum Weight Codewords of Schubert Codes, Finite Fields Appl. 49 (2018), 1–28. [Google Scholar]
- [8].Ghorpade SR and Tsfasman MA, Schubert varieties, linear codes and enumerative combinatorics, Finite Fields Appl. 11 (2005), 684–699. [Google Scholar]
- [9].Guerra L and Vincenti R, On the linear codes arising from Schubert varieties, Des. Codes Cryptogr. 33 (2004), 173–180. [Google Scholar]
- [10].Kleiman SL and Laksov D, Schubert calculus, Amer. Math. Monthly 79 (1972), 1061–1082. [Google Scholar]
- [11].Kaipa K and Pillai H, Weight spectrum of codes associated with the Grassmannian G(3,7), IEEE Trans. Inform. Theory 59 (2013), 983–993. [Google Scholar]
- [12].Manivel L, Symmetric functions, Schubert polynomials and degeneracy loci. Translated from the 1998 French original by John R. Swallow. SMF/AMS Texts and Monographs, 6. Cours Spécialisés [Specialized Courses], 3. American Mathematical Society, Providence, RI; Société Mathématique de; France, Paris, 2001. [Google Scholar]
- [13].Peterson WW, Encoding and error-correction procedures for the Bose–Chaudhuri codes, IRE Trans. Inform. Theory IT–6 (1960), 459–470. [Google Scholar]
- [14].Nogin Yu. D., Codes associated to Grassmannians, Arithmetic, Geometry and Coding Theory (Luminy, 1993), Pellikaan R, Perret M, Vlăduţ SG, Eds., Walter de Gruyter, Berlin, (1996), 145–154. [Google Scholar]
- [15].Nogin Yu. D., The spectrum of codes associated with the Grassmannian variety G(3, 6), Problems of Information Transmission 33 (1997), 114–123. [Google Scholar]
- [16].Ryan CT, An application of Grassmannian varieties to coding theory, Congr. Numer. 157 (1987), 257–271. [Google Scholar]
- [17].Ryan CT, Projective codes based on Grassmann varieties, Congr. Numer. 157 (1987), 273–279. [Google Scholar]
- [18].Singh P, Majority Logic Decoding for Certain Schubert Codes Using Lines in Schubert Varieties, IEEE Trans. Inform. Theory 68 (2022), 795–805. [Google Scholar]
- [19].SageMath, the Sage Mathematics Software System (Version 9.5.0), The Sage Developers, 2022, https://www.sagemath.org. [Google Scholar]
- [20].Tsfasman M, Vlăduţ S and Nogin D, Algebraic Geometric Codes: Basic Notions, Mathematical Surveys and Monographs, 139. American Mathematical Society, Providence, RI, 2007. [Google Scholar]
- [21].Xiang X, On The Minimum Distance Conjecture For Schubert Codes, IEEE Trans. Inform. Theory 54 (2008), 486–488. [Google Scholar]
