Centraliser codes


Linear Algebra and its Applications 463 (2014) 1–10


Adel Alahmadi^a, Shefa Alamoudi^a, Suat Karadeniz^b, Bahattin Yildiz^b, Cheryl Praeger^c, Patrick Solé^{a,d,*}

a Department of Mathematics, King Abdulaziz University, Jeddah, Saudi Arabia
b Department of Mathematics, Fatih University, 34500 Istanbul, Turkey
c Center for the Mathematics of Symmetry and Computation, School of Mathematics and Statistics, University of Western Australia, Perth, Australia
d Telecom ParisTech, France

Article info

Article history: Received 25 February 2014; accepted 27 August 2014; available online xxxx. Submitted by R. Brualdi.
MSC: primary 94B05; secondary 13M05.
Keywords: Group centralisers; Matrix codes; Cyclic matrices; Separable matrices.

Abstract

Centraliser codes are codes of length n^2 defined as the centralisers of a given matrix A of order n. Their dimension, parity-check matrices, syndromes, and automorphism groups are investigated. A lower bound on the dimension is n, the order of A. This bound is met when the minimal polynomial equals the characteristic polynomial, i.e. for so-called cyclic (a.k.a. nonderogatory) matrices. If, furthermore, the matrix is separable and is the adjacency matrix of a graph, the automorphism group of that graph is shown to be abelian, and even trivial if the alphabet field has even characteristic.

1. Introduction

Let A be an n × n square matrix over some finite field F_q. By C(A) we mean the centraliser of A, namely

C(A) := { B ∈ F_q^{n×n} : AB = BA }.   (1)

From basic linear algebra we know that C(A) is a linear subspace of the vector space F_q^{n×n} of all n × n matrices over F_q. Considering C(A) as a subspace, we obtain a code whose elements are matrices, which can be viewed as vectors of length n^2 by reading them column by column.

Definition 1. For any q-ary n × n square matrix A, the subspace C(A) formed above is called the centraliser code generated by A.

In a sense A serves as a parity-check matrix as well, because for any n × n matrix B ∈ F_q^{n×n} we have

B ∈ C(A)  ⟺  AB − BA = 0.

More concretely, we have the following result.

Proposition 1.1. A parity-check matrix for C(A) is given by

H = I_n ⊗ A − A^T ⊗ I_n,

with ⊗ denoting the Kronecker product and M^T the transpose of the matrix M.

Proof. Consider the map B ↦ AB. If we think of B as written out column by column into a vector of length n^2, say Vec(B), then the matrix of this map is readily seen to be I_n ⊗ A. Thus Vec(AB) = (I_n ⊗ A) Vec(B). Let T_n be the n^2 × n^2 matrix of the endomorphism B ↦ B^T, that is, Vec(B^T) = T_n Vec(B). By a similar calculation the matrix of the map B ↦ BA = (A^T B^T)^T turns out to be, after some algebra, T_n (I_n ⊗ A^T) T_n. The result now follows upon applying the identity

T_n (A' ⊗ B') T_n = B' ⊗ A'

of [3, Corollary 52, p. 44] in the special case A' = I_n, B' = A^T. □
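The following is a minimal sketch (not from the paper) of Proposition 1.1 over F_2, using numpy Kronecker products; the particular matrix A and the test matrices are arbitrary illustrative choices, and the helper name vec is ours.

```python
import numpy as np

q, n = 2, 3
# An arbitrary illustrative matrix over F_2.
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])

# Parity-check matrix H = I_n (x) A - A^T (x) I_n of Proposition 1.1, over F_q.
I = np.eye(n, dtype=int)
H = (np.kron(I, A) - np.kron(A.T, I)) % q

def vec(B):
    """Read B off column by column into a vector of length n^2."""
    return B.flatten(order='F')

# Membership in C(A) tested via H agrees with the direct commutation test AB = BA.
tests = [I, A, (A @ A) % q, np.eye(n, k=1, dtype=int)]
for B in tests:
    via_H = not np.any((H @ vec(B)) % q)
    direct = not np.any((A @ B - B @ A) % q)
    assert via_H == direct
```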

The problems about C(A) that arise naturally for a given A include:

• computing its dimension;
• finding efficient encoding and decoding procedures;
• determining its automorphism group.

The paper is organized as follows. Sections 2, 3 and 4 tackle the above three problems in order. Section 5 is dedicated to concrete examples of codes, and Section 6 contains some concluding remarks and open problems.


2. Dimension

2.1. Estimates

First we quote an upper bound from [1, Lemma 3], proved there over a general field. Recall that the elementary matrix E_{i,j} has only one nonzero entry, at row i and column j. By ⊕ we mean the block-diagonal concatenation of matrices.

Theorem 2.1. If A is not a scalar matrix and n ≥ 2, then dim(C(A)) ≤ n^2 − 2n + 2, with equality for A = aI_1 ⊕ bI_{n−1} or A = aI_n + bE_{1,2}, for some distinct nonzero a, b ∈ F_q.

The Hamming distance of C(aI_n + bE_{1,2}) is one, since E_{1,2} ∈ C(aI_n + bE_{1,2}). Thus for n = 3 we obtain a bound of 5 and for n = 4 a bound of 10, consistent with the examples of Section 5.

Denote by F_q[A] the algebra of polynomials in A. This algebra can also be considered as a linear code over F_q of length n^2. Recall that a matrix A is cyclic if its characteristic and minimal polynomials coincide.

Theorem 2.2. For all A in F_q^{n×n}, we have F_q[A] ⊆ C(A), with equality iff A is cyclic. Moreover, the dimension of C(A) is at least n, with equality iff A is cyclic.

Proof. It is immediate that F_q[A] ⊆ C(A), and it follows from elementary linear algebra that the dimension of the former code is the degree of the minimal polynomial of A. The "only if" part is derived in [8, Theorem 2.1]. Further, it is proved in the first paragraph of the proof of [8, Theorem 2.1] that dim(C(A)) is at least n, with equality if and only if A is cyclic. □

If we wish to construct codes C(A) strictly larger than F_q[A], it is therefore necessary to consider matrices A that are not cyclic. Such matrices are rather rare: by [8, Theorem 4.1] their proportion is about 1/q^3 for large n. Some examples of such matrices can be constructed by use of graph theory, as the following sample result demonstrates. We will return to this theme in Section 4.

Proposition 2.3. If A is the adjacency matrix of a distance-regular graph of diameter d < n − 1, then A is not cyclic.

Proof. By [2, Theorem 20.7] we know that there is an annihilating polynomial for A (with integral coefficients) of degree d + 1. By hypothesis d + 1 < n, the degree of the characteristic polynomial of A. □
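Before moving on, here is a small brute-force check of Theorem 2.2 (not from the paper): A is taken to be the companion matrix of x^3 + x + 1 over F_2, which is cyclic, so the theorem predicts C(A) = F_2[A] and dim C(A) = n = 3. The construction and helper names are ours.

```python
import itertools
import numpy as np

q, n = 2, 3
# Companion matrix of x^3 + x + 1 over F_2; companion matrices are cyclic
# (nonderogatory), so Theorem 2.2 predicts C(A) = F_2[A] and dim C(A) = 3.
A = np.array([[0, 0, 1],
              [1, 0, 1],
              [0, 1, 0]])

powers = [np.linalg.matrix_power(A, k) % q for k in range(n)]   # I, A, A^2

# F_2[A]: all F_2-linear combinations of I, A, A^2.
poly_in_A = {tuple((sum(c * P for c, P in zip(coeffs, powers)) % q).flatten())
             for coeffs in itertools.product(range(q), repeat=n)}

# C(A): all 3x3 binary matrices commuting with A, found by brute force.
centraliser = {tuple(B.flatten())
               for bits in itertools.product(range(q), repeat=n * n)
               for B in [np.array(bits).reshape(n, n)]
               if not np.any((A @ B - B @ A) % q)}

assert centraliser == poly_in_A        # C(A) = F_2[A] (Theorem 2.2, A cyclic)
assert len(centraliser) == q ** n      # dim C(A) = n, the lower bound
```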


2.2. Exact formula for the dimension of C(A) over an extension field

Some spectral notation is in order. If M ∈ M_n(F_q), we denote by L(M) the splitting field of its characteristic polynomial and by S(M) ⊆ L(M) its spectrum. Note that S(M) = S(M^T). Let K(M, λ) denote the dimension over L(M) of the eigenspace attached to λ ∈ S(M).

Theorem 2.4. For all A ∈ M_n(F_q), we have

dim_{L(A)} C(A) = Σ_{x ∈ S(A)} K(A, x) K(A^T, x).

Proof. By a straightforward extension of [3, Theorem 42, p. 34] from the complex field to finite fields, the spectrum of I_n ⊗ A − A^T ⊗ I_n is composed of the scalars λ − μ such that

• λ ∈ S(A) and μ ∈ S(A^T), with corresponding eigenvectors y ⊗ x, where
• y is an eigenvector of A^T and x is an eigenvector of A.

The result follows on observing that the kernel of H in Proposition 1.1 is the eigenspace corresponding to the eigenvalue zero. □

3. Encoding and decoding

3.1. Encoding

Assume that C(A) has dimension k. Then the information rate is k/n^2 and we can give a procedure for encoding and decoding. As an F_q-vector space, C(A) has a basis consisting of k matrices, which we denote by {A_1, A_2, ..., A_k}. Then an information vector (a_1, a_2, ..., a_k) ∈ F_q^k is encoded as a_1 A_1 + a_2 A_2 + ··· + a_k A_k.

Decoding is done by reversing this procedure: to find the message that a matrix B represents, all we need is the coordinate vector of B in the basis {A_1, A_2, ..., A_k} of C(A).
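As a minimal sketch of this encoding map (not from the paper), take A to be the cyclic-shift permutation matrix over F_2, so that a basis of C(A) is explicitly known: the circulant basis {I, A, ..., A^{n−1}} of Section 5.1. The functions encode and decode are ours, and the decode shortcut (read off the first column) is specific to this circulant basis.

```python
import numpy as np

q, n = 2, 5
A = np.roll(np.eye(n, dtype=int), 1, axis=0)                   # permutation matrix of the n-cycle
basis = [np.linalg.matrix_power(A, k) % q for k in range(n)]   # basis of C(A): I, A, ..., A^{n-1}

def encode(message):
    """Map (a_1, ..., a_k) to the codeword a_1 A_1 + ... + a_k A_k in C(A)."""
    return sum(a * Ak for a, Ak in zip(message, basis)) % q

def decode(codeword):
    """Coordinate vector of the codeword in the basis; for the circulant
    basis this is just the first column of the matrix."""
    return codeword[:, 0].tolist()

msg = [1, 0, 1, 1, 0]
C = encode(msg)
assert not np.any((A @ C - C @ A) % q)   # the codeword commutes with A
assert decode(C) == msg
```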

3.2. Decoding

Let C(A) be the centraliser code generated by A. As an additive subgroup of F_q^{n×n}, C(A) has cosets in F_q^{n×n}. Since for any matrix B ∈ F_q^{n×n} we have B ∈ C(A) if and only if AB − BA = 0, we can use A itself in the role of a parity-check matrix to obtain syndromes, instead of the n^2 × n^2 matrix H of Proposition 1.1. Thus we give the following definition.

Definition 2. Let B ∈ F_q^{n×n} be any matrix over F_q and let C(A) be the centraliser code obtained from A. Then the syndrome of B in C(A) is defined as

Synd_A(B) = AB − BA.

Now we can say B ∈ C(A) ⟺ Synd_A(B) = 0. The following theorem suggests that this definition of the syndrome can be used in error-correction schemes.

Theorem 3.1. Suppose B_1, B_2 ∈ F_q^{n×n} are two matrices. Then Synd_A(B_1) = Synd_A(B_2) if and only if B_1 and B_2 are in the same coset of C(A).

Proof. The matrices B_1 and B_2 are in the same coset of C(A) if and only if B_1 − B_2 ∈ C(A), which is equivalent to (B_1 − B_2)A = A(B_1 − B_2). This last equality can be rewritten as AB_1 − B_1 A = AB_2 − B_2 A, that is, Synd_A(B_1) = Synd_A(B_2). □

Note that this syndrome computation is cheaper than using the n^2 × n^2 parity-check matrix of the code of length n^2. Using that large parity-check matrix has multiplicative complexity O(n^4), while the above syndrome computation is O(n^m), where m is the exponent for multiplying two n × n matrices; the current record is m < 2.3729 [9]. This observation might prove to be one of the main advantages of working with these codes.

The matrix A can sometimes make it quite easy to correct errors. As an example, consider the binary [9, 5, 3] centraliser code generated by

A =
[ 1 1 1 ]
[ 1 1 1 ]
[ 1 1 1 ].

Now C(A) is a one-error-correcting code, and error correction in this case is very easy. A one-error pattern is a matrix with a single non-zero entry, and the error is located by its row and column index. As an example, suppose

E =
[ 1 0 0 ]
[ 0 0 0 ]
[ 0 0 0 ]

is an error vector. Then the syndrome of E is

Synd_A(E) = AE − EA =
[ 0 1 1 ]
[ 1 0 0 ]
[ 1 0 0 ],

whose non-zero entries all lie in the first row and the first column. By just looking at the syndrome we can say the error was located in the first row and first column. Conversely, if an error has occurred in the ith row and jth column, then the non-zero entries of the syndrome lie only in the ith row and the jth column. Thus, for this centraliser code, the syndrome tells us exactly where the error is located.

What we explained above can be generalized to any single-error-correcting centraliser code C(A): the location of a single error can always be recovered from the syndrome alone.
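Here is a minimal sketch (not from the paper) of this localisation for the all-ones matrix A of the worked example above. The rule used below (the error row and column are the unique row and column of the syndrome carrying more than one non-zero entry) is specific to this choice of A; the helper names are ours.

```python
import numpy as np

q, n = 2, 3
A = np.ones((n, n), dtype=int)          # the all-ones matrix of the example above

def syndrome(B):
    """Synd_A(B) = AB - BA over F_q (Definition 2)."""
    return (A @ B - B @ A) % q

def locate_single_error(S):
    """For this A, the non-zero entries of the syndrome of E_{ij} fill row i and
    column j outside position (i, j), so row i and column j are the unique row
    and column containing more than one non-zero entry."""
    i = int(np.argmax((S != 0).sum(axis=1)))
    j = int(np.argmax((S != 0).sum(axis=0)))
    return i, j

# Check the rule on every single-error pattern E_{ij}.
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n), dtype=int)
        E[i, j] = 1
        assert locate_single_error(syndrome(E)) == (i, j)
```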


4. Automorphism group

Since distance-regular graphs provide examples of non-cyclic matrices, it is natural to consider adjacency matrices of graphs (directed, possibly with loops, but without multiple edges).

Theorem 4.1. If A is the adjacency matrix of a graph with automorphism group G, then the direct product G × G acts (possibly unfaithfully) on the code C(A) by coordinate permutations.

Proof. As is well known [2], a permutation matrix P corresponds to an automorphism of the graph iff PA = AP, that is, iff P ∈ C(A). Thus if B is a generic element of C(A), then both PB and BP are in C(A). Now the transformations B ↦ BP and B ↦ PB are coordinate permutations of C(A) acting on n^2 points, when the codewords of C(A) are viewed as vectors of length n^2 (read off column by column). □

Recall that a semi-regular permutation is a non-identity permutation all of whose cycles have the same length.

Corollary 4.2. If A is the adjacency matrix of a graph with a semi-regular automorphism made of a-cycles, then C(A) is, up to equivalence, (n^2/a)-quasi-cyclic.

Proof. Follows by taking the semi-regular permutation to act either on the rows or on the columns of the matrix codewords. □

Corollary 4.3. If A is a permutation matrix corresponding to a cycle of length n, then C(A) is equivalent to an n-quasi-cyclic code.

Proof. Take a = n in Corollary 4.2. □
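The following sketch (not from the paper) checks the ingredients of Theorem 4.1 on a small example, the 4-cycle: permutation matrices commuting with the adjacency matrix are exactly the graph automorphisms, and B ↦ PB, B ↦ BP send C(A) to itself. The choice of graph and the helper names are ours.

```python
import itertools
import numpy as np

q, n = 2, 4
# Adjacency matrix of the 4-cycle 0-1-2-3-0.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])

def perm_matrix(p):
    P = np.zeros((n, n), dtype=int)
    for i, pi in enumerate(p):
        P[pi, i] = 1
    return P

# Automorphisms: permutations p with an edge i~j iff p(i)~p(j).
autos = [p for p in itertools.permutations(range(n))
         if all(A[p[i], p[j]] == A[i, j] for i in range(n) for j in range(n))]

# They coincide with the permutation matrices commuting with A
# (the dihedral group of the square, 8 elements).
commuting = [p for p in itertools.permutations(range(n))
             if not np.any(perm_matrix(p) @ A - A @ perm_matrix(p))]
assert sorted(autos) == sorted(commuting) and len(autos) == 8

# B -> PB and B -> BP preserve C(A); checked here on one codeword over F_2.
B = (np.eye(n, dtype=int) + A) % q       # a codeword of C(A)
P = perm_matrix(autos[1])
for M in [(P @ B) % q, (B @ P) % q]:
    assert not np.any((A @ M - M @ A) % q)
```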

In [4], Chao proves that for a graph, "directed or undirected, with or without loops", for which the adjacency matrix D has pairwise distinct eigenvalues, the automorphism group is abelian. Since the eigenvalues there lie in the complex numbers, the result does not carry over to matrices over finite fields without modification. The appropriate concept for a matrix A over F_q is that of separability: A is separable if and only if its characteristic polynomial is multiplicity-free. Such matrices are reasonably plentiful; see [8, Theorem 7.1]: for large q the proportion of separable matrices is roughly 1 − 1/q.

Proposition 4.4. If A is the adjacency matrix over F_q of a graph G (directed or undirected, with or without loops), and if A is separable, then the automorphism group of G is abelian.

Proof. Since the characteristic polynomial of A is multiplicity-free, the vector space V decomposes as a direct sum of irreducible F_q[A]-modules which are pairwise non-isomorphic, say V = U_1 ⊕ ··· ⊕ U_k for some k ≥ 1. Thus there is a matrix S ∈ GL(n, q) such that

D := SAS^{-1} = diag(D_{11}, ..., D_{kk})   (2)

is block-diagonal, where the D_{ii} are pairwise distinct irreducible matrices and D_{ii} ∈ M_{n_i}(q) with n_i = dim(U_i). Moreover, the centraliser of D consists of all block-diagonal matrices of the form Y = diag(Y_{11}, ..., Y_{kk}), where Y_{ii} ∈ C(D_{ii}) = F_q[D_{ii}] (the latter equality holding by Theorem 2.2). In particular each C(D_{ii}) = F_q[D_{ii}] is abelian, and hence the centraliser of SAS^{-1} is abelian. The automorphism group of G can be identified with the subgroup H of permutation matrices P such that PA = AP. As shown in [4], PA = AP if and only if SPS^{-1} commutes with D = SAS^{-1}. Thus SHS^{-1} is contained in the centraliser of D, and hence SHS^{-1} is abelian. It follows that H is abelian. □

The result of Chao generalized to directed graphs an older result of Mowshowitz [7] for undirected graphs with distinct eigenvalues, which showed that in this case the automorphism group is an elementary abelian 2-group of bounded order. Mowshowitz's result was extended in a different way by Criscuolo et al. [5] to graphs whose adjacency matrices A are cyclic over a subfield of the complex numbers. The special properties of symmetric matrices over C ensure that such a matrix A is separable; see [5, top of p. 295]. Thus to obtain an analogue of their result over a finite field we must assume that A is separable.

Theorem 4.5. If A is the adjacency matrix over F_q of an undirected graph G (with or without loops), and if A is separable, then the automorphism group of G is an elementary abelian 2-group of order at most 2^k, where k is the number of irreducible factors of the minimal polynomial μ_A(x). In particular this group is trivial if q is even.

Proof. Since G is undirected, the adjacency matrix A is symmetric, so A = A^T. Since A is separable, by Proposition 4.4 the automorphism group H of G is abelian. Let P ∈ H, so that PA = AP. Since each separable matrix is cyclic, it follows from Theorem 2.2 that


P = f(A) for some f(x) ∈ F_q[x]. Using the fact that A = A^T, we have P^T = f(A)^T = f(A^T) = f(A) = P. Since P is a permutation matrix, P^{-1} = P^T. Thus P^{-1} = P and so P^2 = I. Hence H has exponent at most 2 and is an elementary abelian 2-group. Moreover, as in the proof of Proposition 4.4, A satisfies (2) for some S ∈ GL(n, q), so D = SAS^{-1} is block-diagonal. Now, for P = f(A) ∈ H, we have Y_P := SPS^{-1} = Sf(A)S^{-1} = f(D), and Y_P commutes with D. Again, as in the proof of Proposition 4.4, Y_P = diag(Y_{11}, ..., Y_{kk}), where each Y_{ii} ∈ C(D_{ii}). Note that the minimal and characteristic polynomials of A coincide. On the one hand, P^2 = I_n implies that Y_P^2 = I_n, and hence Y_{ii}^2 = I_{n_i} for each i. On the other hand, C(D_{ii}) is a field of order q^{n_i} and hence contains only one element of multiplicative order 2, namely −I_{n_i} in odd characteristic, and none in characteristic two. It follows that there are at most 2^k choices for Y_P, so |H| ≤ 2^k, and that |H| = 1 when q is even. □

5. Examples

5.1. A family of [n^2, n, n]-codes

Let A be a cyclic circulant matrix. Then by Theorem 2.2 the dimension of C(A) is n, the minimum possible for a given matrix order. Since A is circulant, all matrices in F_q[A] are also circulant, so C(A) consists of all circulant n × n matrices over F_q. Since all rows of a circulant matrix have the same Hamming weight, any non-zero circulant matrix has Hamming weight at least n. Thus C(A) is an [n^2, n, n]-code; it is equivalent to an n-quasi-cyclic code by Theorem 4.1. For example, the 5 × 5 permutation matrix corresponding to the cycle (12345) is a cyclic circulant matrix, thus leading to a [25, 5, 5]-code.
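A minimal sketch (not from the paper) verifying the [25, 5, 5] parameters of this example over F_2: the code C(A) for the 5-cycle permutation matrix is spanned by the circulant basis {I, A, ..., A^4}, and we list its 2^5 codewords and check the minimum weight directly.

```python
import itertools
import numpy as np

q, n = 2, 5
A = np.roll(np.eye(n, dtype=int), 1, axis=0)       # permutation matrix of (12345)
basis = [np.linalg.matrix_power(A, k) % q for k in range(n)]

codewords = []
for coeffs in itertools.product(range(q), repeat=n):
    C = sum(a * Ak for a, Ak in zip(coeffs, basis)) % q
    assert not np.any((A @ C - C @ A) % q)         # every circulant commutes with A
    codewords.append(C)

weights = [int(C.sum()) for C in codewords if C.any()]
assert len(codewords) == q ** n                    # dimension 5, length 25
assert min(weights) == n                           # minimum Hamming distance 5
```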

5.2. A [9, 5, 3]_2 optimal centraliser code

Let A be the 3 × 3 matrix given by

A =
[ 1 1 1 ]
[ 1 1 1 ]
[ 1 1 1 ].

Then C(A) has size 2^5 = 32, minimum Hamming distance 3, and weight enumerator

P_C(z) = 1 + 6z^3 + 9z^4 + 9z^5 + 6z^6 + z^9.

Since the best minimum distance of a [9, 5]_2 binary code is 3 by [6], this is the best we can get in the 3 × 3 case. It is interesting to note that an exhaustive search over all possible 3 × 3 binary matrices and their centralisers reveals that the sizes of the centralisers in


this case are either 512 (when A is the zero matrix or the identity matrix), or 32, or 8, the latter corresponding to a cyclic A.
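This exhaustive search is easy to reproduce; the following sketch (not from the paper) counts |C(A)| by brute force for each of the 512 binary 3 × 3 matrices and checks that only the sizes quoted above occur.

```python
import itertools
import numpy as np

q, n = 2, 3
mats = [np.array(bits).reshape(n, n)
        for bits in itertools.product(range(q), repeat=n * n)]

sizes = set()
for A in mats:
    size = sum(1 for B in mats if not np.any((A @ B - B @ A) % q))
    sizes.add(size)

# Observed centraliser sizes: 512 (A scalar), 32, or 8 (A cyclic).
assert sizes == {8, 32, 512}
```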

5.3. A [16, 10, 4]_2 centraliser code

Let A be the 4 × 4 binary matrix given by

A =
[ 0 1 1 1 ]
[ 1 0 1 1 ]
[ 1 1 0 1 ]
[ 1 1 1 0 ].

For this matrix, C(A) turns out to be a binary code with parameters [16, 10, 4], which are optimal by [6]. In this case an exhaustive search reveals that for all 4 × 4 binary matrices A, the size of C(A) takes values in {16, 64, 256, 1024, 65 536}.

5.4. A ternary example

Taking A to be any ternary 3 × 3 matrix and running an exhaustive search over all the centraliser codes, we see that the size of C(A) is 3^9 for exactly three cases (A = 0, I_3 and 2I_3); it is 3^5 for exactly 1014 cases and 3^3 for exactly 18 666 cases. This is consistent with the fact that a majority of matrices are cyclic. With the minimum distance again bounded above by 3, the best ternary centraliser code we obtain from a 3 × 3 matrix has parameters [9, 5, 3]. An example of such an A is

A =
[ 2 2 2 ]
[ 2 2 2 ]
[ 2 2 2 ].

6. Conclusion

In this paper we have studied the class of codes of length n^2 over a finite field consisting of the centraliser C(A) of a given n × n matrix A. We have shown that each such code has dimension at least n and that this lower bound is achieved precisely for cyclic matrices. In general, determining the dimension of such a code for a given A is a non-trivial problem, and we were only able to give a spectral characterization of the dimension over an extension of the base field. One advantage of this family of codes is the existence of a syndrome with an efficient computation algorithm for large n. In general, matrices A with a high degree of symmetry yield codes with high symmetry as well. If A is the adjacency matrix of a graph Γ and, furthermore, A is separable, then the automorphism group of Γ is shown to be strongly restricted. These results of independent interest are derived using the centraliser code of the adjacency matrix of the graph.


References

[1] S. Akbari, M. Ghandehari, M. Hadian, A. Mohammadian, On commuting graphs of semisimple rings, Linear Algebra Appl. 390 (2004) 345–355.
[2] N. Biggs, Algebraic Graph Theory, 2nd edition, Cambridge University Press, Cambridge, 1996.
[3] B.J. Broxson, The Kronecker product, PhD dissertation, University of North Florida, 2006.
[4] C.-Y. Chao, A note on the eigenvalues of a graph, J. Combin. Theory Ser. B 10 (1971) 301–302.
[5] G. Criscuolo, C.-M. Kwok, A. Mowshowitz, R. Tortora, The group and the minimal polynomial of a graph, J. Combin. Theory Ser. B 29 (1980) 293–302.
[6] M. Grassl, Online tables of linear codes, www.codetables.de.
[7] A. Mowshowitz, The group of a graph whose adjacency matrix has all distinct eigenvalues, in: Proof Techniques in Graph Theory (Proc. Second Ann Arbor Graph Theory Conf., Ann Arbor, MI, 1968), Academic Press, New York, 1969, pp. 109–110.
[8] P.M. Neumann, C.E. Praeger, Cyclic matrices over finite fields, J. London Math. Soc. (2) 52 (1995) 263–284.
[9] V.V. Williams, An overview of the recent progress on matrix multiplication, ACM SIGACT News, 2012, http://theory.stanford.edu/~virgi/sigactcolumn.pdf.
