1 Introduction

Rank-metric codes are sets of matrices whose distance is measured by the rank of their difference. Over finite fields, the codes have found various applications in network coding, cryptography, space-time coding, distributed data storage, and digital watermarking. The first rank-metric codes were introduced in [6, 9, 22] and are today called Gabidulin codes. Motivated by cryptographic applications, Gaborit et al. introduced low-rank parity-check (LRPC) codes in [1, 10]. They can be seen as the rank-metric analogs of low-density parity-check codes in the Hamming metric. LRPC codes have since had a stellar career, as they are already the core component of a second-round submission to the currently running NIST standardization process for post-quantum secure public-key cryptosystems [17]. They are suitable in this scenario due to their weak algebraic structure, which prevents efficient structural attacks. Despite this weak structure, the codes have an efficient decoding algorithm, which in some cases can decode up to the same decoding radius as a Gabidulin code with the same parameters, or even beyond [1]. A drawback is that for random errors of a given rank weight, decoding fails with a small probability. However, this failure probability can be upper-bounded [1, 10] and decreases exponentially in the difference between the maximal decoding radius and the error rank. The codes have also found applications in powerline communications [29] and network coding [19].

Codes over finite rings, in particular the ring of integers modulo m, have been studied since the 1970s [3, 4, 24]. They have, for instance, been used to unify the description of good non-linear binary codes in the Hamming metric, using a connection via the Gray mapping from linear codes over \(\mathbb {Z}_4\) with high minimum Lee distance [12]. This Gray mapping was generalized to arbitrary moduli m of \(\mathbb {Z}_m\) in [5]. Recently, there has been an increased interest in rank-metric codes over finite rings due to the following applications. Network coding over certain finite rings was intensively studied in [7, 11], motivated by works on nested-lattice-based network coding [8, 18, 26, 28] which show that network coding over finite rings may result in more efficient physical-layer network coding schemes. Kamche et al. [14] showed how lifted rank-metric codes over finite rings can be used for error correction in network coding. The result uses a similar approach as [23] to transform the channel output into a rank-metric error-erasure decoding problem. Another application of rank-metric codes over finite rings is space-time codes. It was first shown in [15] how to construct space-time codes with optimal rate-diversity tradeoff via a rank-preserving mapping from rank-metric codes over Galois rings. This result was generalized to arbitrary finite principal ideal rings in [14]. The use of finite rings instead of finite fields has advantages since the rank-preserving mapping can be chosen more flexibly. Kamche et al. also defined and extensively studied Gabidulin codes over finite principal ideal rings. In particular, they proposed a Welch–Berlekamp-like decoder for Gabidulin codes and a Gröbner-basis-based decoder for interleaved Gabidulin codes [14].

Motivated by these recent developments on rank-metric codes over rings, in this paper we define and analyze LRPC codes over Galois rings. Essentially, we show that Gaborit et al.’s construction and decoder work over these rings as well, with only a few minor technical modifications. The core difficulty of proving this result is the significantly more involved failure probability analysis, which stems from the weaker algebraic structure of rings compared to fields: the algorithm and proof are based on dealing with modules over Galois rings instead of vector spaces over finite fields, which behave fundamentally differently since Galois rings are usually not integral domains. We also provide a thorough complexity analysis. The results can be summarized as follows.

1.1 Main results

Let p be a prime and r, s be positive integers. A Galois ring \({R}\) of cardinality \(p^{rs}\) is a finite Galois extension of degree s of the ring \(\mathbb {Z}_{p^r}\) of integers modulo the prime power \(p^r\). As modules over \({R}\) are not always free (i.e., do not always have a basis), matrices over \({R}\) have a rank and a free rank, where the free rank is always smaller than or equal to the rank. We will introduce these and other notions formally in Sect. 2.

In Sect. 3, we construct a family of rank-metric codes and a corresponding family of decoders with the following properties: Let \(m,n,k,\lambda \) be positive integers such that \(\lambda \) is greater than the smallest divisor of m and k fulfills \(k \le \tfrac{\lambda -1}{\lambda } n\). The constructed codes are subsets \(\mathcal {C}\subseteq {R}^{m \times n}\) of cardinality \(|\mathcal {C}| = |{R}|^{mk}\). Seen as a set of vectors over an extension ring of \({R}\), the code is linear w.r.t. this extension ring. We exploit this linearity in the decoding algorithm.

Furthermore, let t be a positive integer with \(t < \min \!\left\{ \tfrac{m}{\lambda (\lambda +1)/2}, \tfrac{n-k+1}{\lambda }\right\} \). Let \(\varvec{C}\in \mathcal {C}\) be a (fixed) codeword and let \(\varvec{E}\in {R}^{m \times n}\) be chosen uniformly at random from all matrices of rank t (and arbitrary free rank). Then, we show in Sect. 5 that the proposed decoder in Sect. 4 recovers the codeword \(\varvec{C}\) with probability at least

$$\begin{aligned} 1-4 p^{s[\lambda t-(n-k+1)]} - 4 t p^{s\left( t \frac{\lambda (\lambda +1)}{2}-m\right) }. \end{aligned}$$

Hence, depending on the relation of \(p^s\) and t, the success probability is positive for

$$\begin{aligned} t \lessapprox t_\mathrm {max} := \left\lceil \min \!\left\{ \tfrac{m}{\lambda (\lambda +1)/2}, \tfrac{n-k+1}{\lambda }\right\} \right\rceil -1 \end{aligned}$$

and converges exponentially fast to 1 in the difference \(t_\mathrm {max}-t\). Note that for \(\lambda =2\) and \(m>\tfrac{3}{2}(n-k+1)\), we have \(t_\mathrm {max} = \lfloor \tfrac{n-k}{2}\rfloor \).

The decoder has complexity \(\tilde{O}(\lambda ^2 n^2 m)\) operations in \({R}\) (see Sect. 6). In Sect. 7, we present simulation results.

Example 1

Consider the case \(p=2\), \(s=4\), \(r=2\), \(m=n=101\), \(k=40\), and \(\lambda =2\). Then, the decoder in Sect. 4 can correct up to \(t_\mathrm {max} = \lfloor \tfrac{n-k}{2}\rfloor = 30\) errors with success probability at least \(1-2^{-6}\). For \(t=24\) errors, the success probability is already \(\approx 1-2^{-46}\) and for \(t=18\), it is \(\approx 1-2^{-102}\). A Gabidulin code as in [14] over the same ring and with the same parameters can correct any error of rank up to 30 (i.e., the same maximal radius). However, the currently fastest decoder for Gabidulin codes over rings [14] has a larger complexity than the LRPC decoder in Sect. 4.
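As a quick sanity check (our own script, not part of the paper), the maximal radius \(t_\mathrm {max}\) from the formula above can be evaluated for the parameters of Example 1; it matches \(\lfloor \tfrac{n-k}{2}\rfloor \) as claimed for \(\lambda =2\).

```python
# Evaluate t_max from the formula above for the parameters of Example 1
# (p = 2, s = 4, r = 2, m = n = 101, k = 40, lambda = 2); ours, for illustration.
import math

m, n, k, lam = 101, 101, 40, 2
t_max = math.ceil(min(m / (lam * (lam + 1) / 2), (n - k + 1) / lam)) - 1
print(t_max)            # 30
print((n - k) // 2)     # 30, i.e. floor((n-k)/2), matching the remark for lambda = 2
```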

The results of this paper were partly presented at the IEEE International Symposium on Information Theory 2020 [21]. Compared to this conference version, we generalize the results in two ways: First, we consider LRPC codes over the more general class of Galois rings instead of the integers modulo a prime power. This is a natural generalization since Galois rings share with finite fields many of the properties needed for dealing with the rank metric. Indeed, they constitute the common point of view between finite fields and rings of integers modulo a prime power. Second, the conference version only derives a bound on the failure probability for errors whose free rank equals their rank. For some applications, this is no restriction since the error can be designed, but for most communication channels, we cannot influence the error and also need to correct errors of arbitrary rank profile. Hence, we provide a complete analysis of the failure probability for all types of errors.

2 Preliminaries

2.1 Notation

Let A be any commutative ring. We denote modules over A by calligraphic letters, vectors by bold lowercase letters, and matrices by bold capital letters. We denote the set of \(m\times n\) matrices over the ring A by \(A^{m\times n}\) and the set of row vectors of length n over A by \(A^{n} = A^{1\times n}\). Rows and columns of \(m\times n\) matrices are indexed by \(1,\ldots ,m\) and \(1,\ldots ,n\), where \(X_{i,j}\) denotes the entry in the i-th row and j-th column of the matrix \(\varvec{X}\). Moreover, for an element a in a ring A, we denote by \({{\,\mathrm{Ann}\,}}(a)\) the ideal \({{\,\mathrm{Ann}\,}}(a) = \{b \in A \mid ab = 0\}\).

2.2 Galois rings

A Galois ring \({R}:={{\,\mathrm{GR}\,}}(p^r,s)\) is a finite local commutative ring of characteristic \(p^r\) and cardinality \(p^{rs}\), which is isomorphic to \(\mathbb {Z}[z]/(p^r,f(z))\), where f(z) is a polynomial of degree s that is irreducible modulo p. Let \(\mathfrak {m}\) be the unique maximal ideal of \({R}\). It is also well-known that \({R}\) is a finite chain ring and all its ideals are powers of \(\mathfrak {m}\), such that r is the smallest positive integer for which \(\mathfrak {m}^r = \{0\}\). Since Galois rings are principal ideal rings, \(\mathfrak {m}\) is generated by one ring element. We will call such a generator \(g_\mathfrak {m}\) (which is unique up to invertible multiples). Note that in a Galois ring this element can always be chosen to be p. Moreover, \({R}/\mathfrak {m}\) is isomorphic to the finite field \(\mathbb {F}_{p^s}\).

In this setting, it is well-known that there exists a unique cyclic subgroup of \({R}^*\) of order \(p^s-1\), which is generated by an element \(\eta \). The set \(T_s := \{0\}\cup \langle \eta \rangle \) is known as the Teichmüller set of \({R}\). Every element \(a\in {R}\) hence has a unique representation as

$$\begin{aligned} a=\sum _{i=0}^{r-1} g_\mathfrak {m}^ia_i, \quad a_i\in T_s. \end{aligned}$$

We will refer to this as the Teichmüller representation of a. For Galois rings, this representation coincides with the p-adic expansion. If, in addition, one chooses the defining polynomial f(z) to be a Hensel lift of a primitive polynomial in \(\mathbb {F}_p[z]\) of degree s, then the element \(\eta \) can be taken to be one of the roots of f(z). Here, by a Hensel lift of a primitive polynomial \(\bar{f}(z)\in \mathbb {F}_p[z]\) we mean a polynomial \(f(z)\in \mathbb {Z}_{p^r}[z]\) such that the canonical projection of f(z) onto \(\mathbb {F}_p[z]\) is \(\bar{f}(z)\) and f(z) divides \(z^{p^s-1}-1\) in \(\mathbb {Z}_{p^r}[z]\). The interested reader is referred to [2, 16] for a deeper treatment of Galois rings.

It is easy to see that the number of units in \({R}\) is given by

$$\begin{aligned} |{R}^*|&= |{R}\setminus \mathfrak {m}| = |{R}| - |\mathfrak {m}| = p^{sr} -p^{s(r-1)} = |{R}|\big (1-p^{-s}\big ). \end{aligned}$$
(1)

Example 2

Let \(p=2\), \(s=1\), \(r=3\), and \({R}= \{0,1,\ldots ,7\}\). We have that \(\mathfrak {m}= \{0,2,4,6\}\) and \({R}/\mathfrak {m}= \{0,1\} = \mathbb {F}_2\). Thus, \(g_\mathfrak {m}= 2\). The set \(\{1\}\) is the unique cyclic subgroup of \({R}^*=\{1,3,5,7\}\) of order \(p^s-1 = 1\) which is generated by \(\eta =1\) and \(T_s = \{0,1\}\). Then, the Teichmüller representation of \(a=5\) is given by \( a = 1\cdot g_\mathfrak {m}^0 + 0 \cdot g_\mathfrak {m}^1 + 1 \cdot g_\mathfrak {m}^2\).

Example 3

Let \(p=2\), \(s=3\), \(r=3\), and let us construct \({R}= {{\,\mathrm{GR}\,}}(8,3)\). Consider the ring \(\mathbb {Z}_{8}\), and \(h(z):=z^3+6z^2+5z+7\in \mathbb {Z}_{8}[z]\). The canonical projection of the polynomial h(z) over \(\mathbb {F}_2[z]\) is \(z^3+z+1\) which is primitive, and hence irreducible, in \(\mathbb {F}_2[z]\). Thus, we have

$$\begin{aligned} {R}\cong \mathbb {Z}_8[z]/(h(z)). \end{aligned}$$

Clearly, \(\mathfrak {m}=(2){R}\) and we can choose \(g_\mathfrak {m}=2\). Moreover, if \(\eta \) is a root of h(z), then we also have \({R}\cong \mathbb {Z}_8[\eta ]\), and every element can be represented as \(a_0+a_1\eta +a_2\eta ^2\), for \(a_0,a_1,a_2\in \mathbb {Z}_{8}\). On the other hand, the polynomial h(z) divides \(z^7-1\) in \(\mathbb {Z}_8[z]\) and therefore it is a Hensel lift of \(z^3+z+1\). This implies that \(\eta \) has order 7, and the Teichmüller set is \(T_3=\{0,\eta , \eta ^2,\ldots ,\eta ^7=1\}\). If we take the element \(a=5+3\eta ^2\), then it can be verified that its Teichmüller representation is \(a=\eta ^6+\eta ^4g_\mathfrak {m}+\eta ^5g_\mathfrak {m}^2=\eta ^6+\eta ^4\cdot 2+\eta ^5\cdot 4\).
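The claims of this example are easy to reproduce by direct computation. The following minimal Python sketch (ours, not from the paper) implements the arithmetic of \(\mathbb {Z}_8[z]/(h(z))\) on coefficient vectors and verifies that \(\eta \) has order 7 and that \(a=5+3\eta ^2\) has the stated Teichmüller representation.

```python
# A minimal arithmetic sketch (ours) for GR(8,3) = Z_8[z]/(h(z)) with
# h(z) = z^3 + 6z^2 + 5z + 7.  Elements are coefficient lists [c0, c1, c2]
# representing c0 + c1*z + c2*z^2 with coefficients in Z_8.

Q = 8                            # characteristic p^r = 2^3
H = [7, 5, 6]                    # h(z) = z^3 + 6z^2 + 5z + 7, so z^3 = -(7 + 5z + 6z^2)

def add(a, b):
    return [(x + y) % Q for x, y in zip(a, b)]

def scale(c, a):
    return [(c * x) % Q for x in a]

def mul(a, b):
    # schoolbook product, then reduce degrees 4 and 3 via z^3 = -(H[0] + H[1]*z + H[2]*z^2)
    prod = [0] * 5
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % Q
    for deg in (4, 3):
        c, prod[deg] = prod[deg], 0
        for k in range(3):
            prod[deg - 3 + k] = (prod[deg - 3 + k] - c * H[k]) % Q
    return prod[:3]

def power(a, e):
    res = [1, 0, 0]
    for _ in range(e):
        res = mul(res, a)
    return res

eta = [0, 1, 0]                              # eta := z, a root of h(z)
assert power(eta, 7) == [1, 0, 0]            # eta has multiplicative order 7

a = [5, 0, 3]                                # a = 5 + 3*eta^2
teich = add(power(eta, 6), add(scale(2, power(eta, 4)), scale(4, power(eta, 5))))
assert teich == a                            # a = eta^6 + 2*eta^4 + 4*eta^5
print("Teichmueller representation of a verified")
```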

2.3 Extensions of Galois rings

Let \(h(z) \in {R}[z]\) be a polynomial of degree m such that the leading coefficient of h(z) is a unit and h(z) is irreducible over the finite field \({R}/\mathfrak {m}\). Then, the Galois ring \({R}[z]/(h(z))\) is denoted by \({S}\). We have that \({S}\) is the Galois ring \({{\,\mathrm{GR}\,}}(p^r,sm)\), with maximal ideal \(\mathfrak {M}= \mathfrak {m}{S}\). Moreover, it is known that subrings of Galois rings are Galois rings and that for every \(\ell \) dividing m there exists a unique subring of \({S}\) which is a Galois extension of degree \(\ell \) of \({R}\). These are all subrings of \({S}\) that contain \({R}\). In particular, there exists a unique copy of \({R}\) in \({S}\), and we can therefore consider (with a slight abuse of notation) \({R}\subseteq {S}\). Moreover, \(g_\mathfrak {m}\) is also a generator of \(\mathfrak {M}\) in \({S}\).

As for \({R}\), also \({S}\) contains a unique cyclic subgroup of order \(p^{sm}-1\), and we can consider the Teichmüller set \(T_{sm}\) as the union of such a subgroup together with the 0 element. Hence, every \(a\in {S}\) has a unique representation as

$$\begin{aligned} a=\sum _{i=0}^{r-1} g_\mathfrak {m}^ia_i, \quad a_i\in T_{sm}. \end{aligned}$$

The number of units in \({S}\) is given by

$$\begin{aligned} |{S}^*|&= |{S}\setminus \mathfrak {M}| = |{S}| - |\mathfrak {M}| = p^{srm} - |\mathfrak {m}|^m = p^{srm} - \big (p^{s(r-1)}\big )^m \\&= p^{srm}\big (1-p^{-sm}\big ) = |{S}|\big (1-p^{-sm}\big ). \end{aligned}$$

From now on and for the rest of the paper, we will always denote by \({R}\) the Galois ring \({{\,\mathrm{GR}\,}}(p^r,s)\), and by \({S}\) the Galois ring \({{\,\mathrm{GR}\,}}(p^r,sm)\).

2.4 Smith normal form

The Smith normal form is well-defined for both \({R}\) and \({S}\), i.e., for \(\varvec{A}\in {R}^{m \times n}\), there are invertible matrices \(\varvec{S}\in {R}^{m \times m}\) and \(\varvec{T}\in {R}^{n \times n}\) such that

$$\begin{aligned} \varvec{D}= \varvec{S}\varvec{A}\varvec{T}\in {R}^{m \times n} \end{aligned}$$

is a diagonal matrix with diagonal entries \(d_1,\ldots ,d_{\min \{n,m\}}\) with

$$\begin{aligned} d_j \in \mathfrak {m}^{i_j} \setminus \mathfrak {m}^{i_j+1}, \end{aligned}$$

where \(0 \le i_1 \le i_2 \le \cdots \le i_{\min \{n,m\}} \le r\). The same holds for matrices over \({S}\), where we replace \(\mathfrak {m}\) by \(\mathfrak {M}\) (note that \(\mathfrak {M}^r=\{0\}\) and \(\mathfrak {M}^{r-1}\ne \{0\}\) for the same r). The rank and the free rank of \(\varvec{A}\) (w.r.t. a ring \(A \in \{{S},{R}\}\)) are defined by \(\mathrm {rk}(\varvec{A}) := |\{ i\in \{1,\ldots ,\min \{m,n\}\}: \varvec{D}_{i,i} \not = 0 \}|\) and \(\mathrm {frk}(\varvec{A}) := |\{ i \in \{1,\ldots ,\min \{m,n\}\} :\varvec{D}_{i,i} \text { is a unit} \}|\), respectively, where \(\varvec{D}\) is the diagonal matrix of the Smith normal form w.r.t. the ring A.

2.5 Modules over finite chain rings

The ring \({S}\) is a free module over \({R}\) of rank m. Hence, elements of \({S}\) can be treated as vectors in \({R}^m\) and linear independence, \({R}\)-subspaces of \({S}\) and the \({R}\)-linear span of elements are well-defined. Let \(\varvec{\gamma }=[\gamma _1,\ldots ,\gamma _m]\) be an ordered basis of \({S}\) over \({R}\). By utilizing the \({R}\)-module isomorphism \({S}\cong {R}^m\), we can relate each vector \(\varvec{a}\in {S}^{n}\) to a matrix \(\varvec{A}\in {R}^{m\times n}\) according to \({{\,\mathrm{ext}\,}}_{\gamma } : {S}^{n} \rightarrow {R}^{m\times n}, \varvec{a}\mapsto \varvec{A}\), where \(a_j = \sum _{i=1}^{m} A_{i,j} \gamma _{i}\), \(j \in \{1,\ldots ,n\}\). The (free) rank norm \(({{\,\mathrm{f}\,}})\mathrm {rk}_{{R}}(\varvec{a})\) is the (free) rank of the matrix representation \(\varvec{A}\), i.e., \(\mathrm {rk}_{{R}}(\varvec{a}) := \mathrm {rk}(\varvec{A})\) and \(\mathrm {frk}_{{R}}(\varvec{a}) := \mathrm {frk}(\varvec{A})\), respectively.

Example 4

Let \(p=2\), \(s=1\), \(r=3\) as in Example 2, \(h(z) = z^3+z+1\) and

$$\begin{aligned} \varvec{a}= \begin{bmatrix} 2z^2 + 2z + 5,&4z^2 + z + 6,&2z^2 + z \end{bmatrix}. \end{aligned}$$

Using the polynomial basis \(\varvec{\gamma }=[1,z,z^2]\), the matrix representation of \(\varvec{a}\) is

$$\begin{aligned} \varvec{A}= \begin{bmatrix} 5 &{} 6 &{} 0\\ 2 &{} 1 &{} 1\\ 2 &{} 4 &{} 2 \end{bmatrix} \end{aligned}$$

and the Smith normal form of \(\varvec{A}\) is given by

$$\begin{aligned} \varvec{D}= \begin{bmatrix} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 2 \\ \end{bmatrix}. \end{aligned}$$

It can be observed that \(d_1, d_2 \in \mathfrak {m}^0 \setminus \mathfrak {m}^1 = \{1,3,5,7\}\) and \(d_3 \in \mathfrak {m}^1 \setminus \mathfrak {m}^2 = \{2,6\}\) and thus \(\mathrm {rk}(\varvec{A}) = \mathrm {rk}(\varvec{D}) = 3\) and \(\mathrm {frk}(\varvec{A})= \mathrm {frk}(\varvec{D})= 2\). It follows that \(\mathrm {rk}_{{R}}(\varvec{a}) = 3\) and \(\mathrm {frk}_{{R}}(\varvec{a}) = 2\).
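Over \(\mathbb {Z}_{p^r}\), the Smith normal form can be computed by always pivoting on an entry of smallest valuation, since such an entry divides all remaining entries. The following sketch (our own illustration, not from the paper) reproduces the diagonal, the rank and the free rank of Example 4.

```python
# Smith-normal-form diagonal over Z_{p^r}, applied to Example 4 over Z_8 (ours).
Q, P = 8, 2                       # Z_{p^r} with p = 2, r = 3
INF = 10**9                       # placeholder for "valuation of 0"

def val(a):
    a %= Q
    if a == 0:
        return INF
    v = 0
    while a % P == 0:
        a //= P
        v += 1
    return v

def unit_inv(u):
    # brute-force inverse of a unit u modulo Q (fine for small Q)
    return next(x for x in range(1, Q) if (u * x) % Q == 1)

def smith_diagonal(A):
    A = [[x % Q for x in row] for row in A]
    m, n = len(A), len(A[0])
    diag = []
    for k in range(min(m, n)):
        # pivot: remaining entry of smallest valuation; it divides all others
        i0, j0 = min(((i, j) for i in range(k, m) for j in range(k, n)),
                     key=lambda ij: val(A[ij[0]][ij[1]]))
        v = val(A[i0][j0])
        if v == INF:                                    # remaining block is zero
            diag.extend([0] * (min(m, n) - k))
            break
        A[k], A[i0] = A[i0], A[k]                       # move pivot to (k, k)
        for row in A:
            row[k], row[j0] = row[j0], row[k]
        inv = unit_inv(A[k][k] // P**v)
        A[k] = [(inv * x) % Q for x in A[k]]            # normalise pivot to p^v
        for i in range(k + 1, m):                       # clear the pivot column
            f = A[i][k] // P**v
            A[i] = [(A[i][j] - f * A[k][j]) % Q for j in range(n)]
        for j in range(k + 1, n):                       # clear the pivot row (the column
            A[k][j] = 0                                 # operation touches no other row here)
        diag.append(P**v)
    return diag

d = smith_diagonal([[5, 6, 0], [2, 1, 1], [2, 4, 2]])
print(d)                                                   # [1, 1, 2]
print(sum(x != 0 for x in d), sum(x % P != 0 for x in d))  # rank 3, free rank 2
```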

Let \(a = \sum _{i=1}^{m} a_i \gamma _i \in {S}\), where \(a_i \in {R}\). The following statements are equivalent (cf. [14, Lemma 2.4]):

  • a is a unit in \({S}\).

  • At least one \(a_i\) is a unit in \({R}\).

  • \(\{a\}\) is linearly independent over \({R}\).

The \({R}\)-linear module that is spanned by \(v_1,\ldots ,v_{\ell } \in {S}\) is denoted by \(\langle v_1,\ldots ,v_\ell \rangle _{{R}} := \big \{\sum _{i=1}^{\ell } a_i v_i : a_i \in {R}\big \}\). The \({R}\)-linear module that is spanned by the entries of a vector \(\varvec{a}\in {S}^{n}\) is called the support of \(\varvec{a}\), i.e., \(\mathrm {supp}_\mathrm {R}(\varvec{a}) := \langle a_1,\ldots ,a_n \rangle _{{R}}\). Further, \({\mathcal {A}}\cdot {\mathcal {B}}\) denotes the product module of two submodules \({\mathcal {A}}\) and \({\mathcal {B}}\) of \({S}\), i.e., \({\mathcal {A}}\cdot {\mathcal {B}}:= \langle a \cdot b \, : \, a \in {\mathcal {A}}, \, b \in {\mathcal {B}}\rangle \).

2.6 Valuation in Galois rings

We define the valuation of \(a \in {R}\setminus \{0\}\) as the unique integer \(v(a) \in \{0,\ldots ,r-1\}\) such that

$$\begin{aligned} a \in \mathfrak {m}^{v(a)} \setminus \mathfrak {m}^{v(a)+1}, \end{aligned}$$

and set \(v(0) := r\). In the same way, the valuation of \(b \in {S}\setminus \{0\}\) is defined as the unique integer \(v(b) \in \{0,\ldots ,r-1\}\) such that

$$\begin{aligned} b \in \mathfrak {M}^{v(b)} \setminus \mathfrak {M}^{v(b)+1}, \end{aligned}$$

and \(v(0) = r\).

Let \(\{\gamma _1,\ldots , \gamma _m\}\) be a basis of \({S}\) as \({R}\)-module. It is easy to see that for \(a = \sum _{i=1}^{m} a_i \gamma _i \in {S}\setminus \{0\}\), where \(a_i \in {R}\) (not all 0), we have

$$\begin{aligned} v(a) = \min _{i=1,\ldots ,m}\{v(a_i)\}. \end{aligned}$$
(2)

Example 5

Let \(p=2\), \(s=1\), \(r=3\) as in Example 2, \(h(z) = z^3+z+1\) and let \(a=1\), \(b=2\), \(c=4 \in {R}\). Since \(a \in \mathfrak {m}^0 \setminus \mathfrak {m}^1=\{1,3,5,7\}\), \(b \in \mathfrak {m}^1 \setminus \mathfrak {m}^2 = \{2,6\}\), and \(c \in \mathfrak {m}^2 \setminus \mathfrak {m}^3=\{4\}\), one obtains \(v(a) =0\), \(v(b) = 1\) and \(v(c)=2\).

Furthermore, let \(d=2z^2+1\), \(e=4z^2+2z+2\), \(f=4z^2+4\), where \(d\in \mathfrak {M}^0\setminus \mathfrak {M}^1\), \(e\in \mathfrak {M}^1\setminus \mathfrak {M}^2\) and \(f\in \mathfrak {M}^2\setminus \mathfrak {M}^3\). It follows that \(v(d)=0\), \(v(e)=1\) and \(v(f)=2\). Since an element is a unit if and only if its valuation is equal to 0, only the elements a and d are units.
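The valuations of d, e and f can also be obtained from formula (2), i.e., as the minimum valuation of the coordinates with respect to the polynomial basis \([1,z,z^2]\). A tiny script (ours) for this check:

```python
# Valuations in S = Z_8[z]/(z^3+z+1) via formula (2); our own illustration.
P, R_EXP = 2, 3                  # p = 2, r = 3

def val_R(a):                    # valuation in Z_{p^r}, with v(0) = r
    a %= P**R_EXP
    if a == 0:
        return R_EXP
    v = 0
    while a % P == 0:
        a //= P
        v += 1
    return v

def val_S(coords):               # minimum coordinate valuation, cf. (2)
    return min(val_R(c) for c in coords)

d, e, f = [1, 0, 2], [2, 2, 4], [4, 0, 4]   # d = 2z^2+1, e = 4z^2+2z+2, f = 4z^2+4
print(val_S(d), val_S(e), val_S(f))          # 0 1 2, as stated in Example 5
```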

2.7 Rank profile of a module and mingensets

Let \({\mathcal {M}}\) be an \({R}\)-submodule of \({S}\) and \(d_1,\ldots ,d_n\) be the diagonal entries of a Smith normal form of a matrix whose row space is \({\mathcal {M}}\). Define the rank profile of \({\mathcal {M}}\) to be the polynomial

$$\begin{aligned} \phi ^{{\mathcal {M}}}(x) := \sum _{i=0}^{r-1} \phi _i^{{\mathcal {M}}} x^i \in \mathbb {Z}[x]/(x^r), \end{aligned}$$

where

$$\begin{aligned} \phi ^{{\mathcal {M}}}_i := \left| \left\{ j : v(d_j)=i\right\} \right| . \end{aligned}$$

Note that \(\phi ^{{\mathcal {M}}}(x)\) is independent of the chosen matrix and Smith normal form since the diagonal entries \(d_i\) are unique up to multiplication by a unit. We can easily read off the free rank and the rank from the rank profile:

$$\begin{aligned} \mathrm {frk}_{{R}} {\mathcal {M}}&= \phi ^{{\mathcal {M}}}_0 = \phi ^{{\mathcal {M}}}(0), \\ \mathrm {rk}_{{R}} {\mathcal {M}}&= \sum _{i=0}^{r-1} \phi ^{{\mathcal {M}}}_i = \phi ^{{\mathcal {M}}}(1). \end{aligned}$$

Example 6

Consider the ring \({R}={{\,\mathrm{GR}\,}}(8,3)\) as defined in Example 3, where as generator of \(\mathfrak {m}\) we take \(g_\mathfrak {m}=2\). Take a module \({\mathcal {M}}\) whose diagonal matrix in the Smith normal form is

$$\begin{aligned} \begin{bmatrix} 1 &{} &{} &{} &{}\\ &{} 1 &{} &{} &{} \\ &{} &{} 2 &{} &{} \\ &{} &{} &{} 4 &{}\\ &{} &{} &{} &{} 0 \end{bmatrix}. \end{aligned}$$

We have

$$\begin{aligned} \phi ^{{\mathcal {M}}}(x) = 2+x+x^2. \end{aligned}$$
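Reading off the rank profile from a Smith-normal-form diagonal is a matter of counting valuations. The following lines (our own illustration) recover \(\phi ^{{\mathcal {M}}}(x)\), the free rank and the rank for the module of this example.

```python
# Rank profile of Example 6 from the Smith-normal-form diagonal over GR(8,3); ours.
P, R_EXP = 2, 3                  # g_m = 2, r = 3

def val(d):                      # valuation of a nonzero diagonal entry
    v = 0
    while d % P == 0:
        d //= P
        v += 1
    return v

diag = [1, 1, 2, 4, 0]
phi = [0] * R_EXP                # phi[i] = number of nonzero d_j with v(d_j) = i
for d in diag:
    if d != 0:
        phi[val(d)] += 1
print(phi)                       # [2, 1, 1], i.e. phi(x) = 2 + x + x^2
print(phi[0], sum(phi))          # free rank 2, rank 4
```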

On \(\mathbb {Z}[x]/(x^r)\), we define the following partial order \(\preceq \).

Definition 1

Let \(a(x),b(x) \in \mathbb {Z}[x]/(x^r)\). We say that \(a(x) \preceq b(x)\) if for every \(i\in \{0,\ldots , r-1\}\) we have

$$\begin{aligned} \sum _{j=0}^i a_j \le \sum _{j=0}^i b_j. \end{aligned}$$

Remark 1

The partial order \(\preceq \) on rank profiles is compatible with the containment of submodules. That is, if \({\mathcal {M}}_1\subseteq {\mathcal {M}}_2\), then \(\phi ^{{\mathcal {M}}_1} \preceq \phi ^{{\mathcal {M}}_2}\). Clearly, the converse implication is not true in general.

For a matrix \(\varvec{A}\) over \({R}\) with Smith normal form \(\varvec{D}= \varvec{S}\varvec{A}\varvec{T}\) as in Sect. 2.4, observe that the nonzero rows of the matrix \(\varvec{D}\varvec{T}^{-1}\) produce a set of generators for the \({R}\)-module generated by the rows of \(\varvec{A}\), which is minimal and of the form

$$\begin{aligned} \varGamma =\{g_\mathfrak {m}^ia_{i,\ell _i} \mid 0\le i \le r-1, 1 \le \ell _i \le \phi ^{{\mathcal {M}}}_i \}. \end{aligned}$$

A generating set coming from the Smith normal form as described above will be called an \(\mathfrak {m}\)-shaped basis. Alternatively, an \(\mathfrak {m}\)-shaped basis for an \({R}\)-module \({\mathcal {M}}\) is a generating set \(\{b_{i,\ell _i} \mid 0\le i \le r-1, 1 \le \ell _i \le \phi ^{{\mathcal {M}}}_i \}\) such that \(v(b_{i,\ell _i})=i\). Moreover, every \({R}\)-submodule of \({R}^n\) can be seen as the rowspace of a matrix, and hence it decomposes as

$$\begin{aligned} {\mathcal {M}}=\left\langle \varGamma ^{(0)}\right\rangle _{{R}}+\mathfrak {m}\left\langle \varGamma ^{(1)}\right\rangle _{{R}} + \cdots + \mathfrak {m}^{r-1}\left\langle \varGamma ^{(r-1)}\right\rangle _{{R}}, \end{aligned}$$

where \(\varGamma ^{(i)}:=\{a_{i,\ell _i} \mid 1 \le \ell _i \le \phi ^{{\mathcal {M}}}_i \}\). It is easy to see that \(\langle \varGamma ^{(i)}\rangle _{{R}}\) is a free module. However, this decomposition depends on the chosen \(\mathfrak {m}\)-shaped basis \(\varGamma \).

For a module \(\mathcal {M}\) with \(\mathfrak {m}\)-shaped basis \(\varGamma = \{g_\mathfrak {m}^ia_{i,\ell _i} \mid 0\le i \le r-1, 1 \le \ell _i \le \phi ^{{\mathcal {M}}}_i \}\), we have the following: Let \(e \in \mathcal {M}\) and

$$\begin{aligned} e = \sum _{i=0}^{r-1} \sum _{\ell _i=1}^{\phi _i^\mathcal {M}} e_{i,\ell _i} g_\mathfrak {m}^ia_{i,\ell _i} = \sum _{i=0}^{r-1} \sum _{\ell _i=1}^{\phi _i^\mathcal {M}} e'_{i,\ell _i} g_\mathfrak {m}^i a_{i,\ell _i} \end{aligned}$$

be two different representations of e in the \(\mathfrak {m}\)-shaped basis with coefficients \(e_{i,\ell _i},e'_{i,\ell _i} \in {R}\), respectively. Then, we have

$$\begin{aligned} e_{i,\ell _i} \equiv e'_{i,\ell _i} \mod g_\mathfrak {m}^{r-i} \end{aligned}$$

for all \(0\le i \le r-1\) and \(1 \le \ell _i \le \phi ^{{\mathcal {M}}}_i\). This is due to the fact that, by the definition of an \(\mathfrak {m}\)-shaped basis, the set \(\{a_{i,\ell _i} \mid 0\le i \le r-1, 1 \le \ell _i \le \phi ^{{\mathcal {M}}}_i\}\) is linearly independent over \({R}\), and hence \((e_{i,\ell _i} - e'_{i,\ell _i}) g_\mathfrak {m}^{i}=0\) for every \(i,\ell _i\). Therefore, the representation of an element of \({\mathcal {M}}\) with respect to an \(\mathfrak {m}\)-shaped basis has uniquely determined coefficients \(e_{i,\ell _i}\) modulo \({{\,\mathrm{Ann}\,}}(g_\mathfrak {m}^i)=\mathfrak {m}^{r-i}\).

Lemma 1

Let \({\mathcal {M}}\) be an \({R}\)-submodule of \({S}\) with rank-profile \(\phi ^{{\mathcal {M}}}\) and let \(j \in \{1,\ldots ,r-1\}\). Then, the rank-profile of \(\mathfrak {m}^j{\mathcal {M}}\) is given by

$$\begin{aligned} \phi ^{\mathfrak {m}^j{\mathcal {M}}}(x) = x^j \phi ^{{\mathcal {M}}}(x). \end{aligned}$$

In particular, the rank of \(\mathfrak {m}^j{\mathcal {M}}\) is equal to \(\phi ^{\mathfrak {m}^j{\mathcal {M}}}(1)=\sum \limits _{i=0}^{r-1-j}\phi ^{{\mathcal {M}}}_i\).

Proof

Let \(g_\mathfrak {m}\) be a generator of \(\mathfrak {m}\). If \(\varGamma =\{g_\mathfrak {m}^ia_{i,\ell _i} \mid 0\le i \le r-1, 1 \le \ell _i \le \phi ^{{\mathcal {M}}}_i \}\) is an \(\mathfrak {m}\)-shaped basis for \({\mathcal {M}}\), then it is easy to see that

$$\begin{aligned} \left\{ g_\mathfrak {m}^{i+j}a_{i,\ell _i} \mid 0\le i \le r-j-1, 1 \le \ell _i \le \phi ^{{\mathcal {M}}}_i \right\} \end{aligned}$$

is an \(\mathfrak {m}\)-shaped basis for \(\mathfrak {m}^j{\mathcal {M}}\). Hence, the first j coefficients of \(\phi ^{\mathfrak {m}^j{\mathcal {M}}}(x)\) are equal to zero, while the remaining ones are the j-th shift of the first \(r-j\) coefficients of \(\phi ^{{\mathcal {M}}}(x)\). \(\square \)

Proposition 1

For any pair of \({R}\)-submodules \({\mathcal {M}}_1, {\mathcal {M}}_2\) of \({S}\), we have

$$\begin{aligned} \phi ^{{\mathcal {M}}_1 \cdot {\mathcal {M}}_2}(x) \preceq \phi ^{{\mathcal {M}}_1}(x) \phi ^{{\mathcal {M}}_2}(x). \end{aligned}$$

Proof

Let \(g_\mathfrak {m}\) be a generator of \(\mathfrak {m}\). Let \({\mathcal {M}}_1, {\mathcal {M}}_2\) be two \({R}\)-submodules with rank profiles \(\phi ^{{\mathcal {M}}_1}\) and \(\phi ^{{\mathcal {M}}_2}\), respectively. Then, there exists a minimal generating set of \({\mathcal {M}}_1\) given by

$$\begin{aligned} \varGamma _1:=\left\{ g_\mathfrak {m}^ia_{i,j_i} \mid 0\le i \le r-1, 1 \le j_i \le \phi ^{{\mathcal {M}}_1}_i \right\} , \end{aligned}$$

and a minimal generating set of \({\mathcal {M}}_2\) given by

$$\begin{aligned} \varGamma _2:=\left\{ g_\mathfrak {m}^ib_{i,j_i} \mid 0\le i \le r-1, 1 \le j_i \le \phi ^{{\mathcal {M}}_2}_i \right\} . \end{aligned}$$

In particular, the product set \(\varGamma _1\cdot \varGamma _2\) is a generating set of \({\mathcal {M}}_1 \cdot {\mathcal {M}}_2\). Hence

$$\begin{aligned} \sum _{i=0}^{r-1} \phi ^{{\mathcal {M}}_1\cdot {\mathcal {M}}_2}_i&=\mathrm {rk}_{{R}}({\mathcal {M}}_1\cdot {\mathcal {M}}_2) \\&\le |\varGamma _1 \cdot \varGamma _2\setminus \{0\}| \\&= \sum _{i=0}^{r-1}\sum _{j=0}^i \phi ^{{\mathcal {M}}_1}_j\phi ^{{\mathcal {M}}_2}_{i-j}\\&=\sum _{i=0}^{r-1}(\phi ^{{\mathcal {M}}_1} \phi ^{{\mathcal {M}}_2})_i. \end{aligned}$$

The general inequality for the truncated sums then follows by considering the rank of the submodule \(\mathfrak {m}^j({\mathcal {M}}_1 \cdot {\mathcal {M}}_2)\) and Lemma 1. \(\square \)
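The rank-profile calculus developed in this subsection is plain polynomial arithmetic in \(\mathbb {Z}[x]/(x^r)\). The following sketch (ours, for illustration only) implements the partial order of Definition 1, the shift of Lemma 1 and the product appearing in Proposition 1, and applies them to the profile of Example 6.

```python
# Rank-profile arithmetic in Z[x]/(x^r) with r = 3; our own illustration.
r = 3

def leq(a, b):                   # partial order of Definition 1
    return all(sum(a[:i + 1]) <= sum(b[:i + 1]) for i in range(r))

def mul_mod_xr(a, b):            # product in Z[x]/(x^r)
    c = [0] * r
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < r:
                c[i + j] += ai * bj
    return c

def shift(a, j):                 # multiplication by x^j, cf. Lemma 1
    return mul_mod_xr(a, [int(i == j) for i in range(r)])

phi_M = [2, 1, 1]                # 2 + x + x^2, the profile from Example 6
phi_F = [2, 0, 0]                # profile of a free module of rank 2

print(shift(phi_M, 1), sum(shift(phi_M, 1)))   # [0, 2, 1], rank 3 = rank of m*M
print(mul_mod_xr(phi_M, phi_F))                # [4, 2, 2], upper bound for the profile of M*F
print(leq(shift(phi_M, 1), phi_M))             # True: m*M is contained in M, cf. Remark 1
```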

3 LRPC codes over Galois rings

Definition 2

Let \(k,n,\lambda \) be positive integers with \(0<k<n\). Furthermore, let \(\mathcal {F}\subseteq {S}\) be a free \({R}\)-submodule of \({S}\) of rank \(\lambda \). A low-rank parity-check (LRPC) code with parameters \(\lambda ,n,k\) is a code with a parity-check matrix \(\varvec{H}\in {S}^{(n-k) \times n}\) such that \({{\,\mathrm{rk}\,}}_{{S}} \varvec{H}= \mathrm {frk}_{{S}} \varvec{H}= n-k\) and \(\mathcal {F}= \langle H_{1,1},\ldots ,H_{(n-k),n} \rangle _{{R}}\).

Note that an LRPC code is a free submodule of \({S}^n\) of rank k. This means that the cardinality of the code is \(|{S}|^k = |{R}|^{mk} = p^{r s m k}\). We define the following three additional properties of the parity-check matrix that we will use throughout the paper to prove the correctness of our decoder and to derive failure probabilities. As for rank-metric codes over finite fields, we can interpret vectors over \({S}\) as matrices over \({R}\) by the \({R}\)-module isomorphism \({S}\simeq {R}^m\). In particular, an LRPC code can be seen as a subset of \({R}^{m \times n}\).

Definition 3

Let \(\lambda \), \(\mathcal {F}\), and \(\varvec{H}\) be defined as in Definition 2. Let \(f_1,\ldots ,f_\lambda \in {S}\) be a free basis of \(\mathcal {F}\). For \(i=1,\ldots ,n-k\), \(j=1,\ldots ,n\), and \(\ell =1,\ldots ,\lambda \), let \(h_{i,j,\ell } \in {R}\) be the unique elements such that \(H_{i,j} = \sum _{\ell = 1}^{\lambda } h_{i,j,\ell } f_{\ell }\). Define

$$\begin{aligned} \varvec{H}_{\mathrm {ext}} := \begin{bmatrix} h_{1,1,1} &{}\quad h_{1,2,1} &{}\quad \ldots &{} \quad h_{1,n,1} \\ h_{1,1,2} &{} \quad h_{1,2,2} &{} \quad \ldots &{} \quad h_{1,n,2} \\ \vdots &{} \quad \vdots &{} \quad \ddots &{}\quad \vdots \\ h_{2,1,1} &{}\quad h_{2,2,1} &{}\quad \ldots &{}\quad h_{2,n,1} \\ h_{2,1,2} &{}\quad h_{2,2,2} &{}\quad \ldots &{} \quad h_{2,n,2} \\ \vdots &{}\quad \vdots &{} \quad \ddots &{}\quad \vdots \\ \end{bmatrix} \in {R}^{(n-k)\lambda \times n}. \end{aligned}$$
(3)

Then, \(\varvec{H}\) has the

  1. 1.

    unique-decoding property if \(\lambda \ge \tfrac{n}{n-k}\) and \(\mathrm {frk}\left( \varvec{H}_{\mathrm {ext}} \right) = \mathrm {rk}\left( \varvec{H}_{\mathrm {ext}} \right) = n\),

  2. 2.

    maximal-row-span property if every row of the parity-check matrix \(\varvec{H}\) spans the entire space \(\mathcal {F}\),

  3. 3.

    unity property if every entry \(H_{i,j}\) of \(\varvec{H}\) is chosen from the set \(H_{i,j} \in \tilde{\mathcal {F}} := \left\{ \textstyle \sum _{i=1}^{\lambda } \alpha _i f_i \, : \, \alpha _i \in {R}^* \cup \{0\} \right\} \subseteq \mathcal {F}\).

Furthermore, we say that \(\mathcal {F}\) has the base-ring property if \(1 \in \mathcal {F}\).

In the original papers about LRPC codes over finite fields, [1, 10], some of the properties of Definition 3 are used without explicitly stating them.

We will see in Sect. 4.2 that the unique-decoding property together with a property of the error guarantees that erasure decoding always works (i.e., that the full error vector can be recovered from knowing the support and syndrome of an error). This property is also implicitly used in [10]. It is, however, not very restrictive: if the parity-check matrix entries \(H_{i,j}\) are chosen uniformly at random from \(\mathcal {F}\), this property is fulfilled with the probability that a random \(\lambda (n-k) \times n\) matrix has full (free) rank n. This probability is arbitrarily close to 1 for increasing difference of \(\lambda (n-k)\) and n (cf. [20] for the field and Lemma 7 in Sect. 5.2 for the ring case).

We will use the maximal-row-span property to prove a bound on the failure probability of the decoder in Sect. 5. It is a sufficient condition for our bound (in particular Theorem 3 in Sect. 5) to hold. Although not explicitly stated, [1, Proposition 4.3] must also assume a similar or slightly weaker condition in order to hold. It does not hold for arbitrary parity-check matrices as in [1, Definition 4.1] (see the counterexample in Remark 4 in Sect. 5). This is again not a big limitation in general for two reasons: first, the ideal codes in [1, Definition 4.2] appear to automatically have this property, and second, a random parity-check matrix has this property with high probability.

In the case of finite fields, the unity property is no restriction at all since the units of a finite field are all non-zero elements. That is, we have \(\tilde{\mathcal {F}} = \mathcal {F}\). Over rings, we need this additional property as a sufficient condition for one of our failure probability bounds (Theorem 3 in Sect. 5). It is not a severe restriction in general, since

$$\begin{aligned} \frac{|\tilde{\mathcal {F}}|}{|\mathcal {F}|} = \frac{(|{R}^*|+1)^\lambda }{|{R}|^\lambda } = \big (1-p^{-s}+p^{-sr}\big )^\lambda , \end{aligned}$$

which is relatively close to 1 for large \(p^s\) and comparably small \(\lambda \).
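For instance, evaluating this ratio (a quick check of ours) for the parameters of Example 1, i.e., \(p=2\), \(s=4\), \(r=2\), \(\lambda =2\), gives roughly 0.89:

```python
# Density of the unity property, evaluating the displayed ratio; ours.
p, s, r, lam = 2, 4, 2, 2
print(round((1 - p**(-s) + p**(-s * r))**lam, 4))   # 0.8862
```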

Finally, Gaborit et al. [10] also used the base-ring property of \(\mathcal {F}\). In contrast to the other three properties in Definition 3, this property only depends on \(\mathcal {F}\) and not on \(\varvec{H}\). We will also assume this property to derive a bound on the probability of one possible cause of a decoding failure event in Sect. 5.3.

4 Decoding

4.1 The main decoder

Fix \(\lambda \) and \(\mathcal {F}\) as in Definition 2. Let \(f_1,\ldots ,f_\lambda \in {S}\) be a free basis of \(\mathcal {F}\). Note that since the \(f_i\) are linearly independent, each singleton \(\{f_i\}\) is linearly independent, which, by the discussion in Sect. 2, implies that all the \(f_i\) are units in \({S}\). Hence, \(f_i^{-1}\) exists for each i. We will discuss erasure decoding (Line 6) in Sect. 4.2.

[Algorithm 1: the LRPC decoding algorithm (displayed as a figure).]
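Since Algorithm 1 is only available as a figure, the following Python skeleton is our own reconstruction of its flow, based on the description below and on the original LRPC decoder of Gaborit et al. [1, 10]; all module-arithmetic helper routines are abstract placeholders and not an API from the paper.

```python
# Structural sketch of the decoder (ours); `ops` bundles hypothetical helpers
# for Galois-ring and module arithmetic and is NOT an API from the paper.

def lrpc_decode(y, H, f_basis, ops):
    """Decode a received word y = c + e in S^n, given the parity-check matrix H
    and a free basis f_1, ..., f_lambda of the support module F of H."""
    s = ops.matvec(H, y)                                # syndrome s = H y^T = H e^T
    S = ops.span_R(s)                                   # syndrome support <s_1, ..., s_{n-k}>_R
    S_i = [ops.scale(ops.inv(f), S) for f in f_basis]   # S_i = f_i^{-1} * S
    E_candidate = ops.intersect(S_i)                    # candidate error support E'
    e = ops.erasure_decode(E_candidate, s, H)           # recover e from E' and s (Sect. 4.2)
    return ops.subtract(y, e)                           # codeword estimate c = y - e
```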

Algorithm 1 recovers the support \({\mathcal {E}}\) of the error \(\varvec{e}\) if \({\mathcal {E}}' = {\mathcal {E}}\). A necessary (but not sufficient) condition for this to be fulfilled is that we have \({\mathcal {S}}= {\mathcal {E}}\cdot \mathcal {F}\). Furthermore, we will see in Sect. 4.2 that we can uniquely recover the error vector \(\varvec{e}\) from its support \({\mathcal {E}}\) and syndrome \(\varvec{s}\) if the parity-check matrix fulfills the unique-decoding property and we have \(\phi ^{{\mathcal {E}}\cdot \mathcal {F}} = \phi ^{{\mathcal {E}}} \phi ^{\mathcal {F}}\). Hence, decoding works if the following three conditions are fulfilled:

  1. 1.

    \(\phi ^{{\mathcal {E}}\cdot \mathcal {F}} = \phi ^{{\mathcal {E}}} \phi ^{\mathcal {F}}\) (product condition),

  2. 2.

    \({\mathcal {S}}= {\mathcal {E}}\cdot \mathcal {F}\) (syndrome condition),

  3. 3.

    \(\bigcap _{i=1}^{\lambda } {\mathcal {S}}_i = {\mathcal {E}}\) (intersection condition).

We call the case that at least one of the three conditions is not fulfilled a (decoding) failure. We will see in the next section (Sect. 5) that whether an error results in a failure depends solely on the error support \({\mathcal {E}}\). Furthermore, given an error support that is drawn uniformly at random from the modules of a given rank profile \(\phi \), the failure probability can be upper-bounded by a function that depends only on the rank of the module (i.e., \(\phi ^{\mathcal {E}}(1)\)).

In Sect. 6, we will analyze the complexity of Algorithm 1. The proofs in that section also indicate how the algorithm can be implemented in practice.

Remark 2

Note that the success conditions above imply that for an error of rank \(\phi ^{\mathcal {E}}(1) = t\), we have \(\lambda t \le m\) (due to the product condition) as well as \(\lambda \ge \tfrac{n}{n-k}\) (due to the unique-decoding property). Combined, we obtain \(t \le m\tfrac{n-k}{n} = m(1-R)\), where \(R := \tfrac{k}{n}\) is the rate of the LRPC code.

4.2 Erasure decoding

As its name suggests, the unique decoding property of the parity-check matrix is related to unique erasure decoding, i.e., the process of obtaining the full error vector \(\varvec{e}\) after having recovered its support. The next lemma establishes this connection.

Lemma 2

(Unique Erasure Decoding) Let \(\varvec{H}\) be a parity-check matrix that fulfills the unique-decoding property and let \({\mathcal {E}}\) be a free support of rank \(t \le \tfrac{m}{\lambda }\). If \(\phi ^{{\mathcal {E}}\cdot \mathcal {F}} = \phi ^{\mathcal {E}}\phi ^\mathcal {F}\), then, for any syndrome \(\varvec{s}\in {S}^{n-k}\), there is at most one error vector \(\varvec{e}\in {S}^n\) with support \({\mathcal {E}}\) that fulfills \(\varvec{H}\varvec{e}^\top = \varvec{s}^\top \).

Proof

Let \(f_1,\ldots ,f_\lambda \) be a basis of the free module \(\mathcal {F}\). Furthermore, let \(\varepsilon _1,\ldots ,\varepsilon _t\) be an \(\mathfrak {m}\)-shaped basis of \({\mathcal {E}}\). To avoid too complicated sums in the derivation below, we use a slightly different notation than in the definition of an \(\mathfrak {m}\)-shaped basis and write \(\varepsilon _j = g_\mathfrak {m}^{v(\varepsilon _j)} \varepsilon _j^*\) for all \(j=1,\ldots ,t\), where \(\varepsilon ^*_j \in {S}^*\) are units.

Due to \(\phi ^{{\mathcal {E}}\cdot \mathcal {F}} = \phi ^{\mathcal {E}}\phi ^\mathcal {F}\), we have that \(f_i \varepsilon _\kappa \) for \(i=1,\ldots ,\lambda \) and \(\kappa =1,\ldots ,t\) is an \(\mathfrak {m}\)-shaped basis of the product space \({\mathcal {E}}\cdot \mathcal {F}\). Any entry of the parity-check matrix \(\varvec{H}\) has a unique representation \(H_{i,j} = \sum _{\ell = 1}^{\lambda } h_{i,j,\ell } f_{\ell }\) for \(h_{i,j,\ell } \in {R}\). Furthermore, any entry of the error vector \(\varvec{e}= [e_1,\ldots ,e_n]\) can be represented as \(e_j = \sum _{\kappa =1}^{t} e_{j,\kappa } \varepsilon _\kappa \), where the \(e_{j,\kappa } \in {R}\) are unique modulo \(\mathfrak {m}^{r-v(\varepsilon _\kappa )}\).

We want to recover the error vector \(\varvec{e}\) from the syndrome \(\varvec{s}= [s_1,\ldots ,s_{n-k}]^\top \), which are related by definition as follows:

$$\begin{aligned} s_{i}&=\sum _{j=1}^{n}H_{i,j}e_{j} \\&=\sum _{j=1}^{n}\sum _{\ell =1}^{\lambda }h_{i,j,\ell }f_{\ell }\sum _{\kappa =1}^{t}e_{j,\kappa }\varepsilon _{\kappa } \\&=\sum _{j=1}^{n}\sum _{\ell =1}^{\lambda } \underbrace{\sum _{\kappa =1}^{t}h_{i,j,\ell }e_{j,\kappa }}_{=: \, s_{i,\ell ,\kappa }} f_{\ell }\varepsilon _{\kappa } \\&=\sum _{\ell =1}^{\lambda }\sum _{\kappa =1}^{t}s_{i,\ell ,\kappa } f_{\ell }\varepsilon _{\kappa }. \end{aligned}$$

Hence, for any representation \(e_{j,\kappa }\) of the error \(\varvec{e}\), there is a representation \(s_{i,\ell ,\kappa }\) of \(\varvec{s}\). If we know the latter representation, it is easy to obtain the corresponding \(e_{j,\kappa }\) under the assumed conditions: write

$$\begin{aligned} s_{i,\ell ,\kappa } = \sum _{j=1}^{n}h_{i,j,\ell } e_{j,\kappa },\quad \ell =1,\ldots ,\lambda , \, \kappa =1,\ldots ,t, \, i=1,\ldots ,n-k. \end{aligned}$$

We can rewrite this into t independent linear systems of equations of the form

$$\begin{aligned} \underbrace{\begin{bmatrix} s_{1,1,\kappa } \\ s_{1,2,\kappa } \\ \vdots \\ s_{2,1,\kappa } \\ s_{2,2,\kappa } \\ \vdots \end{bmatrix}}_{=: \, \varvec{s}^{(\kappa )}} = \varvec{H}_{\mathrm {ext}} \cdot \underbrace{\begin{bmatrix} e_{1,\kappa } \\ e_{2,\kappa } \\ \vdots \\ e_{n,\kappa } \end{bmatrix}}_{=: \, \varvec{e}^{(\kappa )}} \end{aligned}$$
(4)

for each \(\kappa =1,\ldots ,t\), where \(\varvec{H}_{\mathrm {ext}} \in {R}^{(n-k)\lambda \times n}\) is independent of \(\kappa \) and defined as in (3).

By the unique-decoding property, \(\varvec{H}_{\mathrm {ext}}\) has at least as many rows as columns (i.e., \((n-k)\lambda \ge n\)) and full free rank and rank (both equal to n). Hence, each system in (4) has a unique solution \(\varvec{e}^{(\kappa )}\).

It is left to show that any representation \(s_{i,\ell ,\kappa }\) of \(\varvec{s}\) in the \(\mathfrak {m}\)-shaped basis \(f_i \varepsilon _\kappa \) of \({\mathcal {E}}\cdot \mathcal {F}\) yields the same error vector \(\varvec{e}\). Recall that \(s_{i,\ell ,\kappa }\) is unique modulo \(\mathfrak {m}^{r-v(\varepsilon _\kappa )}\) (note that \(v(f_i \varepsilon _\kappa ) = v(\varepsilon _\kappa )\)). Assume now that we have a different representation, say

$$\begin{aligned} {\varvec{s}'}^{(\kappa )} = \varvec{s}^{(\kappa )} + g_{\mathfrak {m}}^{r-v(\varepsilon _\kappa )} \varvec{\chi }, \end{aligned}$$

where \(\varvec{\chi } \in {R}^{(n-k)\lambda }\). Then, the unique solution \({\varvec{e}'}^{(\kappa )}\) of the linear system \({\varvec{s}'}^{(\kappa )} = \varvec{H}_\mathrm {ext} {\varvec{e}'}^{(\kappa )}\) is of the form

$$\begin{aligned} {\varvec{e}'}^{(\kappa )} = \varvec{e}^{(\kappa )} + g_{\mathfrak {m}}^{r-v(\varepsilon _\kappa )} \varvec{\mu } \end{aligned}$$

for some \(\varvec{\mu } \in {R}^{n}\). Hence, \({\varvec{e}'}^{(\kappa )} \equiv \varvec{e}^{(\kappa )} \mod \mathfrak {m}^{r-v(\varepsilon _\kappa )}\), which means that the two representations \({\varvec{e}'}^{(\kappa )}\) and \(\varvec{e}^{(\kappa )}\) belong to the same error \(\varvec{e}\).

This shows that we can take any representation of the syndrome vector \(\varvec{s}\), solve the system in (4) for \(\varvec{e}^{(\kappa )}\) for \(\kappa =1,\ldots ,t\), and obtain the unique error vector \(\varvec{e}\) corresponding to this syndrome \(\varvec{s}\) and support \({\mathcal {E}}\). \(\square \)
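In an implementation, the erasure-decoding step of this proof boils down to assembling \(\varvec{H}_{\mathrm {ext}}\) once as in (3) and solving the t systems in (4) with it. The sketch below (ours) only captures this bookkeeping; the coefficient array and the linear-system solver over \({R}\) are placeholders.

```python
# Assemble H_ext as in (3) and set up the t systems of (4); our own sketch.
# h[i][j][ell] holds the coefficients of H_{i,j} in the basis f_1, ..., f_lambda,
# and solve_over_R is a placeholder for a linear-system solver over R.

def build_H_ext(h, n_minus_k, n, lam):
    # rows ordered (i=1, ell=1), ..., (i=1, ell=lambda), (i=2, ell=1), ...
    return [[h[i][j][ell] for j in range(n)]
            for i in range(n_minus_k) for ell in range(lam)]

def erasure_decode_coords(H_ext, syndrome_reps, solve_over_R):
    # syndrome_reps[kappa] is s^(kappa) of (4); all t systems share the same H_ext
    return [solve_over_R(H_ext, s_kappa) for s_kappa in syndrome_reps]
```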

5 Failure probability

Consider an error vector \(\varvec{e}\) that is chosen uniformly at random from the set of error vectors whose support is a module of a given rank profile \(\phi \in \mathbb Z[x]/(x^r)\) and rank \(\phi (1) = t\). In this section, we derive a bound on the failure probability of the LRPC decoder over Galois rings for this error model. The resulting bound does not depend on the whole rank profile \(\phi \), but only on the rank t.

This section is the most technical and involved part of the paper. Therefore, we derive the bound in three steps, motivated by the discussion on failure conditions in Sect. 4: In Sect. 5.1, we derive an upper bound on the failure probability of the product condition. Sect. 5.2 presents a bound on the syndrome condition failure probability, conditioned on the event that the product condition is fulfilled. Finally, in Sect. 5.3, we derive a bound on the intersection failure probability, given that the first two conditions are satisfied.

The proof strategy is similar to the analogous derivation for LRPC codes over fields by Gaborit et al. [10]. However, our proof is much more involved for several reasons:

  • we need to take care of the weaker structure of Galois rings and modules over them, e.g., zero divisors and the fact that not all modules have bases and thus module elements may not be uniquely represented in a minimal generating set;

  • we correct a few (rather minor) technical inaccuracies in the original proof; and

  • some prerequisite results that are well known for finite fields are, to the best of our knowledge, not known over Galois rings.

Before analyzing the three conditions, we show the following result, whose implication is that if \(\varvec{e}\) is chosen randomly as described above, then the random variable \({\mathcal {E}}\), the support of the chosen error, is also uniformly distributed on the set of modules with rank profile \(\phi \). Note that the analogous statement for errors over a finite field follows immediately from linear algebra, but here, we need a bit more work.

Lemma 3

Let \(\phi (x) \in \mathbb Z[x]/(x^r)\) with nonnegative coefficients and let \({\mathcal {E}}\) be an \({R}\)-submodule of \({S}\) with rank profile \(\phi (x)\). Then, the number of vectors \(\varvec{e}\in {S}^n\) whose support is equal to \({\mathcal {E}}\) only depends on \(\phi (x)\).

Proof

Let us write \(\phi (x)=\sum _{i=0}^{r-1}n_ix^i\) with \(N:=\phi (1)=\sum _{i=0}^{r-1}n_i=\mathrm {rk}_{{R}}({\mathcal {E}})\), and let \(\varGamma \) be an \(\mathfrak {m}\)-shaped basis for \({\mathcal {E}}\). Then, the vector \(\varvec{e}\) whose first N entries are the elements of \(\varGamma \) and whose last \(n-N\) entries are 0 is a vector whose support is equal to \({\mathcal {E}}\). Moreover, all the vectors in \({S}^n\) whose support is equal to \({\mathcal {E}}\) are of the form \((\varvec{A}\varvec{e}^\top )^\top \), for \(\varvec{A}\in {{\,\mathrm{GL}\,}}(n,{R})\). Let us fix a basis of \({S}\) so that we can identify \({S}\) with \({R}^m\). In this representation, \(\varvec{e}^\top \) corresponds to a matrix \(\varvec{D}\varvec{T}\), where

$$\begin{aligned} \varvec{D}=\begin{bmatrix} \varvec{I}_{n_0} &{} &{} &{} &{}\\ &{} g_\mathfrak {m}\varvec{I}_{n_1} &{} &{} &{}\\ &{} &{} \ddots &{} &{}\\ &{} &{} &{} g_{\mathfrak {m}}^{r-1}\varvec{I}_{n_{r-1}}&{}\\ &{} &{} &{} &{} \varvec{0} \end{bmatrix}\in {R}^{n\times n} \end{aligned}$$

and \(\varvec{T}\in {R}^{n\times m}\) has linearly independent rows over \({R}\). Then, the vectors in \({S}^n\) whose support is equal to \({\mathcal {E}}\) correspond to matrices \(\varvec{A}\varvec{D}\varvec{T}\) for \(\varvec{A}\in {{\,\mathrm{GL}\,}}(n,{R})\), and their number is equal to the cardinality of the set

$$\begin{aligned} \mathrm {Vec}({\mathcal {E}},n):=\{\varvec{A}\varvec{D}\varvec{T}\mid \varvec{A}\in {{\,\mathrm{GL}\,}}(n,{R})\}. \end{aligned}$$

The group \({{\,\mathrm{GL}\,}}(n,{R})\) acts on \(\mathrm {Vec}({\mathcal {E}},n)\) from the left and, by definition, this action is transitive. Hence, by the orbit-stabilizer theorem, we have

$$\begin{aligned} |\mathrm {Vec}({\mathcal {E}},n)|=\frac{|{{\,\mathrm{GL}\,}}(n,{R})|}{|\mathrm {Stab}(\varvec{D}\varvec{T})|}, \end{aligned}$$

where \(\mathrm {Stab}(\varvec{D}\varvec{T})=\mathrm {Stab}_{{{\,\mathrm{GL}\,}}(n,{R})}(\varvec{D}\varvec{T})=\{\varvec{A}\in {{\,\mathrm{GL}\,}}(n,{R}) \mid \varvec{A}\varvec{D}\varvec{T}=\varvec{D}\varvec{T}\}\). Hence, we need to count how many matrices \(\varvec{A}\in {{\,\mathrm{GL}\,}}(n,{R})\) satisfy

$$\begin{aligned} (\varvec{A}-\varvec{I}_n)\varvec{D}\varvec{T}=0. \end{aligned}$$

Let us set \(\varvec{S}:=\varvec{A}-\varvec{I}_n\) and divide it into \(r+1\) blocks \(\varvec{S}_i\in {R}^{n\times n_i}\) for \(i\in \{0,\ldots ,r-1\}\) and \(\varvec{S}_r\in {R}^{n\times (n-N)}\). Moreover, do the same with \(\varvec{T}\), dividing it into \(r+1\) blocks \(\varvec{T}_i\in {R}^{n_i\times m}\) for \(i\in \{0,\ldots ,r-1\}\) and \(\varvec{T}_r\in {R}^{(n-N)\times m}\). Therefore, we get

$$\begin{aligned} \begin{bmatrix}\varvec{S}_0&\varvec{S}_1&\cdots&\varvec{S}_{r-1}&\varvec{S}_r\end{bmatrix}\begin{bmatrix}\varvec{T}_0\\ g_{\mathfrak {m}} \varvec{T}_1 \\ \vdots \\ g_{\mathfrak {m}}^{r-1}\varvec{T}_{r-1}\\ \varvec{0}\end{bmatrix}=\varvec{0}. \end{aligned}$$

Since the rows of \(\varvec{T}\) are linearly independent over \({R}\), this is true if and only if \(\varvec{S}_i\in \mathfrak {m}^{r-i}{R}^{n\times n_i}\). This condition clearly depends only on the values \(n_i\), and hence on \(\phi (x)\). \(\square \)

5.1 Failure of product condition

The product condition means that the product space of the randomly chosen support \({\mathcal {E}}\) and the fixed free module \(\mathcal {F}\) (in which the parity-check matrix coefficients are contained) has maximal rank profile \(\phi ^{{\mathcal {E}}\cdot \mathcal {F}} = \phi ^{{\mathcal {E}}} \phi ^{\mathcal {F}}\). If \({\mathcal {E}}\) were a free module, the condition would translate to \({\mathcal {E}}\cdot \mathcal {F}\) being a free module of rank \(\lambda t\). In fact, our proof strategy reduces the question whether \(\phi ^{{\mathcal {E}}\cdot \mathcal {F}} = \phi ^{{\mathcal {E}}} \phi ^{\mathcal {F}}\) to the question whether a free module of rank t, which is related to \({\mathcal {E}}\), results in a product space with the free module \(\mathcal {F}\) of maximal rank profile. Hence, we first study this question for products of free modules. This part of the bound derivation is similar to the case of LRPC codes over finite fields (cf. [1]), but the proofs and counting arguments are more involved since we need to take care of non-units in the ring.

Lemma 4

Let \(\alpha ',\beta \) be non-negative integers with \((\alpha '+1)\beta < m\). Further, let \({\mathcal {A}}',{\mathcal {B}}\) be free submodules of \({S}\) of free rank \(\alpha '\) and \(\beta \), respectively, such that also \({\mathcal {A}}'\cdot {\mathcal {B}}\) is a free submodule of \({S}\) of free rank \(\alpha '\beta \). For an element \(a \in {S}^*\), chosen uniformly at random, let \({\mathcal {A}}:= {\mathcal {A}}' + \langle a \rangle \). Then, we have

$$\begin{aligned} \Pr \big ( \mathrm {frk}_{{R}}({\mathcal {A}}\cdot {\mathcal {B}}) < \alpha '\beta +\beta \big ) \le \left( 1-p^{-s\beta }\right) \sum _{j = 0}^{r-1} p^{s(r-j)[(\alpha '+1) \beta -m]}. \end{aligned}$$

Proof

First note that since a is a unit in \({S}\), the mapping \(\varphi _a \, : \, {\mathcal {B}}\rightarrow {S}, ~ b \mapsto ab\) is injective. This means that \(a{\mathcal {B}}\) is a free module with \(\mathrm {frk}_{{R}}(a{\mathcal {B}})=\mathrm {frk}_{{R}}({\mathcal {B}})=\beta \). Let \(b_1,\ldots ,b_\beta \) be a basis of \({\mathcal {B}}\). Then, \(a b_1, \ldots , a b_\beta \) is a basis of \(a{\mathcal {B}}\). Therefore, \({\mathcal {A}}\cdot {\mathcal {B}}\) is a free module with \(\mathrm {frk}_{{R}}({\mathcal {A}}\cdot {\mathcal {B}}) = \alpha '\beta +\beta \) if and only if \(a{\mathcal {B}}\cap {\mathcal {A}}'\cdot {\mathcal {B}}= \{0\}\). Hence,

$$\begin{aligned} \Pr \big ( \mathrm {frk}_{{R}}({\mathcal {A}}\cdot {\mathcal {B}}) < \alpha '\beta +\beta \big ) \le \Pr \left( \exists b \in {\mathcal {B}}\setminus \{0\} : ab \in {\mathcal {A}}'\cdot {\mathcal {B}}\right) . \end{aligned}$$
(5)

Let c be chosen uniformly at random from \({S}\). Recall that a is chosen uniformly at random from \({S}^*\). Then,

$$\begin{aligned} \Pr \! \left( \exists b \in {\mathcal {B}}\setminus \{0\} : ab \in {\mathcal {A}}' \cdot {\mathcal {B}}\right) \le \Pr \! \left( \exists b \in {\mathcal {B}}\setminus \{0\} : cb \in {\mathcal {A}}'\cdot {\mathcal {B}}\right) . \end{aligned}$$
(6)

This holds since if c is chosen to be a non-unit in \({S}\), then the statement “\(\exists \, b \in {\mathcal {B}}\setminus \{0\} \, : \, cb \in {\mathcal {A}}'\cdot {\mathcal {B}}\)” is always true. To see this, write \(c = g_\mathfrak {m}c'\) for some \(c' \in {S}\). Since \(\beta >0\), there is a unit \(b^* \in {\mathcal {B}}\cap {S}^*\). Choose \(b := g_\mathfrak {m}^{r-1}b^* \in {\mathcal {B}}\setminus \{0\}\). Hence, \(c b = g_\mathfrak {m}c' g_\mathfrak {m}^{r-1}b^* = 0\), and b is from \({\mathcal {B}}\) and non-zero.

Now we bound the right-hand side of (6) as follows

$$\begin{aligned} \Pr \left( \exists b \in {\mathcal {B}}\setminus \{0\} : cb \in {\mathcal {A}}'\cdot {\mathcal {B}}\right)&\le \textstyle \sum _{b \in {\mathcal {B}}\setminus \{0\}} \Pr \left( cb \in {\mathcal {A}}'\cdot {\mathcal {B}}\right) \\&= \sum _{j = 0}^{r-1} \sum _{b \in {\mathcal {B}}: v(b) = j} \Pr \left( cb^* g_\mathfrak {m}^{j} \in {\mathcal {A}}'\cdot {\mathcal {B}}\right) . \end{aligned}$$

Since \(b^*\) is a unit in \({S}\), for uniformly drawn c, \(c b^*\) is also uniformly distributed on \({S}\). Hence, \(cb^* g_\mathfrak {m}^{j}\) is uniformly distributed on the ideal \(\mathfrak {M}^{j}\) of \({S}\) (the mapping \({S}\rightarrow \mathfrak {M}^j\), \(\chi \mapsto \chi g_\mathfrak {m}^j\) is surjective and maps equally many elements to the same image) and we have \(\Pr \left( cb^* g_\mathfrak {m}^{j} \in {\mathcal {A}}'\cdot {\mathcal {B}}\right) = \frac{\left| \mathfrak {M}^{j} \cap {\mathcal {A}}'\cdot {\mathcal {B}}\right| }{|\mathfrak {M}^{j}|}\). Let \(v_1,\ldots ,v_{\alpha '\beta }\) be a basis of \({\mathcal {A}}'\cdot {\mathcal {B}}\). Then, by (2), an element \(c \in {\mathcal {A}}'\cdot {\mathcal {B}}\) is in \(\mathfrak {M}^{j}\) if and only if it can be written as \(c = \sum _{i} \mu _i v_i\), where \(\mu _i \in \mathfrak {m}^j\) for all i.

Hence, \(\left| \mathfrak {M}^{j} \cap {\mathcal {A}}'\cdot {\mathcal {B}}\right| = |\mathfrak {m}^{j}|^{\alpha ' \beta }\). Moreover, we have \(|\mathfrak {M}^{j}| = |\mathfrak {m}^{j}|^m\), where \(|\mathfrak {m}^{j}| = p^{s(r-j)}\). Overall, we get

$$\begin{aligned} \Pr \left( \exists \, b \in {\mathcal {B}}\setminus \{0\} \, : \, cb \in {\mathcal {A}}'\cdot {\mathcal {B}}\right)&\le \sum _{j = 0}^{r-1} \sum _{b \in {\mathcal {B}}\, : \, v(b) = j} p^{s(r-j)(\alpha ' \beta -m)} \nonumber \\&= \sum _{j = 0}^{r-1} \big |\{b \in {\mathcal {B}}\, : \, v(b) = j\}\big | p^{s(r-j)(\alpha ' \beta -m)}. \end{aligned}$$
(7)

Furthermore, we have (note that \(\mathfrak {M}^{j+1} \subseteq \mathfrak {M}^{j}\))

$$\begin{aligned} \big |\{b \in {\mathcal {B}}\, : \, v(b) = j\}\big |&= \Big |\big (\mathfrak {M}^{j} \setminus \mathfrak {M}^{j+1}\big ) \cap {\mathcal {B}}\Big | = \big |\mathfrak {M}^{j} \cap {\mathcal {B}}\big | - \big |\mathfrak {M}^{j+1} \cap {\mathcal {B}}\big | \nonumber \\&= p^{s(r-j)\beta }-p^{s(r-j-1)\beta }. \end{aligned}$$
(8)

Combining and simplifying (5), (6), (7), and (8) we obtain the desired result. \(\square \)

Lemma 5

Let \({\mathcal {B}}\) be a fixed free submodule of \({S}\) with \(\mathrm {frk}_{{R}}({\mathcal {B}})=\beta \). For a positive integer \(\alpha \) with \(\alpha \beta <m\), let \({\mathcal {A}}\) be drawn uniformly at random from the set of free submodules of \({S}\) of free rank \(\alpha \). Then,

$$\begin{aligned} \Pr \left( \mathrm {frk}_{{R}}({\mathcal {A}}\cdot {\mathcal {B}})< \alpha \beta \right) \le \left( 1-p^{-s\beta }\right) \sum _{i=1}^{\alpha } \sum _{j = 0}^{r-1} p^{s(r-j)(i \beta -m)} \le 2 \alpha p^{s(\alpha \beta -m)} \end{aligned}$$

Proof

Drawing a free submodule \({\mathcal {A}}\subseteq {S}\) of rank \(\alpha \) uniformly at random is equivalent to drawing iteratively \({\mathcal {A}}_0 := \{0\}, ~ {\mathcal {A}}_i := {\mathcal {A}}_{i-1} + \langle a_i \rangle \) for \(i=1,\ldots ,\alpha \), where for each iteration i, the element \(a_i \in {S}\) is chosen uniformly at random from the set of elements that are linearly independent of \({\mathcal {A}}_{i-1}\). The equivalence of the two random experiments is clear since the possible choices of the sequence \(a_1,\ldots ,a_\alpha \) give exactly all bases of free \({R}\)-submodules of \({S}\) of rank \(\alpha \). Furthermore, all sequences are equally likely and each resulting submodule has the same number of bases that generate it (which equals the number of invertible \(\alpha \times \alpha \) matrices over \({R}\)). We have the following recursive formula for any \(i=1,\ldots ,\alpha \):

$$\begin{aligned}&\Pr \big ( \mathrm {frk}_{{R}}({\mathcal {A}}_i \cdot {\mathcal {B}})< i \beta \big ) \\&\quad = \Pr \big ( \mathrm {frk}_{{R}}({\mathcal {A}}_i \cdot {\mathcal {B}})< i \beta \wedge \mathrm {frk}_{{R}}({\mathcal {A}}_{i-1}\cdot {\mathcal {B}})=(i-1)\beta \big ) \\&\quad \quad + \underbrace{\Pr \big ( \mathrm {frk}_{{R}}({\mathcal {A}}_i \cdot {\mathcal {B}})< i \beta \wedge \mathrm {frk}_{{R}}({\mathcal {A}}_{i-1}\cdot {\mathcal {B}})<(i-1)\beta \big )}_{\mathrm {frk}_{{R}}({\mathcal {A}}_{i-1}\cdot {\mathcal {B}})<(i-1)\beta \text { implies }\mathrm {frk}_{{R}}({\mathcal {A}}_i \cdot {\mathcal {B}})< i \beta } \\&\quad = \Pr \big ( \mathrm {frk}_{{R}}({\mathcal {A}}_i \cdot {\mathcal {B}})< i \beta \mid \mathrm {frk}_{{R}}({\mathcal {A}}_{i-1}\cdot {\mathcal {B}})=(i-1)\beta \big ) \\&\quad \quad \cdot \underbrace{\Pr (\mathrm {frk}_{{R}}({\mathcal {A}}_{i-1}\cdot {\mathcal {B}})=(i-1)\beta )}_{\le 1} + \Pr \big (\mathrm {frk}_{{R}}({\mathcal {A}}_{i-1}\cdot {\mathcal {B}})<(i-1)\beta \big ) \\&\quad \overset{(*)}{\le } \left( 1-p^{-s\beta }\right) \sum _{j = 0}^{r-1} p^{s(r-j)(i \beta -m)} + \Pr \big (\mathrm {frk}_{{R}}({\mathcal {A}}_{i-1}\cdot {\mathcal {B}})<(i-1)\beta \big ), \end{aligned}$$

where (\(*\)) follows from Lemma 4 by the following additional argument:

$$\begin{aligned}&\Pr \big ( \mathrm {frk}_{{R}}({\mathcal {A}}_i \cdot {\mathcal {B}})< i \beta \mid \mathrm {frk}_{{R}}({\mathcal {A}}_{i-1}\cdot {\mathcal {B}})=(i-1)\beta \, \wedge \, a_i \text { linearly independent and}\\&\quad \quad \text {its span trivially intersects with }{\mathcal {A}}_{i-1}\big ) \\&\quad \le \Pr \big ( \mathrm {frk}_{{R}}({\mathcal {A}}_i \cdot {\mathcal {B}})< i \beta \mid \mathrm {frk}_{{R}}({\mathcal {A}}_{i-1}\cdot {\mathcal {B}})=(i-1)\beta \, \wedge \, a_i \text { uniformly from } {S}^* \big ) \\&\quad \le \left( 1-p^{-s\beta }\right) \sum _{j = 0}^{r-1} p^{s(r-j)(i \beta -m)}, \end{aligned}$$

where the last inequality is exactly the statement of Lemma 4. By \(\Pr \big (\mathrm {frk}_{{R}}({\mathcal {A}}_{0}\cdot {\mathcal {B}})<0\big ) = 0\), we get

$$\begin{aligned} \Pr \left( \mathrm {frk}_{{R}}({\mathcal {A}}\cdot {\mathcal {B}})< \alpha \beta \right)&= \Pr \big ( \mathrm {frk}_{{R}}({\mathcal {A}}_\alpha \cdot {\mathcal {B}})< \alpha \beta \big ) \\&\le \left( 1-p^{-s\beta }\right) \sum _{i=1}^{\alpha } \sum _{j = 0}^{r-1} p^{s(r-j)(i \beta -m)} \\&\le \alpha \underbrace{\left( 1-p^{-s\beta }\right) }_{\le 1} p^{-rs(m-\alpha \beta )} \underbrace{\sum _{j = 0}^{r-1} p^{js(m-\alpha \beta )}}_{\le 2 p^{(r-1)s(m-\alpha \beta )}} \\&\le 2 \alpha p^{s(\alpha \beta -m)}. \end{aligned}$$

This proves the claim. \(\square \)
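For intuition on how tight the simplified bound is, the following small script (the function name lemma5_bounds and the chosen parameters are ours, purely for illustration) evaluates the exact double sum and the simplified bound \(2\alpha p^{s(\alpha \beta -m)}\) from Lemma 5:

```python
# Evaluate the two bounds of Lemma 5 for illustrative parameters p, s, r, m, alpha, beta.
def lemma5_bounds(p, s, r, m, alpha, beta):
    exact = (1 - p**(-s * beta)) * sum(
        p**(s * (r - j) * (i * beta - m))
        for i in range(1, alpha + 1)
        for j in range(r)
    )
    simplified = 2 * alpha * p**(s * (alpha * beta - m))
    return exact, simplified

# Example: p=2, s=1, r=2, m=21, alpha=3, beta=2 (illustrative parameters only)
print(lemma5_bounds(2, 1, 2, 21, 3, 2))
```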

Recall that the error support \({\mathcal {E}}\) is not necessarily a free module. In the following sequence of statements, we therefore answer the question of how the results of Lemmas 4 and 5 can be used to derive a bound on the product condition failure probability. To achieve this, we study free modules related to modules of arbitrary rank profile. Note that this part of the proof differs significantly from the case of LRPC codes over finite fields, where all modules are vector spaces and thus free.

For a module \({\mathcal {M}}\subseteq {S}\) with \(\mathfrak {m}\)-shaped basis \(\varGamma \), define \(\mathcal {F}(\varGamma ) \subseteq {S}\) to be the free module that is obtained from \({\mathcal {M}}\) as follows: Let us write \(\varGamma =\{g_\mathfrak {m}^ia_{i,\ell _i} \mid 0\le i \le r-1, 1 \le \ell _i \le \phi ^{{\mathcal {M}}}_i \}\), where the elements \(a_{i,\ell _i}\) are all reduced modulo \(\mathfrak {M}^{r-i}\), that is, the Teichmüller representation of \(a_{i,\ell _i}\) is of the form

$$\begin{aligned} a_{i,\ell _i}=\sum _{j=0}^{r-i-1} g_\mathfrak {m}^jz_j, \quad z_j\in T_{tm}. \end{aligned}$$

This is clearly possible since if we add to \(a_{i,\ell _i}\) an element \(y\in \mathfrak {M}^{r-i}=(g_\mathfrak {m}^{r-i})\), then \(g_\mathfrak {m}^i(a_{i,\ell _i}+y)=g_\mathfrak {m}^ia_{i,\ell _i}\). At this point, we define \(F(\varGamma ) := \{a_{i,\ell _i} \mid 0\le i \le r-1, 1 \le \ell _i \le \phi ^{{\mathcal {M}}}_i\}\), and \(\mathcal {F}(\varGamma ):=\langle F(\varGamma )\rangle _{{R}}\). The fact that \(\mathcal {F}(\varGamma )\) is free directly follows from considering its Smith Normal Form, which tells us that in the matrix representation it is spanned by (some of) the rows of an invertible matrix in \({{\,\mathrm{GL}\,}}(m,{R})\). In particular, we have \(\mathrm {frk}_{{R}}(\mathcal {F}(\varGamma ))={{\,\mathrm{rk}\,}}_{{R}}({\mathcal {M}})\).

Example 7

Let \(p=2\), \(s=1\), \(r=3\) as in Example 2, \(h(z) = z^3+z+1\), and let \({\mathcal {M}}\) be a module with \(\mathfrak {m}\)-shaped basis \(\varGamma = \{1,2z^2+2z,4z^2+2z+2\}\). Then, the Smith normal form of \({\mathcal {M}}\) has the diagonal matrix

$$\begin{aligned} \begin{bmatrix} 1&{}0&{}0\\ 0&{}2&{}0\\ 0&{}0&{}2 \end{bmatrix} \end{aligned}$$

and \(\phi ^{{\mathcal {M}}}(x) = 2x+1\). Using the notation above, we observe \(a_{0,1}=1\), \(a_{1,1}=z^2+z\), \(a_{1,2} = z^3+2z^2\) and \(\mathcal {F}(\varGamma ) = \langle \{1,z^2+z,z^3+2z^2\} \rangle _{{R}}\).
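To make the Smith normal form in this example explicit, assume that elements of \({S}\) are written as coefficient vectors with respect to the \({R}\)-basis \(1, z, z^2\) (this choice of representation is only an assumption for illustration). Then \(\varGamma \) corresponds to the rows of the matrix

$$\begin{aligned} \begin{bmatrix} 1 &{} 0 &{} 0\\ 0 &{} 2 &{} 2\\ 2 &{} 2 &{} 4 \end{bmatrix} \in \mathbb {Z}_8^{3 \times 3}, \end{aligned}$$

and the elementary operations \(R_3 \leftarrow R_3 - 2R_1\), \(R_3 \leftarrow R_3 - R_2\), and \(C_3 \leftarrow C_3 - C_2\) over \(\mathbb {Z}_8\) transform this matrix into the diagonal matrix \(\mathrm {diag}(1,2,2)\) stated above.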

At this point, for two different \(\mathfrak {m}\)-shaped bases \(\varGamma , \Lambda \) of \({\mathcal {M}}\), one could ask whether \(\mathcal {F}(\varGamma )= \mathcal {F}(\Lambda )\). The answer is affirmative, and it can be deduced from the following result.

Proposition 2

Let \(n_0,\ldots ,n_{r-1}\in \mathbb {N}\) be nonnegative integers, let \(N:=n_0+\cdots +n_{r-1}\) and let \(\varvec{D}\in {R}^{N\times N}\) be a diagonal matrix given by

$$\begin{aligned} \varvec{D}:=\begin{bmatrix} \varvec{I}_{n_0} &{} &{} &{} \\ &{} g_\mathfrak {m}\varvec{I}_{n_1} &{} &{} \\ &{} &{} \ddots &{} \\ &{} &{} &{} g_{\mathfrak {m}}^{r-1}\varvec{I}_{n_{r-1}} \end{bmatrix}. \end{aligned}$$

Moreover, let \(\varvec{T}_1,\varvec{T}_2 \in {R}^{N\times m}\) be such that the rows of \(\varvec{T}_i\) are \({R}\)-linearly independent for each \(i\in \{1,2\}\). Then, the rowspaces of \(\varvec{D}\varvec{T}_1\) and \(\varvec{D}\varvec{T}_2\) coincide if and only if for every \(i,j \in \{0,\ldots ,r-1\}\) there exist \(\varvec{Y}_{i,j}\in {R}^{n_i\times n_j}\) with \(\varvec{Y}_{i,i}\in {{\,\mathrm{GL}\,}}(n_i,{R})\) and \(\varvec{Z}_i\in {R}^{n_i\times m}\) such that

$$\begin{aligned} \varvec{T}_2=\varvec{Y}\varvec{T}_1+\varvec{Z}, \end{aligned}$$

where

$$\begin{aligned} \varvec{Y}= \begin{bmatrix} \varvec{Y}_{0,0} &{} g_\mathfrak {m}\varvec{Y}_{0,1} &{} g_\mathfrak {m}^2 \varvec{Y}_{0,2} &{} \cdots &{} g_\mathfrak {m}^{r-1} \varvec{Y}_{0,r-1} \\ \varvec{Y}_{1,0} &{} \varvec{Y}_{1,1} &{} g_\mathfrak {m}\varvec{Y}_{1,2} &{} \cdots &{} g_\mathfrak {m}^{r-2} \varvec{Y}_{1,r-1} \\ \varvec{Y}_{2,0} &{} \varvec{Y}_{2,1} &{} \varvec{Y}_{2,2} &{} \cdots &{} g_\mathfrak {m}^{r-3} \varvec{Y}_{2,r-1} \\ \vdots &{} \vdots &{} \vdots &{} &{}\vdots \\ \varvec{Y}_{r-1,0} &{} \varvec{Y}_{r-1,1} &{} \varvec{Y}_{r-1,2} &{} \cdots &{} \varvec{Y}_{r-1,r-1} \\ \end{bmatrix}, \quad \varvec{Z}=\begin{bmatrix} \varvec{0} \\ g_\mathfrak {m}^{r-1}\varvec{Z}_1 \\ g_\mathfrak {m}^{r-2}\varvec{Z}_2 \\ \vdots \\ g_\mathfrak {m}\varvec{Z}_{r-1}\end{bmatrix}. \end{aligned}$$

Proof

The rowspaces of \(\varvec{D}\varvec{T}_1\) and \(\varvec{D}\varvec{T}_2\) coincide if and only if there exists a matrix \(\varvec{X}\in {{\,\mathrm{GL}\,}}(N,{R})\) such that \(\varvec{X}\varvec{D}\varvec{T}_1=\varvec{D}\varvec{T}_2\). Divide \(\varvec{T}_\ell \) into r blocks \(\varvec{T}_{\ell ,i}\in {R}^{n_i \times m}\) for \(i\in \{0,\ldots , r-1\}\) and divide \(\varvec{X}\) into \(r\times r\) blocks \(\varvec{X}_{i,j}\in {R}^{n_i\times n_j}\) for \(i,j \in \{0,\ldots ,r-1\}\). Hence, from \(\varvec{X}\varvec{D}\varvec{T}_1=\varvec{D}\varvec{T}_2\) we get

$$\begin{aligned} \sum _{j=0}^{r-1} \varvec{X}_{i,j}g_\mathfrak {m}^j\varvec{T}_{1,j}=g_\mathfrak {m}^i\varvec{T}_{2,i}. \end{aligned}$$
(9)

Since the rows of \(\varvec{T}_{1}\) are \({R}\)-linearly independent, (9) implies that \(g_\mathfrak {m}^j\varvec{X}_{i,j} \in g_\mathfrak {m}^i{R}^{n_i\times n_j}\). This shows that

$$\begin{aligned} \varvec{X}= \begin{bmatrix} \varvec{Y}_{0,0} &{} \varvec{Y}_{0,1} &{} \varvec{Y}_{0,2} &{} \cdots &{} \varvec{Y}_{0,r-1} \\ g_\mathfrak {m}\varvec{Y}_{1,0} &{} \varvec{Y}_{1,1} &{} \varvec{Y}_{1,2} &{} \cdots &{} \varvec{Y}_{1,r-1} \\ g_\mathfrak {m}^2 \varvec{Y}_{2,0} &{} g_\mathfrak {m}\varvec{Y}_{2,1} &{} \varvec{Y}_{2,2} &{} \cdots &{} \varvec{Y}_{2,r-1} \\ \vdots &{} \vdots &{} \vdots &{} &{}\vdots \\ g_\mathfrak {m}^{r-1} \varvec{Y}_{r-1,0} &{} g_\mathfrak {m}^{r-2} \varvec{Y}_{r-1,1} &{} g_\mathfrak {m}^{r-3} \varvec{Y}_{r-1,2} &{} \cdots &{} \varvec{Y}_{r-1,r-1} \\ \end{bmatrix}, \end{aligned}$$

for some \(\varvec{Y}_{i,j}\in {R}^{n_i\times n_j}\). Observe now that \(\varvec{X}=\varvec{U}+g_\mathfrak {m}\varvec{L}\), where

$$\begin{aligned} \varvec{U}&= \begin{bmatrix} \varvec{Y}_{0,0} &{} \varvec{Y}_{0,1} &{} \varvec{Y}_{0,2} &{} \cdots &{} \varvec{Y}_{0,r-1} \\ \varvec{0} &{} \varvec{Y}_{1,1} &{} \varvec{Y}_{1,2} &{} \cdots &{} \varvec{Y}_{1,r-1} \\ \varvec{0} &{} \varvec{0} &{} \varvec{Y}_{2,2} &{} \cdots &{} \varvec{Y}_{2,r-1} \\ \vdots &{} \vdots &{} \vdots &{} &{}\vdots \\ \varvec{0} &{} \varvec{0} &{} \varvec{0} &{} \cdots &{} \varvec{Y}_{r-1,r-1} \\ \end{bmatrix}, \\ \varvec{L}&=\begin{bmatrix} \varvec{0} &{} \varvec{0} &{} \varvec{0} &{} \cdots &{} \varvec{0} \\ \varvec{Y}_{1,0} &{} \varvec{0} &{} \varvec{0} &{} \cdots &{} \varvec{0} \\ g_\mathfrak {m}\varvec{Y}_{2,0} &{} \varvec{Y}_{2,1} &{} \varvec{0} &{} \cdots &{} \varvec{0} \\ \vdots &{} \vdots &{} \vdots &{} &{}\vdots \\ g_\mathfrak {m}^{r-2} \varvec{Y}_{r-1,0} &{} g_\mathfrak {m}^{r-3} \varvec{Y}_{r-1,1} &{} g_\mathfrak {m}^{r-4} \varvec{Y}_{r-1,2} &{} \cdots &{} \varvec{0} \\ \end{bmatrix}. \end{aligned}$$

Since \(\varvec{X}\) is invertible and \(g_\mathfrak {m}\varvec{L}\) is nilpotent, \(\varvec{U}\) is also invertible and hence \(\varvec{Y}_{i,i}\in {{\,\mathrm{GL}\,}}(n_i,{R})\), for every \(i\in \{0,\ldots ,r-1\}\). At this point, observe that \( \varvec{X}\varvec{D}= \varvec{D}\varvec{Y}\), from which we deduce

$$\begin{aligned} \varvec{D}(\varvec{T}_2-\varvec{Y}\varvec{T}_1)=\varvec{0}. \end{aligned}$$

This implies that the ith block of \(\varvec{T}_2-\varvec{Y}\varvec{T}_1 \in {{\,\mathrm{Ann}\,}}(g_\mathfrak {m}^i){R}^{n_i\times m}=g_\mathfrak {m}^{r-i}{R}^{n_i\times m}\) and we conclude. \(\square \)

Let \({\mathcal {M}}\) be an \({R}\)-submodule of \({S}\). Proposition 2 implies that if we restrict to an \(\mathfrak {m}\)-shaped basis \(\varGamma =\left\{ g_\mathfrak {m}^ia_{i,j_i} \mid 0\le i \le r-1, 1 \le j_i \le \phi ^{{\mathcal {M}}}_i\right\} \) such that the elements \(a_{i,j_i}\) have Teichmüller representation

$$\begin{aligned} a_{i,j_i}=\sum _{\ell =0}^{r-i-1}g_\mathfrak {m}^\ell z_\ell , \quad z_\ell \in T_{tm}, \end{aligned}$$
(10)

then the module \(\mathcal {F}(\varGamma )\) is well-defined and does not depend on the choice of \(\varGamma \).

Definition 4

We define \(\mathcal {F}({\mathcal {M}})\) to be the space \(\mathcal {F}(\varGamma )\), where \(\varGamma =\left\{ g_\mathfrak {m}^ia_{i,j_i} \mid 0\le i \le \right. \)\(\left. r-1, 1 \le j_i \le \phi ^{{\mathcal {M}}}_i\right\} \) is any \(\mathfrak {m}\)-shaped basis such that the elements \(a_{i,j_i}\) have Teichmüller representation as in (10).

The following two corollaries follow from observations in Proposition 2. We will use them to show that for certain uniformly chosen modules \({\mathcal {M}}\), the corresponding free modules \(\mathcal {F}({\mathcal {M}})\) are uniformly chosen from the set of free modules of rank equal to the rank of \({\mathcal {M}}\). The proofs can be found in Appendix A.

Now, for a given \({R}\)-submodule \({\mathcal {M}}\) of \({S}\), we consider all the free modules that come from an \(\mathfrak {m}\)-shaped basis for \({\mathcal {M}}\). More specifically, we set

$$\begin{aligned} \mathrm {Free}({\mathcal {M}}):=\Big \{ {\mathcal {A}}\mid&{\mathcal {A}} \text{ is } \text{ free } \text{ with } \mathrm {frk}_{{R}}({\mathcal {A}})=\mathrm {rk}_{{R}}({\mathcal {M}}) \text{ and } \exists \{ a_{i,\ell _i}\} \text{ basis } \text{ of } {\mathcal {A}}\\&\text{ such } \text{ that } \{ g_\mathfrak {m}^ia_{i,\ell _i}\} \text{ is } \text{ a } \mathfrak {m}\text{-shaped } \text{ basis } \text{ for } \mathcal M \Big \}. \end{aligned}$$

In fact, even though Definition 4 associates with the \({R}\)-module \({\mathcal {M}}\) a unique free module \(\mathcal {F}({\mathcal {M}})\), in general more than one free module \({\mathcal {A}}\) belongs to \(\mathrm {Free}({\mathcal {M}})\). The exact number of such free modules is given in the following corollary.

Corollary 1

Let \({\mathcal {M}}\) be an \({R}\)-submodule of \({S}\) with rank profile \(\phi ^{{\mathcal {M}}}(x)\) and rank \(N := {{\,\mathrm{rk}\,}}_{{R}}({\mathcal {M}})\). Then

$$\begin{aligned} |\mathrm {Free}({\mathcal {M}})|=p^{s(m-N)\sum _{i=1}^{r-1}i \phi ^{{\mathcal {M}}}_i }. \end{aligned}$$

In particular, \(|\mathrm {Free}({\mathcal {M}})|\) only depends on \(\phi ^{{\mathcal {M}}}(x)\).

Proof

See Appendix A. \(\square \)

Now we estimate a complementary quantity. For a fixed rank profile \(\phi (x)\) with \(\phi (1)\le m\), and given a free \({R}\)-submodule \({\mathcal {N}}\) of \({S}\) with free rank \(\mathrm {frk}_{{R}}({\mathcal {N}})=\phi (1)\), for how many \({R}\)-submodules \({\mathcal {M}}\) of \({S}\) with rank profile \(\phi ^{{\mathcal {M}}}(x)=\phi (x)\) does the module \({\mathcal {N}}\) belong to \(\mathrm {Free}({\mathcal {M}})\)? Formally, we want to estimate the cardinality of the set

$$\begin{aligned} \mathrm {Mod}(\phi ,{\mathcal {N}}):=\left\{ {\mathcal {M}}\subseteq {S}\mid \phi ^{{\mathcal {M}}}(x)=\phi (x) \text{ and } {\mathcal {N}}\in \mathrm {Free}({\mathcal {M}}) \right\} . \end{aligned}$$

Corollary 2

Let \(\phi (x)=\sum _{i=0}^{r-1}n_ix^i\in \mathbb {N}[x]/(x^r)\) such that \(\phi (1)=N\le m\), and let \({\mathcal {N}}\) be a free \({R}\)-submodule of \({S}\) with free rank \(\mathrm {frk}_{{R}}({\mathcal {N}})=N\). Then

$$\begin{aligned} |\mathrm {Mod}(\phi ,{\mathcal {N}})|=\frac{|{{\,\mathrm{GL}\,}}(N,{R})|}{|G_{\phi }^*|}. \end{aligned}$$

In particular, \(|\mathrm {Mod}(\phi ,{\mathcal {N}})|\) only depends on \(\phi (x)\).

Proof

See Appendix A. \(\square \)

We need the following lemma to derive a sufficient condition for the product of two modules to have a maximal rank profile.

Lemma 6

Let \({\mathcal {M}}\) be an \({R}\)-submodule of \({S}\), and let \({\mathcal {A}}, {\mathcal {B}}\in \mathrm {Free}({\mathcal {M}})\). Moreover, let \({\mathcal {N}}\) be a free \({R}\)-submodule of \({S}\). Then, \({\mathcal {N}}\cdot {\mathcal {A}}\) is free with \(\mathrm {frk}_{{R}}({\mathcal {N}}\cdot {\mathcal {A}})=\mathrm {rk}_{{R}}({\mathcal {M}})\mathrm {frk}_{{R}}({\mathcal {N}})\) if and only if \({\mathcal {N}}\cdot {\mathcal {B}}\) is free with \(\mathrm {frk}_{{R}}({\mathcal {N}}\cdot {\mathcal {B}})=\mathrm {rk}_{{R}}({\mathcal {M}})\mathrm {frk}_{{R}}({\mathcal {N}})\).

Proof

Let \(A=\{a_{i,j_i} \mid 0\le i \le r-1, 1 \le j_i \le \phi ^{{\mathcal {M}}}_i\}\) be a basis of \({\mathcal {A}}\) and \(B=\{b_{i,j_i} \mid 0\le i \le r-1, 1 \le j_i \le \phi ^{{\mathcal {M}}}_i\}\) be a basis of \({\mathcal {B}}\) such that \(\varGamma := \{g_\mathfrak {m}^ia_{i,j_i} \mid 0\le i \le r-1, 1 \le j_i \le \phi ^{{\mathcal {M}}}_i\}\) and \(\Lambda := \{g_\mathfrak {m}^ib_{i,j_i} \mid 0\le i \le r-1, 1 \le j_i \le \phi ^{{\mathcal {M}}}_i\}\) are two \(\mathfrak {m}\)-shaped bases for \({\mathcal {M}}\), and let \(\varDelta =\{u_1,\ldots ,u_t\}\) be a basis for \({\mathcal {N}}\). Assume that \(\varDelta \cdot A=\{u_{\ell }a_{i,j_i} \}\) consists of \(\mathrm {rk}_{{R}}({\mathcal {M}})\mathrm {frk}_{{R}}({\mathcal {N}})\) linearly independent elements over \({R}\). By symmetry, it is enough to show that this implies that \({\mathcal {N}}\cdot {\mathcal {B}}\) is free. By Proposition 2, we know that there exist elements \(x_{i,j_i} \in {S}\) such that \({\mathcal {B}}=\langle \{a_{i,j_i}+g_\mathfrak {m}x_{i,j_i} \mid 0\le i \le r-1, 1 \le j_i \le \phi ^{{\mathcal {M}}}_i \} \rangle _{{R}}\). Hence, we need to prove that the elements \(\{u_{\ell }(a_{i,j_i}+g_\mathfrak {m}x_{i,j_i})\}\) are linearly independent over \({R}\). Suppose that there exist \(\lambda _{\ell ,i,j_i}\in {R}\) such that

$$\begin{aligned} \sum _{\ell ,i,j_i}\lambda _{\ell ,i,j_i}u_{\ell }(a_{i,j_i}+g_\mathfrak {m}x_{i,j_i})=0, \end{aligned}$$

hence, rearranging the sum, we get

$$\begin{aligned} \sum _{\ell ,i,j_i}\lambda _{\ell ,i,j_i}u_{\ell }a_{i,j_i}=- g_\mathfrak {m}\sum _{\ell ,i,j_i}\lambda _{\ell ,i,j_i}u_{\ell }x_{i,j_i}. \end{aligned}$$
(11)

Multiplying both sides by \(g_\mathfrak {m}^{r-1}\) we obtain

$$\begin{aligned} \sum _{\ell ,i,j_i}\lambda _{\ell ,i,j_i}g_\mathfrak {m}^{r-1}u_{\ell }a_{i,j_i}=0, \end{aligned}$$

and since by hypothesis \(\{u_{\ell }a_{i,j_i}\}\) is a basis, this implies \(\lambda _{\ell ,i,j_i}\in {{\,\mathrm{Ann}\,}}(g_\mathfrak {m}^{r-1})=\mathfrak {m}\) and therefore there exist \(\lambda '_{\ell ,i,j_i}\in {R}\) such that \(\lambda _{\ell ,i,j_i}=g_\mathfrak {m}\lambda '_{\ell ,i,j_i}\). Thus, (11) becomes

$$\begin{aligned} g_\mathfrak {m}\sum _{\ell ,i,j_i}\lambda '_{\ell ,i,j_i}u_{\ell }a_{i,j_i}=- g_\mathfrak {m}^2\sum _{\ell ,i,j_i}\lambda '_{\ell ,i,j_i}u_{\ell }x_{i,j_i}. \end{aligned}$$

Now, multiplying both sides by \(g_\mathfrak {m}^{r-2}\) and with the same reasoning as before, we obtain that all the \(\lambda '_{\ell ,i,j_i}\in \mathfrak {m}\) and the right-hand side of (11) belongs to \(\mathfrak {m}^3\). Iterating this process \(r-2\) times, we finally get that the right-hand side of (11) belongs to \(\mathfrak {m}^r=(0)\), and therefore (11) corresponds to

$$\begin{aligned} \sum _{\ell ,i,j_i}\lambda _{\ell ,i,j_i}u_{\ell }a_{i,j_i}=0, \end{aligned}$$

which, by hypothesis, implies \(\lambda _{\ell ,i,j_i}=0\) for every \(\ell ,i,j_i\). This concludes the proof, showing that the elements \(\{u_{\ell }(a_{i,j_i}+g_\mathfrak {m}x_{i,j_i})\}\) are linearly independent over \({R}\). \(\square \)

With the aid of Lemma 6 we can show that the property for the product of two arbitrary \({R}\)-modules \({\mathcal {M}}_1, {\mathcal {M}}_2\) of having maximal rank profile (according to Definition 1) depends on the free modules \(\mathcal {F}({\mathcal {M}}_1)\) and \(\mathcal {F}({\mathcal {M}}_2)\) and on their product.

Proposition 3

Let \({\mathcal {M}}_1\) and \({\mathcal {M}}_2\) be submodules of \({S}\). If the product of free modules \(\mathcal {F}({\mathcal {M}}_1)\) and \(\mathcal {F}({\mathcal {M}}_2)\) has free rank

$$\begin{aligned} \mathrm {frk}_{{R}}\!\left( \mathcal {F}({\mathcal {M}}_1)\mathcal {F}({\mathcal {M}}_2)\right) = {{\,\mathrm{rk}\,}}_{{R}}(\mathcal {F}({\mathcal {M}}_1)) {{\,\mathrm{rk}\,}}_{{R}}(\mathcal {F}({\mathcal {M}}_2)), \end{aligned}$$

then we have

$$\begin{aligned} \phi ^{{\mathcal {M}}_1 \cdot {\mathcal {M}}_2}(x) = \phi ^{{\mathcal {M}}_1}(x) \phi ^{{\mathcal {M}}_2}(x). \end{aligned}$$

Moreover, if we assume that \(\deg (\phi ^{{\mathcal {M}}_1}(x))+\deg (\phi ^{{\mathcal {M}}_2}(x))<r\), then also the converse is true. In particular, the converse is true if one of the two modules is free.

Proof

First, observe that by Lemma 6 we can take any pair of \(\mathfrak {m}\)-shaped bases \(\varGamma _1\) and \(\varGamma _2\) of \({\mathcal {M}}_1\) and \({\mathcal {M}}_2\), respectively. Let us fix

$$\begin{aligned} \varGamma _1:=\left\{ g_\mathfrak {m}^ia_{i,j_i} \mid 0\le i \le r-1, 1 \le j_i \le \phi ^{{\mathcal {M}}_1}_i \right\} \end{aligned}$$

\(\mathfrak {m}\)-shaped basis of \({\mathcal {M}}_1\) and

$$\begin{aligned} \varGamma _2:=\left\{ g_\mathfrak {m}^ib_{i,j_i} \mid 0\le i \le r-1, 1 \le j_i \le \phi ^{{\mathcal {M}}_2}_i \right\} \end{aligned}$$

\(\mathfrak {m}\)-shaped basis of \({\mathcal {M}}_2\). By hypothesis, the set \(F(\varGamma _1)\cdot F(\varGamma _2)\) contains \(\mathrm {rk}_{{R}}({\mathcal {M}}_1)\mathrm {rk}_{{R}}({\mathcal {M}}_2)=t\) linearly independent elements over \({R}\). Let \(\varvec{A}\in {R}^{t\times m}\) be the matrix whose rows are the vectorial representations in \({R}^m\) of the elements in \(F(\varGamma _1)\cdot F(\varGamma _2)\). Clearly, a Smith Normal Form for \(\varvec{A}\) is \(\varvec{A}=\varvec{D}\varvec{T}\) where \(\varvec{D}= ( \varvec{I}_t \mid \varvec{0})\) and \(\varvec{T}\in {{\,\mathrm{GL}\,}}(m,{R})\) is any invertible matrix whose first \(t\times m\) block is equal to \(\varvec{A}\). By definition \(\varGamma _1\cdot \varGamma _2\) is a generating set for \({\mathcal {M}}_1\cdot {\mathcal {M}}_2\) and hence \({\mathcal {M}}_1\cdot {\mathcal {M}}_2\) is equal to the rowspace of the matrix \(\varvec{A}'\) whose rows are the vectorial representations of the elements in \(\varGamma _1\cdot \varGamma _2\). A row of \(\varvec{A}'\) corresponding to the element \(g_\mathfrak {m}^ia_{i,j_i}g_\mathfrak {m}^sb_{s,\ell _s}\in \varGamma _1\cdot \varGamma _2\) is equal to the row of \(\varvec{A}\) corresponding to the element \(a_{i,j_i}b_{s,\ell _s}\) multiplied by \(g_\mathfrak {m}^{i+s}\). Therefore, \(\varvec{A}'=\varvec{D}'\varvec{A}=\varvec{D}'\varvec{D}\varvec{T}=(\varvec{D}'\mid \varvec{0})\varvec{T}\), where \(\varvec{D}'\) is a \(t\times t\) diagonal matrix whose diagonal elements are all of the form \(g_{\mathfrak {m}}^{i+s}\) for suitable indices i and s. This shows that \(\varvec{A}'=(\varvec{D}'\mid \varvec{0})\varvec{T}\) is a Smith Normal Form for \(\varvec{A}'\) and the rank profile \(\phi ^{{\mathcal {M}}_1\cdot {\mathcal {M}}_2}(x)\) corresponds to \(\phi ^{{\mathcal {M}}_1}(x)\phi ^{{\mathcal {M}}_2}(x)\).

On the other hand, if \(\phi ^{{\mathcal {M}}_1\cdot {\mathcal {M}}_2}(x)=\phi ^{{\mathcal {M}}_1}(x)\phi ^{{\mathcal {M}}_2}(x)\), then the set \(\varGamma _1\cdot \varGamma _2\) is a \(\mathfrak {m}\)-shaped basis for \({\mathcal {M}}_1\cdot {\mathcal {M}}_2\). Moreover, since \(\deg (\phi ^{{\mathcal {M}}_1}(x))+\deg (\phi ^{{\mathcal {M}}_2}(x))<r\), we have that \(F(\varGamma _1)\cdot F(\varGamma _2)=F(\varGamma _1\cdot \varGamma _2)\), which is a set of \(\mathrm {rk}_{{R}}({\mathcal {M}}_1)\mathrm {rk}_{{R}}({\mathcal {M}}_2)\) nonzero elements. Let \(\varvec{S}\varvec{D}\varvec{T}\) be a Smith normal form for \({\mathcal {M}}_1\cdot {\mathcal {M}}_2\), then the elements of \(F(\varGamma _1\cdot \varGamma _2)\) correspond to the first \(\mathrm {rk}_{{R}}({\mathcal {M}}_1)\mathrm {rk}_{{R}}({\mathcal {M}}_2)\) rows of matrix \(\varvec{T}\), and hence they are \({R}\)-linearly independent. Thus, \(\mathcal {F}({\mathcal {M}}_1)\cdot \mathcal {F}({\mathcal {M}}_2)\) is free with free rank equal to \(\mathrm {rk}_{{R}}({\mathcal {M}}_1)\mathrm {rk}_{{R}}({\mathcal {M}}_2)\). \(\square \)

Remark 3

Observe that the second part of Proposition 3 does not hold anymore if we remove the hypothesis that \(\deg (\phi ^{{\mathcal {M}}_1}(x))+\deg (\phi ^{{\mathcal {M}}_2}(x))<r\).

Let \({\mathcal {A}}'\), \({\mathcal {A}}={\mathcal {A}}'+\langle a\rangle \) and \({\mathcal {B}}\) be three free modules of free rank \(\alpha -1\), \(\alpha \) and \(\beta \) respectively, such that \({\mathcal {A}}'\cdot {\mathcal {B}}\) is free of rank \((\alpha -1)\beta \), but \({\mathcal {A}}\cdot {\mathcal {B}}\) is not free of rank \(\alpha \beta \). Take a basis for \({\mathcal {A}}\) of the form \(\{a_1,\ldots , a_{\alpha -1},a\}\) such that \(\{a_1,\ldots , a_{\alpha -1}\}\) is a basis of \({\mathcal {A}}'\), and fix also a basis \(\{b_1,\ldots ,b_{\beta }\}\) for \({\mathcal {B}}\). Then, define \({\mathcal {M}}_1\) to be the \({R}\)-module whose \(\mathfrak {m}\)-shaped basis is \(\{a_1,\ldots ,a_{\alpha -1},g_\mathfrak {m}^{r-1}a\}\), and define \({\mathcal {M}}_2=\mathfrak {m}{\mathcal {B}}\). Consider the module \({\mathcal {M}}_1\cdot {\mathcal {M}}_2\). It is easy to see that \({\mathcal {M}}_1\cdot {\mathcal {M}}_2=\mathfrak {m}({\mathcal {A}}'\cdot {\mathcal {B}})={\mathcal {A}}'\cdot {\mathcal {M}}_2\). Observe that \({\mathcal {B}}\in \mathrm {Free}({\mathcal {M}}_2)\) and by Proposition 3 and Lemma 6, we have that \(\phi ^{{\mathcal {M}}_1 \cdot {\mathcal {M}}_2}(x) = \phi ^{{\mathcal {M}}_1}(x) \phi ^{{\mathcal {M}}_2}(x)\). However, by construction we have \({\mathcal {A}}\in \mathrm {Free}({\mathcal {M}}_1)\), \({\mathcal {B}}\in \mathrm {Free}({\mathcal {M}}_2)\) and \({\mathcal {A}}\cdot {\mathcal {B}}\) is not free of rank \(\alpha \beta \). Therefore, by Lemma 6 this also holds for \(\mathcal {F}({\mathcal {M}}_1)\cdot \mathcal {F}({\mathcal {M}}_2)\).

We are now ready to put the various statements of this subsection together and prove an upper bound on the failure probability of the product condition—the main statement of this subsection.

Theorem 1

Let \({\mathcal {B}}\) be a fixed \({R}\)-submodule of \({S}\) with rank profile \(\phi ^{{\mathcal {B}}}(x)\) and let \(\lambda :=\phi ^{{\mathcal {B}}}(1)=\mathrm {rk}_{{R}}({\mathcal {B}})\). Let t be a positive integer with \(t \lambda <m\) and \(\phi (x) \in \mathbb {Z}[x]/(x^r)\) with nonnegative coefficients such that \(\phi (1)=t\). Let \({\mathcal {A}}\) be an \({R}\)-submodule of \({S}\) selected uniformly at random among all the modules with \(\phi ^{\mathcal {A}}= \phi \). Then,

$$\begin{aligned} \Pr \left( \phi ^{{\mathcal {A}}\cdot {\mathcal {B}}} \ne \phi ^{{\mathcal {A}}} \phi ^{{\mathcal {B}}} \right) \le \left( 1-p^{-s\lambda }\right) \sum _{i=1}^{t} \sum _{j = 0}^{r-1} p^{s(r-j)(i \lambda -m)} \le 2 t p^{s(t \lambda -m)} \end{aligned}$$

Proof

Let us denote by \(\mathrm {Mod}(\phi )\) the set of all \({R}\)-submodules of \({S}\) whose rank profile equals \(\phi \). Choose uniformly at random a module \({\mathcal {A}}\) in \(\mathrm {Mod}(\phi )\), and then select \(\mathcal {X}\) uniformly at random from \(\mathrm {Free}({\mathcal {A}})\). Then, this results in a uniform distribution on the set of all free modules with free rank equal to \(\phi (1)=t\), that is the set \(\mathrm {Mod}(t)\), where t denotes the constant polynomial in \(\mathbb {Z}[x]/(x^r)\) equal to t. Indeed, for an arbitrary free module \({\mathcal {N}}\) with \(\mathrm {frk}_{{R}}({\mathcal {N}})=t\),

$$\begin{aligned} \Pr (\mathcal {X}={\mathcal {N}})&=\Pr (\mathcal {X}={\mathcal {N}}\mid {\mathcal {A}}\in \mathrm {Mod}(\phi ,{\mathcal {N}}))\Pr ({\mathcal {A}}\in \mathrm {Mod}(\phi ,{\mathcal {N}}))\\&=\frac{1}{|\mathrm {Free}({\mathcal {A}})|}\frac{|\mathrm {Mod}(\phi ,{\mathcal {N}})|}{|\mathrm {Mod}(\phi )|},\end{aligned}$$

which by Corollaries 1 and 2 is a constant number that does not depend on \({\mathcal {N}}\).

Now, suppose that \(\phi ^{{\mathcal {A}}\cdot {\mathcal {B}}} \ne \phi ^{{\mathcal {A}}} \phi ^{{\mathcal {B}}}\). By Proposition 3, this implies \({\mathcal {N}}\cdot {\mathcal {N}}'\) is not a free module of rank \(t\lambda \), where \({\mathcal {N}}\) is any free module in \(\mathrm {Free}({\mathcal {A}})\) and \({\mathcal {N}}'\) is any free module in \(\mathrm {Free}({\mathcal {B}})\). Hence,

$$\begin{aligned} \Pr \left( \phi ^{{\mathcal {A}}\cdot {\mathcal {B}}} \ne \phi ^{{\mathcal {A}}} \phi ^{{\mathcal {B}}} \right) \le 1- \Pr \big ({\mathcal {N}}\cdot {\mathcal {N}}'\text { is a free module of free rank }t\lambda \big ), \end{aligned}$$

and we conclude using Lemma 5. \(\square \)

As a consequence, we can finally derive the desired upper bound on the product condition failure probability.

Theorem 2

Let \(\mathcal {F}\) be defined as in Definition 2. Let t be a positive integer with \(t \lambda <m\) and \(\phi (x) \in \mathbb {Z}[x]/(x^r)\) with nonnegative coefficients and such that \(\phi (1)=t\) (recall that this means that an error of rank profile \(\phi \) has rank t). Let \(\varvec{e}\) be an error word, chosen uniformly at random among all error words with support \({\mathcal {E}}\) of rank profile \(\phi ^{\mathcal {E}}= \phi \). Then, the probability that the product condition is not fulfilled is

$$\begin{aligned}&\Pr \left( \phi ^{{\mathcal {E}}\cdot \mathcal {F}} \ne \phi ^{{\mathcal {E}}} \phi ^{\mathcal {F}} \right) \le \left( 1-p^{-s\lambda }\right) \sum _{i=1}^{t} \sum _{j = 0}^{r-1} p^{s(r-j)(i \lambda -m)} \le 2 t p^{s(t \lambda -m)} \end{aligned}$$

Proof

Let us denote by \(\mathrm {Mod}(\phi )\) the set of all \({R}\)-submodules of \({S}\) whose rank profile equals \(\phi \). By Lemma 3, choosing uniformly at random \(\varvec{e}\) among all the words whose support \({\mathcal {E}}\) has rank profile \(\phi \) results in a uniform distribution on \(\mathrm {Mod}(\phi )\). At this point, the claim follows from Theorem 1. \(\square \)

5.2 Failure of syndrome condition

Here we derive a bound on the probability that the syndrome condition is not fulfilled, given that the product condition is satisfied. As in the case of finite fields, the bound is based on the relative number of matrices of a given dimension that have full (free) rank. For completeness, we give a closed-form expression for this number in the following lemma. However, it can also be derived from the number of submodules of a given rank profile, which was given in [13, Theorem 2.4]. Note that the latter result holds also for finite chain rings.

Lemma 7

Let a and b be positive integers with \(a < b\). Then, the number of \(a \times b\) matrices over \({R}={{\,\mathrm{GR}\,}}(p^r,s)\) of (full) free rank a is \(\mathrm {NM}(a,b;{R}) = p^{a b r s} \prod _{a'=0}^{a-1} \left( 1-p^{(a'-b)s} \right) \).

Proof

First note that \(\mathrm {NM}(1,b;{R}) = p^{b r s}-p^{b (r-1) s} = p^{brs}\big (1-p^{-bs}\big )\) since a \(1 \times b\) matrix over \({R}\) has free rank 1 if and only if at least one entry is a unit. Hence, we subtract from the number of all matrices (\(|{R}|^b = p^{b r s}\)) the number of vectors that consist only of non-units, \((|{R}|-|{R}^*|)^b = p^{b(r-1)s}\) (cf. (1)).

Now, for any \(a' \le a\), let \(\varvec{A}\in {R}^{a' \times b}\) be a matrix of free rank \(a'\). We define \(\mathcal {V}(\varvec{A}) := \big \{ \varvec{v}\in {R}^{1 \times b} \! : \! \mathrm {frk}\big (\begin{bmatrix} \varvec{A}^\top \varvec{v}^\top \end{bmatrix}^\top \big ) = a' \big \}\). We study the cardinality of \(\mathcal {V}(\varvec{A})\). We have \(\mathrm {frk}\big (\begin{bmatrix} \varvec{A}^\top \varvec{v}^\top \end{bmatrix}^\top \big ) = a'\) if and only if the rows of the matrix \(\hat{\varvec{A}} := \begin{bmatrix} \varvec{A}^\top \varvec{v}^\top \end{bmatrix}^\top \) are linearly dependent. Due to \(\mathrm {frk}(\varvec{A}) = a'\) and the existence of a Smith normal form of \(\varvec{A}\), there are invertible matrices \(\varvec{S}\) and \(\varvec{T}\) such that \(\varvec{S}\varvec{A}\varvec{T}= \varvec{D}\), where \(\varvec{D}\) is a diagonal matrix with ones on its diagonal.

Since \(\varvec{S}\) and \(\varvec{T}\) are invertible, we can count the number of vectors \(\varvec{v}'\) such that the rows of the matrix \(\big [ \varvec{D}^\top {\varvec{v}'}^\top \big ]^\top \) are linearly dependent instead of counting directly for the matrix \(\hat{\varvec{A}}\) (note that \(\varvec{v}= \varvec{v}' \varvec{T}^{-1}\) gives a corresponding linearly dependent row in \(\hat{\varvec{A}}\)).

Since \(\varvec{D}\) is in diagonal form with only ones on its diagonal, the linearly dependent vectors are exactly of the form

$$\begin{aligned} \varvec{v}' = [v'_1,\ldots ,v'_{a'},v'_{a'+1},\ldots , v'_b], \end{aligned}$$

where \(v'_i \in {R}\) for \(i=1,\ldots ,a'\) and \(v'_i \in \mathfrak {m}\) for \(i=a'+1,\ldots ,b\). Hence, we have

$$\begin{aligned} |\mathcal {V}(\varvec{A})| = p^{a'rs} p^{(b-a')(r-1)s} = p^{brs} p^{(a'-b)s}. \end{aligned}$$

Note that this value is independent of \(\varvec{A}\).

By the discussion on \(|\mathcal {V}(\varvec{A})|\), we get the following recursive formula:

$$\begin{aligned} \mathrm {NM}(a'\!+ \!1,b;{R}) \! = \! {\left\{ \begin{array}{ll} \mathrm {NM}(a',b;{R}) p^{brs}\! \left( 1- p^{(a'-b)s} \right) , \! \! &{}a'\ge 1, \\ p^{brs}\!\left( 1-p^{-bs}\right) , &{}a'=0, \end{array}\right. } \end{aligned}$$

which resolves into \(\mathrm {NM}(a,b;{R}) = p^{a b r s} \prod _{a'=0}^{a-1} \left( 1-p^{(a'-b)s}\right) \). \(\square \)
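As a sanity check of the counting formula, the following brute-force sketch (for the illustrative parameters \(p=2\), \(r=2\), \(s=1\), \(a=2\), \(b=3\), i.e., \({R}=\mathbb {Z}_4\)) verifies Lemma 7. It uses that the free rank of a matrix over \({R}\) equals the rank of its reduction modulo \(\mathfrak {m}\) over the residue field, which follows from the Smith normal form.

```python
from itertools import product

# Brute-force check of Lemma 7 for R = Z_4 (p = 2, r = 2, s = 1), a = 2, b = 3.
p, r, s, a, b = 2, 2, 1, 2, 3
q = p ** r  # |R| = 4

def rank_mod_p(rows, p):
    """Rank of the reduction modulo p (over F_p) via Gaussian elimination."""
    mat = [list(row) for row in rows]
    rank = 0
    for col in range(len(mat[0])):
        pivot = next((i for i in range(rank, len(mat)) if mat[i][col] % p != 0), None)
        if pivot is None:
            continue
        mat[rank], mat[pivot] = mat[pivot], mat[rank]
        inv = pow(mat[rank][col], -1, p)
        mat[rank] = [(x * inv) % p for x in mat[rank]]
        for i in range(len(mat)):
            if i != rank and mat[i][col] % p != 0:
                f = mat[i][col]
                mat[i] = [(x - f * y) % p for x, y in zip(mat[i], mat[rank])]
        rank += 1
    return rank

count = 0
for entries in product(range(q), repeat=a * b):
    rows = [entries[i * b:(i + 1) * b] for i in range(a)]
    if rank_mod_p(rows, p) == a:     # full free rank a
        count += 1

formula = p ** (a * b * r * s)
for ap in range(a):
    formula *= (1 - p ** ((ap - b) * s))
print(count, round(formula))          # both equal 2688 for these parameters
```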

At this point, we can prove a bound on the failure probability of the syndrome condition similar to the one in [10], using Lemma 7. The additional difficulty over rings is to deal with non-unique decompositions of module elements in \(\mathfrak {m}\)-shaped bases and the derivation of a simplified bound on the relative number of non-full-rank matrices. Furthermore, the start of the proof corrects a minor technical imprecision in Gaborit et al.’s proof.

Theorem 3

Let \(\mathcal {F}\) be defined as in Definition 2, t be a positive integer with \(t \lambda < \min \{m,n-k+1\}\), and \({\mathcal {E}}\) be an error space of rank t. Suppose that the product condition is fulfilled for \({\mathcal {E}}\) and \(\mathcal {F}\). Suppose further that \(\varvec{H}\) has the maximal-row-span and unity properties (cf. Definition 3).

Let \(\varvec{e}\) be an error word, chosen uniformly at random among all error words with support \({\mathcal {E}}\). Then, the probability that the syndrome condition is not fulfilled for \(\varvec{e}\) is

$$\begin{aligned} \Pr \left( {\mathcal {S}}\ne {\mathcal {E}}\cdot \mathcal {F}\mid \phi ^{{\mathcal {E}}\cdot \mathcal {F}} = \phi ^{{\mathcal {E}}} \phi ^{\mathcal {F}} \right) \le 1 - \prod _{i=0}^{\lambda t -1} \left( 1-p^{[i-(n-k)]s}\right) < 4 p^{-s(n-k+1-\lambda t)}. \end{aligned}$$

Proof

Let \(\varvec{e}' \in {S}^n\) be a vector whose entries \(e_i'\) are each chosen uniformly at random from the error support \({\mathcal {E}}\). Denote by \({\mathcal {S}}_{\varvec{e}}\) and \({\mathcal {S}}_{\varvec{e}'}\) the syndrome spaces obtained by computing the syndromes of \(\varvec{e}\) and \(\varvec{e}'\), respectively. Then, we have

$$\begin{aligned} \Pr \big ({\mathcal {S}}_{\varvec{e}'} = {\mathcal {E}}\cdot \mathcal {F}\big ) \le \Pr \big ( {\mathcal {S}}_{\varvec{e}'} = {\mathcal {E}}\cdot \mathcal {F}\mid \mathrm {supp}_\mathrm {R}(\varvec{e}') = {\mathcal {E}}\big ) = \Pr \big ( {\mathcal {S}}_{\varvec{e}} = {\mathcal {E}}\cdot \mathcal {F}\big ), \end{aligned}$$

where the latter equality follows from the fact that the random experiments of choosing \(\varvec{e}'\) and conditioning on the property that \(\varvec{e}'\) has support \({\mathcal {E}}\) is the same as directly drawing \(\varvec{e}\) uniformly at random from the set of errors with support \({\mathcal {E}}\). Hence, we obtain a lower bound on \(\Pr \big ( {\mathcal {S}}_{\varvec{e}} = {\mathcal {E}}\cdot \mathcal {F}\big )\) by studying \(\Pr \big ({\mathcal {S}}_{\varvec{e}'} = {\mathcal {E}}\cdot \mathcal {F}\big )\), which we do in the following.

Let \(f_1,\ldots ,f_\lambda \) and \(\varepsilon _1,\ldots ,\varepsilon _t\) be \(\mathfrak {m}\)-shaped bases of \(\mathcal {F}\) and \({\mathcal {E}}\), respectively, such that \(f_j \varepsilon _i\) for \(i=1,\ldots ,t\), \(j=1,\ldots ,\lambda \) form an \(\mathfrak {m}\)-shaped basis of \({\mathcal {E}}\cdot \mathcal {F}\). Note that the existence of such bases is guaranteed by the assumed product condition \(\phi ^{{\mathcal {E}}\cdot \mathcal {F}} = \phi ^{{\mathcal {E}}} \phi ^{\mathcal {F}}\).

Since \(e'_i\) is drawn uniformly at random from \({\mathcal {E}}\), we can write it as \(e'_i = \sum _{\mu =1}^{t} e_{i,\mu }' \varepsilon _\mu \), where the \(e_{i,\mu }'\) are uniformly distributed on \({R}\). We can assume uniformity of the \(e_{i,\mu }'\) since for a given \(e'_i\), the coefficient \(e_{i,\mu }'\) of any such decomposition is unique modulo \(\mathfrak {m}^{r-v(\varepsilon _\mu )}\). In particular, every element of \({\mathcal {E}}\) has equally many decompositions \([e_{i,1}',\dots ,e_{i,t}']\), and the sets of decompositions of distinct elements are disjoint.

Due to the unity property of the parity-check matrix \(\varvec{H}\), we can write any entry \(H_{i,j}\) of \(\varvec{H}\) as \(H_{i,j} = \sum _{\eta =1}^{\lambda } h_{i,j,\eta } f_\eta \), where the \(h_{i,j,\eta }\) are units in \({R}\) or zero. Furthermore, since each row of \(\varvec{H}\) spans the entire module \(\mathcal {F}\) (maximal-row-span property), for each i and each \(\eta \), there is at least one \(j^*\) with \(h_{i,j^*,\eta } \ne 0\). By the previous assumption, this means that \(h_{i,j^*,\eta } \in {R}^*\).

Then, each syndrome coefficient can be written as

$$\begin{aligned} s_i = \sum \nolimits _{j=1}^{n} e'_j H_{i,j} = \sum \nolimits _{\mu =1}^{t} \sum \nolimits _{\eta =1}^{\lambda } \underbrace{\left( \sum \nolimits _{j=1}^{n} e_{j,\mu }' h_{i,j,\eta }\right) }_{=: s_{\mu ,\eta ,i}} \varepsilon _\mu f_\eta . \end{aligned}$$

By the above discussion, for each i and \(\eta \), there is a \(j^*\) with \(h_{i,j^*,\eta } \in {R}^*\). Hence, \(s_{\mu ,\eta ,i}\) is a sum (with at least one summand) of products of uniformly distributed elements of \({R}\) with units of \({R}\). Since a uniformly distributed ring element times a unit is again uniformly distributed on \({R}\), the coefficient \(s_{\mu ,\eta ,i}\) is a sum (with at least one summand) of uniformly distributed elements of \({R}\) and is therefore itself uniformly distributed on \({R}\).

All together, we can write

$$\begin{aligned} \begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_{n-k} \end{bmatrix} = \underbrace{ \begin{bmatrix} s_{1,1,1} &{} s_{1,2,1} &{} \cdots &{} s_{t,\lambda ,1} \\ s_{1,1,2} &{} s_{1,2,2} &{} \cdots &{} s_{t,\lambda ,2} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ s_{1,1,n-k} &{} s_{1,2,n-k} &{} \cdots &{} s_{t,\lambda ,n-k} \\ \end{bmatrix}}_{=: \, \varvec{S}} \cdot \begin{bmatrix} \varepsilon _1 f_1 \\ \varepsilon _1 f_2 \\ \vdots \\ \varepsilon _t f_\lambda \\ \end{bmatrix}, \end{aligned}$$

where, by assumption, the \(\varepsilon _i f_j\) are a generating set of \({\mathcal {E}}\cdot \mathcal {F}\) and the matrix \(\varvec{S}\) is chosen uniformly at random from \({R}^{(n-k)\times t \lambda }\). If \(\varvec{S}\) has full free rank \(t \lambda \), then we have \({\mathcal {S}}_{\varvec{e}'} = {\mathcal {E}}\cdot \mathcal {F}\). By Lemma 7, the probability of drawing such a full-rank matrix is

$$\begin{aligned} \frac{\mathrm {NM}(a,b;{R})}{|{R}|^{ab}} = \prod _{a'=0}^{a-1} \left( 1-p^{(a'-b)s} \right) . \end{aligned}$$

This proves the bound

$$\begin{aligned} \Pr \left( {\mathcal {S}}\ne {\mathcal {E}}\cdot \mathcal {F}\mid \phi ^{{\mathcal {E}}\cdot \mathcal {F}} = \phi ^{{\mathcal {E}}} \phi ^{\mathcal {F}} \right) \le 1 - \prod _{i=0}^{\lambda t -1} \left( 1-p^{[i-(n-k)]s}\right) . \end{aligned}$$

We simplify the bound further using the observation that the product is a q-Pochhammer symbol. Hence, we have

$$\begin{aligned} 1 - \prod _{i=0}^{\lambda t -1} \left( 1-p^{[i-(n-k)]s}\right) = \sum _{j=1}^{\lambda t} \underbrace{(-1)^{j+1} p^{-j(n-k)s} \left[ \begin{matrix} \lambda t \\ j \end{matrix} \right] _{p^s} p^{s\left( {\begin{array}{c}j\\ 2\end{array}}\right) }}_{=: \, a_j}, \end{aligned}$$

where \(\left[ \begin{matrix} a \\ b \end{matrix} \right] _{q} := \prod _{j=1}^{b} \tfrac{q^{a+1-j}-1}{q^{j}-1}\) is the Gaussian binomial coefficient. Using \(q^{b(a-b)} \le \left[ \begin{matrix} a \\ b \end{matrix} \right] _{q} < 4 q^{b(a-b)}\), we obtain

$$\begin{aligned} \left| \frac{a_{j+1}}{a_j}\right|&= p^{-(n-k-j)s} \frac{\left[ \begin{matrix} \lambda t \\ j+1 \end{matrix} \right] _{p^s}}{\left[ \begin{matrix} \lambda t \\ j \end{matrix} \right] _{p^s}}< p^{-(n-k-j)s} \frac{4 p^{s(j+1)(\lambda t-j-1)}}{p^{sj(\lambda t-j)}} \\&= 4 p^{s[\lambda t - j - (n-k+1)]} < 1 \end{aligned}$$

for \(\lambda t < n-k+1\), i.e., \(|a_j|\) is strictly monotonically decreasing. Since the summands \(a_j\) have alternating sign, we can thus bound \(\sum _{j=1}^{\lambda t}a_j \le a_1\), which gives

$$\begin{aligned} 1 - \prod _{i=0}^{\lambda t -1} \left( 1-p^{[i-(n-k)]s}\right) \le a_1 = p^{-(n-k)s} \left[ \begin{matrix} \lambda t \\ 1 \end{matrix} \right] _{p^s} < 4 p^{-(n-k)s} p^{s(\lambda t -1)} = 4 p^{-s(n-k+1-\lambda t)}. \end{aligned}$$

\(\square \)

Remark 4

In contrast to Theorem 3, the maximal-row-span property was not assumed in [1, Proposition 4.3], which is the analogous statement for finite fields. However, the statement in [1, Proposition 4.3] is also only correct if we assume additional structure on the parity-check matrix (e.g., that each row spans the entire space \(\mathcal {F}\), or a weaker condition), due to the following counterexample: Consider a parity-check matrix \(\varvec{H}\) whose non-zero entries lie only on its diagonal and in the last row, where the diagonal entries are all \(f_1\) and the last row contains the remaining \(f_2,\ldots ,f_\lambda \).

Such an \(\varvec{H}\) is a valid parity-check matrix according to [1, Definition 4.1] since the entries of \(\varvec{H}\) span the entire space \(\mathcal {F}\). However, due to the structure of the matrix, the first \(n-k-1\) syndromes are all in \(f_1 {\mathcal {E}}\), hence \(\mathrm {rk}_{{R}}({\mathcal {S}}) \le t+1 < t \lambda \) for any error of support \({\mathcal {E}}\).

5.3 Failure of intersection condition

We use a similar proof strategy as in [1] to derive an upper bound on the failure probability of the intersection condition. The following lemma is the Galois-ring analog of [1, Lemma 3.4], where the difference is that we need to take care of the fact that the representation of module elements in an \(\mathfrak {m}\)-shaped basis is not necessarily unique in a Galois ring.

Lemma 8

Let \({\mathcal {A}}\subseteq {S}\) be an \({R}\)-module of rank \(\alpha \) and \({\mathcal {B}}\subseteq {S}\) be a free \({R}\)-module of free rank \(\beta \). Assume that \(\phi ^{{\mathcal {A}}\cdot {\mathcal {B}}^2} = \phi ^{{\mathcal {A}}} \phi ^{{\mathcal {B}}^2}\) and that there is an element \(e \in {\mathcal {A}}\cdot {\mathcal {B}}\setminus {\mathcal {A}}\) with \(e {\mathcal {B}}\subseteq {\mathcal {A}}\cdot {\mathcal {B}}\). Then, there is a \(y \in {\mathcal {B}}\setminus {R}\) such that \(y {\mathcal {B}}\subseteq {\mathcal {B}}\).

Proof

Let \(a_1,\ldots ,a_\alpha \) be an \(\mathfrak {m}\)-shaped basis of \({\mathcal {A}}\) and \(b_1,\ldots ,b_\beta \) be a basis of \({\mathcal {B}}\). Due to \(e \in {\mathcal {A}}\cdot {\mathcal {B}}\), there are coefficients \(e_{i,j} \in {R}\) such that

$$\begin{aligned} \textstyle e = \sum _{i=1}^{\alpha } \underbrace{\left( \textstyle \sum _{j=1}^{\beta } e_{i,j} b_j \right) }_{=: \, b'_i} a_i. \end{aligned}$$
(12)

Due to the fact that \(e \notin {\mathcal {A}}\), there is an \(\eta \in \{1,\ldots ,\alpha \}\) with \(b_\eta ' a_\eta \notin {\mathcal {A}}\). In particular, \(y := g_\mathfrak {m}^{v(a_\eta )} b_\eta ' \in {\mathcal {B}}\setminus {R}\). We show that y fulfills \(y {\mathcal {B}}\subseteq {\mathcal {B}}\).

Let now \(b \in {\mathcal {B}}\). Since by assumption \(eb \in {\mathcal {A}}\cdot {\mathcal {B}}\), there are \(c_{i,j} \in {R}\) with \(e b = \sum _{i=1}^{\alpha } \left( \sum _{j=1}^{\beta } c_{i,j} b_j \right) a_i\). By (12), we can also write \(e b = \sum _{i=1}^{\alpha } \left( \sum _{j=1}^{\beta } e_{i,j} b_j b \right) a_i = \sum _{i=1}^{\alpha } b_i' b a_i\). Due to the maximality of the rank profile of \({\mathcal {A}}\cdot {\mathcal {B}}^2\), i.e., \(\phi ^{{\mathcal {A}}\cdot {\mathcal {B}}^2} = \phi ^{{\mathcal {A}}} \phi ^{{\mathcal {B}}^2}\), we have that the coefficients \(c_{i} \in {\mathcal {B}}^2\) of any representation \(c = \sum _i c_i a_i\) of an element \(c \in {\mathcal {A}}\cdot {\mathcal {B}}^2\) are unique modulo \(\mathfrak {M}^{r-v(a_i)}\). Hence, for every \(i=1,\ldots ,\alpha \), there exists \(\chi _i \in {\mathcal {B}}^2\) such that

$$\begin{aligned} b_i' b = \sum _{j=1}^{\beta } c_{i,j} b_j + g_\mathfrak {m}^{r-v(a_i)} \chi _i. \end{aligned}$$

Thus, with \(\sum _{j=1}^{\beta } c_{\eta ,j} b_j \in {\mathcal {B}}\), \(g_\mathfrak {m}^{v(a_\eta )} \in {R}\), and \(g_\mathfrak {m}^{r}=0\), we get

$$\begin{aligned} y b = g_\mathfrak {m}^{v(a_\eta )} b_\eta ' b = g_\mathfrak {m}^{v(a_\eta )}\sum _{j=1}^{\beta } c_{\eta ,j} b_j + g_\mathfrak {m}^{r} \chi _\eta \in {\mathcal {B}}. \end{aligned}$$

Since this holds for any b, we have \(y {\mathcal {B}}\subseteq {\mathcal {B}}\), which proves the claim. \(\square \)

We get the following bound using Lemma 8, Theorem 1, and a similar argument as in [10].

Theorem 4

Let \(\mathcal {F}\) be defined as in Definition 2 such that it has the base-ring property (i.e., \(1 \in \mathcal {F})\). Suppose that no intermediate ring \(R'\) with \({R}\subsetneq R' \subseteq {S}\) is contained in \(\mathcal {F}\) (this holds, e.g., for \(\lambda \) smaller than the smallest divisor of m that is larger than 1, or for special \(\mathcal {F})\).

Let t be a positive integer with \(t \tfrac{\lambda (\lambda +1)}{2} < m\) and \(t \lambda < n-k+1\), and let \(\phi (x) \in \mathbb {Z}[x]/(x^r)\) with nonnegative coefficients such that \(\phi (1)=t\). Choose \(\varvec{e}\in {S}^n\) uniformly at random from the set of vectors whose support has rank profile \(\phi \).

Then, the probability that the intersection condition is not fulfilled, given that syndrome and product conditions are satisfied, is

$$\begin{aligned}&\Pr \left( \textstyle \bigcap _{i=1}^{\lambda } {\mathcal {S}}_i = {\mathcal {E}}\mid {\mathcal {S}}= {\mathcal {E}}\cdot \mathcal {F}\wedge \phi ^{{\mathcal {E}}\cdot \mathcal {F}} = \phi ^{{\mathcal {E}}} \phi ^{\mathcal {F}} \right) \\&\quad \le \left( 1-p^{-s\frac{\lambda (\lambda +1)}{2}}\right) \sum _{i=1}^{t} \sum _{j = 0}^{r-1} p^{s(r-j)\left( i \frac{\lambda (\lambda +1)}{2}-m\right) } \le 2 t p^{s\left( t \frac{\lambda (\lambda +1)}{2}-m\right) } \end{aligned}$$

Proof

Suppose that the product (\(\phi ^{{\mathcal {E}}\cdot \mathcal {F}} = \phi ^{{\mathcal {E}}} \phi ^{\mathcal {F}}\)) and syndrome (\({\mathcal {S}}= {\mathcal {E}}\cdot \mathcal {F}\)) conditions are fulfilled, and assume that the intersection condition is not fulfilled. Then we have \(\bigcap _{i=1}^{\lambda } {\mathcal {S}}_i =: {\mathcal {E}}' \supsetneq {\mathcal {E}}\). Choose any \(e \in {\mathcal {E}}' \setminus {\mathcal {E}}\). Since \(\mathcal {F}\) contains 1 by assumption, we have \(e \in {\mathcal {E}}\cdot \mathcal {F}\), and by the choice of e we have \(e \notin {\mathcal {E}}\). Furthermore, we have \({\mathcal {E}}' \cdot \mathcal {F}= {\mathcal {E}}\cdot \mathcal {F}\), hence \(e \mathcal {F}\subseteq {\mathcal {E}}\cdot \mathcal {F}\), so all conditions on e of Lemma 8 are fulfilled with \({\mathcal {A}}={\mathcal {E}}\) and \({\mathcal {B}}=\mathcal {F}\).

Since \({\mathcal {E}}\) is chosen uniformly at random among all submodules of \({S}\) with rank profile \(\phi \) (cf. Lemma 3), we can apply Theorem 1 with \(\mathcal {F}^2\) in place of \({\mathcal {B}}\) and obtain that the remaining condition of Lemma 8, \(\phi ^{{\mathcal {E}}\cdot \mathcal {F}^2} = \phi ^{{\mathcal {E}}} \phi ^{\mathcal {F}^2}\), fails with probability at most

$$\begin{aligned}&\Pr \!\left( \phi ^{{\mathcal {E}}\cdot \mathcal {F}^2} \ne \phi ^{{\mathcal {E}}} \phi ^{\mathcal {F}^2}\right) \\&\quad \le \left( 1-p^{-s\lambda '}\right) \sum _{i=1}^{t} \sum _{j = 0}^{r-1} p^{s(r-j)(i \lambda '-m)}\\&\quad \le \left( 1-p^{-s\frac{\lambda (\lambda +1)}{2}}\right) \sum _{i=1}^{t} \sum _{j = 0}^{r-1} p^{s(r-j)\left( i \frac{\lambda (\lambda +1)}{2}-m\right) } \\&\quad \le 2 t p^{s\left( t \frac{\lambda (\lambda +1)}{2}-m\right) } \end{aligned}$$

where \(\lambda ' := \mathrm {rk}_{{R}}(\mathcal {F}^2) \le \tfrac{1}{2}\lambda (\lambda +1)\) (this is clear since \(\mathcal {F}^2\) is generated by the products of all unordered element pairs of an \(\mathfrak {m}\)-shaped basis of \(\mathcal {F}\)).

Hence, with probability at least one minus this value, both conditions of Lemma 8 are fulfilled. In that case, there is an element \(y \in \mathcal {F}\setminus {R}\) such that \(y \mathcal {F}\subseteq \mathcal {F}\). Thus, also \(y^i \mathcal {F}\subseteq \mathcal {F}\) for all positive integers i, and the ring \({R}(y)\) obtained by adjoining the element \(y \notin {R}\) to \({R}\) fulfills \({R}(y) \subseteq \mathcal {F}\) (this holds since \(\mathcal {F}\) contains at least one unit). Since \({R}(y) \supsetneq {R}\) is an intermediate ring, this is a contradiction to the assumption on intermediate rings, which proves the claimed bound. \(\square \)

5.4 Overall failure probability

The following theorem states the overall bound on the failure probability, exploiting the bounds derived in Theorems 2, 3, and 4.

Theorem 5

Let \(\mathcal {F}\) be defined as in Definition 2 such that it has the base-ring property (i.e., \(1 \in \mathcal {F})\). Suppose that no intermediate ring \(R'\) with \({R}\subsetneq R' \subseteq {S}\) is contained in \(\mathcal {F}\) (this holds, e.g., for \(\lambda \) smaller than the smallest divisor of m that is larger than 1, or for special \(\mathcal {F})\). Suppose further that \(\varvec{H}\) has the maximal-row-span and unity properties (cf. Definition 3).

Let t be a positive integer with \(t \tfrac{\lambda (\lambda +1)}{2} < m\) and \(t \lambda < n-k+1\), and let \(\phi (x) \in \mathbb {Z}[x]/(x^r)\) with nonnegative coefficients such that \(\phi (1)=t\). Choose \(\varvec{e}\in {S}^n\) uniformly at random from the set of vectors whose support has rank profile \(\phi \).

Then, Algorithm 1 with input \(\varvec{c}+\varvec{e}\) returns \(\varvec{c}\) with a failure probability of at most

$$\begin{aligned} \Pr (\text {failure})&\le \left( 1-p^{-s\lambda }\right) \sum _{i=1}^{t} \sum _{j = 0}^{r-1} p^{s(r-j)\left( i \lambda -m\right) } \nonumber \\&\quad + \left[ 1- \prod _{i=0}^{\lambda t -1} \left( 1-p^{[i-(n-k)]s}\right) \right] \nonumber \\&\quad + \left( 1-p^{-s\frac{\lambda (\lambda +1)}{2}}\right) \sum _{i=1}^{t} \sum _{j = 0}^{r-1} p^{s(r-j)\left( i \frac{\lambda (\lambda +1)}{2}-m\right) } \end{aligned}$$
(13)
$$\begin{aligned}&\le 4 p^{s[\lambda t-(n-k+1)]} + 4 t p^{s\left( t \frac{\lambda (\lambda +1)}{2}-m\right) } \end{aligned}$$
(14)

Proof

The statement follows by applying the union bound to the failure probabilities of the three success conditions, derived in Theorems 2, 3, and 4. \(\square \)
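For concreteness, the bounds (13) and (14) can be evaluated directly; the following small script (function names ours) does so for the parameters later used in the simulations of Sect. 7 and for error ranks \(t=1,\ldots ,7\). It purely evaluates the stated formulas.

```python
# Evaluate bounds (13) and (14) of Theorem 5 for p=2, r=2, s=1, m=21, lambda=2, n=20, k=8.
def bound_13(p, r, s, m, lam, n, k, t):
    prod_term = (1 - p**(-s * lam)) * sum(
        p**(s * (r - j) * (i * lam - m)) for i in range(1, t + 1) for j in range(r))
    synd_prod = 1.0
    for i in range(lam * t):
        synd_prod *= (1 - p**((i - (n - k)) * s))
    lam2 = lam * (lam + 1) // 2
    inter_term = (1 - p**(-s * lam2)) * sum(
        p**(s * (r - j) * (i * lam2 - m)) for i in range(1, t + 1) for j in range(r))
    return prod_term + (1 - synd_prod) + inter_term

def bound_14(p, r, s, m, lam, n, k, t):
    lam2 = lam * (lam + 1) // 2
    return 4 * p**(s * (lam * t - (n - k + 1))) + 4 * t * p**(s * (t * lam2 - m))

for t in range(1, 8):
    print(t, bound_13(2, 2, 1, 21, 2, 20, 8, t), bound_14(2, 2, 1, 21, 2, 20, 8, t))
```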

The simplified bound (14) in Theorem 5 coincides up to a constant with the bound by Gaborit et al. [10] in the case of a finite field (Galois ring with \(r=1\)). If we compare an LRPC code over a finite field of size \(p^{rs}\) with an LRPC code over a Galois ring with parameters p, r, s (i.e., the same cardinality), then we can observe that the bounds have the same exponent, but the base of the exponent is different: It is \(p^{rs}\) for the field and \(p^s\) for the ring case. Hence, the maximal decoding radii \(t_\mathrm {max}\) (i.e., the maximal rank t for which the bound is \(<1\)) are roughly the same, but the exponential decay in \(t_\mathrm {max}-t\) for smaller error rank t is slower in the case of rings due to the smaller base of the exponential expression. This “loss” is expected due to the weaker structure of modules over Galois rings compared to vector spaces over fields.

6 Decoding complexity

We discuss the decoding complexity of the decoding algorithm described in Sect. 4. Over a field, all operations within the decoding algorithm are well-studied and it is clear that the algorithm runs in roughly \(\tilde{O}(\lambda ^2 n^2 m)\) operations over the small field \(\mathbb {F}_q\). Although we believe that an analogous treatment over the rings studied in this paper must be known in the community, we have not found a comprehensive complexity overview of the corresponding operations in the literature. Hence, we start the complexity analysis with an overview of the complexities of ring operations and linear algebra over these rings.

6.1 Cost model and basic ring operations

We express complexities in operations in \({R}\). For some complexity expressions, we use the soft-O notation, i.e., \(f(n) \in \tilde{O}(g(n))\) if there is a constant \(c \in \mathbb {Z}_{\ge 0}\) such that \(f(n) \in O(g(n) \log (g(n))^c)\). We use the following result, which follows straightforwardly from standard computer-algebra methods in the literature.

Lemma 9

(Collection of results in [27]) Addition in \({S}\) costs m additions in \({R}\). Multiplication in \({S}\) can be done in \(O(m \log (m) \log (\log (m)))\) operations in \({R}\).

Proof

We represent elements of \({S}\) as residue classes of polynomials in \({R}[z]/(h(z))\) (e.g., each residue class is represented by its unique representative of degree \(<m\)), where \(h \in {R}[z]\) is a monic polynomial of degree m as explained in the preliminaries.

Addition is done independently on the m coefficients of the polynomial representation, so it only requires m additions in \({R}\). Multiplication consists of multiplying two residue classes in \({R}[z]/(h(z))\), which can be done by multiplying the two representatives of degree \(<m\) and then reducing the product modulo h(z) (i.e., taking the remainder of the division by the monic polynomial h). Both multiplication and division can be implemented in \(O(m \log (m) \log (\log (m)))\) time using Schönhage and Strassen’s polynomial multiplication algorithm (cf. [27, Sect. 8.3]) and a reduction of division to multiplication using a Newton iteration (cf. [27, Sect. 9.1]). Note that both methods work over any commutative ring with 1. \(\square \)
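To illustrate the representation used in this proof, the following minimal sketch implements multiplication in \({S}={R}[z]/(h(z))\) for the special case \(s=1\) (so that \({R}\cong \mathbb {Z}_{p^r}\) and coefficient arithmetic is integer arithmetic modulo \(p^r\)); the function name is ours, and it uses schoolbook multiplication, which the fast algorithms mentioned above would replace:

```python
def poly_mul_mod(a, b, h, q):
    """Multiply a, b in R[z]/(h), where R = Z_q with q = p^r and h is monic of
    degree m. Polynomials are coefficient lists, lowest degree first."""
    m = len(h) - 1
    # schoolbook product (degree < 2m - 1)
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % q
    # reduce modulo the monic polynomial h (leading coefficient 1, so no division in R is needed)
    for d in range(len(prod) - 1, m - 1, -1):
        c = prod[d]
        if c:
            for k in range(m + 1):
                prod[d - m + k] = (prod[d - m + k] - c * h[k]) % q
    return prod[:m] + [0] * (m - len(prod[:m]))

# Example over S from Example 7: p = 2, r = 3 (q = 8), h(z) = z^3 + z + 1.
h = [1, 1, 0, 1]          # 1 + z + z^3
a = [0, 2, 2]             # 2z^2 + 2z
b = [2, 2, 4]             # 4z^2 + 2z + 2
print(poly_mul_mod(a, b, h, 8))   # -> [4, 0, 0], i.e., the product equals 4 in S
```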

6.2 Linear algebra over Galois rings

We recall how fast we can compute the Smith normal form of a matrix over \({R}\) and show that computing the right kernel of a matrix and solving a linear system can be done with similar complexity. Let \(2 \le \omega \le 3\) be the matrix multiplication exponent (e.g., \(\omega \approx 2.37\) using the Coppersmith–Winograd algorithm).

Lemma 10

([25, Proposition 7.16]) Let \(\varvec{A}\in {R}^{a \times b}\). Then, the Smith normal form \(\varvec{D}\) of \(\varvec{A}\), as well as the corresponding transformation matrices \(\varvec{S}\) and \(\varvec{T}\), can be computed in

$$\begin{aligned} O(a b \min \{a,b\}^{\omega -2} \log (a+b)) \end{aligned}$$

operations in \({R}\).

Lemma 11

Let \(\varvec{A}\in {R}^{a \times b}\). An \(\mathfrak {m}\)-shaped basis of the right kernel of \(\varvec{A}\) can be computed in \(O(a b \min \{a,b\}^{\omega -2} \log (a+b))\) operations in \({R}\).

Proof

We compute the Smith normal form \(\varvec{D}= \varvec{S}\varvec{A}\varvec{T}\) and the transformation matrices \(\varvec{S}\) and \(\varvec{T}\) of \(\varvec{A}\). To compute the right kernel, we need to solve the homogeneous linear system \(\varvec{A}\varvec{x}= \varvec{0}\) for \(\varvec{x}\). Using the Smith normal form, we can rewrite it into

$$\begin{aligned} \varvec{D}\varvec{T}^{-1} \varvec{x}= \varvec{0}. \end{aligned}$$

Denote \(\varvec{y}:= \varvec{T}^{-1} \varvec{x}\) and first solve \(\varvec{D}\varvec{y}= \varvec{0}\). W.l.o.g., let \(\varvec{D}\) be of the form

$$\begin{aligned} \begin{bmatrix} \varvec{I}_{n_0} &{} &{} &{} &{} \\ &{} g_\mathfrak {m}\varvec{I}_{n_1} &{} &{} &{} \\ &{} &{} \ddots &{} &{} \\ &{} &{} &{} g_{\mathfrak {m}}^{r-1}\varvec{I}_{n_{r-1}} &{} \\ &{} &{} &{} &{} \varvec{0} \end{bmatrix} \end{aligned}$$

where the \(n_i\) are the coefficients of the rank profile \(\phi (x)=\sum _{i=0}^{r-1}n_ix^i\in \mathbb {N}[x]/(x^r)\) of \(\varvec{A}\)’s row space. Then, the rows of the following matrix are an \(\mathfrak {m}\)-shaped basis of the right kernel of \(\varvec{D}\) (we denote by \(\eta := n_0\) the free rank of \(\varvec{A}\)’s row space and by \(\mu := \sum _{i=0}^{r-1}n_i\) the rank of \(\varvec{A}\)’s row space):

$$\begin{aligned} \varvec{K}:= \begin{bmatrix} \varvec{0}_{(\mu -\eta ) \times \eta } &{} \varvec{B}&{} \varvec{0}_{(\mu -\eta ) \times (b-\mu )} \\ \varvec{0}_{(b-\mu ) \times \eta } &{} \varvec{0}_{(b-\mu ) \times (\mu -\eta )} &{} \varvec{I}_{(b-\mu ) \times (b-\mu )} \\ \end{bmatrix} \in {R}^{(b-\eta ) \times b}, \end{aligned}$$

where

$$\begin{aligned} \varvec{B}:= \begin{bmatrix} g_\mathfrak {m}^{r-1} \varvec{I}_{n_1} &{} &{} &{} \\ &{} g_\mathfrak {m}^{r-2} \varvec{I}_{n_2} &{} &{} \\ &{}&{} \ddots &{} \\ &{} &{} &{} g_{\mathfrak {m}}^{1}\varvec{I}_{n_{r-1}} \\ \end{bmatrix}. \end{aligned}$$

Hence, the rows of \(\varvec{K}\varvec{T}^\top \) form an \(\mathfrak {m}\)-shaped basis of the right kernel of \(\varvec{A}\). Note that this matrix multiplication can be implemented with complexity \(O(b^2)\) since \(\varvec{K}\) has at most one nonzero entry per row and column. \(\square \)
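The construction of \(\varvec{K}\) in this proof is purely combinatorial; a minimal sketch (the helper name kernel_basis_of_D is ours, and for concreteness we assume \(s=1\) with \(g_\mathfrak {m}=p\), i.e., \({R}\cong \mathbb {Z}_{p^r}\)) could look as follows:

```python
def kernel_basis_of_D(n_profile, b, p, r):
    """Assemble the matrix K from the proof of Lemma 11 for R = Z_{p^r} (s = 1).

    n_profile = [n_0, ..., n_{r-1}] are the rank-profile coefficients of A's row
    space, b is the number of columns of A. The returned (b - n_0) x b matrix is
    a kernel basis of D; a kernel basis of A is obtained by multiplying with T^T.
    """
    eta, mu = n_profile[0], sum(n_profile)
    rows = []
    col = eta
    # block B: for level i = 1, ..., r-1, place p^{r-i} * I_{n_i}; it annihilates
    # the diagonal entries p^i of D, since p^{r-i} * p^i = p^r = 0 in Z_{p^r}
    for i in range(1, r):
        for j in range(n_profile[i]):
            row = [0] * b
            row[col + j] = p ** (r - i)
            rows.append(row)
        col += n_profile[i]
    # identity block for the b - mu columns in which D is zero
    for j in range(b - mu):
        row = [0] * b
        row[mu + j] = 1
        rows.append(row)
    return rows

# Example: rank profile 1 + 2x (as in Example 7), r = 3, p = 2, b = 5 columns
for row in kernel_basis_of_D([1, 2, 0], 5, 2, 3):
    print(row)
```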

Lemma 12

Let \(\varvec{A}\in {R}^{a \times b}\) and \(\varvec{b}\in {R}^{a}\). A solution of the linear system \(\varvec{A}\varvec{x}= \varvec{b}\) (or, in case no solution exists, the information that it does not exist) can be obtained in \(O(a b \min \{a,b\}^{\omega -2} \log (a+b))\) operations in \({R}\).

Proof

We follow the same strategy and the notation as in Lemma 11. Solve

$$\begin{aligned} \varvec{D}\underbrace{\varvec{T}^{-1} \varvec{x}}_{=: \, \varvec{y}} = \varvec{S}\varvec{b}=: \varvec{b}'. \end{aligned}$$

for one \(\varvec{y}\). Write the j-th diagonal entry of \(\varvec{D}\) as \(g_\mathfrak {m}^{i_j}\) for \(j=1,\ldots ,r'\), where \(r'\) is the number of nonzero diagonal entries of \(\varvec{D}\). The system has a solution if and only if \(b_j' \in \mathfrak {m}^{i_j}\) for \(j=1,\ldots ,r'\), and \(b_j' = 0\) for all \(j>r'\). In case it has a solution, a solution \(\varvec{y}\) is obtained coordinate-wise by choosing \(y_j\) with \(g_\mathfrak {m}^{i_j} y_j = b_j'\) for \(j \le r'\) and setting the remaining coordinates to zero. Then we only need to compute \(\varvec{x}= \varvec{T}\varvec{y}\), which is a solution of \(\varvec{A}\varvec{x}= \varvec{b}\). The heaviest step is to compute the Smith normal form, which proves the complexity statement. \(\square \)
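A minimal sketch of the diagonal solving step (again for the special case \(s=1\), \({R}\cong \mathbb {Z}_{p^r}\), \(g_\mathfrak {m}=p\); the helper name solve_diagonal is ours) could be:

```python
def solve_diagonal(exponents, bprime, num_cols, p, r):
    """Solve D y = b' over Z_{p^r}, where the nonzero diagonal entries of D are
    p^{i_1}, ..., p^{i_{r'}} (given via their exponents, residues in [0, p^r)).
    Returns one solution y of length num_cols, or None if no solution exists."""
    q = p ** r
    y = [0] * num_cols
    for j, i_j in enumerate(exponents):
        if bprime[j] % (p ** i_j) != 0:
            return None                    # b'_j is not in the ideal (p^{i_j})
        y[j] = (bprime[j] // (p ** i_j)) % q
    for j in range(len(exponents), len(bprime)):
        if bprime[j] % q != 0:
            return None                    # zero row of D, but nonzero right-hand side
    return y

# Example over Z_8 (p = 2, r = 3): D = diag(1, 2, 4) padded with a zero row
print(solve_diagonal([0, 1, 2], [5, 6, 4, 0], 3, 2, 3))   # -> [5, 3, 1]
```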

6.3 Complexity of the LRPC decoder over Galois rings

Theorem 6

Suppose that the inverse elements \(f_1^{-1},\ldots ,f_\lambda ^{-1}\) are precomputed. Then, Algorithm 1 has complexity \(\tilde{O}(\lambda ^2 n^2 m)\) operations in \({R}\).

Proof

The heaviest steps of Algorithm 1 (see Sect. 4) are as follows:

Line 1 computes the syndrome \(\varvec{s}\) from the received word. This is a vector-matrix multiplication in \({S}\), which costs \(O(n(n-k)) \subseteq O(n^2)\) operations in \({S}\), i.e., \(\tilde{O}(n^2m)\) operations in \({R}\).

Line 4 is called \(\lambda \) times and computes for each \(f_i\) the set \({\mathcal {S}}_i = f_i^{-1} {\mathcal {S}}\) (recall that the inverses \(f_i^{-1}\) are precomputed). We obtain a generating set of \({\mathcal {S}}_i\) by multiplying \(f_i^{-1}\) with all syndrome coefficients \(s_1,\ldots ,s_{n-k}\). This costs \(O(\lambda (n-k))\) operations in \({S}\) in total, i.e., \(\tilde{O}(\lambda n m)\) operations in \({R}\). If we want a minimal generating set, we can compute the Smith normal form for each \({\mathcal {S}}_i\), which costs \(\tilde{O}(\lambda n^{\omega -1}m)\) operations in \({R}\) according to Lemma 10.

Line 5 computes the intersection \({\mathcal {E}}' \leftarrow \bigcap _{i=1}^{\lambda } {\mathcal {S}}_i\) of the modules \({\mathcal {S}}_i\). This can be computed via the kernel computation algorithm as follows: Let \({\mathcal {A}}\) and \({\mathcal {B}}\) be two modules. Then, we have \({\mathcal {A}}\cap {\mathcal {B}}= \mathcal {K} \left( \mathcal {K} ({\mathcal {A}}) \cup \mathcal {K} ({\mathcal {B}}) \right) \). Hence, we can compute the intersection \({\mathcal {A}}\cap {\mathcal {B}}\) by writing generating sets of the modules as the rows of two matrices \(\varvec{A}\) and \(\varvec{B}\), respectively. Then, we compute matrices \(\varvec{A}'\) and \(\varvec{B}'\), whose rows are generating sets of the right kernels of \(\varvec{A}\) and \(\varvec{B}\), respectively. The rows of the matrix \(\varvec{C}:= \begin{bmatrix} \varvec{A}' \\ \varvec{B}' \end{bmatrix}\) are then a generating set of \(\mathcal {K} ({\mathcal {A}}) \cup \mathcal {K} ({\mathcal {B}})\), and we obtain \({\mathcal {A}}\cap {\mathcal {B}}\) by computing again the right kernel of \(\varvec{C}\). By applying this algorithm iteratively to the \({\mathcal {S}}_i\) (using the kernel computation algorithm described in Lemma 11), we obtain the intersection \({\mathcal {E}}'\) in \(\tilde{O}(\lambda n^{\omega -1}m)\) operations.
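The double-kernel trick of this step can be phrased compactly; the following sketch assumes a hypothetical helper right_kernel_basis that implements the kernel computation of Lemma 11 for matrices over \({R}\):

```python
import numpy as np

def module_intersection(A, B, right_kernel_basis):
    """Rows of A and B generate two submodules of R^m; returns a matrix whose
    rows generate their intersection, using the identity A ∩ B = K(K(A) ∪ K(B)).
    The helper right_kernel_basis(M) is assumed to return a matrix whose rows
    generate the right kernel of M over R (e.g., as in Lemma 11)."""
    C = np.vstack([right_kernel_basis(A), right_kernel_basis(B)])
    return right_kernel_basis(C)
```

Iterating this pairwise over the \({\mathcal {S}}_i\) (\(\lambda -1\) intersections) yields \({\mathcal {E}}'\).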

Line 6 recovers an error vector \(\varvec{e}\) from the support \({\mathcal {E}}'\) and the syndrome \(\varvec{s}\). As shown in the proof of Lemma 2, this can be done by solving t linear systems over \({R}\), each with n unknowns and \((n-k)\lambda \) equations, all with respect to the same matrix \(\varvec{H}_{\mathrm {ext}}\). Hence, the Smith normal form of \(\varvec{H}_{\mathrm {ext}}\) needs to be computed only once, which requires \(\tilde{O}(n [(n-k)\lambda ]^{\omega -1})\) operations (see also the second sketch after this proof). The remaining steps for solving the systems (see Lemma 12 to compute one solution, if it exists, and Lemma 11 to compute an affine basis) consist mainly of matrix-vector operations, which require in total \(\tilde{O}(t \lambda ^2(n-k)^2)\) operations in \({R}\), where \(t \le m\) is the rank of \({\mathcal {E}}'\). Note that during the algorithm, it is easy to detect whether the systems have no solution, a unique solution, or more than one solution. \(\square \)
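To make the two computational ingredients of this proof concrete, here are two minimal Python sketches, again only for the toy case \(s=1\) (so \({R}\cong \mathbb {Z}_{p^r}\)), reusing the helpers from the sketches after Lemmas 11 and 12 under hypothetical module names; the real decoder works with modules over \({S}\), i.e., with \(m\)-dimensional coefficient vectors over \({R}\), which we do not reproduce here. First, the double-kernel computation of the intersection in Line 5:

```python
from snf_zpr import right_kernel         # helper from the sketch after Lemma 11
                                         # (hypothetical module name)

def intersect(gens_A, gens_B, n, p, r):
    """Generators of the intersection of the row spans of gens_A and gens_B
    (nonempty generating sets of submodules of Z_{p^r}^n), computed via the
    identity A ∩ B = K( K(A) ∪ K(B) ) from the proof."""
    dual = right_kernel(gens_A, p, r) + right_kernel(gens_B, p, r)
    if not dual:                          # both modules are all of Z_{p^r}^n
        return [[int(i == j) for j in range(n)] for i in range(n)]
    return right_kernel(dual, p, r)

# Example over Z_4: the intersection of <(1,0,0), (0,2,0)> and <(2,0,0), (0,1,0)>
# is the module 2*Z_4 x 2*Z_4 x {0}.
gens = intersect([[1, 0, 0], [0, 2, 0]], [[2, 0, 0], [0, 1, 0]], 3, 2, 2)
```

Second, the reuse of a single Smith normal form for all t systems in Line 6 corresponds to passing the precomputed factorization to the solve sketch, here with a toy matrix standing in for \(\varvec{H}_{\mathrm {ext}}\) and toy right-hand sides standing in for the syndrome columns:

```python
from snf_zpr import smith_form
from snf_solver import solve             # hypothetical module names from the
                                         # sketches after Lemmas 11 and 12

A = [[2, 1, 3], [0, 2, 2]]               # stand-in for H_ext over Z_4
snf = smith_form(A, 2, 2)                # computed only once
# per right-hand side, only the cheap back-substitution is repeated;
# None marks an unsolvable system
xs = [solve(A, b, 2, 2, snf=snf) for b in ([2, 0], [0, 2], [1, 1])]
```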

Remark 5

The assumption that \(f_1^{-1},\ldots ,f_\lambda ^{-1}\) are precomputed makes sense since in many applications, the code is chosen once and then several received words are decoded for the same \(f_1,\ldots ,f_\lambda \). Precomputation of all \(f_1^{-1},\ldots ,f_\lambda ^{-1}\) costs at most \(\tilde{O}(\lambda m^\omega )\) since for \(a \in {S}\), the relation \(a^{-1} a \equiv 1 \mod h\) (with a and \(a^{-1}\) given by their unique representatives of degree \(<m\) in \({R}[z]\)) gives a linear system of equations of size \(m \times m\) over \({R}\) with a unique solution \(a^{-1}\). This complexity can only exceed the cost bound in Theorem 6 if \(m \gg n\).

In fact, we conjecture, but cannot rigorously prove, that the inverse of a unit in \({S}\) can be computed in \(\tilde{O}(m)\) operations in \({R}\) using a fast implementation of the extended Euclidean algorithm (see, e.g., [27]). If this is true, the precomputation cost is smaller than the cost bound in Theorem 6.
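To make the linear-system approach from the first paragraph of this remark explicit, the following Python sketch (again only for \(s=1\), so \({S}\cong \mathbb {Z}_{p^r}[z]/(h)\), reusing the solve helper from the sketch after Lemma 12 under a hypothetical module name, and with no attempt at optimization) builds the \(m \times m\) multiplication matrix of a over \({R}\) and solves the resulting system for the coefficient vector of \(a^{-1}\).

```python
from snf_solver import solve             # hypothetical module name
                                         # (the sketch after Lemma 12)

def poly_mulmod(f, g, h, q):
    """(f*g) mod h over Z_q; polynomials are coefficient lists (index i is
    the coefficient of z^i), and h is monic of degree m."""
    m = len(h) - 1
    prod = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            prod[i + j] = (prod[i + j] + fi * gj) % q
    for d in range(len(prod) - 1, m - 1, -1):    # reduce modulo the monic h
        c = prod[d]
        if c:
            for i in range(m + 1):
                prod[d - m + i] = (prod[d - m + i] - c * h[i]) % q
    return [prod[i] if i < len(prod) else 0 for i in range(m)]

def invert(a, h, p, r):
    """Inverse of the unit a in S = Z_{p^r}[z]/(h), found by solving the
    m x m system M_a * x = e_1 over R = Z_{p^r}, as described in the remark."""
    q, m = p ** r, len(h) - 1
    cols = [poly_mulmod(a, [0] * j + [1], h, q) for j in range(m)]  # a * z^j mod h
    M = [[cols[j][i] for j in range(m)] for i in range(m)]
    return solve(M, [1] + [0] * (m - 1), p, r)

# Toy example in GR(4, 2) = Z_4[z]/(z^2 + z + 1): the inverse of z is 3 + 3z.
h = [1, 1, 1]
a_inv = invert([0, 1], h, 2, 2)
assert poly_mulmod([0, 1], a_inv, h, 4) == [1, 0]
```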

The currently fastest decoder for Gabidulin codes over finite rings, the Welch–Berlekamp-like decoder in [14], has complexity \(O(n^\omega )\) operations over \({S}\) since its main step is to solve a linear system of equations. Over \({R}\), this complexity bound is \(\tilde{O}(n^\omega m)\), i.e., it is larger than the complexity bound for our LRPC decoder for constant \(\lambda \) and the same parameters n and m.

7 Simulation results

We performed simulations of LRPC codes with \(\lambda =2\), \(k=8\) and \(n=20\) (note that we need \(k \le \tfrac{\lambda -1}{\lambda }n\) by the unique-decoding property) over the ring \({S}\) with \(p=r=2\), \(s=1\) and \(m=21\). For each simulation, we generated one parity-check matrix (fulfilling the maximal-row-span and unity properties) and conducted a Monte Carlo simulation in which we collected at least 1000 decoding errors and at least 50 violations of each success condition. All simulations gave very similar results and confirmed our analysis. We present one of the simulation results in Fig. 1 for errors of rank weight \(t=1,\ldots ,7\) and three different rank profiles.

We indicate by markers the estimated probabilities of violating the product condition (S: Prod), the syndrome condition (S: Synd), and the intersection condition (S: Inter), as well as the decoding failure rate (S: Dec). Black markers denote the results of the simulations with errors of rank profile \(\phi _1(x) = t\), blue markers show the results with errors of rank profile \(\phi _2(x) = tx\), and orange markers indicate the results with rank profile \(\phi _3(x) \in \{1,1+x,2+x,2+2x,3+2x,3+3x,4+3x \}\). Further, we show the derived bounds on the probabilities of not fulfilling the product condition (B: Prod) given in Theorem 2, the syndrome condition (B: Synd) derived in Theorem 3, and the intersection condition (B: Inter) provided in Theorem 4, as well as the union bound (B: Dec) stated in Theorem 5. Since the derived bounds depend only on the rank weight t but not on the rank profile, we show each bound only once.

One can observe that the bound on the probability of not fulfilling the syndrome condition is very close to the true probability, while the bounds on the probabilities of violating the product and intersection conditions are loose. Gaborit et al. made the same observation in the case of finite fields. In addition, it seems that only the rank weight, but not the rank profile, has an impact on the probabilities of violating the success conditions.

Fig. 1

Simulation results for \(\lambda =2\), \(k=8\) and \(n=20\) over \({S}\) with \(p=r=2\), \(s=1\) and \(m=21\). The markers indicate the estimated probabilities of not fulfilling the product condition (S: Prod), the syndrome condition (S: Synd), the intersection condition (S: Inter) and the decoding failure rate (S: Dec), where the black, blue and orange markers refer to errors of rank profile \(\phi _1(x) =t\), \(\phi _2(x) =tx\) and \(\phi _3(x)\in \{ 1,1+x,2+x,2+2x,3+2x,3+3x,4+3x \}\), respectively. The derived bounds on these probabilities are shown as lines.

We also found that, in all tested cases, the base-ring property of \(\mathcal {F}\) is not necessary for the failure probability bound on the intersection condition (Theorem 4) to hold. It is an interesting open question whether the bound can be proved without this assumption, both for finite fields and for rings.

8 Conclusion

We have adapted low-rank parity-check codes from finite fields to Galois rings and shown that Gaborit et al.'s decoding algorithm also works for these codes. We also presented a failure probability bound for the decoder, whose derivation is significantly more involved than its finite-field analog due to the weaker structure of modules over finite rings. The bound shows that the codes have the same maximal decoding radius as their finite-field counterparts, but the exponential decay of the failure bound has \(p^s\) as its base instead of the cardinality of the base ring \(|{R}|=p^{rs}\) (note that \({R}\) is a finite field if and only if \(r=1\)). This means that there is a “loss” in failure probability when going from finite fields to finite rings, which is to be expected due to the zero divisors in the ring.

The results show that LRPC codes work over finite rings and can thus be considered, as an alternative to Gabidulin codes over finite rings, for potential applications of rank-metric codes such as network coding and space-time codes; recall from the introduction that network and space-time coding over rings may have advantages compared to the case of fields. The results also open up the possibility of considering the codes for cryptographic applications, which were the main motivation for LRPC codes over fields.

Open problems include a generalization of the codes to more general rings (such as principal ideal rings), an analysis of the codes in potential applications, and an adaptation of the improved decoder for LRPC codes over finite fields in [1] to finite rings. To be useful for network coding (for both fields and rings), the decoder must be extended to handle row and column erasures in the rank metric (cf. [14, 23]).