Abstract
We show how to generalize Gama and Nguyen’s slide reduction algorithm [STOC ’08] for solving the approximate Shortest Vector Problem over lattices (SVP) to allow for arbitrary block sizes, rather than just block sizes that divide the rank n of the lattice. This leads to significantly better running times for most approximation factors. We accomplish this by combining slide reduction with the DBKZ algorithm of Micciancio and Walter [Eurocrypt ’16].
We also show a different algorithm that works when the block size is quite large—at least half the total rank. This yields the first non-trivial algorithm for sublinear approximation factors.
Together with some additional optimizations, these results yield significantly faster provably correct algorithms for \(\delta \)-approximate SVP for all approximation factors \(n^{1/2+\varepsilon } \le \delta \le n^{O(1)}\), which is the regime most relevant for cryptography. For the specific values of \(\delta = n^{1-\varepsilon }\) and \(\delta = n^{2-\varepsilon }\), we improve the exponent in the running time by a factor of 2 and a factor of 1.5 respectively.
The first author was partially funded by the Singapore Ministry of Education and the National Research Foundation under grant R-710-000-012-135, and supported by the grant MOE2019-T2-1-145 “Foundations of quantum-safe cryptography". The second author was funded by EPSRC grant EP/S020330/1. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 885394). Parts of this work were done while the fourth author was visiting the Massachusetts Institute of Technology, the Centre for Quantum Technologies at the National University of Singapore, and the Simons Institute in Berkeley.
1 Introduction
A lattice \(\mathcal {L}\subset \mathbb {R}^m\) is the set of integer linear combinations
of linearly independent basis vectors \(\mathbf {B}= (\textit{\textbf{b}}_1,\ldots , \textit{\textbf{b}}_n) \in \mathbb {R}^{m \times n}\). We call n the rank of the lattice.
The Shortest Vector Problem (SVP) is the computational search problem in which the input is (a basis for) a lattice \(\mathcal {L}\subseteq \mathbb {Z}^m\), and the goal is to output a non-zero lattice vector \(\textit{\textbf{y}} \in \mathcal {L}\) with minimal length, \(\Vert \textit{\textbf{y}}\Vert = \lambda _1(\mathcal {L}) := \min _{\textit{\textbf{x}}\in \mathcal {L}_{\ne \textit{\textbf{0}}}} \Vert \textit{\textbf{x}}\Vert \). For \(\delta \ge 1\), the \(\delta \)-approximate variant of SVP (\(\delta \)-SVP) is the relaxation of this problem in which any non-zero lattice vector \(\textit{\textbf{y}}\in \mathcal {L}_{\ne \textit{\textbf{0}}}\) with \(\Vert \textit{\textbf{y}}\Vert \le \delta \cdot \lambda _1(\mathcal {L})\) is a valid solution.
A closely related problem is \(\delta \)-Hermite SVP (\(\delta \)-HSVP, sometimes also called Minkowski SVP), which asks us to find a non-zero lattice vector \(\textit{\textbf{y}}\in \mathcal {L}_{\ne \textit{\textbf{0}}}\) with \(\Vert \textit{\textbf{y}}\Vert \le \delta \cdot \mathrm {vol}(\mathcal {L})^{1/n}\), where \(\mathrm {vol}(\mathcal {L}) := \det (\mathbf {B}^T \mathbf {B})^{1/2}\) is the covolume of the lattice. Hermite’s constant \(\gamma _n\) is (the square of) the minimal possible approximation factor that can be achieved in the worst case. I.e.,
$$\begin{aligned} \gamma _n := \sup _{\mathcal {L}} \frac{\lambda _1(\mathcal {L})^2}{\mathrm {vol}(\mathcal {L})^{2/n}} \; , \end{aligned}$$
where the supremum is over lattices \(\mathcal {L}\subset \mathbb {R}^n\) with full rank n. Hermite’s constant is only known exactly for \(1 \le n \le 8\) and \(n = 24\), but it is known to be asymptotically linear in n, i.e., \(\gamma _n = \varTheta (n)\). HSVP and Hermite’s constant play a large role in algorithms for \(\delta \)-SVP.
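For concreteness, the covolume in the definition of \(\delta \)-HSVP can be computed directly from any basis. The following is a small illustrative sketch (ours, assuming NumPy; not part of the paper), using a toy rank-2 lattice in \(\mathbb {R}^3\):

```python
import numpy as np

def covolume(B: np.ndarray) -> float:
    """vol(L) := det(B^T B)^(1/2) for a basis B with linearly independent columns."""
    return float(np.sqrt(np.linalg.det(B.T @ B)))

# Toy basis of the rank-2 lattice Z x 2Z embedded in R^3.
B = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])

vol = covolume(B)  # = 2.0
# Here lambda_1(L) = 1 (attained by the first basis vector), so that vector
# solves delta-HSVP whenever 1 <= delta * vol^(1/2), i.e., delta >= 1/sqrt(2).
```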
Starting with the celebrated work of Lenstra, Lenstra, and Lovász in 1982 [LLL82], algorithms for solving \(\delta \)-(H)SVP for a wide range of parameters \(\delta \) have found innumerable applications, including factoring polynomials over the rationals [LLL82], integer programming [Len83, Kan83, DPV11], cryptanalysis [Sha84, Odl90, JS98, NS01], etc. More recently, many cryptographic primitives have been constructed whose security is based on the (worst-case) hardness of \(\delta \)-SVP or closely related lattice problems [Ajt96, Reg09, GPV08, Pei09, Pei16]. Such lattice-based cryptographic constructions are likely to be used on massive scales (e.g., as part of the TLS protocol) in the not-too-distant future [NIS18], and in practice, the security of these constructions depends on the fastest algorithms for \(\delta \)-(H)SVP, typically for \(\delta = \mathrm {poly}(n)\).
Work on \(\delta \)-(H)SVP has followed two distinct tracks. There has been a long line of work showing progressively faster algorithms for exact SVP (i.e., \(\delta = 1\)) [Kan83, AKS01, NV08, PS09, MV13]. However, even the fastest such algorithm (with proven correctness) runs in time \(2^{n + o(n)}\) [ADRS15, AS18]. So, these algorithms are only useful for rather small n.
This paper is part of a separate line of work on basis reduction algorithms [LLL82, Sch87, SE94, GHKN06, GN08a, HPS11, MW16]. (See [NV10] and [MW16] for a much more complete list of works on basis reduction.) At a high level, these are reductions from \(\delta \)-(H)SVP on lattices with rank n to exact SVP on lattices with rank \(k \le n\). More specifically, these algorithms divide a basis \(\mathbf {B}\) into projected blocks \(\mathbf {B}_{[i,i+k-1]}\) with block size k, where
$$\begin{aligned} \mathbf {B}_{[i,i+k-1]} := (\pi _i(\textit{\textbf{b}}_i), \pi _i(\textit{\textbf{b}}_{i+1}), \ldots , \pi _i(\textit{\textbf{b}}_{i+k-1})) \end{aligned}$$
and \(\pi _i\) is the orthogonal projection onto the subspace orthogonal to \(\textit{\textbf{b}}_1,\ldots , \textit{\textbf{b}}_{i-1}\). Basis reduction algorithms use their SVP oracle to find short vectors in these (low-rank) blocks and incorporate these short vectors into the lattice basis \(\mathbf {B}\). By doing this repeatedly (at most \(\mathrm {poly}(n, \log \Vert \mathbf {B}\Vert )\) times) with a cleverly chosen sequence of blocks, such algorithms progressively improve the “quality” of the basis \(\mathbf {B}\) until \(\textit{\textbf{b}}_1\) is a solution to \(\delta \)-(H)SVP for some \(\delta \ge 1\). The goal, of course, is to take the block size k to be small enough that we can actually run an exact algorithm on lattices with rank k in reasonable time while still achieving a relatively good approximation factor \(\delta \).
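To make the projected-block notation concrete, the following sketch (ours, assuming NumPy; not the paper's code) computes \(\mathbf {B}_{[i,j]}\) directly from the definition of the projections \(\pi _i\):

```python
import numpy as np

def gso(B: np.ndarray) -> np.ndarray:
    """Gram-Schmidt vectors b_i^* = pi_i(b_i) (as columns, not normalized)."""
    n = B.shape[1]
    Bstar = B.astype(float).copy()
    for i in range(n):
        for j in range(i):
            mu = B[:, i] @ Bstar[:, j] / (Bstar[:, j] @ Bstar[:, j])
            Bstar[:, i] -= mu * Bstar[:, j]
    return Bstar

def projected_block(B: np.ndarray, i: int, j: int) -> np.ndarray:
    """B_[i,j] = (pi_i(b_i), ..., pi_i(b_j)), with 1-based indices as in the text.
    pi_i projects orthogonally to span(b_1, ..., b_{i-1})."""
    Bstar = gso(B)
    block = B[:, i - 1:j].astype(float).copy()
    for c in range(i - 1):  # subtract the component along each earlier b_c^*
        q = Bstar[:, c]
        block -= np.outer(q, (q @ block) / (q @ q))
    return block

B = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])  # columns b_1, b_2, b_3
# The first column of B_[2,3] is exactly the Gram-Schmidt vector b_2^*.
assert np.allclose(projected_block(B, 2, 3)[:, 0], gso(B)[:, 1])
```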
For HSVP, the DBKZ algorithm due to Micciancio and Walter yields the best proven approximation factor for all ranks n and block sizes k [MW16], which was previously obtained by [GN08a] only when n is divisible by k. Specifically, the approximation factor corresponds to Mordell’s inequality:
$$\begin{aligned} \delta _{\mathsf {MW},H} := \gamma _k^{\frac{n-1}{2(k-1)}} \; . \end{aligned}$$
(1)
(Recall that \(\gamma _k = \varTheta (k)\) is Hermite’s constant. Here and throughout the introduction, we have left out low-order factors that can be made arbitrarily close to one.) Using a result due to Lovász [Lov86], this can be converted into an algorithm for \(\delta _{\mathsf {MW},H}^2\)-SVP. However, the slide reduction algorithm of Gama and Nguyen [GN08a] achieves a better approximation factor for SVP. It yields
$$\begin{aligned} \delta _{\mathsf {GN},H} := \gamma _k^{\frac{\lceil {n}\rceil _k - 1}{2(k-1)}} \qquad \text {and} \qquad \delta _{\mathsf {GN},S} := \gamma _k^{\frac{\lceil {n}\rceil _k - k}{k-1}} \end{aligned}$$
(2)
for HSVP and SVP respectively, where we write \(\lceil {n}\rceil _k := k \cdot \lceil {n/k}\rceil \) for n rounded up to the nearest multiple of k. (We have included the result for HSVP in Eq. (2) for completeness, though it is clearly no better than Eq. (1).)
The discontinuous approximation factor in Eq. (2) is the result of an unfortunate limitation of slide reduction: it only works when the block size k divides the rank n. If n is not divisible by k, then we must artificially pad our basis so that it has rank \(\lceil {n}\rceil _k\), which results in the rather odd expressions in Eq. (2). Of course, for \(n \gg k\), this rounding has little effect on the approximation factor. But, for cryptographic applications, we are interested in small polynomial approximation factors \(\delta \approx n^c\) for relatively small constants c, i.e., in the case when \(k = \varTheta (n)\). For such values of k and n, this rounding operation can cost us a constant factor in the exponent of the approximation factor, essentially changing \(n^c\) to \(n^{\lceil {c}\rceil }\). Such constants in the exponent have a large effect on the theoretical security of lattice-based cryptography.Footnote 1
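To see the cost of this rounding concretely, the following sketch (ours; it assumes the slide-reduction SVP exponent \(\gamma _k^{(\lceil {n}\rceil _k - k)/(k-1)}\) from Eq. (2)) compares the exponent of \(\gamma _k\) with and without the divisibility restriction:

```python
import math

def round_up(n: int, k: int) -> int:
    """⌈n⌉_k := k * ceil(n/k), i.e., n rounded up to the nearest multiple of k."""
    return k * math.ceil(n / k)

def gn_exponent(n: int, k: int) -> float:
    """Exponent of gamma_k in Gama-Nguyen slide reduction (rank padded to ⌈n⌉_k)."""
    return (round_up(n, k) - k) / (k - 1)

def ideal_exponent(n: int, k: int) -> float:
    """Exponent of gamma_k without the divisibility restriction."""
    return (n - k) / (k - 1)

# With k = Theta(n) and k not dividing n, padding can nearly double the
# exponent: here gamma_k^(1/2) (roughly n^(1/2)) degrades to roughly gamma_k.
n, k = 1000, 667
assert round_up(n, k) == 2 * k
assert abs(ideal_exponent(n, k) - 0.5) < 1e-12   # exponent 0.5 without rounding
assert abs(gn_exponent(n, k) - 667 / 666) < 1e-12  # exponent ~1.0 with rounding
```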
1.1 Our Results
Our first main contribution is a generalization of Gama and Nguyen’s slide reduction [GN08a] without the limitation that the rank n must be a multiple of the block size k. Indeed, we achieve exactly the approximation factor shown in Eq. (2) without any rounding, as we show below.
As a very small additional contribution, we allow for the possibility that the underlying SVP algorithm for lattices with rank k only solves \(\delta \)-approximate SVP for some \(\delta > 1\). This technique was already part of the folklore and is used in practice, and the proof requires no new ideas. Nevertheless, we believe that this work is the first to formally show that a \(\delta \)-SVP algorithm suffices and to compute the exact dependence on \(\delta \). (This minor change proves quite useful when we instantiate our \(\delta \)-SVP subroutine with the \(2^{0.802k}\)-time \(\delta \)-SVP algorithm for some large constant \(\delta \gg 1\) due to Liu, Wang, Xu, and Zheng [LWXZ11, WLW15]. See Table 1 and Figure 1.)
Theorem 1
(Informal, slide reduction for \(n \ge 2k\)). For any approximation factor \(\delta \ge 1\) and block size \(k := k(n) \ge 2\), there is an efficient reduction from \(\delta _H\)-HSVP and \(\delta _S\)-SVP on lattices with rank \(n \ge 2k\) to \(\delta \)-SVP on lattices with rank k, where
$$\begin{aligned} \delta _H := (\delta ^2 \gamma _k)^{\frac{n-1}{2(k-1)}} \qquad \text {and} \qquad \delta _S := \delta \cdot (\delta ^2 \gamma _k)^{\frac{n-k}{k-1}} \; . \end{aligned}$$
Notice in particular that this matches Eq. (2) in the case when \(\delta = 1\) and k divides n. (This is not surprising, since our algorithm is essentially identical to the original algorithm from [GN08a] in this case.) Theorem 1 also matches the approximation factor for HSVP achieved by [MW16], as shown in Eq. (1), so that the best (proven) approximation factor for both problems is now achieved by a single algorithm: in other words, we get the best of both algorithms [GN08a] and [MW16].
However, Theorem 1 only applies for \(n \ge 2k\). Our second main contribution is an algorithm that works for \(k \le n \le 2k\). To our knowledge, this is the first algorithm that provably achieves sublinear approximation factors for SVP and is asymptotically faster than, say, the fastest algorithm for O(1)-SVP. (We overcame a small barrier here. See the discussion in Sect. 3.)
Theorem 2
(Informal, slide reduction for \(n \le 2k\)). For any approximation factor \(\delta \ge 1\) and block size \(k \in [n/2,n]\), there is an efficient reduction from \(\delta _S\)-SVP on lattices with rank n to \(\delta \)-SVP on lattices with rank k, where
and \(q := n-k \le k\).
Together, these algorithms yield the asymptotically fastest proven running times for \(\delta \)-SVP for all approximation factors \(n^{1/2 + \varepsilon } \le \delta \le n^{O(1)}\)—with a particularly large improvement when \(\delta = n^c\) for \(1/2< c < 1\) or for any c slightly smaller than an integer. Table 1 and Fig. 1 summarize the current state of the art. For example, one can solve \(O(n^{1.99})\)-SVP in \(2^{0.269n+o(n)}\)-time and \(O(n^{0.99})\)-SVP in \(2^{0.405n+o(n)}\)-time, instead of the previous best running times of \(2^{0.401n+o(n)}\) and \(2^{0.802n+o(n)}\), respectively.
It is worth mentioning that, though our focus is on provable algorithms, any heuristic algorithm can be plugged into our reduction, yielding the same improvement for such algorithms (see Table 2). Our reduction shows how to “recycle” one’s favourite algorithm for exact (or near-exact) SVP to tackle higher dimensions, provided that one is interested in approximating SVP rather than HSVP. Our results further our understanding of the hardness of SVP, but they do not affect typical security estimates, such as those of lattice-based candidates in NIST’s post-quantum standardization effort: current security estimates actually rely on HSVP estimates, following [GN08b]. The problem of approximating SVP is essentially the same as that of approximating HSVP, except for lattices with an extremely small first minimum: such lattices exist but typically do not arise in real-world cryptographic constructions (see [GN08b, §3.2]). For the same reason, implementing our algorithm currently has limited value in practice: running it would only be meaningful if one were interested in approximating SVP on ad hoc lattices with an extremely small first minimum.
1.2 Our Techniques
We first briefly recall some of the details of Gama and Nguyen’s slide reduction. Slide reduction divides the basis \(\mathbf {B}= (\textit{\textbf{b}}_1,\ldots , \textit{\textbf{b}}_n) \in \mathbb {R}^{m \times n}\) evenly into disjoint “primal blocks” \(\mathbf {B}_{[ik+1,(i+1)k]}\) of length k. (Notice that this already requires n to be divisible by k.) It also defines certain “dual blocks” \(\mathbf {B}_{[ik+2,(i+1)k+1]}\), which are the primal blocks shifted one to the right. The algorithm then tries to simultaneously satisfy certain primal and dual conditions on these blocks. Namely, it tries to SVP-reduce each primal block—i.e., it tries to make the first vector in the block \(\textit{\textbf{b}}_{ik+1}^*\) a shortest vector in \(\mathcal {L}(\mathbf {B}_{[ik+1,(i+1)k]})\), where \(\textit{\textbf{b}}_{j}^* := \pi _j(\textit{\textbf{b}}_j)\). Simultaneously, it tries to dual SVP-reduce (DSVP-reduce) the dual blocks. (See Sect. 2.3 for the definition of DSVP reduction.) We call a basis that satisfies all of these conditions simultaneously slide-reduced (Fig. 2).
An SVP oracle for lattices with rank k is sufficient to enforce all primal conditions or all dual conditions separately. (E.g., we can enforce the primal conditions by simply finding a shortest non-zero vector in each primal block and including this vector in an updated basis for the block.) Furthermore, if all primal and dual conditions hold simultaneously, then \(\Vert \textit{\textbf{b}}_1\Vert \le \delta _{\mathsf {GN},S} \lambda _1(\mathcal {L})\) with \(\delta _{\mathsf {GN},S}\) as in Eq. (2), so that \(\textit{\textbf{b}}_1\) yields a solution to \(\delta _{\mathsf {GN},S}\)-SVP. This follows from repeated application of a “gluing” lemma on such bases, which shows how to “glue together” two reduced blocks to obtain a larger reduced block. (See Lemma 1.) Finally, Gama and Nguyen showed that, if we alternate between SVP-reducing the primal blocks and DSVP-reducing the dual blocks, then the basis will converge quite rapidly to a slide-reduced basis (up to some small slack) [GN08a]. Combining all of these facts together yields the main result in [GN08a]. (See Sect. 4.)
The case \(n > 2k\). We wish to extend slide reduction to the case when \(n = p k + q\) for \(1 \le q < k\). So, intuitively, we have to decide what to do with “the extra q vectors in the basis.” To answer this, we exploit a “gluing” property, which is implicit in LLL and slide reduction, but which we make explicit: given an integer \(\ell \in \{1,\dots ,n\}\), any basis B of a lattice L defines two blocks \(B_1 = B_{[1,\ell ]}\) and \(B_2=B_{[\ell +1,n]}\). The first block \(B_1\) is a basis of a (primitive) sublattice \(L_1\) of L, and the second block \(B_2\) is a basis of another lattice \(L_2\) which can be thought of as the quotient \(L/L_1\). Intuitively, the basis B glues the two blocks \(B_1\) and \(B_2\) together: a gluing property (Lemma 1) provides sufficient conditions on the two blocks \(B_1\) and \(B_2\) to guarantee that the basis B is (H)SVP-reduced. Crucially, the gluing property shows that there is an asymmetry between \(B_1\) and \(B_2\): B can be SVP-reduced without requiring both \(B_1\) and \(B_2\) to be SVP-reduced. Namely, it suffices that \(B_1\) is HSVP-reduced and \(B_2\) is SVP-reduced, together with a gluing condition relating the first vectors of \(B_1\) and \(B_2\).Footnote 2
The HSVP reduction of \(B_1\) can be handled by the algorithm from [MW16], irrespective of the rank of \(B_1\). The SVP reduction of \(B_2\) can be handled by our SVP oracle if the rank of \(B_2\) is chosen to be k, or by slide reduction [GN08a] if the rank of \(B_2\) is chosen to be a multiple of k. Finally, the gluing condition can be achieved by duality, by reusing the main idea of [GN08a]. Thus, “the extra q vectors in the basis” can simply be included in the first block \(B_1\).
Interestingly, the HSVP approximation factor achieved by [MW16] (which we use for \(B_1\)) and the SVP approximation factor achieved by [GN08a] (which we can use for \(B_2\)) are exactly what we need to apply our gluing lemma. (This is not a coincidence, as we explain in Sect. 4.) The result is Theorem 1.
The case \(n < 2k\). For \(n = k + q < 2k\), the above idea cannot work. In particular, a “big block” of size \(k + q\) in this case would be our entire basis! So, instead of working with one big block and some “regular blocks” of size k, we work with a “small block” of size q and one regular block of size k. We then simply perform slide reduction with (primal) blocks \(\mathbf {B}_{[1,q]}\) and \(\mathbf {B}_{[q+1,n]} = \mathbf {B}_{[n-k+1,n]}\). If we were to stop here, we would achieve an approximation factor of roughly \(\gamma _q\) (see [LW13, Th. 4.3.1]), which for \(q = \varTheta (k)\) is essentially the same as the approximation factor of roughly \(\gamma _k\) that we get when the rank is 2k. I.e., we would essentially “pay for two blocks of length k,” even though one block has size \(q < k\).
However, we notice that a slide-reduced basis guarantees more than just a short first vector. It also promises a very strong bound on \(\mathrm {vol}(\mathbf {B}_{[1,q]})\). In particular, since \(q < k\) and since we have access to an oracle for lattices with rank k, it is natural to try to extend this small block \(\mathbf {B}_{[1,q]}\) with low volume to a larger block \(\mathbf {B}_{[1,k]}\) of length k that still has low volume. Indeed, we can use our SVP oracle to guarantee that \(\mathbf {B}_{[q+1,k]}\) consists of relatively short vectors so that \(\mathrm {vol}(\mathbf {B}_{[q+1,k]})\) is relatively small as well. (Formally, we SVP-reduce \(\mathbf {B}_{[i,n]}\) for \(i \in [q+1,k]\). Again, we are ignoring a certain degenerate case, as in Footnote 2.) This allows us to upper bound \(\mathrm {vol}(\mathbf {B}_{[1,k]}) = \mathrm {vol}(\mathbf {B}_{[1,q]}) \cdot \mathrm {vol}(\mathbf {B}_{[q+1,k]})\), which implies that \(\lambda _1(\mathcal {L}(\mathbf {B}_{[1,k]}))\) is relatively short. We can therefore find a short vector by making an additional SVP oracle call on \(\mathcal {L}(\mathbf {B}_{[1,k]})\). (Micciancio and Walter used a similar idea in [MW16].)
1.3 Open Questions and Directions for Future Work
Table 1 suggests an obvious open question: can we find a non-trivial basis reduction algorithm that provably solves \(\delta \)-SVP for \(\delta \le O(\sqrt{n})\)? More formally, can we reduce \(O(\sqrt{n})\)-SVP on lattices with rank n to exact SVP on lattices with rank \(k = c n\) for some constant \(c < 1\)? Our current proof techniques seem to run into a fundamental barrier here, in that they seem more-or-less incapable of achieving \(\delta \ll \sqrt{\gamma _{k}}\). This setting is interesting in practice, as many record lattice computations, such as [CN12], use block reduction with \(k \ge n/2\) as a subroutine. (One can provably achieve approximation factors \(\delta \ll \sqrt{\gamma _k}\) when \(k = (1-o(1))n\) with a bit of work,Footnote 3 but it is not clear if these extreme parameters are useful.)
Next, we recall that this work shows how to exploit the existing very impressive algorithms for HSVP (in particular, DBKZ [MW16]) to obtain better algorithms for SVP. This suggests two closely related questions for future work: (1) can we find better algorithms for HSVP (e.g., for \(\delta \)-HSVP with \(\delta \approx \sqrt{\gamma _n}\)—i.e., “near-exact” HSVP); and (2) where else can we profitably replace SVP oracles with HSVP oracles? Indeed, most of our analysis (and the analysis of other basis reduction algorithms) treats the \(\delta \)-SVP oracle as a \(\delta \sqrt{\gamma _k}\)-HSVP oracle. We identified one way to exploit this to actually get a faster algorithm, but perhaps more can be done here—particularly if we find faster algorithms for HSVP.
Finally, we note that we present two distinct (though similar) algorithms: one for lattices with rank \(n \le 2k\) and one for lattices with rank \(n \ge 2k\). It is natural to ask whether there is a single algorithm that works in both regimes. Perhaps work on this question could even lead to better approximation factors.
2 Preliminaries
We denote column vectors \(\textit{\textbf{x}} \in \mathbb {R}^m\) by bold lower-case letters. Matrices \(\mathbf {B}\in \mathbb {R}^{m \times n}\) are denoted by bold upper-case letters, and we often think of a matrix as a list of column vectors, \(\mathbf {B}= (\textit{\textbf{b}}_1,\ldots , \textit{\textbf{b}}_n)\). For a matrix \(\textit{\textbf{B}} =(\mathbf {b}_{1}, \ldots , \mathbf {b}_{n})\) with n linearly independent columns, we write \(\mathcal {L}(\textit{\textbf{B}}) :=\{z_1 \textit{\textbf{b}}_1 + \cdots + z_n \textit{\textbf{b}}_n \ : \ z_i \in \mathbb {Z}\}\) for the lattice generated by \(\mathbf {B}\) and \(\Vert \textit{\textbf{B}}\Vert = \max \{\Vert \mathbf {b}_{1}\Vert ,\ldots , \Vert \mathbf {b}_{n}\Vert \}\) for the maximum norm of a column. We often implicitly assume that \(m \ge n\) and that a basis matrix \(\mathbf {B}\in \mathbb {R}^{m \times n}\) has rank n (i.e., that the columns of \(\mathbf {B}\) are linearly independent). We use the notation \(\log := \log _2 \) to mean the logarithm with base two.
2.1 Lattices
For any lattice \(\mathcal {L}\), its dual lattice is
$$\begin{aligned} \mathcal {L}^\times := \{ \textit{\textbf{w}} \in \mathrm {span}(\mathcal {L}) \ : \ \langle \textit{\textbf{w}}, \textit{\textbf{y}}\rangle \in \mathbb {Z}\ \text {for all } \textit{\textbf{y}}\in \mathcal {L}\} \; . \end{aligned}$$
If \(\mathbf {B}\in \mathbb {R}^{m \times n}\) is a basis of \(\mathcal {L}\), then \(\mathcal {L}^\times \) has basis \(\mathbf {B}^\times := \mathbf {B}(\mathbf {B}^T \mathbf {B})^{-1}\), called the dual basis of \(\mathbf {B}\). The reversed dual basis \(\mathbf {B}^{-s}\) of \(\mathbf {B}\) is simply \(\mathbf {B}^\times \) with its columns in reversed order [GHN06].
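These identities are easy to sanity-check numerically. The following sketch (ours, assuming NumPy) verifies the defining pairing \(\mathbf {B}^T \mathbf {B}^\times = \mathbf {I}\) and the standard fact \(\mathrm {vol}(\mathcal {L}^\times ) = 1/\mathrm {vol}(\mathcal {L})\) on a toy basis:

```python
import numpy as np

def dual_basis(B: np.ndarray) -> np.ndarray:
    """B^x := B (B^T B)^{-1}, a basis of the dual lattice."""
    B = B.astype(float)
    return B @ np.linalg.inv(B.T @ B)

def reversed_dual_basis(B: np.ndarray) -> np.ndarray:
    """B^{-s}: the dual basis with its columns in reversed order."""
    return dual_basis(B)[:, ::-1]

B = np.array([[2.0, 1.0],
              [0.0, 1.0],
              [0.0, 0.0]])  # rank-2 lattice embedded in R^3
D = dual_basis(B)

# <b_i, b_j^x> = delta_{ij}, so every dual vector has integer inner product
# with every lattice vector.
assert np.allclose(B.T @ D, np.eye(2))

# vol(dual) = 1 / vol(primal).
vol = np.sqrt(np.linalg.det(B.T @ B))
vol_dual = np.sqrt(np.linalg.det(D.T @ D))
assert abs(vol * vol_dual - 1.0) < 1e-9
```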
2.2 Gram-Schmidt Orthogonalization
For a basis \(\mathbf {B}= (\textit{\textbf{b}}_1,\ldots , \textit{\textbf{b}}_n) \in \mathbb {R}^{m \times n}\), we associate a sequence of projections \(\pi _{i} := \pi _{\{\textit{\textbf{b}}_1,\ldots , \textit{\textbf{b}}_{i-1}\}^\perp }\). Here, \(\pi _{W^\perp }\) means the orthogonal projection onto the subspace \(W^\perp \) orthogonal to W. As in [GN08a], \(\mathbf {B}_{[i,j]}\) denotes the projected block \((\pi _{i}(\textit{\textbf{b}}_i),\pi _{i}(\textit{\textbf{b}}_{i+1}),\ldots , \pi _{i}(\textit{\textbf{b}}_j))\).
We also associate to \(\mathbf {B}\) its Gram-Schmidt orthogonalization (GSO) \(\textit{\textbf{B}}^{*} := (\mathbf {b}_1^{*}, \ldots , \mathbf {b}_{n}^{*})\), where \(\textit{\textbf{b}}_i^* := \pi _{i}(\textit{\textbf{b}}_i) = \textit{\textbf{b}}_i - \sum _{j < i} \mu _{i,j} \textit{\textbf{b}}_j^*\), and \(\mu _{i,j} = \langle \textit{\textbf{b}}_i, \textit{\textbf{b}}_j^* \rangle /\Vert \textit{\textbf{b}}_j^*\Vert ^2\).
We say that \(\mathbf {B}\) is size-reduced if \(|\mu _{i,j}| \le \frac{1}{2}\) for all \(i \ne j\): then \(\Vert \mathbf {B}\Vert \le \sqrt{n}\Vert \mathbf {B}^{*}\Vert \). Transforming a basis into this form without modifying \(\mathcal {L}(\mathbf {B})\) or \(\mathbf {B}^{*}\) is called size reduction, and this can be done easily and efficiently.
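Size reduction can be sketched directly from the definitions above (our illustrative code, assuming NumPy; a real implementation updates the \(\mu _{i,j}\) incrementally instead of recomputing the GSO after every column operation):

```python
import numpy as np

def gso(B: np.ndarray):
    """Return (B*, mu) with b_i^* = b_i - sum_{j<i} mu_{i,j} b_j^*."""
    n = B.shape[1]
    Bstar = B.astype(float).copy()
    mu = np.eye(n)
    for i in range(n):
        for j in range(i):
            mu[i, j] = B[:, i] @ Bstar[:, j] / (Bstar[:, j] @ Bstar[:, j])
            Bstar[:, i] -= mu[i, j] * Bstar[:, j]
    return Bstar, mu

def size_reduce(B: np.ndarray) -> np.ndarray:
    """Integer column operations making |mu_{i,j}| <= 1/2 for all j < i.
    Neither the lattice L(B) nor the GSO vectors B* change."""
    B = B.astype(float).copy()
    n = B.shape[1]
    for i in range(1, n):
        for j in range(i - 1, -1, -1):  # work right-to-left within row i
            _, mu = gso(B)
            B[:, i] -= round(mu[i, j]) * B[:, j]
    return B

B = np.array([[1.0, 7.0],
              [0.0, 1.0]])
C = size_reduce(B)
_, mu = gso(C)
assert abs(mu[1, 0]) <= 0.5 + 1e-9
# The GSO (and hence the covolume) is unchanged by size reduction.
assert np.allclose(gso(B)[0], gso(C)[0])
```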
2.3 Lattice Basis Reduction
LLL reduction. Let \(\textit{\textbf{B}} = (\mathbf {b}_1, \ldots , \mathbf {b}_n)\) be a size-reduced basis. For \(\varepsilon \in [0, 1]\), we say that \(\textit{\textbf{B}}\) is \(\varepsilon \)-LLL-reduced [LLL82] if every rank-two projected block \(\textit{\textbf{B}}_{[i-1,i]}\) satisfies Lovász’s condition: \(\Vert \mathbf {b}_{i-1}^{*}\Vert ^2 \le (1 + \varepsilon )\Vert \mu _{i,i-1}\mathbf {b}_{i-1}^{*} + \mathbf {b}_{i}^{*}\Vert ^2\) for \(1 < i \le n\). For \(\varepsilon \ge 1/\mathrm {poly}(n)\), one can efficiently compute an \(\varepsilon \)-LLL-reduced basis for a given lattice.
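A small checker for the Lovász condition \(\Vert \mathbf {b}_{i-1}^{*}\Vert ^2 \le (1+\varepsilon )\Vert \mu _{i,i-1}\mathbf {b}_{i-1}^{*} + \mathbf {b}_{i}^{*}\Vert ^2\) (our sketch, assuming NumPy; size-reducedness would be checked separately in a full implementation):

```python
import numpy as np

def gso(B: np.ndarray):
    """Gram-Schmidt vectors and coefficients of the columns of B."""
    n = B.shape[1]
    Bstar = B.astype(float).copy()
    mu = np.eye(n)
    for i in range(n):
        for j in range(i):
            mu[i, j] = B[:, i] @ Bstar[:, j] / (Bstar[:, j] @ Bstar[:, j])
            Bstar[:, i] -= mu[i, j] * Bstar[:, j]
    return Bstar, mu

def satisfies_lovasz(B: np.ndarray, eps: float = 1 / 3) -> bool:
    """Check ||b*_{i-1}||^2 <= (1+eps) ||mu_{i,i-1} b*_{i-1} + b*_i||^2
    for every consecutive pair of Gram-Schmidt vectors."""
    Bstar, mu = gso(B)
    for i in range(1, B.shape[1]):
        lhs = Bstar[:, i - 1] @ Bstar[:, i - 1]
        v = mu[i, i - 1] * Bstar[:, i - 1] + Bstar[:, i]
        if lhs > (1 + eps) * (v @ v):
            return False
    return True

assert satisfies_lovasz(np.eye(2))                                # orthonormal basis
assert not satisfies_lovasz(np.array([[10.0, 0.0], [0.0, 1.0]]))  # badly ordered
```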
SVP reduction and its extensions. Let \(\textit{\textbf{B}} = (\mathbf {b}_1, \ldots , \mathbf {b}_{n})\) be a basis of a lattice \(\mathcal {L}\) and \(\delta \ge 1\) be an approximation factor.
We say that \(\textit{\textbf{B}}\) is \(\delta \)-SVP-reduced if \(\Vert \mathbf {b}_1\Vert \le \delta \cdot \lambda _{1}(\mathcal {L})\). Similarly, we say that \(\textit{\textbf{B}}\) is \(\delta \)-HSVP-reduced if \(\Vert \textit{\textbf{b}}_1\Vert \le \delta \cdot \mathrm {vol}(\mathcal {L})^{1/n}\).
\(\textit{\textbf{B}}\) is \(\delta \)-DSVP-reduced [GN08a] (where D stands for dual) if the reversed dual basis \(\textit{\textbf{B}}^{-s}\) is \(\delta \)-SVP-reduced and \(\mathbf {B}\) is \(\frac{1}{3}\)-LLL-reduced. Similarly, we say that \(\textit{\textbf{B}}\) is \(\delta \)-DHSVP-reduced if \(\textit{\textbf{B}}^{-s}\) is \(\delta \)-HSVP-reduced.
The existence of such \(\delta \)-DSVP-reduced bases is guaranteed by a classical property of LLL that \(\Vert \textit{\textbf{b}}_{n}^{*}\Vert \) never decreases during the LLL-reduction process [LLL82].
We can efficiently compute a \(\delta \)-(D)SVP-reduced basis for a given rank n lattice \(\mathcal {L}\subseteq \mathbb {Z}^m\) with access to an oracle for \(\delta \)-SVP on lattices with rank at most n. Furthermore, given a basis \(\mathbf {B}= (\textit{\textbf{b}}_1,\ldots , \textit{\textbf{b}}_n) \in \mathbb {Z}^{m\times n}\) of \(\mathcal {L}\) and an index \(i \in [1,n-k+1]\), we can use a \(\delta \)-SVP oracle for lattices with rank at most k to efficiently compute a size-reduced basis \(\mathbf {C} = (\textit{\textbf{b}}_1,\ldots , \textit{\textbf{b}}_{i-1}, \textit{\textbf{c}}_i,\ldots , \textit{\textbf{c}}_{i+k-1}, \textit{\textbf{b}}_{i+k},\ldots , \textit{\textbf{b}}_n)\) of \(\mathcal {L}\) such that the block \(\mathbf {C}_{[i,i+k-1]}\) is \(\delta \)-SVP reduced or \(\delta \)-DSVP reduced:
-
If \(\mathbf {C}_{[i,i+k-1]}\) is \(\delta \)-SVP-reduced, the procedures in [GN08a, MW16, LN19], equipped with a \(\delta \)-SVP oracle, ensure that \(\Vert \mathbf {C}^{*}\Vert \le \Vert \mathbf {B}^{*}\Vert \);
-
If \(\mathbf {C}_{[i,i+k-1]}\) is \(\delta \)-DSVP-reduced, the inherent LLL reduction implies \(\Vert \mathbf {C}^{*}\Vert \le 2^{k}\Vert \mathbf {B}^{*}\Vert \). Indeed, the GSO of \(\mathbf {C}_{[i,i+k-1]}\) satisfies
$$\begin{aligned} \Vert (\mathbf {C}_{[i,i+k-1]})^{*}\Vert \le 2^{k/2}\lambda _{k}(\mathcal {L}(\mathbf {C}_{[i,i+k-1]})) \end{aligned}$$(by [LLL82, p. 518, Line 27]) and \(\lambda _{k}(\mathcal {L}(\mathbf {C}_{[i,i+k-1]}))\le \sqrt{k}\Vert \mathbf {B}^{*}\Vert \). Here, \(\lambda _k(\cdot )\) denotes the k-th minimum.
With size-reduction, we can iteratively perform \(\mathrm {poly}(n, \log \Vert \mathbf {B}\Vert )\) many such operations efficiently. In particular, doing so will not increase \(\Vert \mathbf {B}^{*}\Vert \) by more than a factor of \(2^{\mathrm {poly}(n,\log \Vert \mathbf {B}\Vert )}\), and therefore the same is true of \(\Vert \mathbf {B}\Vert \). That is, all intermediate entries and the total cost of execution (excluding oracle queries) remain polynomially bounded in the initial input size; see, e.g., [GN08a, LN14]. Therefore, to bound the running time of basis reduction, it suffices to bound the number of calls to these block reduction subprocedures.
Twin reduction and gluing. We define the following notion, which was implicit in [GN08a] and will arise repeatedly in our proofs. \(\textit{\textbf{B}} = (\textit{\textbf{b}}_1,\ldots , \textit{\textbf{b}}_{d+1})\) is \(\delta \)-twin-reduced if \(\mathbf {B}_{[1,d]}\) is \(\delta \)-HSVP-reduced and \(\mathbf {B}_{[2,d+1]}\) is \(\delta \)-DHSVP-reduced. The usefulness of twin reduction is illustrated by the following fact, which is the key idea behind Gama and Nguyen’s slide reduction (and is remarkably simple in hindsight).
Fact 3
If \(\mathbf {B}:= (\textit{\textbf{b}}_1,\ldots , \textit{\textbf{b}}_{d+1}) \in \mathbb {R}^{m \times (d+1)}\) is \(\delta \)-twin-reduced, then
$$\begin{aligned} \Vert \textit{\textbf{b}}_1\Vert \le \delta ^{2d/(d-1)} \Vert \textit{\textbf{b}}_{d+1}^*\Vert \; . \end{aligned}$$
(3)
Furthermore,
$$\begin{aligned} \Vert \textit{\textbf{b}}_1\Vert \le \delta ^{d/(d-1)} \mathrm {vol}(\mathbf {B})^{1/(d+1)} \qquad \text {and} \qquad \Vert \textit{\textbf{b}}_{d+1}^*\Vert \ge \delta ^{-d/(d-1)} \mathrm {vol}(\mathbf {B})^{1/(d+1)} \; . \end{aligned}$$
(4)
Proof
By definition, we have \( \Vert \textit{\textbf{b}}_1\Vert ^d \le \delta ^d \mathrm {vol}(\mathbf {B}_{[1,d]}) \), which is equivalent to
$$\begin{aligned} \Vert \textit{\textbf{b}}_1\Vert \le \delta ^{d/(d-1)} \mathrm {vol}(\mathbf {B}_{[2,d]})^{1/(d-1)} \; . \end{aligned}$$
Similarly,
$$\begin{aligned} \Vert \textit{\textbf{b}}_{d+1}^*\Vert \ge \delta ^{-d/(d-1)} \mathrm {vol}(\mathbf {B}_{[2,d]})^{1/(d-1)} \; . \end{aligned}$$
Combining these two inequalities yields Eq. (3).
Finally, we have \(\Vert \textit{\textbf{b}}_1\Vert ^d \Vert \textit{\textbf{b}}_{d+1}^*\Vert \le \delta ^{d} \mathrm {vol}(\mathbf {B})\). Applying Eq. (3) implies the first inequality in Eq. (4), and similar analysis yields the second inequality. \(\square \)
The following gluing lemma, which is more-or-less implicit in prior work, shows conditions on the blocks \(\mathbf {B}_{[1,d]}\) and \(\mathbf {B}_{[d+1,n]}\) that are sufficient to imply (H)SVP reduction of the full basis \(\mathbf {B}\). Notice in particular that the decay of the Gram-Schmidt vectors guaranteed by Eq. (3) is what is needed for Item 2 of the lemma below, when \(\eta = \delta ^{1/(d-1)}\). And, with this same choice of \(\eta \), the HSVP reduction requirement on \(\mathbf {B}_{[1,d]}\) in Fact 3 is the same as the one in Item 2 of Lemma 1.
Lemma 1
(The gluing lemma). Let \(\mathbf {B}:= (\textit{\textbf{b}}_1,\ldots , \textit{\textbf{b}}_n) \in \mathbb {R}^{m \times n}\), \(\alpha , \beta , \eta \ge 1\), and \(1 \le d \le n\).
-
1.
If \(\mathbf {B}_{[d+1,n]}\) is \(\beta \)-SVP-reduced, \(\Vert \textit{\textbf{b}}_1\Vert \le \alpha \Vert \textit{\textbf{b}}_{d+1}^*\Vert \), and \(\lambda _1(\mathcal {L}(\mathbf {B})) < \lambda _1(\mathcal {L}(\mathbf {B}_{[1,d]}))\), then \(\mathbf {B}\) is \(\alpha \beta \)-SVP-reduced.
-
2.
If \(\mathbf {B}_{[1,d]}\) is \(\eta ^{d-1}\)-HSVP-reduced, \(\mathbf {B}_{[d+1,n]}\) is \(\eta ^{n-d-1}\)-HSVP-reduced, and \(\Vert \textit{\textbf{b}}_1\Vert \le \eta ^{2d} \Vert \textit{\textbf{b}}_{d+1}^*\Vert \), then \(\mathbf {B}\) is \(\eta ^{n-1}\)-HSVP-reduced.
Proof
For Item 1, since \(\lambda _1(\mathcal {L}(\mathbf {B})) < \lambda _1(\mathcal {L}(\mathbf {B}_{[1,d]}))\), there exists a shortest non-zero vector \(\textit{\textbf{u}} \in \mathcal {L}(\mathbf {B})\) with \(\Vert \textit{\textbf{u}}\Vert = \lambda _1( \mathcal {L}(\mathbf {B}))\) and \(\pi _{d+1}(\textit{\textbf{u}}) \ne 0\). Since \(\mathbf {B}_{[d+1,n]}\) is \(\beta \)-SVP-reduced and \(\pi _{d+1}(\textit{\textbf{u}})\) is a non-zero vector of \(\mathcal {L}(\mathbf {B}_{[d+1,n]})\), it follows that \(\Vert \textit{\textbf{b}}_{d+1}^* \Vert /\beta \le \Vert \pi _{d+1}(\textit{\textbf{u}})\Vert \le \Vert \textit{\textbf{u}}\Vert = \lambda _1(\mathcal {L}(\mathbf {B}))\). Finally, we have \( \Vert \textit{\textbf{b}}_1\Vert \le \alpha \Vert \textit{\textbf{b}}_{d+1}^*\Vert \le \alpha \beta \lambda _1(\mathcal {L})\) as needed.
Turning to Item 2, we note that the HSVP conditions imply that \(\Vert \textit{\textbf{b}}_1\Vert ^d \le \eta ^{d(d-1)} \mathrm {vol}(\mathbf {B}_{[1,d]})\) and \(\Vert \textit{\textbf{b}}_{d+1}^*\Vert ^{n-d} \le \eta ^{(n-d)(n-d-1)}\mathrm {vol}(\mathbf {B}_{[d+1,n]})\). Using the bound on \(\Vert \textit{\textbf{b}}_1\Vert \) relative to \(\Vert \textit{\textbf{b}}_{d+1}^*\Vert \), we have
$$\begin{aligned} \Vert \textit{\textbf{b}}_1\Vert ^n = \Vert \textit{\textbf{b}}_1\Vert ^{d} \cdot \Vert \textit{\textbf{b}}_1\Vert ^{n-d} \le \eta ^{2d(n-d)} \Vert \textit{\textbf{b}}_1\Vert ^{d} \Vert \textit{\textbf{b}}_{d+1}^*\Vert ^{n-d} \le \eta ^{d(d-1) + 2d(n-d) + (n-d)(n-d-1)} \mathrm {vol}(\mathbf {B}) = \eta ^{n(n-1)} \mathrm {vol}(\mathbf {B}) \; , \end{aligned}$$
so that \(\Vert \textit{\textbf{b}}_1\Vert \le \eta ^{n-1} \mathrm {vol}(\mathbf {B})^{1/n}\), as needed. \(\square \)
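The exponent bookkeeping in Item 2 can be double-checked mechanically. Multiplying the bound \(\Vert \textit{\textbf{b}}_1\Vert ^d \le \eta ^{d(d-1)}\mathrm {vol}(\mathbf {B}_{[1,d]})\), the bound \(\Vert \textit{\textbf{b}}_{d+1}^*\Vert ^{n-d} \le \eta ^{(n-d)(n-d-1)}\mathrm {vol}(\mathbf {B}_{[d+1,n]})\), and the inequality \(\Vert \textit{\textbf{b}}_1\Vert \le \eta ^{2d}\Vert \textit{\textbf{b}}_{d+1}^*\Vert \) raised to the power \(n-d\) gives \(\Vert \textit{\textbf{b}}_1\Vert ^n \le \eta ^{e}\,\mathrm {vol}(\mathbf {B})\) with \(e = d(d-1)+2d(n-d)+(n-d)(n-d-1)\), and the conclusion needs \(e = n(n-1)\). A quick check (ours):

```python
def eta_exponent(n: int, d: int) -> int:
    """Total eta-exponent collected in the proof of Item 2 of the gluing lemma."""
    return d * (d - 1) + 2 * d * (n - d) + (n - d) * (n - d - 1)

# The identity e = n(n-1) holds for every 1 <= d <= n; it is a polynomial
# identity, as expanding the left-hand side shows.
assert all(eta_exponent(n, d) == n * (n - 1)
           for n in range(2, 60) for d in range(1, n))
```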
2.4 The Micciancio-Walter DBKZ Algorithm
We recall Micciancio and Walter’s elegant DBKZ algorithm [MW16], as we will need it later. Formally, we slightly generalize DBKZ by allowing for the use of a \(\delta \)-SVP-oracle. We provide only a high-level sketch of the proof of correctness, as the full proof is the same as the proof in [MW16], with Hermite’s constant \(\gamma _k\) replaced by \(\delta ^2 \gamma _k\).
Theorem 4
For integers \(n > k \ge 2\), an approximation factor \(1 \le \delta \le 2^k\), an input basis \(\mathbf {B}_{0} \in \mathbb {Z}^{m \times n}\) for a lattice \(\mathcal {L}\subseteq \mathbb {Z}^m\), and \( N := \lceil (2n^2/(k-1)^2) \cdot \log (n\log (5\Vert \mathbf {B}_{0}\Vert )/\varepsilon ) \rceil \) for some \(\varepsilon \in [2^{-\mathrm {poly}(n)},1]\), Algorithm 1 outputs a basis \(\mathbf {B}\) of \(\mathcal {L}\) in polynomial time (excluding oracle queries) such that
$$\begin{aligned} \Vert \textit{\textbf{b}}_1\Vert \le (1+\varepsilon ) \cdot (\delta ^2 \gamma _k)^{\frac{n-1}{2(k-1)}} \cdot \mathrm {vol}(\mathcal {L})^{1/n} \; , \end{aligned}$$
by making \(N \cdot (2n-2k+1)+1\) calls to the \(\delta \)-SVP oracle for lattices with rank k.
Proof
(Proof sketch). We briefly sketch a proof of the theorem, but we outsource the most technical step to a claim from [MW16], which was originally proven in [Neu17]. Let \(\mathbf {B}^{(\ell )}\) be the basis immediately after the \(\ell \)th tour, and let \(x_i^{(\ell )} := \log \mathrm {vol}(\mathbf {B}_{[1,k+i-1]}^{(\ell )}) - \frac{k+i-1}{n} \log \mathrm {vol}(\mathcal {L})\) for \(i=1,\ldots ,n-k\). Let
By [MW16, Claim 3] (originally proven in [Neu17]), we have
where \(\xi := 1/(1+n^2/(4k(k-1))) \ge 4(k-1)^2/(5n^2)\). Furthermore, notice that
It follows that
In other words,
Notice that the first vector \(\textit{\textbf{b}}_1\) of the output basis is a \(\delta \)-approximate shortest vector in \(\mathcal {L}\big (\mathbf {B}_{[1,k]}^{(N)}\big )\). Therefore,
as needed. \(\square \)
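Algorithm 1 itself is not reproduced in this excerpt, but the call count \(N \cdot (2n-2k+1)+1\) in Theorem 4 pins down its tour structure. The sketch below is a hypothetical schedule of oracle calls consistent with that count; the exact block boundaries of the backward tour are an assumption.

```python
def dbkz_call_schedule(n, k, N):
    """Hypothetical oracle-call schedule of a DBKZ-style run (cf. Theorem 4).

    Returns (kind, start, end) descriptors of rank-k blocks, 1-indexed and
    inclusive; the total number of calls is N*(2n - 2k + 1) + 1.
    """
    calls = []
    for _ in range(N):
        # forward tour: delta-SVP-reduce each rank-k block B_[i, i+k-1]
        for i in range(1, n - k + 2):
            calls.append(("svp", i, i + k - 1))
        # backward tour: delta-DSVP-reduce each rank-k block (assumed bounds)
        for j in range(n - k + 1, 1, -1):
            calls.append(("dsvp", j, j + k - 1))
    # one final SVP call on the first block yields the short output vector b_1
    calls.append(("svp", 1, k))
    return calls
```

For example, \(n = 10\), \(k = 4\), \(N = 3\) yields \(3 \cdot 13 + 1 = 40\) calls, matching the bound in Theorem 4.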
3 Slide Reduction for \(n\le 2k\)
In this section, we consider a generalization of Gama and Nguyen’s slide reduction that applies to the case when \(k < n \le 2k\) [GN08a]. Our definition in this case is not particularly novel or surprising, as it is essentially identical to Gama and Nguyen’s, except that our blocks are not all the same size. (Footnote 4)
What is surprising about this definition is that it allows us to achieve sublinear approximation factors for SVP when the rank is \(n = k+q\) for \(q = \varTheta (k)\). Before this work, it seemed that approximation factors less than roughly \(\gamma _q \approx n\) could not be achieved using the techniques of slide reduction (or, for that matter, any other known techniques with formal proofs). Indeed, our slide-reduced basis only achieves \(\Vert \textit{\textbf{b}}_1\Vert \lesssim \gamma _q\lambda _1(\mathcal {L})\) (see [LW13, Th. 4.3.1]), which is the approximation factor resulting from the gluing lemma, Lemma 1. (This inequality is tight.) We overcome this barrier by using our additional constraints on the primal together with some additional properties of slide-reduced bases (namely, Eq. (4)) to bound \(\lambda _1(\mathcal {L}(\mathbf {B}_{[1,k]}))\). Perhaps surprisingly, the resulting bound is much better than the bound on \(\Vert \textit{\textbf{b}}_1\Vert \), which allows us to find a much shorter vector with an additional oracle call.
Definition 1
(Slide reduction). Let \(n =k+q\) where \(1 \le q \le k\) are integers. A basis \(\mathbf {B}\) of a lattice with rank n is \((\delta ,k)\)-slide-reduced (with block size \(k \ge 2\) and approximation factor \(\delta \ge 1\)) if it is size-reduced and satisfies the following set of conditions.
-
1.
Primal conditions: The blocks \(\mathbf {B}_{[1,q]}\) and \(\mathbf {B}_{[i,n]}\) for \(i\in [q+1,\max \{k,q+1\}]\) are \(\delta \)-SVP-reduced.
-
2.
Dual condition: The block \(\mathbf {B}_{[2,q+1]}\) is \(\delta \)-DSVP-reduced.
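As a concrete reading of Definition 1, the following sketch enumerates the blocks (as 1-indexed, inclusive index pairs) that the primal and dual conditions constrain; the helper itself is illustrative, not part of the paper.

```python
def slide_blocks_small_rank(n, k):
    """Blocks constrained by Definition 1 for rank n = k + q with 1 <= q <= k."""
    q = n - k
    assert k >= 2 and 1 <= q <= k
    # primal conditions: B_[1,q] and B_[i,n] for i in [q+1, max(k, q+1)]
    primal = [(1, q)] + [(i, n) for i in range(q + 1, max(k, q + 1) + 1)]
    # dual condition: B_[2,q+1]
    dual = [(2, q + 1)]
    return primal, dual
```

For \(n = 7\) and \(k = 4\) (so \(q = 3\)), the primal blocks are \((1,3)\) and \((4,7)\), and the dual block is \((2,4)\).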
A reader familiar with the slide reduction algorithm from [GN08a] will not be surprised to learn that such a basis can be found (up to some small slack) using polynomially many calls to a \(\delta \)-SVP oracle on lattices with rank at most k. Before presenting and analyzing the algorithm, we show that such a slide-reduced basis is in fact useful for approximating SVP with sub-linear factors. (We note in passing that a slight modification of the proof of Theorem 5 yields a better result when \(q = o(k)\). This does not seem very useful on its own, though, since when \(q = o(k)\), the running times of our best SVP algorithms are essentially the same for rank k and rank \(k+q\).)
Theorem 5
Let \(\mathcal {L}\) be a lattice with rank \(n =k+q\) where \(2 \le q \le k\) are integers. For any \(\delta \ge 1\), if a basis \(\textit{\textbf{B}}\) of \(\mathcal {L}\) is \((\delta ,k)\)-slide-reduced, then,
Proof
Let \(\mathbf {B}=(\mathbf {b}_{1}, \ldots , \mathbf {b}_{n})\). We distinguish two cases.
First, suppose that there exists an index \(i \in [q+1,\max \{k,q+1\}]\) such that \(\Vert \textit{\textbf{b}}_{i}^{*}\Vert >\delta \lambda _{1}(\mathcal {L})\). Let \(\textit{\textbf{v}}\) be a shortest non-zero vector of \(\mathcal {L}\). We claim that \(\pi _i(\textit{\textbf{v}}) = 0\), i.e., that \(\textit{\textbf{v}} \in \mathcal {L}(\mathbf {B}_{[1,i-1]})\). If this is not the case, since \(\mathbf {B}_{[i,n]}\) is \(\delta \)-SVP-reduced, we have that
which is a contradiction. Thus, we see that \(\textit{\textbf{v}} \in \mathcal {L}(\mathbf {B}_{[1,i-1]}) \subseteq \mathcal {L}(\mathbf {B}_{[1,k]})\), and hence \(\lambda _{1}(\mathcal {L}(\textit{\textbf{B}}_{[1,k]}))=\lambda _{1}(\mathcal {L})\) (which is much stronger than what we need).
Now, suppose that \(\Vert \textit{\textbf{b}}_{i}^{*}\Vert \le \delta \lambda _{1}(\mathcal {L})\) for all indices \(i \in [q+1,\max \{k,q+1\}]\). By definition, the primal and dual conditions imply that \(\textit{\textbf{B}}_{[1,q+1]}\) is \(\delta \sqrt{\gamma _q}\)-twin-reduced. Therefore, by Eq. (4) of Fact 3, we have
where we have used the assumption that \(\Vert \textit{\textbf{b}}_{i}^{*}\Vert \le \delta \lambda _{1}(\mathcal {L})\) for all indices \(i \in [q+1,\max \{k,q+1\}]\) (and by convention we take the product to equal one in the special case when \(q = k\)). By the definition of Hermite’s constant, this implies that
as needed. \(\square \)
3.1 The Slide Reduction Algorithm for \(n \le 2k\)
We now present our slight generalization of Gama and Nguyen’s slide reduction algorithm that works for all \(k+2 \le n \le 2k\).
Our proof that Algorithm 2 runs in polynomial time (excluding oracle calls) is essentially identical to the proof in [GN08a].
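Algorithm 2 is not shown in this excerpt. The following is a minimal structural sketch, under the assumption that the algorithm alternates primal SVP-reduction steps with a dual step on \(\mathbf{B}_{[2,q+1]}\) that is applied only when it makes progress by a factor \((1+\varepsilon)\) (Step 8 in the proof of Theorem 6); the reduction callbacks are hypothetical.

```python
def slide_reduce_small_rank(n, k, svp_reduce, dsvp_reduce):
    """Structural sketch of Algorithm 2 for k < n <= 2k (callbacks assumed).

    svp_reduce(i, j): delta-SVP-reduce the block B_[i,j] in place.
    dsvp_reduce(i, j): attempt a (1+eps)delta-DSVP step on B_[i,j];
                       return True iff it changed the basis (made progress).
    """
    q = n - k
    while True:
        svp_reduce(1, q)                            # primal: first block
        for i in range(q + 1, max(k, q + 1) + 1):
            svp_reduce(i, n)                        # primal: tail blocks
        if not dsvp_reduce(2, q + 1):               # dual step made no progress
            return                                  # basis is slide-reduced
```

With trivial callbacks that only record their arguments, one tour on \(n = 7\), \(k = 4\) performs the primal calls \((1,3)\), \((4,7)\) followed by the dual attempt \((2,4)\).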
Theorem 6
For \(\varepsilon \ge 1/\mathrm {poly}(n)\), Algorithm 2 runs in polynomial time (excluding oracle calls), makes polynomially many calls to its \(\delta \)-SVP oracle, and outputs a \(((1+\varepsilon )\delta , k)\)-slide-reduced basis of the input lattice \(\mathcal {L}\).
Proof
First, notice that if Algorithm 2 terminates, then its output must be \(((1+\varepsilon )\delta , k)\)-slide-reduced. So, we only need to argue that the algorithm runs in polynomial time (excluding oracle calls).
Let \(\mathbf {B}_0 \in \mathbb {Z}^{m \times n}\) be the input basis and let \(\mathbf {B}\in \mathbb {Z}^{m \times n}\) denote the current basis during the execution of the algorithm. As is common in the analysis of basis reduction algorithms [LLL82, GN08a, LN14], we consider an integral potential of the form
The initial potential satisfies \(\log P(\mathbf {B}_{0}) \le 2q \cdot \log \Vert \mathbf {B}_{0}\Vert \), and every operation in Algorithm 2 either preserves or significantly decreases \(P(\mathbf {B})\). More precisely, if the \(\delta \)-DSVP-reduction step (i.e., Step 8) occurs, then the potential \(P(\mathbf {B})\) decreases by a multiplicative factor of at least \((1+\varepsilon )^{2}\). No other step changes \(\mathcal {L}(\mathbf {B}_{[1,q]})\) or \(P(\mathbf {B})\).
Therefore, Algorithm 2 updates \(\mathcal {L}(\mathbf {B}_{[1,q]})\) at most \(\frac{\log P(\mathbf {B}_{0})}{2\log (1+\varepsilon )}\) times, and hence it makes at most \(\frac{qk\log \Vert \mathbf {B}_{0}\Vert }{\log (1+\varepsilon )}\) calls to the \(\delta \)-SVP-oracle. From the complexity statement in Sect. 2.3, it follows that Algorithm 2 runs efficiently (excluding the running time of oracle calls). \(\square \)
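One integral potential consistent with the bound \(\log P(\mathbf{B}_0) \le 2q \cdot \log \Vert \mathbf{B}_0\Vert\) is \(P(\mathbf{B}) = \mathrm{vol}(\mathbf{B}_{[1,q]})^2\), which is an integer for integer bases since it equals the Gram determinant of \(\textit{\textbf{b}}_1,\ldots,\textit{\textbf{b}}_q\). The sketch below computes it exactly; treating this choice of potential as an assumption, since Algorithm 2's actual potential is not reproduced here.

```python
from itertools import permutations

def det_int(M):
    """Exact determinant of a small integer matrix via the Leibniz formula."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        inversions = sum(1 for a in range(n) for b in range(a + 1, n)
                         if perm[a] > perm[b])
        sign = -1 if inversions % 2 else 1
        prod = 1
        for r in range(n):
            prod *= M[r][perm[r]]
        total += sign * prod
    return total

def potential(B, q):
    """P(B) = vol(B_[1,q])^2, computed as det of the q x q Gram matrix."""
    G = [[sum(x * y for x, y in zip(B[i], B[j])) for j in range(q)]
         for i in range(q)]
    return det_int(G)
```

For the basis \(((2,0,0),(0,3,0),(0,0,5))\) and \(q = 2\), this gives \(4 \cdot 9 = 36\).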
Corollary 1
For any constant \(c \in (1/2, 1]\) and \(\delta := \delta (n) \ge 1\), there is an efficient reduction from \(O(\delta ^{2c+1} n^c)\)-SVP on lattices with rank n to \(\delta \)-SVP on lattices with rank \(k := \lceil n/(2c) \rceil \).
Proof
On input (a basis for) an integer lattice \(\mathcal {L}\subseteq \mathbb {Z}^m\) with rank n, the reduction first calls Algorithm 2 to compute a \(((1+\varepsilon )\delta , k)\)-slide-reduced basis \(\mathbf {B}\) of \(\mathcal {L}\) with, say, \(\varepsilon = 1/n\). The reduction then uses its \(\delta \)-SVP oracle once more on \(\mathbf {B}_{[1,k]}\) and returns the resulting nonzero short lattice vector.
It is immediate from Theorem 6 that this reduction is efficient, and by Theorem 5, the output vector is a \(\delta '\)-approximate shortest vector, where
as needed. \(\square \)
4 Slide Reduction for \(n\ge 2k\)
We now introduce a generalized version of slide reduction for lattices with any rank \(n \ge 2k\). As we explained in Sect. 1.2, at a high level, our generalization of the definition from [GN08a] is the same as the original, except that (1) our first block \(\mathbf {B}_{[1,k+q]}\) is bigger than the others (out of necessity, since we can no longer divide our basis evenly into disjoint blocks of size k); and (2) we only \(\eta \)-HSVP reduce the first block (since we cannot afford to \(\delta \)-SVP reduce a block with size larger than k). Thus, our notion of slide reduction can be restated as “the first block and the first dual block are \(\eta \)-(D)HSVP reduced and the rest of the basis \(\mathbf {B}_{[k+q+1,n]}\) is slide-reduced in the sense of [GN08a].” (Footnote 5)
However, the specific value of \(\eta \) that we choose in our definition below might look unnatural at first. We first present the definition and then explain where \(\eta \) comes from.
Definition 2
(Slide reduction). Let n, k, p, q be integers such that \(n = pk + q\) with \(p,k \ge 2\) and \(0 \le q \le k-1\), and let \(\delta \ge 1\). A basis \(\mathbf {B}\in \mathbb {R}^{m \times n}\) is \((\delta ,k)\)-slide-reduced if it is size-reduced and satisfies the following three sets of conditions.
-
1.
Mordell conditions: The block \(\mathbf {B}_{[1,k+q]}\) is \(\eta \)-HSVP-reduced and the block \(\mathbf {B}_{[2,k+q+1]}\) is \(\eta \)-DHSVP-reduced for \(\eta := (\delta ^2 \gamma _{k})^{\frac{k+q-1}{2(k-1)}}\).
-
2.
Primal conditions: for all \(i \in [1, p-1]\), the block \(\mathbf {B}_{[i k+q+1,(i+1)k+q]}\) is \(\delta \)-SVP-reduced.
-
3.
Dual conditions: for all \(i \in [1, p-2]\), the block \(\mathbf {B}_{[ik+q+2,(i+1)k+q+1]}\) is \(\delta \)-DSVP-reduced. (Footnote 6)
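Analogously to the case \(n \le 2k\), the block structure of Definition 2 can be spelled out explicitly; the enumeration below is illustrative only.

```python
def slide_blocks_large_rank(n, k):
    """Blocks constrained by Definition 2 for n = p*k + q, p >= 2, 0 <= q <= k-1."""
    p, q = divmod(n, k)
    assert p >= 2 and k >= 2
    # Mordell conditions: eta-HSVP on B_[1,k+q], eta-DHSVP on B_[2,k+q+1]
    mordell = [(1, k + q), (2, k + q + 1)]
    # primal conditions: B_[i*k+q+1, (i+1)*k+q] for i in [1, p-1]
    primal = [(i * k + q + 1, (i + 1) * k + q) for i in range(1, p)]
    # dual conditions: B_[i*k+q+2, (i+1)*k+q+1] for i in [1, p-2] (empty if p = 2)
    dual = [(i * k + q + 2, (i + 1) * k + q + 1) for i in range(1, p - 1)]
    return mordell, primal, dual
```

For instance, \(n = 14\), \(k = 4\) (so \(p = 3\), \(q = 2\)) yields Mordell blocks \((1,6)\), \((2,7)\), primal blocks \((7,10)\), \((11,14)\), and the single dual block \((8,11)\).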
There are two ways to explain our specific choice of \(\eta \). Most simply, notice that the output of the DBKZ algorithm—due to [MW16] and presented in Sect. 2.4—is \(\eta \)-HSVP reduced when the input basis has rank \(k+q\) (up to some small slack \(\varepsilon \)). In other words, one reason that we choose this value of \(\eta \) is because we actually can \(\eta \)-HSVP reduce a block of size \(k+q\) efficiently with access to a \(\delta \)-SVP oracle for lattices with rank k. If we could do better, then we would in fact obtain a better algorithm, but we do not know how. Second, this value of \(\eta \) is natural in this context because it is the choice that “makes the final approximation factor for HSVP match the approximation factor for the first block”. I.e., the theorem below shows that when we plug in this value of \(\eta \), a slide-reduced basis of rank n is \((\delta ^2 \gamma _{k})^{\frac{n-1}{2(k-1)}}\)-HSVP, which nicely matches the approximation factor of \(\eta = (\delta ^2 \gamma _{k})^{\frac{k+q-1}{2(k-1)}}\)-HSVP that we need for the first block (whose rank is \(k+q\)). At a technical level, this is captured by Fact 3 and Lemma 1.
Of course, the fact that these two arguments suggest the same value of \(\eta \) is not a coincidence. Both arguments are essentially disguised proofs of Mordell’s inequality, which says that \(\gamma _n \le \gamma _k^{(n-1)/(k-1)}\) for \(2 \le k \le n\). E.g., with \(\delta = 1\) the primal Mordell condition says that \(\textit{\textbf{b}}_1\) yields a witness to Mordell’s inequality for \(\mathbf {B}_{[1,k+q]}\).
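Mordell's inequality can be sanity-checked numerically in the range where Hermite's constant is known exactly (\(\gamma_n^n = 1, 4/3, 2, 4, 8, 64/3, 64, 256\) for \(n = 1, \ldots, 8\)); the check below is a numerical illustration, not part of the paper's argument.

```python
# exact values of gamma_n^n for n = 1..8 (the only ranks where gamma_n is known)
GAMMA_POW = {1: 1, 2: 4 / 3, 3: 2, 4: 4, 5: 8, 6: 64 / 3, 7: 64, 8: 256}

def hermite(n):
    """Hermite's constant gamma_n for n <= 8."""
    return GAMMA_POW[n] ** (1.0 / n)

def mordell_holds(n, k):
    """Check gamma_n <= gamma_k^((n-1)/(k-1)) up to floating-point slack."""
    return hermite(n) <= hermite(k) ** ((n - 1) / (k - 1)) + 1e-9
```

The inequality indeed holds for all \(2 \le k \le n \le 8\), with equality at \(k = 7\), \(n = 8\) (since \(\gamma_7 = 2^{6/7}\) and \(\gamma_8 = 2\)).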
Theorem 7
For any \(\delta \ge 1\), \(k \ge 2\), and \(n \ge 2k\), if \(\mathbf {B}=(\textit{\textbf{b}}_{1}, \ldots , \textit{\textbf{b}}_{n}) \in \mathbb {R}^{m \times n}\) is a \((\delta ,k)\)-slide-reduced basis of a lattice \(\mathcal {L}\), then
Furthermore, if \(\lambda _{1}(\mathcal {L}(\mathbf {B}_{[1,k+q]}))> \lambda _{1}(\mathcal {L})\), then
where \(0 \le q \le k-1\) is such that \(n=pk + q\).
Proof
Let \(d := k + q\). Theorem 9 of Appendix A shows that \(\mathbf {B}_{[d+1,n]}\) is both \((\delta ^{2} \gamma _{k})^{\frac{n-d-1}{2(k-1)}}\)-HSVP-reduced and \((\delta ^{2} \gamma _{k})^{\frac{n-d-k}{(k-1)}}\)-SVP-reduced. (We relegate this theorem and its proof to the appendix because it is essentially just a restatement of [GN08a, Theorem 1], since \(\mathbf {B}_{[d+1,n]}\) is effectively just a slide-reduced basis in the original sense of [GN08a].) Furthermore, \(\mathbf {B}_{[1,d+1]}\) is \((\delta ^{2} \gamma _{k})^{\frac{d-1}{2(k-1)}}\)-twin-reduced, so that \(\Vert \textit{\textbf{b}}_1\Vert \le (\delta ^{2} \gamma _{k})^{\frac{d}{k-1}} \Vert \textit{\textbf{b}}_{d+1}^*\Vert \). Applying Lemma 1 then yields both Eq. (5) and Eq. (6). \(\square \)
4.1 The Slide Reduction Algorithm for \(n \ge 2k\)
We now present our slight generalization of Gama and Nguyen’s slide reduction algorithm that works for all \( n \ge 2k\). Our proof that the algorithm runs in polynomial time (excluding oracle calls) is essentially identical to the proof in [GN08a].
Theorem 8
For \(\varepsilon \in [1/\mathrm {poly}(n),1]\), Algorithm 3 runs in polynomial time (excluding oracle calls), makes polynomially many calls to its \(\delta \)-SVP oracle, and outputs a \(((1+\varepsilon )\delta , k)\)-slide-reduced basis of the input lattice \(\mathcal {L}\).
Proof
First, notice that if Algorithm 3 terminates, then its output is \(((1+\varepsilon )\delta , k)\)-slide-reduced. So, we only need to argue that the algorithm runs in polynomial time (excluding oracle calls).
Let \(\mathbf {B}_{0} \in \mathbb {Z}^{m \times n}\) be the input basis and let \(\mathbf {B}\in \mathbb {Z}^{m \times n}\) denote the current basis during the execution of Algorithm 3. As is common in the analysis of basis reduction algorithms [LLL82, GN08a, LN14], we consider an integral potential of the form
The initial potential satisfies \(\log P(\mathbf {B}_{0}) \le 2n^{2} \cdot \log \Vert \mathbf {B}_{0}\Vert \), and every operation in Algorithm 3 either preserves or significantly decreases \(P(\mathbf {B})\). In particular, the potential is unaffected by the primal steps (i.e., Steps 2 and 4), which leave \(\mathrm {vol}(\mathbf {B}_{[1,i k+q]})\) unchanged for all i. The dual steps (i.e., Steps 7 and 12) either leave \(\mathrm {vol}(\mathbf {B}_{[1,i k+q]})\) unchanged for all i or decrease \(P(\mathbf {B})\) by a multiplicative factor of at least \((1+\varepsilon )\).
Therefore, Algorithm 3 updates \(\mathrm {vol}(\mathbf {B}_{[1,i k + q]})\) for some i at most \(\log P(\mathbf {B}_{0})/\log (1+\varepsilon )\) times. Hence, it makes at most \(4pn^2 \log \Vert \mathbf {B}_0\Vert /\log (1+\varepsilon )\) calls to the SVP oracle in the SVP and DSVP reduction steps (i.e., Steps 4 and 12), and similarly at most \(4n^2 \log \Vert \mathbf {B}_0\Vert /\log (1+\varepsilon )\) calls to Algorithm 1. From the complexity statement in Sect. 2.3, it follows that Algorithm 3 runs efficiently (excluding the running time of oracle calls), as needed.
\(\square \)
Corollary 2
For any constant \(c \ge 1\) and \(\delta := \delta (n) \ge 1\), there is an efficient reduction from \(O(\delta ^{2c+1} n^c)\)-SVP on lattices with rank n to \(\delta \)-SVP on lattices with rank \(k := \lfloor n/(c+1)\rfloor \).
Proof
On input (a basis for) an integer lattice \(\mathcal {L}\subseteq \mathbb {Z}^m\) with rank n, the reduction first calls Algorithm 3 to compute a \(((1+\varepsilon )\delta , k)\)-slide-reduced basis \(\mathbf {B}=(\mathbf {b}_1, \ldots , \mathbf {b}_{n})\) of \(\mathcal {L}\) with, say, \(\varepsilon = 1/n\). Then, the reduction uses the procedure from Corollary 1 on the lattice \(\mathcal {L}(\mathbf {B}_{[1,2k]})\) with \(c = 1\) (i.e., slide reduction on a lattice with rank 2k), to find a vector \(\textit{\textbf{v}} \in \mathcal {L}(\mathbf {B}_{[1,2k]})\) with \(0 < \Vert \textit{\textbf{v}}\Vert \le O(\delta ^{3}n) \lambda _1(\mathcal {L}(\mathbf {B}_{[1,2k]}))\). Finally, the reduction outputs the shorter of the two vectors \(\mathbf {b}_1\) and \(\mathbf {v}\).
It is immediate from Corollary 1 and Theorem 8 that this reduction is efficient. To prove correctness, we consider two cases.
First, suppose that \(\lambda _1(\mathcal {L}(\mathbf {B}_{[1,k+q]})) = \lambda _1(\mathcal {L})\). Then,
so that the algorithm will output a \(O(\delta ^{2c+1} n^c)\)-approximate shortest vector.
On the other hand, if \(\lambda _1(\mathcal {L}(\mathbf {B}_{[1,k+q]})) > \lambda _1(\mathcal {L})\), then by Theorem 7, we have
so that the algorithm also outputs a \(O(\delta ^{2c+1} n^c)\)-approximate shortest vector in this case. \(\square \)
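The exponent arithmetic behind Corollary 2 can be made concrete: with \(k = \lfloor n/(c+1)\rfloor\), the exponent \((n-k)/(k-1)\) that governs bounds of the shape \((\delta^2\gamma_k)^{(n-k)/(k-1)}\) (cf. Theorem 9) tends to \(c\), which together with \(\gamma_k = O(k) = O(n)\) accounts for the \(n^c\) and \(\delta^{2c}\) parts of the \(O(\delta^{2c+1} n^c)\) factor; this is a back-of-the-envelope check, not a substitute for the proof above.

```python
from fractions import Fraction

def svp_exponent(n, c):
    """Exponent (n - k)/(k - 1) with k = floor(n/(c + 1)); approaches c."""
    k = n // (c + 1)
    return Fraction(n - k, k - 1)
```

For \(c = 2\) and \(n = 10^6\), the exponent differs from 2 by less than \(10^{-4}\).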
Notes
- 1.
The concrete security of lattice-based cryptography is assessed using HSVP and a heuristic version of Eq. (2) where Hermite’s constant is replaced by a Gaussian heuristic estimate. In this work, we restrict our attention to what we can prove, and we focus on SVP rather than HSVP.
- 2.
We are ignoring a certain degenerate case here for simplicity. Namely, if all short vectors happen to lie in the span of the first block, and these vectors happen to be very short relative to the volume of the first block, then calling an HSVP oracle on the first block might not be sufficient to solve approximate SVP. Of course, if we know a low-dimensional subspace that contains the shortest non-zero vector, then finding short lattice vectors is much easier. This degenerate case is therefore easily handled separately (but it does in fact need to be handled separately).
- 3.
For example, it is immediate from the proof of Theorem 5 that the (very simple) notion of a slide-reduced basis for \(n \le 2k\) in Definition 1 is already enough to obtain \(\delta \approx \gamma _{n-k} \approx n- k\). So, for \(n \lesssim k + \sqrt{k}\), this already achieves \(\delta \lesssim \sqrt{n}\). With a bit more work, one can show that an extra oracle call like the one used in Corollary 1 can yield a still better approximation factor in this rather extreme setting of \(k = (1-o(1))n\).
- 4.
The only difference, apart from the approximation factor \(\delta \), is that we use SVP reduction instead of HKZ reduction for the primal. It is clear from the proof in [GN08a] that only SVP reduction is required, as was observed in [MW16]. We do require that additional blocks \(\mathbf {B}_{[i,n]}\) for \(q+1\le i \le k\) are SVP-reduced, which is quite similar to simply HKZ-reducing \(\mathbf {B}_{[q+1,n]}\), but this requirement plays a distinct role in our analysis, as we discuss below.
- 5.
Apart from the approximation factor \(\delta \), there is one minor difference between our primal conditions and those of [GN08a]. We only require the primal blocks to be SVP-reduced, while [GN08a] required them to be HKZ-reduced, which is a stronger condition. It is clear from the proof in [GN08a] that only SVP reduction is required, as was observed in [MW16].
- 6.
When \(p = 2\), there are simply no dual conditions.
References
Aggarwal, D., Dadush, D., Regev, O., Stephens-Davidowitz, N.: Solving the shortest vector problem in \(2^n\) time via discrete Gaussian sampling. In: STOC (2015). http://arxiv.org/abs/1412.7994
Ajtai, M.: Generating hard instances of lattice problems. In: STOC (1996)
Ajtai, M., Kumar, R., Sivakumar, D.: A sieve algorithm for the shortest lattice vector problem. In: STOC (2001)
Aggarwal, D., Stephens-Davidowitz, N.: Just take the average! An embarrassingly simple \(2^n\)-time algorithm for SVP (and CVP). In: SOSA (2018). http://arxiv.org/abs/1709.01535
Becker, A., Ducas, L., Gama, N., Laarhoven, T.: New directions in nearest neighbor searching with applications to lattice sieving. In: SODA (2016)
Chen, Y., Nguyen, P.Q.: Faster algorithms for approximate common divisors: breaking fully-homomorphic-encryption challenges over the integers. In: Pointcheval, D., Johansson, T. (eds.) EUROCRYPT 2012. LNCS, vol. 7237, pp. 502–519. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-29011-4_30
Dadush, D., Peikert, C., Vempala, S.: Enumerative lattice algorithms in any norm via \(M\)-ellipsoid coverings. In: FOCS (2011)
Gama, N., Howgrave-Graham, N., Koy, H., Nguyen, P.Q.: Rankin’s constant and blockwise lattice reduction. In: Dwork, C. (ed.) CRYPTO 2006. LNCS, vol. 4117, pp. 112–130. Springer, Heidelberg (2006). https://doi.org/10.1007/11818175_7
Gama, N., Howgrave-Graham, N., Nguyen, P.Q.: Symplectic lattice reduction and NTRU. In: Vaudenay, S. (ed.) EUROCRYPT 2006. LNCS, vol. 4004, pp. 233–253. Springer, Heidelberg (2006). https://doi.org/10.1007/11761679_15
Gama, N., Nguyen, P.Q.: Finding short lattice vectors within Mordell’s inequality. In: STOC (2008)
Gama, N., Nguyen, P.Q.: Predicting lattice reduction. In: Smart, N. (ed.) EUROCRYPT 2008. LNCS, vol. 4965, pp. 31–51. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78967-3_3
Gentry, C., Peikert, C., Vaikuntanathan, V.: Trapdoors for hard lattices and new cryptographic constructions. In: STOC (2008). https://eprint.iacr.org/2007/432
Hanrot, G., Pujol, X., Stehlé, D.: Analyzing blockwise lattice algorithms using dynamical systems. In: Rogaway, P. (ed.) CRYPTO 2011. LNCS, vol. 6841, pp. 447–464. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22792-9_25
Joux, A., Stern, J.: Lattice reduction: a toolbox for the cryptanalyst. J. Cryptol. 11(3), 161–185 (1998). https://doi.org/10.1007/s001459900042
Kannan, R.: Improved algorithms for integer programming and related lattice problems. In: STOC (1983)
Lenstra Jr., H.W.: Integer programming with a fixed number of variables. Math. Oper. Res. 8(4), 538–548 (1983)
Lenstra, A.K., Lenstra Jr., H.W., Lovász, L.: Factoring polynomials with rational coefficients. Math. Ann. 261(4), 515–534 (1982)
Li, J., Nguyen, P.Q.: Approximating the densest sublattice from Rankin’s inequality. LMS J. Comput. Math. 17(Special Issue A) (2014). Contributed to ANTS-XI, 2014
Li, J., Nguyen, P.Q.: Computing a lattice basis revisited. In: ISSAC (2019)
Lovász, L.: An Algorithmic Theory of Numbers, Graphs and Convexity. Society for Industrial and Applied Mathematics, Philadelphia (1986)
Li, J., Wei, W.: Slide reduction, successive minima and several applications. Bull. Aust. Math. Soc. 88, 390–406 (2013)
Liu, M., Wang, X., Xu, G., Zheng, X.: Shortest lattice vectors in the presence of gaps (2011). http://eprint.iacr.org/2011/139
Micciancio, D., Voulgaris, P.: A deterministic single exponential time algorithm for most lattice problems based on Voronoi cell computations. SIAM J. Comput. 42(3), 1364–1391 (2013)
Micciancio, D., Walter, M.: Practical, predictable lattice basis reduction. In: Fischlin, M., Coron, J.-S. (eds.) EUROCRYPT 2016. LNCS, vol. 9665, pp. 820–849. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49890-3_31. http://eprint.iacr.org/2015/1123
Neumaier, A.: Bounding basis reduction properties. Des. Codes Cryptogr. 84(1), 237–259 (2017). https://doi.org/10.1007/s10623-016-0273-9
Computer Security Division NIST: Post-quantum cryptography (2018). https://csrc.nist.gov/Projects/Post-Quantum-Cryptography
Nguyen, P.Q., Stern, J.: The two faces of lattices in cryptology. In: Silverman, J.H. (ed.) CaLC 2001. LNCS, vol. 2146, pp. 146–180. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-44670-2_12
Nguyen, P.Q., Vidick, T.: Sieve algorithms for the shortest vector problem are practical. J. Math. Cryptol. 2(2), 181–207 (2008)
Nguyen, P.Q., Vallée, B. (eds.): The LLL Algorithm: Survey and Applications. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-02295-1
Odlyzko, A.M.: The rise and fall of knapsack cryptosystems. Cryptol. Comput. Number Theory 42, 75–88 (1990)
Peikert, C.: Public-key cryptosystems from the worst-case shortest vector problem. In: STOC (2009)
Peikert, C.: A decade of lattice cryptography. Found. Trends Theor. Comput. Sci. 10(4), 283–424 (2016)
Pujol, X., Stehlé, D.: Solving the shortest lattice vector problem in time \(2^{2.465 n}\) (2009). http://eprint.iacr.org/2009/605
Regev, O.: On lattices, learning with errors, random linear codes, and cryptography. J. ACM 56(6), 1–40 (2009)
Schnorr, C.-P.: A hierarchy of polynomial time lattice basis reduction algorithms. Theor. Comput. Sci. 53(2–3), 201–224 (1987)
Schnorr, C.-P., Euchner, M.: Lattice basis reduction: improved practical algorithms and solving subset sum problems. Math. Program. 66, 181–199 (1994). https://doi.org/10.1007/BF01581144
Shamir, A.: A polynomial-time algorithm for breaking the basic Merkle-Hellman cryptosystem. IEEE Trans. Inform. Theory 30(5), 699–704 (1984)
Wei, W., Liu, M., Wang, X.: Finding shortest lattice vectors in the presence of gaps. In: Nyberg, K. (ed.) CT-RSA 2015. LNCS, vol. 9048, pp. 239–257. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-16715-2_13
A Properties of Gama and Nguyen’s Slide Reduction
In the theorem below, \(\mathbf {B}_{[d+1,n]}\) is essentially just a slide-reduced basis in the sense of [GN08a]. So, the following is more-or-less just a restatement of [GN08a, Theorem 1].
Theorem 9
Let \(\mathbf {B}= (\textit{\textbf{b}}_1,\ldots , \textit{\textbf{b}}_n) \in \mathbb {R}^{m \times n}\) with \(n = pk + d\) for some \(p \ge 1\) and \(d \ge k\) be \((\delta ,k)\)-slide reduced in the sense of Definition 2. Then,
Proof
By definition, for each \(i \in [0, p-2]\), the block \(\mathbf {B}_{[ik+d+1,(i+1)k+d+1]}\) is \(\delta \sqrt{\gamma _k}\)-twin-reduced. By Eq. (3) of Fact 3, we see that
which implies (7) by induction.
We prove (8) and (9) by induction over p. If \(p=1\), then both inequalities hold as \(\mathbf {B}_{[d+1,n]}\) is \(\delta \)-SVP reduced by the definition of slide reduction. Now, assume that Eqs. (8) and (9) hold for some \(p \ge 1\). Let \(n=(p+1)k+d\). Then \(\mathbf {B}\) satisfies the requirements of the theorem with \(d' := d+k\). Therefore, by the induction hypothesis, we have
Since \(\mathbf {B}_{[d+1,d+k]}\) is \(\delta \sqrt{\gamma _k}\)-HSVP reduced, we may apply Lemma 1.2 with \(\eta =(\delta ^2 \gamma _{k})^{\frac{1}{2(k-1)}}\), which proves (8) for \(\mathbf {B}_{[d+1,n]}\).
Furthermore, if \(\lambda _1(\mathcal {L}(\mathbf {B}_{[d+1,n]})) < \lambda _1(\mathcal {L}(\mathbf {B}_{[d+1,d+k]}))\), it follows from Lemma 1.1 that \(\mathbf {B}_{[d+1,n]}\) is \(\delta '\)-SVP-reduced for
as needed. If not, then \(\lambda _1(\mathcal {L}(\mathbf {B}_{[d+1,n]})) = \lambda _1(\mathcal {L}(\mathbf {B}_{[d+1,d+k]}))\), and \(\Vert \textit{\textbf{b}}_{d+1}^{*}\Vert \le \delta \lambda _1(\mathcal {L}(\mathbf {B}_{[d+1,n]}))\) because \(\mathbf {B}_{[d+1,d+k]}\) is \(\delta \)-SVP reduced. In either case, (9) holds. This completes the proof of Theorem 9. \(\square \)
© 2020 International Association for Cryptologic Research

Aggarwal, D., Li, J., Nguyen, P.Q., Stephens-Davidowitz, N. (2020). Slide Reduction, Revisited—Filling the Gaps in SVP Approximation. In: Micciancio, D., Ristenpart, T. (eds.) Advances in Cryptology – CRYPTO 2020. LNCS, vol. 12171. Springer, Cham. https://doi.org/10.1007/978-3-030-56880-1_10