                              THEORY OF COMPUTING, Volume 8 (2012), pp. 487–512
                                           www.theoryofcomputing.org




                       Inapproximability of
                   the Shortest Vector Problem:
                 Toward a Deterministic Reduction
                                                  Daniele Micciancio∗
                                  Received: May 12, 2011; published: September 24, 2012.




         Abstract: We prove that the Shortest Vector Problem (SVP) on point lattices is NP-
         hard to approximate for any constant factor under polynomial-time randomized reductions
         with one-sided error that produce no false positives. We also prove inapproximability for
         quasi-polynomial factors under the same kind of reductions running in subexponential time.
         Previous hardness results for SVP either incurred two-sided error, or only proved hardness
         for small constant approximation factors. Close similarities between our reduction and recent
         results on the complexity of the analogous problem for linear codes make our new proof an
         attractive target for derandomization, paving the road to a possible NP-hardness proof for
         SVP under deterministic polynomial-time reductions.

ACM Classification: F.2.2, F.1.3, G.1.6
AMS Classification: 68Q17, 52C07, 11H06, 11H31, 05B40
Key words and phrases: lattices, shortest vector problem, NP-hardness, hardness of approximation,
randomized reductions

1      Introduction
Lattices are regular arrangements of points in n-dimensional Euclidean space that arise in several areas of
computer science and mathematics. Two central problems in the computational study of lattices are the
     ∗ Supported in part by NSF grant CNS-1117936. Any opinions, findings, and conclusions or recommendations expressed in
this material are those of the author and do not necessarily reflect the views of the National Science Foundation.


    © 2012 Daniele Micciancio
    Licensed under a Creative Commons Attribution License                                        DOI: 10.4086/toc.2012.v008a022

Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP). Informally, SVP asks to find a
shortest nonzero vector in a lattice. CVP is the inhomogeneous counterpart of SVP, and asks to find a
lattice point closest to a given target. The approximate version of SVP asks for a nonzero lattice point
having length at most γ times the shortest nonzero lattice vector, while the approximate version of CVP
asks for a lattice vector whose distance to a given point is at most γ times the distance of the closest lattice
vector to that point. Both SVP and CVP are hard combinatorial problems, and the asymptotically fastest
known deterministic algorithm to solve them exactly runs in time Õ(4^n) [22]. (See also [2] and follow-up
work [24, 23, 26] for a different class of randomized algorithms for SVP, which are theoretically slower
than [22], but admit much faster heuristic implementations [24, 23, 33].)
     SVP is the more famous and widely studied problem of the two. It is also the problem for which
proving strong intractability results has been most challenging. The NP-hardness of SVP (in the Euclidean
norm) was conjectured by van Emde Boas in 1981 [30], but remained an outstanding open problem in
computational complexity theory for almost two decades. In 1998, Ajtai [1] gave a first answer to this
problem, proving that solving SVP exactly is NP-hard1 under randomized reductions. This should be
contrasted with the inhomogeneous problem, CVP, which has been known to be NP-hard (even under
deterministic reductions) since the early 80s [30]. Moreover, CVP admits much simpler NP-hardness
proofs than SVP [19], and it is known to be NP-hard even in its approximate version (and, as before,
under deterministic reductions) for factors as large as n^{1/O(log log n)} [10]. Proving the NP-hardness of SVP
under deterministic reductions is still an open problem, even for the exact version of SVP.
     Immediately following Ajtai’s breakthrough result, the complexity of SVP received renewed attention,
leading to several improvements, with the main goal of showing that the problem is hard even in its
approximate version. In [1], Ajtai had already observed that hardness for the exact version implies weak
inapproximability results for approximation factors of the form 1 + 1/2^{n^c} that rapidly approach 1 as
the lattice dimension n grows. This was slightly improved by Cai and Nerurkar [7] to factors 1 + 1/n^c,
still approaching 1 in the limit, though at a lower rate. The first significant inapproximability result for
factors bounded away from 1 was achieved by the present author [20], who proved NP-hardness for any
constant factor smaller than √2 (independent of the lattice dimension). The proof in [20] has a simple
and intuitive high-level structure. Specifically, the NP-hardness of SVP is proved by reduction from
(a variant of) CVP, through what can be called a “homogenization process” [21]. The idea is roughly
the following: if the lattice vector v ∈ Λ is close to the target t, then t − v is a short vector in the lattice
generated by Λ and t. So, one can attempt to solve a CVP instance by means of an SVP computation
on an augmented lattice. This process, often used as a heuristic in cryptanalysis [17], does not work in
general (see Section 2). However, [20] showed that a natural geometric gadget (essentially consisting of
a lattice coset in Euclidean space with large minimum distance and many short vectors) can be used to
turn this simple idea into a valid reduction from CVP to SVP. The reduction of [20] still admits a nice
geometric interpretation (see Section 5 for details), and served as a starting point to obtain similar results
for the analogous Minimum Distance Problem (MDP) on linear codes.
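The augmentation step of this homogenization heuristic can be sketched in code (a toy NumPy sketch; the function name and the embedding parameter M are illustrative, not part of the paper):

```python
import numpy as np

def embed_cvp_into_svp(B, t, M=1.0):
    """Homogenize a CVP instance (B, t) into an SVP basis (toy sketch).

    The augmented lattice is generated by the columns of B together
    with the extra column (t, M): if v = Bx is a lattice vector close
    to the target t, then (t - v, M) is a short nonzero vector of the
    augmented lattice.  M is an illustrative embedding parameter that
    penalizes using the target column more than once.
    """
    n, k = B.shape
    top = np.hstack([B, t.reshape(-1, 1)])
    bottom = np.zeros((1, k + 1))
    bottom[0, -1] = M
    return np.vstack([top, bottom])

# Toy example: the lattice 3Z x 3Z with target t = (1, 1).
B = np.diag([3.0, 3.0])
t = np.array([1.0, 1.0])
A = embed_cvp_into_svp(B, t)
# Coefficients (0, 0, 1) select the target column once, producing the
# augmented vector (t - 0, M) = (1, 1, 1) of length sqrt(3).
v = A @ np.array([0.0, 0.0, 1.0])
print(v)
```

As the text notes, this naive embedding does not work in general; the geometric gadget of [20] is what turns it into a valid reduction.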
     The history of the coding problem MDP, along with its inhomogeneous counterpart the Nearest

   1 All known hardness results hold not only for the SVP and CVP search problems as informally defined above, but also for
the decision/promise problems naturally associated to them. (See Definitions 2.1 and 2.2.) For simplicity in this introduction we
ignore the technical distinction between search and decision/promise problems, and state the known hardness results only for
search SVP and CVP.



Codeword Problem (NCP), mostly mirrors that of SVP and CVP. The NP-hardness of the inhomogeneous
problem, NCP, was already proved in the late 70s [6] for the exact version of the problem, and improved
to NP-hardness of approximation within any constant factor (or subpolynomial factors n^{1/O(log log n)}
under subexponential time reductions)2 in [3]. Just like for lattices, proving the NP-hardness of the
homogeneous problem, MDP, took much longer. Hardness for the exact version of MDP was proved
by Vardy [32] around the same time as Ajtai’s discovery for SVP [1]. However, while Ajtai’s reduction
was randomized, Vardy [32] could prove NP-hardness of MDP under deterministic reductions. Building
on [20], Dumer, Micciancio, and Sudan [11] proved that MDP is NP-hard even to approximate, for any
constant approximation factor. However, [11] inherited from [20] the use of randomization for the
construction of the embedding gadget required by the reduction. Finally, in a surprising development,
Cheng and Wan [8] showed that the probabilistic construction of the embedding gadget employed in the
reduction of [11] can be derandomized, leading to the NP-hardness of approximating MDP within any
constant factor under deterministic reductions. (The result of Cheng and Wan [8] has also been recently
simplified by Austrin and Khot [4].)
    Going back to lattices, the strongest inapproximability results for SVP known to date are Khot’s
proof [14] that SVP is NP-hard to approximate within any constant factor O(1), and Haviv and Regev’s
proof [12] that SVP cannot be approximated within some factor n^{1/O(log log n)} unless NP is in random
subexponential time.3 However, just like Ajtai’s original proof [1], all subsequent inapproximability
results for SVP [7, 20, 14, 12] employed randomization, and little progress has been made in proving
NP-hardness under deterministic reductions, even for the exact version of SVP. In fact, the most recent
and quantitatively strongest results [14, 12] achieve larger inapproximability factors than [20] at the cost
of introducing even more randomness: while the randomized reduction in [20] had one-sided error, the
hardness proofs of Khot [14] and Haviv and Regev [12] incurred two-sided error. More specifically,
depending on the value of the random string, the probabilistic reduction of [14, 12] may produce both
false negatives (i. e., map YES instances to NO instances) and false positives (i. e., map NO instances to
YES instances). By contrast, the probabilistic reduction of [20] is guaranteed not to produce false positives
regardless of the random string used in the reduction process.4 Moreover, in addition to introducing
two-sided errors, the hardness proofs of [14, 12] depart from the homogenization framework of [20], and
incorporate additional probabilistic techniques (namely, the intersection of lattices with randomly chosen
subspaces) that make the high-level structure of the reduction more involved and harder to derandomize.
In particular, the use of randomization in [14, 12] is not restricted to the construction of a gadget with a
self-contained description as in [20], but permeates the entire reduction process.


   2 The hardness of NCP under subexponential time reductions easily follows from [3] using a standard tensor product argument.
It may well be the case that, similar to CVP [10], the hardness of NCP for subpolynomial approximation factors n^{1/O(log log n)}
can be proved under deterministic polynomial-time reductions. However, [10] only considers CVP, and makes no claim about
the hardness of NCP. Extending the methods of [10] to NCP is an interesting open problem.
   3 Inapproximability results under subexponential assumptions were already given by Khot in [14], but for smaller
approximation factors than [12].
   4 Randomized reductions with this one-sided error property are called reverse unfaithful random (RUR) reductions [13]. For
simplicity, in this paper, we avoid the use of this technical term, and refer to these reductions simply as probabilistic reductions
with one-sided error and no false positives. Probabilistic reductions with one-sided error and no false negatives (faithful random
reductions, using the terminology of [13]) also occur in the study of lattice problems [15], but they are not used in this paper.



Our results We present a new, simpler proof that SVP is NP-hard to approximate within any constant
factor which goes back to the geometrically appealing approach of [20] and avoids the introduction of
additional probabilistic techniques from [14, 12]. In particular, we prove
     • the NP-hardness of SVP for any constant approximation factor, as in [14], and

     • the hardness of SVP for subpolynomial factors n^{1/O(log log n)} under the assumption that NP is not in
       subexponential time, as in [14, 12],
thus matching the strongest known hardness results for SVP, but under probabilistic reductions with one-
sided error. We regard our results as a partial derandomization of the reductions [14, 12] with two-sided
error, and a step toward an NP-hardness proof for SVP under deterministic reductions. Randomness is
used within our proof exclusively for the construction of a geometric gadget with properties similar to
the one originally introduced by Micciancio in [20]. Besides the technical advantage of resulting in a
reduction with one-sided error, we believe this takes us closer to a possible NP-hardness proof for SVP
under deterministic reductions for the following reasons:
     • In [20], Micciancio showed that a lattice gadget similar to the one used in this paper can be
       constructed in deterministic polynomial time under a certain (plausible but unproven) number-
       theoretic conjecture on the distribution of smooth numbers.5 While proving the number-theoretic
       conjecture of [20] seems difficult, the result in [20] suggests that randomness is not essential to
       prove NP-hardness results for SVP.

     • The probabilistic construction of a similar gadget for linear codes used in [11] to prove the
       NP-hardness of MDP has been successfully derandomized [8]. This provides hope that a deran-
       domization of the lattice gadget employed in this paper may be possible too.

     • The lattice gadget presented in this paper is constructed using techniques from the theory of linear
       codes, rather than the number-theoretic methods of [20]. So, the techniques in [8, 4] for the
       derandomization of the coding gadget of [11] may help to derandomize the construction of the
       lattice gadget described in this paper.
While proving the NP-hardness of (approximating) SVP under deterministic polynomial-time reductions
is a goal yet to be reached, we believe that our results offer a viable approach to the resolution of this
outstanding open problem.

Techniques A standard method to prove hardness results within large approximation factors for lattice
and coding problems is to first prove hardness for some fixed small constant factor, and then amplify the
constant using some polynomial-time (or quasi-polynomial-time) transformation. For example, the tensor
product of linear codes is used in [11] to amplify the NP-hardness of approximating MDP to arbitrarily
large constant factors. This suggests the use of tensor products of lattices to prove the NP-hardness of
SVP within large constant factors, starting from the inapproximability result of [20] for factors below
√2. In fact, using the tensor product is a common theme in the sequence of papers [20, 14, 12] proving
   5 Namely, [20] conjectured that for every ε > 0 there is a c ≥ 1 such that for all sufficiently large n the interval [n, n + n^ε]
contains a square-free smooth number, i. e., an integer whose prime factors are all distinct and bounded by log^c n.



hardness of approximation results for SVP. Unfortunately, while the minimum distance of a linear code
gets squared when one takes the tensor product of the code with itself, the same is not always true for
the length of the shortest vectors in a lattice. The length of a shortest vector in the tensor product of a
lattice with itself can be essentially the same as the length of shortest vectors in the original lattice (e. g.,
see [12, Lemma 2.4]), and this is why Micciancio [20] could not prove NP-hardness (under randomized
reductions with one-sided error) of SVP within larger constant factors. Subsequent work [14, 12] went
around this obstacle in various ways. Khot [14] introduced a nonstandard notion of “augmented tensor
products,” and used it to prove NP-hardness results for any constant approximation factor (and some
subpolynomial factors under subexponential reductions) starting from a new hardness result for small
constants based on BCH codes. Haviv and Regev [12] were able to prove that the lattices produced by
the basic reduction of [14] behave well6 with respect to the standard tensor product operation, leading to
stronger hardness results under superpolynomial-time reductions.
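The contrast drawn here can be checked concretely for codes, where minimum distance does multiply under the tensor (Kronecker) product. A brute-force sketch on a toy binary code (the helper function is illustrative, not from the paper):

```python
import itertools
import numpy as np

def min_weight(G):
    """Minimum Hamming weight of the nonzero codewords of the binary
    code generated (mod 2) by the columns of G, by exhaustive search."""
    n, k = G.shape
    return min(int(np.sum((G @ np.array(x)) % 2))
               for x in itertools.product([0, 1], repeat=k) if any(x))

# A [3, 2, 2] binary code: all even-weight words of length 3.
G = np.array([[1, 0],
              [0, 1],
              [1, 1]])
print(min_weight(G))             # 2
# Tensoring the code with itself squares the minimum distance: 2^2 = 4.
print(min_weight(np.kron(G, G)))
```

For lattices no such general statement holds, which is exactly the obstacle discussed above.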
     We remark that the proofs in [14, 12] that the (augmented) tensor product does amplify the approx-
imation factor are specific to the basic lattices of [14]. In this paper we revisit the general problem of
amplifying the approximation factor for SVP by the standard tensor product operation, and prove that
tensoring works when applied to an appropriate variant of SVP. Specifically, we introduce a new method
to measure the length of the vectors in a lattice, which can be seen as a hybrid between the Euclidean
length typically used for lattices and the Hamming metric used for linear codes. Concretely, the measure
associated to an integer vector v is given by the product of the largest power of 2 (or some other fixed
integer q) that divides v times the square root of the number of nonzero coordinates in v. Using this
measure, we define a variant of SVP and prove that it behaves well with respect to the tensor product.
Then, we prove that our SVP variant is NP-hard to approximate within some constant factor, under
randomized reductions with one-sided error. Tensoring immediately yields inapproximability results
for SVP within larger factors, still under randomized reductions with one-sided error. Moreover, our
basic NP-hardness proof—within small approximation factors—is very similar to those in [20, 11] so, as
explained in the previous paragraphs, it may be more easily derandomized. We remark that the standard
tensor product operation amplifies the approximation factor for any instance of the SVP variant defined
in this paper, not just for the output of our basic reduction. So, the amplification method proposed here is
fairly general: any proof that the SVP variant is NP-hard to approximate within some constant factor
under deterministic reductions (not necessarily obtained by derandomizing the specific reduction given in
this paper) immediately yields similar deterministic NP-hardness results for arbitrarily large constants.
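The hybrid measure described above can be computed as follows (a minimal illustrative sketch, assuming the power of q is the largest one dividing every coordinate, as in the definition):

```python
import math
import numpy as np

def hybrid_measure(v, q=2):
    """The hybrid 'length' of an integer vector, as sketched above:
    (largest power of q dividing every coordinate of v) * sqrt(number
    of nonzero coordinates of v).  Illustrative implementation.
    """
    v = np.asarray(v, dtype=int)
    support = int(np.count_nonzero(v))
    if support == 0:
        return 0.0
    power = 1
    while np.all(v % q == 0):
        v = v // q
        power *= q
    return power * math.sqrt(support)

print(hybrid_measure([4, 0, 8, 12]))  # 4 * sqrt(3): 4 divides every
                                      # coordinate, and 3 are nonzero
```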
     Similarly to recent SVP NP-hardness proofs [14, 12], our reductions use BCH codes, but only within
the construction of the homogenization gadget. As a historical note, the use of BCH codes in the context
of proving NP-hardness results for homogeneous lattice and coding problems was first suggested in [11].
More specifically, [11] proved that codes beating the Gilbert-Varshamov bound can be used to build
geometric gadgets similar to the one of [20], and mentioned Reed-Solomon, Algebraic-Geometry and
BCH codes as examples of codes beating this bound. BCH codes were later used by [14, 12] to prove
the hardness of SVP for any constant approximation factor and beyond, but in an ad-hoc manner, i. e.,
without explicitly connecting them to the general framework of [20, 11]. Our work perhaps offers a more
intuitive explanation of why BCH codes are useful in proving inapproximability results for SVP.
   6 Informally, a lattice Λ behaves well with respect to the tensor product if the minimum distance of the tensor product of Λ
with itself is essentially the square of the minimum distance of Λ. See Theorem 3.5 for a formal definition.



Organization The rest of the paper is organized as follows. In Section 2 we give some background
about lattices, codes, and the homogenization framework of [20]. In Section 3 we describe our basic
techniques used to amplify inapproximability results for SVP via tensoring. In Section 4 we give a
construction of very dense lattices with large minimum distance that behave well with respect to the
tensor product operation. In Section 5 we give our main NP-hardness proof for SVP under nonuniform
reductions with one-sided error. We chose to first present our result as a nonuniform reduction to make the
reduction and analysis as simple as possible and self-contained. However, the nonuniformity of the advice
is not used in any essential way in our proof, and in Section 6 we use combinatorial techniques from [1, 20]
to replace the nonuniform advice with a uniformly chosen random string, leading to NP-hardness results
under randomized reductions with one-sided error.


2    Background
In this section we give some standard background on computational complexity, lattices, and some
combinatorial results used in this paper. The reader familiar with computational complexity can safely
skip most of this section without much loss.
      We use R, Z, and 2^A to denote the set of the real numbers, the set of the integers, and the power set of
an arbitrary set A, respectively. We use standard asymptotic notation and write f = O(g) or g = Ω(f)
if |f(n)/g(n)| is bounded, and f = o(g) or g = ω(f) if lim_{n→∞} |f(n)/g(n)| = 0. We call a function
f: Z → R negligible if f(n) = n^{-ω(1)}.

Computational complexity As is standard in the study of computational complexity, we formulate
algorithmic problems on lattices as decision (or, more precisely, promise) problems. A promise problem
is a pair of disjoint languages ΠYES , ΠNO ⊆ {0, 1}∗ . The elements of ΠYES and ΠNO are called YES
instances and NO instances, respectively. The computational problem associated to (ΠYES , ΠNO ) is,
given a string w ∈ ΠYES ∪ ΠNO , determine if w ∈ ΠYES or not. (If w ∉ ΠYES ∪ ΠNO , then any answer is
acceptable.) Decision problems correspond to the special case where ΠYES ∪ ΠNO = {0, 1}∗ , and the
promise w ∈ ΠYES ∪ ΠNO is trivially satisfied. When studying lattices and other computational problems,
we assume that common mathematical objects (integers, matrices, vectors, pairs, etc.) are represented as
binary strings in some standard way.
      A reduction between two promise problems (ΠYES , ΠNO ) and (ΣYES , ΣNO ) is a function f : {0, 1}∗ →
{0, 1}∗ such that f (ΠYES ) ⊆ ΣYES and f (ΠNO ) ⊆ ΣNO . In this paper we also use randomized and nonuni-
form reductions, defined below. Both randomized and nonuniform reductions are specified by a function
 f (x, y) that takes two inputs, a regular input x and an auxiliary input y. The auxiliary input y is called
advice in the case of nonuniform reductions, and randomness in the case of randomized reductions. The
complexity of the reduction is the time complexity of computing f , expressed as a function of the length
of the first input x alone. So, we say that f (x, y) is a polynomial-time reduction if there is an algorithm
that on input x and y, outputs f (x, y) in time |x|^{O(1)}.
      A function f (x, y) is a nonuniform reduction from (ΠYES , ΠNO ) to (ΣYES , ΣNO ) if for any input length
n, there is an advice string y such that f (ΠYES ∩ {0, 1}^n , y) ⊆ ΣYES and f (ΠNO ∩ {0, 1}^n , y) ⊆ ΣNO . When
studying reductions between concrete computational problems (e. g., lattice problems in this paper) it
is common to have the advice string depend not on the input length |x|, but on some other parameter


of interest associated with the size of the input, e. g., the lattice dimension. (See the next paragraph
for background about lattices.) This is without loss of generality, as the lattice dimension is always at
most |x|, and therefore y can include a piece of advice for each dimension n ≤ |x| while incurring only a
polynomial blow-up in the size of the advice.
     A function f (x, y) is a randomized reduction from (ΠYES , ΠNO ) to (ΣYES , ΣNO ) if

      Pr_y { f (x, y) ∈ ΣYES } ≥ 2/3 for all x ∈ ΠYES ,      and      Pr_y { f (x, y) ∈ ΣNO } ≥ 2/3 for all x ∈ ΠNO ,

where both probabilities are computed over the uniformly random choice of y ∈ {0, 1}^{r(|x|)} among the set
of all binary strings of a given length r(|x|). (One can always assume that the number r(|x|) of random
bits used by the reduction is bounded by the reduction running time t(|x|).) Randomized reductions
between concrete computational problems often assume that the randomness y is not just a uniformly
random binary string, but it is the binary representation of a structured object chosen according to some
efficiently sampleable (but not necessarily uniform) distribution. Formally, the randomized reduction
uses a uniformly random y ∈ {0, 1}^{r(|x|)} as a seed to run the efficient sampling algorithm that produces
(or approximates up to negligible errors) the desired distribution.
     Notice that nonuniform and randomized reductions have two-sided error: depending on the value of
the advice or randomness y, they can produce both false positives (i. e., map x ∈ ΠNO to f (x, y) ∈ ΣYES ) and
false negatives (i. e., map x ∈ ΠYES to f (x, y) ∈ ΣNO ). In this paper we consider reductions with one-sided
error, that may produce false negatives, but are guaranteed not to produce false positives. Formally,
nonuniform reductions with one-sided error and no false positives satisfy f (ΠNO ∩ {0, 1}^n , y) ⊆ ΣNO for
all advice strings y. Similarly, randomized reductions with one-sided error and no false positives satisfy
Pr_y { f (x, y) ∈ ΣNO } = 1 for all x ∈ ΠNO .

Lattices The n-dimensional Euclidean space is denoted R^n. A lattice in R^n is the set of all integer
combinations

                                     Λ = { ∑_{i=1}^k x_i b_i : x_i ∈ Z }

of k linearly independent vectors b_1, . . . , b_k in R^n (n ≥ k). The set of vectors b_1, . . . , b_k is called a lattice
basis, and the integer k is the lattice rank or dimension. When k = n, Λ is called a full-rank or full-
dimensional lattice. A basis can be compactly represented by the matrix B = [b_1, . . . , b_k] ∈ R^{n×k} having the
basis vectors as columns. The lattice generated by B is denoted L(B). Notice that L(B) = {Bx : x ∈ Z^k},
where Bx is the usual matrix-vector multiplication. The determinant of a lattice L(B) is the volume of the
parallelepiped spanned by the basis vectors B, and does not depend on the choice of the basis B. When B
is a square matrix, it equals the absolute matrix determinant det(L(B)) = |det(B)|. More generally, for
nonsquare bases, det(L(B)) = √(det(Bᵀ B)), where Bᵀ is the matrix transpose of B.
     Lattice problems can be defined with respect to any norm, but the Euclidean norm ‖x‖ = √(∑_i x_i²)
is the most common, and the one we focus on in this paper. We recall that the Euclidean norm is, in a
technical sense, the one for which lattice problems are algorithmically easiest, and hardness results for
other norms can be obtained via norm embeddings [27]. The minimum distance λ (Λ) of a lattice Λ is the
smallest distance between any two distinct lattice points and equals the length of the shortest nonzero


lattice vectors:

                     λ(Λ) = min{‖x − y‖ : x ≠ y ∈ Λ} = min{‖x‖ : x ∈ Λ, x ≠ 0} .

For any vector v ∈ R^n and real r, let B(v, r) = {w ∈ R^n : ‖v − w‖ ≤ r} be the ball of radius r centered at
v. When the ball is centered around the origin v = 0, we simply write B(r).
     When discussing computational issues related to lattices, it is customary to assume that the lattices
are represented by a basis matrix B and that the basis B has integer entries. This is without much loss
of generality as real lattices can be approximated by rational ones with arbitrarily high precision, and
rational lattices are easily mapped to integer ones simply by scaling them. In any case, as in this paper we
are concerned with (worst-case) hardness results, restricting the definitions to integer lattices only makes
the results stronger. We study the decisional (length/distance estimation) variants of SVP and CVP as
defined below.

Definition 2.1. The promise problem GapSVPγ (where γ ≥ 1 may be a function of the lattice dimension)
is defined as follows. Instances are pairs (B, d), where B ∈ Z^{n×k} is a lattice basis and d is a positive
number such that

   1. (B, d) is a YES instance if λ(L(B)) ≤ d, i. e., ‖Bx‖ ≤ d for some x ∈ Z^k \ {0};

   2. (B, d) is a NO instance if λ(L(B)) > γ · d, i. e., ‖Bx‖ > γ · d for all x ∈ Z^k \ {0}.

Definition 2.2. The promise problem GapCVPγ (where γ ≥ 1 may be a function of the lattice dimension)
is defined as follows. Instances are triples (B, y, d), where B ∈ Z^{n×k} is a lattice basis, y ∈ Z^n is a vector,
and d is a positive number such that

   1. (B, y, d) is a YES instance if ‖y − Bx‖ ≤ d for some x ∈ Z^k ;

   2. (B, y, d) is a NO instance if ‖y − Bx‖ > γ · d for all x ∈ Z^k .

    We remark that any algorithm that (approximately) solves SVP in its standard formulation (i. e., given
a lattice, finds a nonzero lattice vector of length within a factor γ from the shortest) can immediately be
used to solve GapSVPγ . So, proving hardness results for GapSVPγ implies hardness of approximating
the standard SVP as well. The same observation applies to CVP and GapCVPγ . However, we remark that
the converse is not known to be true: giving a reduction from approximate SVP to GapSVPγ (or from
approximate CVP to GapCVPγ ) is an important open problem in the complexity of lattice problems.
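For intuition, a GapSVPγ instance can be decided by brute force in tiny dimensions (an illustrative sketch only; the fixed enumeration bound is a simplifying assumption that suffices for the toy examples below):

```python
import itertools
import numpy as np

def gap_svp_answer(B, d, gamma, bound=3):
    """Decide a GapSVP_gamma instance (B, d) by brute force (toy sketch).

    Enumerates coefficient vectors x in {-bound, ..., bound}^k, which is
    enough for the tiny examples below; real instances admit no such
    shortcut.
    """
    k = B.shape[1]
    lam = min(np.linalg.norm(B @ np.array(x))
              for x in itertools.product(range(-bound, bound + 1), repeat=k)
              if any(x))
    if lam <= d:
        return "YES"
    if lam > gamma * d:
        return "NO"
    return "either"  # promise violated: any answer is acceptable

B = np.diag([1.0, 5.0])                     # lambda(L(B)) = 1
print(gap_svp_answer(B, d=1.0, gamma=2.0))  # YES: lambda <= d
print(gap_svp_answer(B, d=0.4, gamma=2.0))  # NO: lambda > gamma * d
```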

Linear codes Some of our constructions rely on techniques from the study of linear codes. For any
finite field F, and finite-dimensional vector space F^n over F, a linear code of block length n and dimension
k is a k-dimensional linear subspace of F^n. As for lattices, linear codes are usually represented by a
generator matrix C ∈ F^{n×k}, which is a basis for the code C(C) = {Cx : x ∈ F^k} ⊆ F^n. The difference
n − k is called the co-dimension of the code. The Hamming weight of a vector v ∈ F^n is the number ‖v‖_H
of nonzero coordinates of v. The minimum distance of a linear code C ⊆ F^n is the smallest Hamming
weight of a nonzero vector in the code, min{‖v‖_H : v ∈ C \ {0}}. In this paper, we are primarily interested
in binary linear codes, i. e., linear codes over the field F_2 = {0, 1} with two elements. A binary linear code
with block length n, dimension k, and minimum distance d is usually denoted C[n, k, d]_2. In Section 4


we build a very dense lattice starting from the family of extended primitive narrow-sense binary BCH
codes. For brevity, we refer to these codes just as extended BCH codes. Extended BCH codes can be
defined for any block length m = 2^κ that is a power of 2. The properties of these codes needed in this
paper are summarized in the following theorem. For completeness, we also include a brief sketch of the
construction and analysis of extended BCH codes, and refer the reader to [25, Chapter 1, Section 7] for
details.

Theorem 2.3. For any m being a power of two, and any even positive integer h ≤ m, the extended BCH
code EBCH^m_h [m, k, d] has minimum distance d ≥ h and co-dimension m − k ≤ (log_2 m) · (h/2 − 1) + 1.
Moreover, these codes satisfy F_2^m = EBCH^m_0 ⊇ EBCH^m_2 ⊇ · · · ⊇ EBCH^m_m. Generator matrices for these
codes can be constructed in time polynomial in m.

Proof. Extended BCH codes of block length m = 2^κ are obtained by appending a parity check bit to
a BCH code of block length n = m − 1. BCH codes are polynomial codes, i. e., they can be described
algebraically as the set of all (coefficient vectors of) polynomials of degree less than n that are divisible by
a given generating polynomial g(X) ∈ F_2[X]. The co-dimension of a polynomial code equals the degree
n − k = deg(g) of the generating polynomial. Let α be a generator of the multiplicative group of F_m, the
finite field with m = 2^κ elements. A basic fact in the theory of polynomial codes (BCH bound, see [25,
Chapter 1, Theorem 6.1]) is that if g(α^i) = 0 for t consecutive powers α^i of α, then the polynomial code
generated by g(X) has minimum distance at least t + 1.
     For any even h ≤ m, the BCH code with designed minimum distance h − 1 ≤ m − 1 is the polynomial
code generated by the least common multiple g_h(X) of the minimal polynomials

                                 p_1(X), p_3(X), . . . , p_{h−3}(X)

of the first h/2 − 1 odd powers α^1, α^3, . . . , α^{h−3} of the primitive element α. Notice that for any even
power α^{2j}, it holds that g_h(α^{2j}) = (g_h(α^j))^2 because squaring is a linear operation on the polynomial
ring F_m[X] (viewed as a vector space over F_2). So, g_h(α^j) = 0 for all j = 1, . . . , h − 2, and the minimum distance
of the BCH code is at least h − 1. The extended BCH code EBCH_h^m[m, k, d]_2 is obtained by appending a
parity check bit to the code generated by g_h(X). Since h − 1 is odd, appending a parity check bit increases
the (designed) minimum distance of the code to d ≥ h. The block length and co-dimension also increase
by 1, while the dimension of the code remains the same. Since the degree of each minimal polynomial
p_j is at most κ, the degree of g_h is bounded by κ · (h/2 − 1), and the co-dimension of EBCH_h^m[m, k, d]
is at most m − k ≤ κ(h/2 − 1) + 1. Notice that for any h ≤ h′, g_{h′} is a multiple of g_h, and therefore
EBCH_h^m ⊇ EBCH_{h′}^m.


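The parity-extension step of the proof (appending a parity bit to a code of odd minimum distance h − 1 raises the distance to h) can be checked directly on a small example. The sketch below is ours: it uses the [7,4,3] Hamming code with a standard systematic generator matrix as a stand-in for a BCH code, and brute-forces the minimum distance before and after appending the parity bit.

```python
import itertools

# Rows of G generate the [7,4,3] Hamming code (systematic form).
# This is only a small example of a code with odd minimum distance.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def codewords(gen):
    """All F2-linear combinations of the rows of gen."""
    k, n = len(gen), len(gen[0])
    for msg in itertools.product([0, 1], repeat=k):
        yield tuple(sum(msg[i] * gen[i][j] for i in range(k)) % 2
                    for j in range(n))

def min_distance(gen):
    """Smallest Hamming weight of a nonzero codeword (brute force)."""
    return min(sum(cw) for cw in codewords(gen) if any(cw))

def extend_with_parity(gen):
    """Append an overall parity bit to every generator row."""
    return [row + [sum(row) % 2] for row in gen]

print(min_distance(G))                      # 3 (odd)
print(min_distance(extend_with_parity(G)))  # 4 (odd distance + 1)
```

Appending the parity bit is linear, so it commutes with taking combinations of the generators; every odd-weight codeword gains one nonzero coordinate.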

Tensor products For any two matrices B^(1) ∈ R^{n1×k1} and B^(2) ∈ R^{n2×k2}, define the Kronecker product
B = B^(1) ⊗ B^(2) ∈ R^{n1n2×k1k2} as the matrix with entries

                  b_{i,j} = b^(1)_{i1,j1} · b^(2)_{i2,j2},   where i = (i1 − 1) · n2 + i2 and j = (j1 − 1) · k2 + j2,

for i1 = 1, . . . , n1, i2 = 1, . . . , n2, j1 = 1, . . . , k1 and j2 = 1, . . . , k2. Informally, B^(1) ⊗ B^(2) is the block
matrix obtained by replacing each entry b^(1)_{i1,j1} of B^(1) with the matrix b^(1)_{i1,j1} · B^(2). The tensor product of


lattices L(B^(1)) and L(B^(2)) is the k1k2-dimensional lattice L(B^(1) ⊗ B^(2)) in n1n2-dimensional space gen-
erated by the Kronecker product of the two basis matrices. Identifying the set R^{n1n2} of n1n2-dimensional
vectors with the set R^{n1×n2} of n1 × n2 matrices in the obvious way, the tensor product of two lattices can
be conveniently defined as the set of all matrices

                                 L(B^(1) ⊗ B^(2)) = {B^(1) X (B^(2))^T | X ∈ Z^{k1×k2}}.

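The identification of L(B^(1) ⊗ B^(2)) with the set of matrices B^(1) X (B^(2))^T can be verified numerically. Under the row-major indexing above, NumPy's `np.kron` applied to the row-major flattening of X gives exactly the flattening of B1 @ X @ B2.T; the matrices below are arbitrary small examples:

```python
import numpy as np

rng = np.random.default_rng(0)
B1 = rng.integers(-3, 4, size=(3, 2))   # B(1): n1 x k1
B2 = rng.integers(-3, 4, size=(4, 2))   # B(2): n2 x k2
X = rng.integers(-3, 4, size=(2, 2))    # integer coefficient matrix

# Kronecker product acting on the row-major flattening of X ...
lhs = np.kron(B1, B2) @ X.flatten()
# ... equals the row-major flattening of B1 X B2^T.
rhs = (B1 @ X @ B2.T).flatten()
assert np.array_equal(lhs, rhs)
```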
The tensor product of two linear codes is defined similarly. As mentioned in the introduction, the tensor
product operation can be used to amplify hardness results for certain coding and lattice problems to large
approximation factors. For example, if C is a linear code with minimum distance d, then the product code
C ⊗ C has minimum distance d 2 . So, if the minimum distance of C is hard to approximate within a factor γ,
then the minimum distance of C ⊗ C is also hard to approximate within a factor γ 2 . Similar amplification
results are also possible for NCP and CVP. However, this method to amplify the approximation factor
of a problem does not work for GapSVP. It is easy to prove that λ (Λ1 ⊗ Λ2 ) ≤ λ (Λ1 ) · λ (Λ2 ) for any
two lattices Λ1 and Λ2 , and therefore λ (Λ ⊗ Λ) ≤ λ (Λ)2 . However, in general λ (Λ ⊗ Λ) can be much
smaller than λ (Λ)2 . (E. g., see [12, Lemma 2.4].) Lattices Λ1 for which λ (Λ1 ⊗ Λ2 ) = λ (Λ1 ) · λ (Λ2 ) for
every lattice Λ2 are called “E-type” lattices [16], and are special in this regard.
    We will prove the NP-hardness of GapSVP by reduction from the following NP-hard variant of CVP.

Definition 2.4. The promise problem TensorCVP_γ (where γ ≥ 1 may be a function of the lattice dimen-
sion) is defined as follows. Instances are triples (B, y, t) where B ∈ Z^{n×k} is a lattice basis, y ∈ Z^n is a
vector, and t is a positive number such that

    • (B, y, t) is a YES instance if ‖y − Bx‖ ≤ t for some x ∈ {0, 1}^k;

    • (B, y, t) is a NO instance if √(‖y − Bx‖_H) > γt for all x ∈ R^k.

     TensorCVP differs from CVP as follows. In the YES instances, the target is required to be close
to a binary combination of the basis vectors. In the NO instances, the target is required to be far in
Hamming distance from the entire linear space spanned by the lattice. TensorCVP is a fairly standard
NP-hard variant of CVP, similar to those used in previous works on the computational complexity of
lattice problems [20, 14, 12]. We call this CVP variant TensorCVP because we will use it to prove the
NP-hardness of TensorSVP, the variant of SVP defined in Definition 3.4, which is closely related to the
use of the tensor product to amplify the approximation factor. The NP-hardness of TensorCVP is proved
by reduction from the exact set cover problem. Recall that an instance of exact set cover consists of a
collection of sets S_1, . . . , S_n ⊆ {1, . . . , u} and an integer t ≤ n. A cover is a subcollection C ⊆ {1, . . . , n}
such that ⋃_{i∈C} S_i = {1, . . . , u}. A cover is exact if the sets S_i (i ∈ C) in the cover are pairwise disjoint, i. e.,
{S_i}_{i∈C} is a partition of {1, . . . , u}. When reducing set cover problems to lattice problems it is convenient
to represent the collection {S_1, . . . , S_n} as a matrix S = [s_1, . . . , s_n] ∈ {0, 1}^{u×n} where the columns s_i are
the indicator vectors of the sets S_i. Using matrix notation, a cover of size t is represented by a binary
vector c ∈ {0, 1}^n of Hamming weight t such that Sc ≥ 1, where 1 is the all-ones vector and the inequality
holds componentwise. The cover is exact if Sc = 1.

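The matrix encoding of covers can be illustrated on a toy instance (the sets below are hypothetical, chosen only to exercise the Sc ≥ 1 and Sc = 1 tests):

```python
import numpy as np

# Toy instance with universe {1,2,3,4}: S1={1,2}, S2={3,4}, S3={2,3,4}.
# Columns of S are the indicator vectors of the sets.
S = np.array([[1, 0, 0],
              [1, 0, 1],
              [0, 1, 1],
              [0, 1, 1]])

def is_cover(S, c):
    return bool(np.all(S @ c >= 1))   # Sc >= 1 componentwise

def is_exact_cover(S, c):
    return bool(np.all(S @ c == 1))   # Sc = 1: the chosen sets are disjoint

c = np.array([1, 1, 0])               # pick S1 and S2: a partition
assert is_cover(S, c) and is_exact_cover(S, c)

c2 = np.array([1, 0, 1])              # S1 and S3 overlap on element 2
assert is_cover(S, c2) and not is_exact_cover(S, c2)
```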
Definition 2.5. For any γ ≥ 1, an instance of the γ-approximate exact set cover problem is a pair (S,t)
with S ∈ {0, 1}u×n and t ∈ {1, . . . , n} such that


    • (S, t) is a YES instance if there is an exact cover of size at most t, i. e., a binary vector c ∈ {0, 1}^n
      of Hamming weight ‖c‖_H ≤ t such that Sc = 1;

    • (S, t) is a NO instance if all covers have size bigger than γt, i. e., all binary vectors c ∈ {0, 1}^n such
      that Sc ≥ 1 have Hamming weight ‖c‖_H > γt.

   The NP-hardness of TensorCVP_γ easily follows from the following NP-hardness result for exact set
cover proved in [5].

Theorem 2.6. The exact set cover problem is NP-hard to approximate within any constant factor γ ≥ 1.

    The NP-hardness of TensorCVP_γ is already implicit in [3], and, with minor variations, it has been
used in many previous works on the complexity of GapSVP [20, 14, 12]. Here we give a slightly simpler
proof than the one originally given in [3] and commonly reported in the literature [14, 12], one that avoids
the introduction of auxiliary variables and large constants.

Theorem 2.7. For any constant γ ≥ 1, TensorCVP_γ is NP-hard.

Proof. We use the fact that exact set cover is NP-hard for any constant approximation factor γ (Theo-
rem 2.6), and we reduce it to TensorCVP_{√γ}. On input an exact set cover instance (S, t), the
reduction produces a TensorCVP_{√γ} instance (B, y, √t) where B ∈ Z^{n×k} is a basis for the lattice of all
integer vectors v such that Sv = 0, and y ∈ Z^n is an arbitrary integer solution to Sy = 1. (Both B and y
can be efficiently computed using linear algebra. If no solution y exists, then (S, t) is necessarily a NO
instance and the reduction can output an arbitrary NO instance of TensorCVP_{√γ}.) We need to prove that
the reduction is correct.
    If (S, t) is a YES instance, then there is an exact cover of size t, i. e., a vector c ∈ {0, 1}^n of Hamming
weight ‖c‖_H ≤ t such that Sc = 1. It follows that S(y − c) = 0, i. e., y − c is a lattice vector. Moreover,
this lattice vector is within distance ‖y − (y − c)‖ = ‖c‖ ≤ √t of y, proving that (B, y, √t) is a YES
instance of TensorCVP_{√γ}. Now assume (S, t) is a NO instance. Notice that for any v in the linear span of
B, S(y − v) = Sy = 1. So, the nonzero coordinates of y − v form a set cover. It follows that y − v must
have more than γt nonzero coordinates, i. e., √(‖y − v‖_H) > √(γt) = √γ · √t.

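The YES direction of the reduction can be traced on a toy instance (our own, not from the paper): B generates the integer kernel of S, y solves Sy = 1, and an exact cover c of size t yields the lattice vector y − c at distance ‖c‖ = √t from y.

```python
import numpy as np

# Hypothetical toy set cover instance: S1={1}, S2={2}, S3={1,2}.
S = np.array([[1, 0, 1],
              [0, 1, 1]])
t = 2

# Basis of the integer kernel {v : Sv = 0} (found by inspection here;
# in general computable with integer linear algebra).
B = np.array([[-1], [-1], [1]])
assert np.array_equal(S @ B, [[0], [0]])

# An arbitrary integer solution of Sy = 1.
y = np.array([0, 0, 1])
assert np.array_equal(S @ y, [1, 1])

# Exact cover of size t: c = (1,1,0) selects S1 and S2.
c = np.array([1, 1, 0])
assert np.array_equal(S @ c, [1, 1])          # Sc = 1, exact cover

# y - c is a lattice vector at squared distance ||c||^2 = t from y.
assert np.array_equal(S @ (y - c), [0, 0])    # in the kernel lattice
assert c @ c == t                             # ||c|| = sqrt(t)
```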
The homogenization framework We briefly recall the framework of [20] to prove hardness results for
GapSVP. Let (B, y) be a CVP instance. A common heuristic to find a lattice vector Bx closest to y is to
search for a short vector in the augmented lattice L([B, y]). However, this simple heuristic does not work
in general (even if one can solve SVP exactly), and it only yields a reduction from Bounded Distance
Decoding (BDD) to the unique Shortest Vector Problem (uSVP), two restricted versions of CVP and
SVP [18]. For the general CVP and SVP, there are two different ways in which this approach may fail:

    • A shortest nonzero vector in L([B, y]) may be of the form Bx + c · y with |c| ≥ 2. This yields a
      lattice vector Bx close to a multiple of the original target y.

    • A shortest nonzero vector in L([B, y]) may be of the form Bx. This will be the case if the distance
      of the target y from the lattice L(B) is larger than λ (L(B)).


In the context of proving the NP-hardness of SVP, the first problem is easily solved by reducing from
a variant of CVP (like TensorCVP, see Definition 2.4) where either the target is close to the lattice, or
all of its nonzero (integer) multiples are far from it. The second problem is more fundamental, and also
arises in the context of proving similar results for linear codes [11]. Building on techniques from [1],
Micciancio [20] solved this problem essentially by embedding B and y into a higher dimensional space
to obtain a new lattice L(B0 ) and target vector t0 such that

    • if y is close to the lattice L(B), the embedded target y0 is still close to the lattice L(B0 ), and

    • the embedding operation increases the minimum distance of the lattice L(B), so that the distance
      of y0 from L(B0 ) is strictly smaller than λ (L(B0 )).

This transformation ensures that the shortest vectors in L([B0 , y0 ]) are not in L(B0 ), and therefore must
necessarily make use of the target vector y0 . In [20], it is shown that such a transformation can be easily
carried out using a geometric gadget consisting of a lattice coset L(L) − s with large minimum distance
λ (L(L)) and many short vectors (L(L) − s) ∩ B(r). (Specifically, the length bound r on these vectors
should be strictly smaller than the minimum distance of L(L) by a constant factor.) Moreover, if L(L) is
sufficiently dense (i. e., if its determinant is not too big), then an appropriate coset is guaranteed to exist
and can be probabilistically found by choosing s as a random short vector.
    In [20] a gadget of this type is constructed using techniques from elementary number theory, which
are less “combinatorial” than the coding theory tools used in the NP-hardness proof of [14], and arguably
harder to derandomize. In this paper we give an alternative and more refined construction of Micciancio’s
geometric gadget. Similarly to the proofs in [14, 12], we rely on tools from coding theory, namely the
construction of BCH codes (see Theorem 2.3) rather than number-theoretic methods, and make extensive
use of integer vectors with all-even coordinates, an element already present in [14].
    Besides its potential for easier derandomization, the new construction, which combines lattice and
coding elements, has the advantage of behaving well with respect to the tensor product of lattices.
    Our nonuniform reduction from TensorCVP to GapSVP, similarly to previous work [1, 20], uses the
following combinatorial result, due to Vapnik and Chervonenkis [31], Sauer [28], and Shelah [29] and
commonly referred to as “Sauer’s lemma.”

Lemma 2.8 (Sauer's Lemma). Let M be a set of size m, and let A ⊆ 2^M be an arbitrary collection of
subsets of M. For any integer k ≥ 1 such that |A| > ∑_{i=0}^{k−1} (m choose i), there exists a subset T ⊂ M of size |T| = k
that is shattered by A, i. e., the restriction A|_T = {A ∩ T : A ∈ A} equals 2^T.

    We restate Sauer’s lemma in matrix language, interpreting the elements of A as (the indicator functions
of) subsets of M = {1, . . . , m}, and letting T ∈ {0, 1}k×m be the projection matrix corresponding to the
set of coordinates T ⊂ M.

Lemma 2.9 (Sauer's Lemma restated). Let m be a positive integer, and A ⊂ {0, 1}^m an arbitrary set
of m-dimensional binary vectors. If |A| > ∑_{i=0}^{k−1} (m choose i), then there exists a matrix T ∈ {0, 1}^{k×m} such that
{0, 1}^k ⊆ {Tz : z ∈ A}.


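Lemma 2.9 can be checked by brute force on tiny parameters. The sketch below (an illustration of ours, with ad hoc parameters) searches over all k-subsets of coordinates for a projection under which a given family A covers all of {0, 1}^k:

```python
from itertools import combinations, product

def find_shattering_projection(A, k):
    """Return a k-subset T of coordinates such that projecting A onto T
    yields all of {0,1}^k, or None if no such subset exists."""
    m = len(next(iter(A)))
    targets = set(product((0, 1), repeat=k))
    for T in combinations(range(m), k):
        if {tuple(z[i] for i in T) for z in A} >= targets:
            return T
    return None

# m = 4, k = 2: the threshold is C(4,0) + C(4,1) = 5, so any family of
# 6 binary vectors must shatter some pair of coordinates.
A = {(0, 0, 0, 0), (1, 0, 0, 1), (0, 1, 1, 0),
     (1, 1, 0, 0), (0, 0, 1, 1), (1, 0, 1, 0)}
T = find_shattering_projection(A, 2)
assert T is not None
```

Selecting the coordinates in T is exactly the action of the 0-1 projection matrix T in the restated lemma.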


3    Basic techniques
Central to our construction and hardness results is a new method to measure the length of a vector that is,
in a sense, a hybrid between the Euclidean norm and the Hamming metric. The definition is parametrized
by an integer q that we will later set to q = 2.

Definition 3.1. For any integer vector x, let pow_q(x) be the largest power of q that divides x, and define
τ_q(x) = pow_q(x) · √(‖x‖_H), where ‖x‖_H is the Hamming weight of x. For any integer lattice Λ,

                                     τ_q(Λ) = min{ τ_q(x) : x ∈ Λ \ {0} }

is the minimum of τ_q over all nonzero lattice vectors.

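Definition 3.1 translates directly into code. The helpers below mirror the paper's notation pow_q and τ_q (the implementation itself is ours, and assumes a nonzero integer vector):

```python
from math import sqrt

def pow_q(x, q=2):
    """Largest power of q dividing every entry of the nonzero vector x."""
    p = 1
    while all(v % (p * q) == 0 for v in x):
        p *= q
    return p

def tau_q(x, q=2):
    """tau_q(x) = pow_q(x) * sqrt(Hamming weight of x)."""
    weight = sum(1 for v in x if v != 0)
    return pow_q(x, q) * sqrt(weight)

print(tau_q([4, 0, 8, 12]))   # pow_2 = 4, weight 3, so 4*sqrt(3)
```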
    Notice that τ_q is not a norm because it satisfies neither the linearity property ‖c · x‖ = c · ‖x‖, nor the
triangle inequality ‖x + y‖ ≤ ‖x‖ + ‖y‖ required of a norm. Still, the quantity τ_q(Λ) is useful to study
SVP because it gives a lower bound on the norm of integer vectors, and it behaves well with respect to
the tensor product of lattices, as shown below.

Lemma 3.2. For any integer vector x ∈ Z^n, τ_q(x) ≤ ‖x‖.

Proof. The vector x has ‖x‖_H nonzero entries, and each of them is at least pow_q(x) in absolute value.
Therefore ‖x‖ ≥ pow_q(x) · √(‖x‖_H) = τ_q(x).

Lemma 3.3. For any integer lattice Λ and (arbitrary) lattice Λ′,

                               τ_q(Λ) · λ(Λ′) ≤ λ(Λ ⊗ Λ′) ≤ λ(Λ) · λ(Λ′).

Proof. Let Λ = L(B) and Λ′ = L(B′). For the upper bound, simply observe that for any two lattice vectors
Bx and B′y, the product lattice Λ ⊗ Λ′ contains a vector Bx(B′y)^T of length ‖Bx‖ · ‖B′y‖. Choosing
Bx and B′y as shortest nonzero vectors in Λ and Λ′ yields a vector in Λ ⊗ Λ′ of length λ(Λ) · λ(Λ′). In
order to prove the lower bound we consider an arbitrary nonzero v = BX(B′)^T in the tensor product
lattice Λ ⊗ Λ′, and show that ‖v‖ ≥ τ_q(Λ) · λ(Λ′). Let h be the number of nonzero rows in BX. Clearly,
all columns c of BX have Hamming weight at most ‖c‖_H ≤ h, and therefore τ_q(c) ≤ pow_q(c) · √h. It
follows that all nonzero columns c satisfy pow_q(c) ≥ τ_q(c)/√h ≥ τ_q(Λ)/√h. In particular, the largest
power q^i that divides the entire matrix BX satisfies q^i ≥ τ_q(Λ)/√h. Notice that v = (BX) · (B′)^T contains
exactly h nonzero rows (because BX has h nonzero rows, and the columns of B′ are linearly independent),
and each of them is a nonzero vector in q^i Λ′. Therefore, ‖v‖ ≥ √h · q^i · λ(Λ′) ≥ τ_q(Λ) · λ(Λ′).

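Lemma 3.3 can be sanity-checked by exhaustive enumeration on tiny lattices. The coefficient box below is ad hoc and the check is only heuristic (enumerating a finite box is not a proof of the minimum), but it confirms both inequalities on a small example of our choosing:

```python
import itertools
import numpy as np

def short_vectors(B, R=3):
    """All nonzero lattice vectors Bx with coefficients in [-R, R]^k."""
    k = B.shape[1]
    for x in itertools.product(range(-R, R + 1), repeat=k):
        if any(x):
            yield B @ np.array(x)

def lam(B):
    """Heuristic minimum distance over the enumerated box."""
    return min(np.linalg.norm(v) for v in short_vectors(B))

def tau2(B):
    """Heuristic tau_2 of the lattice over the enumerated box."""
    def t(v):
        p = 1
        while all(c % (2 * p) == 0 for c in v):
            p *= 2
        return p * np.sqrt(np.count_nonzero(v))
    return min(t(v) for v in short_vectors(B))

B  = np.array([[1, 0], [1, 2]])   # lattice Λ
Bp = np.array([[2, 1], [0, 1]])   # lattice Λ'
Bt = np.kron(B, Bp)               # Λ ⊗ Λ'

assert tau2(B) * lam(Bp) <= lam(Bt) + 1e-9   # lower bound of Lemma 3.3
assert lam(Bt) <= lam(B) * lam(Bp) + 1e-9    # upper bound of Lemma 3.3
```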
    We use the quantity τ q (Λ) to define a variant of SVP that behaves well with respect to the tensor
product of lattices. Our variant of SVP is defined using the Euclidean norm for the YES instances, and
our new measure τ q for the NO instances.

Definition 3.4. The promise problem TensorSVP_γ (where γ ≥ 1 may be a function of the lattice dimen-
sion) is defined as follows. Instances are pairs (B, d) where B ∈ Z^{n×k} is a lattice basis and d is a positive
number such that


    • (B, d) is a YES instance if λ(L(B)) ≤ d;

    • (B, d) is a NO instance if τ_q(L(B)) > γd.

    Notice that TensorSVP_γ is a special case of the standard GapSVP_γ problem because the defining
condition for YES instances is the same, and in the NO instances τ_q(L(B)) is a lower bound on λ(L(B)).
So, in order to establish NP-hardness results for GapSVP_γ it is enough to prove the NP-hardness of
TensorSVP_γ. Moreover, TensorSVP_γ behaves well with respect to the tensor product of lattices, as
described in the next theorem.

Theorem 3.5. For any positive integer c, the map (B, d) ↦ (B^{⊗c}, d^c) is a reduction from TensorSVP_γ
to GapSVP_{γ^c}, where B^{⊗c} denotes the iterated tensor product of c copies of B. The running time of the
reduction is polynomial in S^c where S = |(B, d)| is the size of the input.

Proof. Let (B, d) be an instance of TensorSVP_γ. If (B, d) is a YES instance, then λ(L(B)) ≤ d and, by
Lemma 3.3, λ(L(B^{⊗c})) ≤ d^c. So, (B^{⊗c}, d^c) is a YES instance of GapSVP_{γ^c}. Conversely, if (B, d) is a
NO instance, then λ(L(B)) ≥ τ_q(L(B)) > γd and, by Lemma 3.3, λ(L(B^{⊗c})) > (γd)^c. So, (B^{⊗c}, d^c) is
a NO instance of GapSVP_{γ^c}.

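The reduction map of Theorem 3.5 is just an iterated Kronecker product; a minimal sketch (function name ours):

```python
from functools import reduce
import numpy as np

def tensor_svp_to_gap_svp(B, d, c):
    """Map a TensorSVP instance (B, d) to the GapSVP instance (B^{tensor c}, d^c)."""
    Bc = reduce(np.kron, [B] * c)
    return Bc, d ** c

B = np.array([[1, 0], [1, 2]])
Bc, dc = tensor_svp_to_gap_svp(B, 1.5, 3)
assert Bc.shape == (8, 8)     # dimensions grow to n^c x k^c
assert dc == 1.5 ** 3
```

The output size (n^c · k^c entries) is what makes the running time polynomial in S^c rather than in S.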
    Notice that for any constant c, the transformation in Theorem 3.5 runs in polynomial time. So, if
TensorSVP_γ is NP-hard for some constant γ > 1, then GapSVP_{γ′} is NP-hard for any constant γ′ ≤ γ^c.
Similarly, using reductions that run in superpolynomial time, one obtains inapproximability results for
even larger factors. (See Corollary 5.2.)
    Notice that all definitions and results presented in this section hold for any integer q. For simplicity,
in the rest of the paper, we fix q = 2, as this is enough to prove the hardness of GapSVP.


4    A dense lattice construction
Similarly to [20], our NP-hardness proof is based on the construction of a very dense lattice L(L) with
large minimum distance. However, since we want to prove the NP-hardness of TensorSVP (rather than
just GapSVP as in [20]), we need a lattice L(L) such that not only λ(L(L)), but also τ_2(L(L)) is large.
Our lattice is obtained by combining together a carefully chosen sequence of linear codes described in
the following lemma.

Lemma 4.1. There is an efficient algorithm that, on input m and h, both being powers of two and
satisfying h ≤ √m, outputs a sequence of generator matrices C_i ∈ F_2^{m×k_i} (i = 0, . . . , log_2 h) for binary
linear codes C_i[m, k_i, d_i]_2 = C(C_i) of common block length m, minimum distance d_i ≥ 4^i, and co-dimension
m − k_i ≤ (log_2 m) · (4^i/2 − 1) + 1 (for i ≥ 1), such that F_2^m = C_0 ⊇ C_1 ⊇ · · · ⊇ C_{log_2 h}. The running time of
the algorithm is polynomial in m.

Proof. For i = 0, . . . , ℓ = log_2 h, let C_i be a generator matrix for the extended BCH code EBCH_{4^i}^m of block
length m and designed distance 4^i ≤ d_i. By Theorem 2.3 the generator matrices can be constructed in
time polynomial in m, they have co-dimension m − k_i ≤ (log_2 m) · (4^i/2 − 1) + 1 for i ≥ 1, and the codes
they generate form a chain F_2^m = C_0 ⊇ C_1 ⊇ · · · ⊇ C_ℓ.



    These codes are combined into a lattice L(L) using a method that is essentially “construction D”
from [9, Chapter 8], the only difference being that here we use a scaled copy of the lattice so that L is an
integer matrix, and we express the bound in terms of τ_2(L(L)) rather than λ(L(L)).

Theorem 4.2. There is an efficient algorithm that, on input m and h, both being powers of two and satis-
fying h ≤ √m, outputs an m-dimensional full-rank integer lattice basis L ∈ Z^{m×m} such that τ_2(L(L)) ≥ h
and det(L(L)) < m^{2h²/3}. The running time of the algorithm is polynomial in m.

Proof. We need to give an efficient construction that, on input m = 2^κ and h = 2^ℓ with ℓ ≤ κ/2, produces
an m-dimensional full-rank integer lattice basis L such that τ_2(L(L)) ≥ h and det(L(L)) < m^{2h²/3}.
     Run the algorithm of Lemma 4.1 on input m and h to obtain the generator matrices C_0, C_1, . . . , C_ℓ for
the sequence of binary linear codes C_i[m, k_i, d_i]_2. We recall that these are codes of common block length
m, minimum distance d_i ≥ 4^i, and co-dimension m − k_i ≤ κ(4^i/2 − 1) + 1 for i ≥ 1, and C_0[m, m, 1]_2 = F_2^m.
We combine these codes into a lattice using “construction D” from [9, Chapter 8, Theorem 13]. More
specifically, we define the m-dimensional integer lattice L(L) generated by the columns of 2^{ℓ−i} C_i for all
i = 0, . . . , ℓ, where each C_i ∈ F_2^{m×k_i} is interpreted as an integer matrix in {0, 1}^{m×k_i} by identifying
the elements of F_2 = {0, 1} with the integers {0, 1} ⊂ Z. Of course, the vectors in 2^ℓ C_0, 2^{ℓ−1} C_1, . . . , C_ℓ
are not linearly independent, but a basis for the lattice they generate can be easily obtained as follows.
Using the inclusions C_0 ⊇ · · · ⊇ C_ℓ, we may assume that each generating matrix C_i equals the last k_i
columns of C_0. In other words, C_0 = [K_0, . . . , K_ℓ], and each generating matrix C_i = [K_i, C_{i+1}] is obtained
by extending the generating matrix of the next code in the sequence, C_{i+1}, with k_i′ = k_i − k_{i+1} more columns
K_i. (For convenience, we also define k_{ℓ+1} = 0 and k_ℓ′ = k_ℓ − k_{ℓ+1} = k_ℓ.) By properly choosing the order
of the rows and performing elementary column operations (see below for details), we may further assume
that each K_i has the form
                                                         ( K_i′ )
                                                  K_i =  ( T_i  )
                                                         ( O    )

where K_i′ ∈ F_2^{(m−k_i)×k_i′}, T_i is a k_i′ × k_i′ nonsingular upper-triangular^7 matrix, and O is the k_{i+1} × k_i′ all-zero
matrix. In more detail, this can be achieved using a simple variant of the Gaussian elimination process as
follows:

   1. Let c_{i,j} denote the entries of the matrix C_0, and select an index i such that c_{i,m} = 1.

   2. Swap rows i and m so that c_{m,m} = 1.

   3. Subtract column m from all columns j < m with c_{m,j} = 1, so that c_{m,j} = 0.

   4. Proceed inductively on the (m − 1) × (m − 1) upper left submatrix of C_0.

This results in a nonsingular upper-triangular matrix C0 that is equivalent to the original C0 .
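The four elimination steps can be sketched as a routine over F_2. This is an illustrative implementation of ours, not the paper's code; it assumes a nonsingular input matrix, performs row swaps and column additions mod 2, and returns an upper-triangular matrix with a nonzero diagonal:

```python
def upper_triangularize_f2(C):
    """Row swaps + column additions (mod 2) that bring a nonsingular
    F2 matrix to upper-triangular form with ones on the diagonal."""
    C = [row[:] for row in C]           # work on a copy
    m = len(C)
    for c in range(m - 1, -1, -1):      # current pivot position (steps 1-4)
        # steps 1-2: find a row in the active submatrix with a 1 in column c
        i = next(r for r in range(c + 1) if C[r][c] == 1)
        C[i], C[c] = C[c], C[i]
        # step 3: subtract column c from every column j < c with C[c][j] = 1
        for j in range(c):
            if C[c][j] == 1:
                for r in range(m):
                    C[r][j] ^= C[r][c]
        # step 4: the loop continues on the upper-left submatrix
    return C

C0 = [[1, 1, 0],                        # a small nonsingular F2 matrix
      [0, 1, 0],
      [1, 1, 1]]
U = upper_triangularize_f2(C0)
assert all(U[i][i] == 1 for i in range(3))
assert all(U[i][j] == 0 for i in range(3) for j in range(i))
```

Column additions do not change the generated code, and row swaps only permute coordinates, which is why the resulting matrix is equivalent to the original C_0.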
    Define the m × m integer matrix

                                             L = [2^ℓ K_0, 2^{ℓ−1} K_1, . . . , K_ℓ]

   ^7 While not needed here, one could also turn each T_i into the identity matrix T_i = I_{k_i′} by Gaussian elimination.



where, as before, each K_i ∈ F_2^{m×k_i′} is interpreted as an integer matrix in {0, 1}^{m×k_i′}.
    The columns of L are a subset of 2^ℓ C_0, 2^{ℓ−1} C_1, . . . , C_ℓ. Moreover, all vectors in 2^ℓ C_0, 2^{ℓ−1} C_1, . . . , C_ℓ
can be obtained by multiplying the columns of L by appropriate powers of 2. Therefore L(L) is precisely
the lattice generated by 2^ℓ C_0, 2^{ℓ−1} C_1, . . . , C_ℓ.
    Consider an arbitrary nonzero lattice vector v = ∑_i (2^{ℓ−i} · K_i) x_i = ∑_i K_i y_i, where y_i = 2^{ℓ−i} x_i. We want
to prove that τ_2(v) ≥ h. Let 2^P = pow_2(y_0, . . . , y_ℓ) be the largest power of 2 that divides y_i for all 0 ≤ i ≤ ℓ.
Clearly, 2^P also divides v. If P ≥ ℓ, then we immediately get τ_2(v) ≥ pow_2(v) ≥ 2^ℓ = h. So, assume
P < ℓ and let

                                     p = min{ i : y_i ≠ 0, pow_2(y_i) = 2^P }

be the smallest index such that y_p is divisible precisely by 2^P. Notice that P ≥ ℓ − p because

                                    2^P = pow_2(y_p) = pow_2(2^{ℓ−p} x_p) ≥ 2^{ℓ−p}.

By definition of p and P, all y_i/2^P are integer vectors, and y_p/2^P ≠ 0 = y_i/2^P (mod 2) for all i < p. So,

               ‖v‖_H = ‖v/2^P‖_H ≥ ‖(v/2^P) mod 2‖_H = ‖ ∑_{i≥p} K_i (y_i/2^P) mod 2 ‖_H ≥ d_p ≥ 4^p

where we have used the fact that ∑_{i≥p} K_i (y_i/2^P) mod 2 is a nonzero codeword in C_p. It follows that

                        τ_2(v) = pow_2(v) · √(‖v‖_H) ≥ 2^P · √(4^p) ≥ 2^{ℓ−p} · 2^p = 2^ℓ.

This proves that τ_2(L(L)) ≥ 2^ℓ = h.
    In order to bound the determinant of the lattice, we notice that, by our choice of K_i, the matrix C_0 is
upper triangular. It follows that L is also a triangular matrix with k_i′ diagonal entries equal to 2^{ℓ−i} for
i = 0, . . . , ℓ. So, the determinant satisfies

                                  log_2 det(L(L)) = ∑_{i=0}^{ℓ} (ℓ − i) k_i′ = ∑_{i=1}^{ℓ} (m − k_i).                                   (4.1)

Finally, using the bound on the co-dimension m − k_i ≤ κ · (4^i/2 − 1) + 1 from Lemma 4.1 we get

                  ∑_{i=1}^{ℓ} (m − k_i) ≤ ∑_{i=1}^{ℓ} ( κ · 4^i/2 − (κ − 1) ) = κ · (2/3)(4^ℓ − 1) − (κ − 1)ℓ < 2κh²/3.

(In fact, using the assumption ℓ ≤ κ/2 < (2/3)κ, one can slightly strengthen the upper bound to
κ((2/3)h² − ℓ).) Substituting this bound into the expression for the determinant gives

                                            det(L(L)) < 2^{2κh²/3} = m^{2h²/3}.

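The construction can be traced on a toy case of our own (m = 4, ℓ = 1, h = 2), using the [4,1,4] repetition code in place of an extended BCH code: C_0 = F_2^4 with the all-ones generator K_1 as its last column, so L = [2K_0, K_1]. A heuristic brute-force check over a small coefficient box confirms τ_2(L(L)) ≥ h = 2, and the determinant equals h^m/2^{k_1} = 8 as predicted by (4.1):

```python
import itertools
import numpy as np

# Toy instance of the construction: m = 4, l = 1, h = 2.
# C1 = [4,1,4] repetition code (distance 4 >= 4^1), generated by K1;
# K0 completes it to an upper-triangular basis C0 = [K0, K1] of F2^4.
K0 = np.array([[1, 0, 0],
               [0, 1, 0],
               [0, 0, 1],
               [0, 0, 0]])
K1 = np.array([[1], [1], [1], [1]])
L = np.hstack([2 * K0, K1])                 # L = [2^l K0, K1]

assert round(abs(np.linalg.det(L))) == 8    # h^m / 2^{k1} = 2^4 / 2

def tau2(v):
    """tau_2 of a nonzero integer vector."""
    p = 1
    while all(c % (2 * p) == 0 for c in v):
        p *= 2
    return p * np.sqrt(np.count_nonzero(v))

# Heuristic check of tau_2(L(L)) >= h = 2 over a small coefficient box.
best = min(tau2(L @ np.array(x))
           for x in itertools.product(range(-2, 3), repeat=4) if any(x))
assert best >= 2
```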
     We conclude this section with some remarks about the lattice L(L) constructed in Theorem 4.2.
To start with, we observe that Theorem 4.2 can be applied using a single (BCH) code C_1 ⊆ F_2^n. This
is reminiscent of the use of BCH codes found in [14, 12], and corresponds to “Construction A” of [9].
However, using only a single code (i. e., ℓ = log_2 h = 1) results in lattices with very small τ_2(L(L)) =
h = 2, which are not directly useful to prove the NP-hardness of SVP. Of course, one can always increase
τ_2(L(L)) by multiplying the lattice by a power of 2. For example, the orthogonal lattice hZ^m satisfies
τ_2(hZ^m) = h and det(hZ^m) = h^m, but it is not very interesting. Our lattice construction achieves the same
τ_2(L(L)) ≥ h but, by (4.1), it has much smaller determinant det(L(L)) = h^m / 2^{∑_{i=1}^{ℓ} k_i}, where k_1, . . . , k_ℓ
are the dimensions of the codes C_i from Lemma 4.1.
     A systematic way to compare different lattices obtained from this or other constructions is to evaluate
their Hermite factor γ(Λ) = (λ(Λ)/det(Λ)^{1/m})². This factor is closely related to the packing density of
the lattice

                              vol(B(λ(Λ)/2)) / det(Λ) = vol(B(1)) · (√(γ(Λ))/2)^m,

i. e., the largest fraction of space occupied by disjoint equal spheres centered around all lattice points.
The trivial fact that the packing density cannot be higher than 1 yields Minkowski’s upper bound on
the Hermite factor γ(Λ) ≤ O(m). Notice that the Hermite factor γ(Λ) is invariant under scaling of the
lattice. So, one can consider arbitrary values of h in Theorem 4.2, and then scale the lattice to increase
τ_2(L(L)) ≤ λ(L(L)) as desired, without affecting the Hermite factor of the lattice. Using

                         det(L(L)) = h^m / 2^{∑_{i=1}^{ℓ} k_i}   and   λ(L(L)) ≥ τ_2(L(L)) ≥ h

gives Hermite factor γ(L(L)) ≥ 4^{∑_{i=1}^{ℓ} k_i / m}. The integer lattice Z^m (corresponding to ℓ = 0) achieves
Hermite factor γ ≥ 1. Using a single BCH code C_1 ⊆ F_2^m (corresponding to ℓ = 1) as in [14, 12] gives
an only marginally higher Hermite factor γ ≥ 4^{k_1/m} = 4^{1−o(1)}. By contrast, choosing h = Θ(√(m/log m)) in
Theorem 4.2 so as to maximize the Hermite factor yields an almost optimal

                                γ ≥ h²/m^{O(h²/m)} = Ω(h²) = Ω(m/log m),

within a logarithmic factor of Minkowski’s upper bound γ ≤ O(m).
    Our lattice should also be compared to the “Schnorr-Adleman” prime number lattice, already used
in [1, 20, 21] to prove NP-hardness results for SVP. In [21, Chapter 5] it is shown that this lattice
(parametrized by a real α > 0 and a sequence of relatively prime integers a_1, . . . , a_m) has minimum
distance λ ≥ 2 ln α (see [21, Lemma 5.3]) and determinant

                                   det(Λ) = √( (1 + α² ∑_k ln a_k) · ∏_k ln a_k )

(see [21, Proposition 5.9]). The Hermite factor is maximized by setting a_1, . . . , a_m to the first m prime
numbers and α ≈ e^{m/2}, which yields γ(Λ) = Ω(m/log m), just like for the lattice given in Theorem 4.2.
For other examples of explicit lattice constructions achieving similar or even higher density see [9,
Chapter 1, Section 1.5]. The novelty in Theorem 4.2 is that it provides not only a lattice with large
minimum distance (and Hermite factor), but also large τ_2(Λ).


5    The main reduction
In this section we prove that TensorSVP is NP-hard to approximate within some constant factor under
nonuniform polynomial-time reductions with one-sided error. In Section 6 we show how the nonuniform
advice required by our proof can be computed in probabilistic polynomial time. We present our main
result as a nonuniform reduction first in order to make the presentation as simple as possible. We remark
that the nonuniform reduction presented in this section is just as good a starting point for derandomization
as the probabilistic reduction presented in the next section. A randomized uniform reduction is presented
in Section 6 mostly to reassure the reader that here we are not using the nonuniformity of the advice in
any essential way.
    For any 1 < λ < √(3/2), the reduction uses a gadget (L, s, T, r) consisting of a lattice basis L ∈ Z^{m×ℓ},
a vector s ∈ Z^m, a linear transformation T ∈ Z^{k×m}, and a bound r such that

                                  1 ≤ r ≤ √m ,                                           (5.1)
                                  τ₂(L(L)) ≥ λ · r ,                                     (5.2)
                                  T((L(L) − s) ∩ B(r)) ⊇ {0, 1}^k .                      (5.3)

Informally, the lattice has large τ₂, and still one of its cosets contains 2^k short vectors that are mapped by
the linear transformation T to the set of all binary vectors in dimension k.
    The gadget is obtained using Theorem 4.2 to efficiently build a very dense lattice L(L) satisfying (5.2).
Since this lattice has small determinant, there must be a coset (L(L) − s) containing many short binary
vectors of norm bounded by r. Provided the number of such short binary vectors is large enough,
Lemma 2.9 implies that there is a matrix T ∈ {0, 1}^{k×m} such that the image of the short vectors under
T includes all binary strings in {0, 1}^k, i. e., (5.3) is satisfied. Both s and T are part of the nonuniform
advice used by the reduction.
                                          p
Theorem 5.1. For any 1 ≤ γ < λ < √(3/2) and

                                  γ̃ = γ · √( 1 + 4/((λ/γ)² − 1) )

there is a nonuniform polynomial-time reduction with one-sided error and no false positives from
TensorCVPγ̃ to TensorSVPγ.
Proof. Let (B, y, t) be a TensorCVPγ̃ instance with B ∈ Z^{n×k} and y ∈ Z^n. We recall that membership
in any lattice L(B) can be efficiently determined by linear algebra. If the target is a lattice vector
y ∈ L(B), then (B, y, t) is certainly a YES instance and the reduction can output an arbitrary YES instance
of TensorSVPγ. So, assume without loss of generality that y ∉ L(B). Assume also that 1 ≤ t < √n.
Again, this is without loss of generality because if t ≥ √n, then (B, y, t) is not a NO instance and it can
be mapped to an arbitrary YES instance of TensorSVPγ. Similarly, if t < 1, then (B, y, t) is not a YES
instance and it can be mapped to an arbitrary NO instance of TensorSVPγ.
     We will give a nonuniform polynomial-time construction of a gadget (L, s, T, r) satisfying proper-
ties (5.1), (5.2) and (5.3), where (s, T) is the nonuniform advice. Notice that (5.1) and (5.2) do not depend
on the advice (s, T), and so they are always satisfied. Only (5.3) relies on the advice (s, T) being properly
chosen. Given (L, s, T, r), we proceed as follows. We begin by scaling the input (B, y, t) and the gadget
(L, s, T, r) so that ε/2 ≤ t/r < ε, where ε = √((λ/γ)² − 1) ∈ (0, 1/√2). Specifically, if t/r < ε/2, then
we replace (B, y, t) with (1_{c²} ⊗ B, 1_{c²} ⊗ y, c·t) where c = ⌈εr/(2t)⌉. Similarly, if t/r ≥ ε, then we replace
(L, s, T, r) with (1_{c²} ⊗ L, 1_{c²} ⊗ s, e₁ ⊗ T, c·r), where e₁ = [1, 0, . . . , 0] is the first c²-dimensional standard
unit row vector and c = ⌊2t/(εr)⌋. Notice that in either case c ≤ O(max(r/t, t/r)) ≤ O(max(√m, √n))
is polynomially bounded, and therefore the scaling transformation runs in polynomial time. It is also
easy to verify that the transformation results in an equivalent TensorCVPγ̃ instance (B, y, t) and gadget
(L, s, T, r) such that ε/2 ≤ t/r < ε.
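The choice of the scaling factor c can be sketched as follows. This is an illustrative helper (the name `scale_ratio` and the toy parameter values are ours, not the paper's); it picks c exactly as in the two cases of the proof and checks that the resulting ratio t/r lands in [ε/2, ε).

```python
import math

def scale_ratio(t, r, eps):
    """Pick the factor c as in the proof so that eps/2 <= t/r < eps.
    Returns the new (t, r). (Sketch only: in the reduction, c scales t or r
    by tensoring with an all-ones vector of c^2 coordinates.)"""
    if t / r < eps / 2:                    # scale the CVP instance: t -> c*t
        c = math.ceil(eps * r / (2 * t))
        t = c * t
    elif t / r >= eps:                     # scale the gadget: r -> c*r
        c = math.floor(2 * t / (eps * r))
        r = c * r
    return t, r

# Example with (hypothetical) gamma = 1.05, lambda = 1.2.
gamma, lam = 1.05, 1.2
eps = math.sqrt((lam / gamma) ** 2 - 1)    # eps is in (0, 1/sqrt(2))
for t0, r0 in [(1, 30), (10, 3), (2, 5)]:
    t, r = scale_ratio(t0, r0, eps)
    assert eps / 2 <= t / r < eps
```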
    The output of the reduction is (V, d), where d = √(t² + r²) and

                                  V = [ BTL   BTs + y ]
                                      [  L       s    ] .
Notice that V ∈ Z^{(n+m)×(ℓ+1)} is a basis, i. e., its columns are linearly independent. To see this, notice that
L itself is a basis (making the first ℓ columns of V linearly independent). So, the only way that V could
possibly fail to be a basis is for the last column to be a linear combination of the first ℓ, i. e., s = Lz and
BTs + y = BTLz = BTs for some z ∈ R^ℓ. But this implies y = 0 ∈ L(B), which we assumed not to be
the case.
    We show that the reduction is correct. First, assume (B, y, t) is a NO instance of TensorCVPγ̃, i. e.,
∥y − Bx∥_H > (γ̃t)² for all x ∈ R^k, and let the advice (s, T) be arbitrary. Consider any nonzero lattice
vector
                                  v = V [ z ] = [ BT(Lz + ws) + wy ]
                                        [ w ]   [      Lz + ws     ] .

On the one hand, if w ≠ 0, then the vector v satisfies

                                  τ₂(v)² ≥ ∥v∥_H
                                         ≥ ∥BT(Lz + ws) + wy∥_H
                                         = ∥y − B(−T(Lz/w + s))∥_H
                                         > (γ̃t)² = γ²(t² + (2t/ε)²) ≥ γ²d² ,

where the last inequality is equivalent to the condition ε/2 ≤ t/r. On the other hand, if w = 0, then z ≠ 0
and
                                  v = [ BT(Lz) ]
                                      [   Lz   ]

is divisible by pow₂(Lz). Moreover, ∥v∥_H ≥ ∥Lz∥_H, and therefore

                  τ₂(v)² ≥ pow₂(Lz)² · ∥Lz∥_H = τ₂(Lz)² ≥ τ₂(L(L))² ≥ λ²r² > γ²d² ,

where we have used (5.2), and the last inequality is equivalent to the condition t/r < ε. This proves that
τ₂(L(V)) > γd, i. e., (V, d) is a NO instance of TensorSVPγ.
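The equality (γ̃t)² = γ²(t² + (2t/ε)²) used above is just the definition of γ̃ unwound, since γ̃² = γ²(1 + 4/ε²). A quick numeric confirmation with illustrative values (the specific numbers are ours):

```python
import math

# Check (illustrative values): with eps = sqrt((lam/gamma)^2 - 1) and
# gamma_tilde = gamma * sqrt(1 + 4/eps^2), we have
# (gamma_tilde * t)^2 = gamma^2 * (t^2 + (2t/eps)^2).
gamma, lam, t = 1.05, 1.2, 3.0
eps = math.sqrt((lam / gamma) ** 2 - 1)
gamma_tilde = gamma * math.sqrt(1 + 4 / eps ** 2)
lhs = (gamma_tilde * t) ** 2
rhs = gamma ** 2 * (t ** 2 + (2 * t / eps) ** 2)
assert math.isclose(lhs, rhs)
```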
     Now let (s, T) be an advice such that (5.3) holds true, and assume (B, y, t) is a YES instance of
TensorCVPγ̃, i. e., there is an x ∈ {0, 1}^k such that ∥y − Bx∥ ≤ t. By (5.3), there is an integer vector
z ∈ Z^ℓ such that T(Lz − s) = x and ∥Lz − s∥ ≤ r. So, the lattice vector

                        v = V [  z ] = [ BT(Lz − s) − y ] = [ Bx − y ]
                              [ −1 ]   [     Lz − s     ]   [ Lz − s ]

has squared norm ∥v∥² = ∥Bx − y∥² + ∥Lz − s∥² ≤ t² + r² = d². This proves that (V, d) is a YES instance
of TensorSVPγ.
     In order to complete the proof we need to construct a gadget (L, s, T, r) as required by the reduction.
We recall that the construction takes as input the lattice dimension k and a nonuniform advice (s, T), and
should run in time polynomial in k. Let h = ω(√k) be a sufficiently large (still, polynomially bounded
h = k^{O(1)}) power of 2, and set m = h^c for some integer constant

                                  c > (1/2 − λ²/3)^{−1} > 2 .

Define r = √(⌊(h/λ)²⌋) ≤ h/λ, and run the algorithm of Theorem 4.2 on input m and h to obtain a basis
L ∈ Z^{m×m} such that τ₂(L(L)) ≥ h ≥ λr and det(L(L)) < m^{2h²/3}. Clearly, (5.2) is always satisfied by
construction. Also (5.1) holds true because r ≤ h/λ < h < √m and r ≥ √(⌊h²/(3/2)⌋) > 1.
     We need to show that there exists an advice (s, T) such that (5.3) also holds true. Let A be the set of
all vectors in {0, 1}^m of norm r. Notice that r² is an integer, and A equals the set of all binary vectors of
Hamming weight r². In particular, the size of A is

                  |A| = C(m, r²) ≥ (m/r²)^{r²} > (m/h²)^{r²} = m^{(1−2/c)r²} .           (5.4)

We claim that there exists an s ∈ Z^m such that

                                  |A ∩ (L(L) − s)| ≥ m^k > ∑_{i=0}^{k−1} C(m, i) .

It will follow from Lemma 2.9 that there is a matrix T ∈ {0, 1}^{k×m} such that T((L(L) − s) ∩ A) ⊇ {0, 1}^k,
i. e., (5.3) holds true.
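Both counting bounds in this paragraph can be checked numerically. The following sketch uses toy values of m, r² and k (the proof uses much larger, asymptotic ones):

```python
from math import comb

# Illustrative check of (5.4) and of m^k > sum_{i<k} C(m, i),
# with toy parameters m = 100, r^2 = 10, k = 5.
m, r2, k = 100, 10, 5
assert comb(m, r2) >= (m / r2) ** r2             # C(m, r^2) >= (m/r^2)^(r^2)
assert m ** k > sum(comb(m, i) for i in range(k))
```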
      It remains to show that |A ∩ (L(L) − s)| ≥ m^k for some s ∈ Z^m. Notice that L(L) has precisely
det(L(L)) cosets of the form L(L) − s with s ∈ Z^m. It follows by an averaging argument that there is
some s ∈ Z^m such that

      |A ∩ (L(L) − s)| ≥ |A| / det(L(L)) > m^{(1−2/c)r² − (2/3)h²} > m^{((1−2/c)/λ² − 2/3)h² − (1−2/c)} .

We need the exponent in this last expression to be at least k. But the coefficient of h² in the exponent is a
strictly positive constant because c > (1/2 − λ²/3)^{−1}. Therefore the exponent is at least Ω(h²) − O(1) =
ω(k) − O(1) = ω(k) ≥ k for all sufficiently large k.
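The averaging step is the standard pigeonhole argument: if the elements of A are distributed among the cosets, some coset receives at least |A|/det(L(L)) of them. A toy illustration (the small "lattice" below, defined by a weighted sum modulo 5, is a hypothetical stand-in chosen only so that its cosets are easy to enumerate):

```python
from itertools import combinations

# Toy averaging argument: A = binary vectors of Hamming weight 2 in {0,1}^8,
# reduced modulo the (hypothetical) lattice
# L = {x in Z^8 : sum_i (i+1)*x_i = 0 mod 5}, which has exactly 5 cosets.
m, q = 8, 5
A = [tuple(1 if i in pos else 0 for i in range(m))
     for pos in combinations(range(m), 2)]
coset = {}
for a in A:
    key = sum((i + 1) * a[i] for i in range(m)) % q  # f(a) = coset of a
    coset.setdefault(key, []).append(a)
# Some coset contains at least |A| / (number of cosets) elements.
assert max(len(v) for v in coset.values()) >= len(A) / q
```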

    It easily follows that GapSVPγ is NP-hard to approximate within any constant approximation factor
under polynomial-time nonuniform reductions with one-sided error. Larger inapproximability factors can
also be obtained under superpolynomial-time reductions.


Corollary 5.2. GapSVPγ is NP-hard for any constant factor γ under polynomial-time nonuniform
reductions with one-sided error and no false positives. Moreover, for every ε > 0 there is a δ > 0 such
that GapSVPγ is NP-hard for γ(n) = n^{δ/log log n} under nonuniform reductions with no false positives,
running in subexponential time 2^{O(n^ε)}.
Proof. By Theorem 2.7, TensorCVPγ̃ is NP-hard (under deterministic polynomial-time reductions) for
any constant factor γ̃. It follows from Theorem 5.1 that TensorSVPγ₀ is NP-hard for some constant γ₀ > 1
under nonuniform reductions. Finally, for any constant γ, applying Theorem 3.5 with c = ⌈log γ/log γ₀⌉,
we get that GapSVPγ is NP-hard under the same kind of reductions. (Notice that for any constant γ, c is
a constant and the reduction in Theorem 3.5 runs in polynomial time.)
    In general, the reduction runs in time polynomial in N = n^c and produces GapSVPγ instances in
dimension N that are hard to approximate within a factor γ = γ₀^c. For any ε > 0, let δ = ε · log γ₀ and set
c = n^ε/log n, so that N = n^c = 2^{n^ε} and the reduction runs in subexponential time N^{O(1)} = 2^{O(n^ε)}. The
resulting inapproximability factor is γ(N) = γ₀^c = N^{δ/log log N}.
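The constant-factor case of the parameter choice above can be sketched numerically (the values of γ₀ and γ below are illustrative, not from the paper):

```python
import math

# Illustrative: starting from hardness factor gamma0 > 1, c tensoring steps
# amplify it to gamma0^c, so c = ceil(log gamma / log gamma0) reaches any
# constant target gamma, and c itself is a constant.
gamma0, gamma = 1.1, 100.0
c = math.ceil(math.log(gamma) / math.log(gamma0))
assert gamma0 ** c >= gamma
```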


6    A probabilistic reduction
The reduction presented in Section 5 uses the nonuniform advice only for the construction of a gadget
(L, s, T, r) satisfying properties (5.1), (5.2) and (5.3). In the proof of Theorem 5.1, we gave a (uniform,
deterministic) polynomial-time construction of L and r satisfying (5.1) and (5.2), and then we proved
that there exist s and T such that (5.3) holds true as well. This leads to a nonuniform reduction using
(s, T) as advice. In this section we show that nonuniformity is not essential, and an advice (s, T) with
the desired property can be efficiently found in probabilistic polynomial time. The idea is simple, and
follows the same path as previous work [1, 20, 11]. First we find a coset L(L) − s containing many short
vectors. Since the lattice L(L) has small determinant, the average number of short vectors in a random
coset L(L) − s is large, and choosing s at random will give with high probability a coset containing many
short vectors. After finding a coset L(L) − s that contains many short vectors, we use the following
combinatorial theorem from [20] which can be interpreted as a constructive (probabilistic) variant of
Sauer’s lemma. We remark that a weaker form of the following theorem had already been proved and
used by Ajtai [1].

Theorem 6.1 (Theorem 5.9 of [20]). Let Z ⊆ {0, 1}^m be a set of vectors of Hamming weight u. For any
k and ε > 0, if |Z| ≥ u! · m^{4√u·k/ε}, and T ∈ {0, 1}^{k×m} is chosen by setting each entry to 1 independently at
random with probability p = 1/(4uk), then the probability that all binary vectors {0, 1}^k are contained
in T(Z) = {Tz : z ∈ Z} is at least 1 − 6ε.
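The coverage property T(Z) ⊇ {0, 1}^k can be seen concretely in a toy setting by brute force. The sketch below uses tiny parameters far below the theorem's hypotheses and exhaustive enumeration instead of the random choice of T, so it only illustrates that suitable matrices T exist, not the probability bound:

```python
from itertools import combinations, product

# Toy check: for Z = all weight-2 vectors in {0,1}^6 and k = 2, count the
# matrices T in {0,1}^{2x6} with T(Z) covering {0,1}^2 (entries of Tz are
# computed over the integers). All 2^12 matrices are enumerated.
m, u, k = 6, 2, 2
Z = [tuple(1 if i in pos else 0 for i in range(m))
     for pos in combinations(range(m), u)]
targets = set(product((0, 1), repeat=k))
good = 0
for bits in product((0, 1), repeat=k * m):
    T = [bits[i * m:(i + 1) * m] for i in range(k)]
    images = {tuple(sum(r * z for r, z in zip(row, zv)) for row in T)
              for zv in Z}
    if targets <= images:
        good += 1
assert good > 0  # at least one T maps Z onto all of {0,1}^2
```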
   We use Theorem 6.1 to prove the following probabilistic variant of Theorem 5.1.
Theorem 6.2. For any γ < λ < √(3/2) and

                                  γ̃ = γ · √( 1 + 4/((λ/γ)² − 1) )

there is a probabilistic polynomial-time reduction with one-sided error and no false positives from
TensorCVPγ̃ to TensorSVPγ.


Proof. All that is needed to turn the nonuniform reduction of Theorem 5.1 into a randomized one is a
probabilistic polynomial-time construction of a gadget (L, s, T, r), where L ∈ Z^{m×m}, s ∈ Z^m, T ∈ Z^{k×m}
and r ∈ R satisfy (5.1), (5.2) and (5.3). Moreover, for the reduction to have one-sided error with no false
positives, properties (5.1) and (5.2) should hold for any value of the randomness. The lattice basis L and
bound r are defined as in the proof of Theorem 5.1, but for larger values of h = ω(k),

                                  c > 2 · (1/2 − λ²/3)^{−1}

and m = h^c. Specifically, r = √(⌊(h/λ)²⌋) ≤ h/λ, and L ∈ Z^{m×m} is a matrix (obtained invoking
Theorem 4.2 on input m and h) such that τ₂(L(L)) ≥ h ≥ λr and det(L(L)) < m^{2h²/3}. As in the proof of
Theorem 5.1, properties (5.1) and (5.2) are always satisfied by construction. Here we give a probabilistic
polynomial-time construction of s and T satisfying (5.3) with probability arbitrarily close to 1.
     The construction is simple. The vector s is chosen by first picking a ∈ A uniformly at random among
all binary vectors A of norm r, and then setting s = −a. The matrix T is chosen at random as specified in
Theorem 6.1 with u = r². For any ε > 0, if the set Z = A ∩ (L(L) + a) = A ∩ (L(L) − s) has size at least
|Z| ≥ u! · m^{4rk/ε}, then by Theorem 6.1 property (5.3) is satisfied, except with probability at most 6ε over
the choice of T. It remains to show that |Z| ≥ u! · m^{4rk/ε} with high probability, over the choice of a ∈ A.
This bound is based on the simple observation (also used in previous work [1, 20, 11, 14]) that for any
finite function f : A → B, the size of a random preimage satisfies

                                  Pr_{a∈A}[ |f⁻¹(f(a))| ≤ ε · |A|/|B| ] ≤ ε ,

when a ∈ A is chosen uniformly at random. In our setting, the domain of f is our set A of binary vectors,
B = Z^m/L(L) is the set of cosets, and f(a) = a (mod L(L)). Notice that Z = A ∩ (L(L) + a) = f⁻¹(f(a)).
Therefore, by the above observation,

      Pr_a[ |Z| ≤ u! · m^{4rk/ε} ] ≤ (|B|/|A|) · u! · m^{4rk/ε} = (det(L(L))/|A|) · u! · m^{4rk/ε} .

Using (5.4), det(L(L)) < m^{2h²/3} and the bound u! < u^u < h^{2u} = m^{2r²/c}, we get

      (det(L(L))/|A|) · u! · m^{4rk/ε} < m^{(2/3)h² − (1−4/c)r² + 4rk/ε} = m^{(2/3 − (1−4/c)/λ²)h² + o(h²)} .

Since c > 2 · (1/2 − λ²/3)^{−1}, the coefficient of h² in the last exponent is a strictly negative constant, and
the probability Pr_a[ |Z| ≤ u! · m^{4rk/ε} ] is at most m^{−Ω(h²)+o(h²)} = m^{−Ω(h²)} < ε.

   As in the previous section, the inapproximability factor can be amplified using the tensor product.
The proof is virtually identical to that of Corollary 5.2.

Corollary 6.3. GapSVPγ is NP-hard for any constant factor γ under probabilistic polynomial-time
reductions with one-sided error and no false positives. Moreover, for every ε > 0 there is a δ > 0 such
that GapSVPγ is NP-hard for γ(n) = n^{δ/log log n} under randomized reductions with one-sided error and
no false positives running in subexponential time 2^{O(n^ε)}.


7    Conclusion
We proved the hardness of approximating the Shortest Vector Problem for approximation factors matching
the best currently known results [14, 12], but under probabilistic reductions with one-sided error. In
particular, our reductions make more restricted use of randomness than [14, 12] and may be easier to
derandomize. Randomness in our reductions is used only to produce a lattice coset L(L) − s with large
minimum (τ₂) distance and still containing a large number of short vectors, which map via an integer
linear transformation T onto the set of all binary vectors {0, 1}k . We gave a deterministic polynomial-time
construction of the lattice L(L), and randomness is used only for the selection of s and T. In fact, the
matrix T is chosen at random mostly as a byproduct of the fact that the selection of s is probabilistic:
intuitively, no matrix T is good for every s, so if s is chosen at random, then T must be chosen at random
as well. We believe that all that is needed in order to derandomize our proof is an explicit description of a
vector s such that L(L) − s contains many short vectors. With such a vector s (and a proof that s is good),
finding a matrix T that maps all short vectors in L(L) − s to {0, 1}k is likely to be easy.


8    Acknowledgments
The author thanks Oded Regev and the anonymous referees of Theory of Computing for their many
useful comments and corrections to an earlier draft of this paper.


References
 [1] Miklós Ajtai: The shortest vector problem in L2 is NP-hard for randomized reductions. In Proc. 30th STOC, pp. 10–19. ACM Press, 1998. ECCC. [doi:10.1145/276698.276705] 488, 489, 492, 498, 503, 507, 508

 [2] Miklós Ajtai, Ravi Kumar, and D. Sivakumar: A sieve algorithm for the shortest lattice vector problem. In Proc. 33rd STOC, pp. 601–610. ACM Press, 2001. [doi:10.1145/380752.380857] 488

 [3] Sanjeev Arora, László Babai, Jacques Stern, and Elizabeth Z. Sweedyk: The hardness of approximate optima in lattices, codes, and systems of linear equations. J. Comput. System Sci., 54(2):317–331, 1997. Preliminary version in FOCS'93. [doi:10.1006/jcss.1997.1472] 489, 497

 [4] Per Austrin and Subhash Khot: A simple deterministic reduction for the gap minimum distance of code problem. In Proc. 38th Internat. Colloq. on Automata, Languages and Programming (ICALP'11), pp. 474–485. Springer, 2011. [doi:10.1007/978-3-642-22006-7_40] 489, 490

 [5] Mihir Bellare, Shafi Goldwasser, Carsten Lund, and Alex Russell: Efficient probabilistically checkable proofs and applications to approximations. In Proc. 25th STOC, pp. 294–304. ACM Press, 1993. [doi:10.1145/167088.167174] 497

 [6] Elwyn R. Berlekamp, Robert J. McEliece, and Henk C. A. van Tilborg: On the inherent intractability of certain coding problems. IEEE Trans. Inform. Theory, 24(3):384–386, 1978. [doi:10.1109/TIT.1978.1055873] 489

 [7] Jin-Yi Cai and Ajay Nerurkar: Approximating the SVP to within a factor (1+1/dim^ε) is NP-hard under randomized reductions. J. Comput. System Sci., 59(2):221–239, 1999. Preliminary version in CCC'98. [doi:10.1006/jcss.1999.1649] 488, 489

 [8] Qi Cheng and Daqing Wan: A deterministic reduction for the gap minimum distance problem. In Proc. 41st STOC, pp. 33–38. ACM Press, 2009. [doi:10.1145/1536414.1536421] 489, 490

 [9] John H. Conway and Neil J. A. Sloane: Sphere Packings, Lattices and Groups. Springer, 3rd edition, 1998. 501, 502, 503

[10] Irit Dinur, Guy Kindler, Ran Raz, and Shmuel Safra: Approximating CVP to within almost-polynomial factors is NP-hard. Combinatorica, 23(2):205–243, 2003. Preliminary version in FOCS'98. [doi:10.1007/s00493-003-0019-y] 488, 489

[11] Ilya Dumer, Daniele Micciancio, and Madhu Sudan: Hardness of approximating the minimum distance of a linear code. IEEE Trans. Inform. Theory, 49(1):22–37, 2003. Preliminary version in FOCS'99. [doi:10.1109/TIT.2002.806118] 489, 490, 491, 498, 507, 508

[12] Ishay Haviv and Oded Regev: Tensor-based hardness of the Shortest Vector Problem to within almost polynomial factors. Theory of Computing, 8(23):513–531, 2012. Preliminary version in STOC'07. [doi:10.4086/toc.2012.v008a023] 489, 490, 491, 496, 497, 498, 502, 503, 509

[13] David S. Johnson: A catalog of complexity classes. In Handbook of Theoretical Computer Science, volume A (Algorithms and Complexity), chapter 2, pp. 67–161. Elsevier, 1990. 489

[14] Subhash Khot: Hardness of approximating the shortest vector problem in lattices. J. ACM, 52(5):789–808, 2005. Preliminary version in FOCS'04. [doi:10.1145/1089023.1089027] 489, 490, 491, 496, 497, 498, 502, 503, 508, 509

[15] Subhash Khot: Hardness of approximating the shortest vector problem in high ℓ_p norms. J. Comput. System Sci., 72(2):206–219, 2006. Preliminary version in FOCS'03. [doi:10.1016/j.jcss.2005.07.002] 489

[16] Yoshiyuki Kitaoka: Tensor products of positive definite quadratic forms. Proc. Japan Acad., 52(9):498–500, 1976. [doi:10.3792/pja/1195518215] 496

[17] Jeffrey C. Lagarias and Andrew M. Odlyzko: Solving low-density subset sum problems. J. ACM, 32(1):229–246, 1985. Preliminary version in FOCS'83. [doi:10.1145/2455.2461] 488

[18] Vadim Lyubashevsky and Daniele Micciancio: On bounded distance decoding, unique shortest vectors, and the minimum distance problem. In 29th Ann. Internat. Cryptology Conf. (CRYPTO'09), pp. 577–594. Springer, 2009. [doi:10.1007/978-3-642-03356-8_34] 497

[19] Daniele Micciancio: The hardness of the closest vector problem with preprocessing. IEEE Trans. Inform. Theory, 47(3):1212–1215, 2001. [doi:10.1109/18.915688] 488

[20] Daniele Micciancio: The shortest vector in a lattice is hard to approximate to within some constant. SIAM J. Comput., 30(6):2008–2035, 2001. Preliminary version in FOCS'98. [doi:10.1137/S0097539700373039] 488, 489, 490, 491, 492, 496, 497, 498, 500, 503, 507, 508

[21] Daniele Micciancio and Shafi Goldwasser: Complexity of Lattice Problems: a cryptographic perspective. Volume 671 of The Kluwer International Series in Engineering and Computer Science. Kluwer Academic Publishers, 2002. 488, 503

[22] Daniele Micciancio and Panagiotis Voulgaris: A deterministic single exponential time algorithm for most lattice problems based on Voronoi cell computations. In Proc. 42nd STOC, pp. 351–358. ACM Press, 2010. ECCC. [doi:10.1145/1806689.1806739] 488

[23] Daniele Micciancio and Panagiotis Voulgaris: Faster exponential time algorithms for the shortest vector problem. In Proc. 21st Ann. ACM-SIAM Symp. on Discrete Algorithms (SODA'10), pp. 1468–1480. ACM Press, 2010. ECCC. [ACM:1873720] 488

[24] Phong Q. Nguyen and Thomas Vidick: Sieve algorithms for the shortest vector problem are practical. J. Math. Cryptology, 2(2):181–207, 2008. [doi:10.1515/JMC.2008.009] 488

[25] Vera S. Pless and William Cary Huffman, editors. Handbook of Coding Theory. North-Holland, 1998. 495

[26] Xavier Pujol and Damien Stehlé: Solving the shortest lattice vector problem in time 2^{2.465n}. Report 2009/605, IACR ePrint archive, 2009. IACR. 488

[27] Oded Regev and Ricky Rosen: Lattice problems and norm embeddings. In Proc. 38th STOC, pp. 447–456. ACM Press, 2006. [doi:10.1145/1132516.1132581] 493

[28] Norbert Sauer: On the density of families of sets. J. Combin. Theory Ser. A, 13(1):145–147, 1972. [doi:10.1016/0097-3165(72)90019-2] 498

[29] Saharon Shelah: A combinatorial problem; stability and order for models and theories in infinitary languages. Pacific J. Math., 41(1):247–261, 1972. Project Euclid. 498

[30] Peter van Emde Boas: Another NP-complete partition problem and the complexity of computing short vectors in a lattice. Technical Report 81-04, Mathematische Instituut, University of Amsterdam, 1981. Available at author's home page. 488

[31] Vladimir N. Vapnik and Alexey Ya. Chervonenkis: On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and Appl., XVI(2):264–280, 1971. Translation from the Russian. 498

[32] Alexander Vardy: The intractability of computing the minimum distance of a code. IEEE Trans. Inform. Theory, 43(6):1757–1766, 1997. [doi:10.1109/18.641542] 489

[33] Xiaoyun Wang, Mingjie Liu, Chengliang Tian, and Jingguo Bi: Improved Nguyen-Vidick heuristic sieve algorithm for shortest vector problem. In Proc. 6th ACM Symp. on Information, Computer and Communications Security (ASIACCS'11), pp. 1–9. ACM Press, 2011. [doi:10.1145/1966913.1966915] 488


AUTHOR

     Daniele Micciancio
     Professor
     University of California at San Diego (UCSD), La Jolla, CA, USA
     daniele cs ucsd edu
     http://cseweb.ucsd.edu/~daniele/


ABOUT THE AUTHOR

     DANIELE MICCIANCIO got his Ph. D. in Computer Science from M.I.T. in 1998, under
       the supervision of Shafi Goldwasser. He is a full professor in the CSE department at the
       University of California, San Diego (UCSD), where he has worked since 1999. He
       is broadly interested in theoretical computer science and cryptography. His research
       focuses on lattice cryptography and the complexity of lattice and coding problems.



