                            Theory of Computing, Volume 8 (2012), pp. 513–531
                                         www.theoryofcomputing.org




                  Tensor-based Hardness of the
                Shortest Vector Problem to within
                   Almost Polynomial Factors
                                      Ishay Haviv∗                  Oded Regev†
                              Received: August 26, 2011; published: September 25, 2012.




       Abstract: We show that unless NP ⊆ RTIME(2^{poly(log n)}), there is no polynomial-time
       algorithm approximating the Shortest Vector Problem (SVP) on n-dimensional lattices in the
       ℓ_p norm (1 ≤ p < ∞) to within a factor of 2^{(log n)^{1−ε}} for any ε > 0. This improves the
       previous best factor of 2^{(log n)^{1/2−ε}} under the same complexity assumption due to Khot
       (J. ACM, 2005). Under the stronger assumption NP ⊄ RSUBEXP, we obtain a hardness factor
       of n^{c/log log n} for some c > 0.
            Our proof starts with Khot’s SVP instances that are hard to approximate to within some
       constant. To boost the hardness factor we simply apply the standard tensor product of lattices.
       The main novelty is in the analysis, where we show that the lattices of Khot behave nicely
       under tensorization. At the heart of the analysis is a certain matrix inequality which was first
       used in the context of lattices by de Shalit and Parzanchevski (2006).

ACM Classification: F.2.2, F.1.3, G.1.6
AMS Classification: 68Q17, 52C07, 11H06, 11H31, 05B40
Key words and phrases: lattices, shortest vector problem, NP-hardness, hardness of approximation
   ∗ Supported by the Adams Fellowship Program of the Israel Academy of Sciences and Humanities. Work done while at
Tel Aviv University.
   † Supported by an Alon Fellowship, by the Binational Science Foundation, by the Israel Science Foundation, by the
European Commission under the Integrated Project QAP funded by the IST directorate as Contract Number 015848, and by
the European Research Council (ERC) Starting Grant.


  © 2012 Ishay Haviv and Oded Regev
  Licensed under a Creative Commons Attribution License                                        DOI: 10.4086/toc.2012.v008a023

1     Introduction
A lattice is a periodic geometric object defined as the set of all integer combinations of some linearly
independent vectors in Rn . The interesting combinatorial structure of lattices has been investigated by
mathematicians over the last two centuries, and for at least three decades it has also been studied from an
asymptotic algorithmic point of view. Roughly speaking, most fundamental problems on lattices are not
known to be efficiently solvable. Moreover, there are hardness results showing that such problems cannot
be solved by polynomial-time algorithms unless the polynomial-time hierarchy collapses. One of the
main motivations for research on the hardness of lattice problems is their applications in cryptography,
as was demonstrated by Ajtai [3], who came up with a construction of cryptographic primitives whose
security relies on the worst-case hardness of certain lattice problems.
     Two main computational problems associated with lattices are the Shortest Vector Problem (SVP)
and the Closest Vector Problem (CVP). In the former, for a lattice given by some basis we are supposed
to find (the length of) a shortest nonzero vector in the lattice. The problem CVP is an inhomogeneous
variant of SVP, in which given a lattice and some target point one has to find (its distance from) the
closest lattice point. The hardness of lattice problems partly comes from the fact that there are many
possible bases for the same lattice.
     In this paper we improve the best hardness result known for SVP. Before presenting our results let us
start with an overview of related work.

1.1   Related work
In the early 1980s, Lenstra, Lenstra, and Lovász (LLL) [20] presented the first polynomial-time approx-
imation algorithm for SVP. Their algorithm achieves an approximation factor of 2^{O(n)}, where n is the
dimension of the lattice. Using their algorithm, Babai [7] gave an approximation algorithm for CVP
achieving the same approximation factor. A few years later, improved algorithms were presented for
both problems, obtaining a slightly sub-exponential approximation factor, namely 2^{O(n (log log n)^2 / log n)} [31],
and this has since been improved slightly [4, 26]. The best algorithm known for solving SVP exactly
requires running time exponential in n [18, 4, 26]. All the above results hold with respect to any ℓ_p norm
(1 ≤ p ≤ ∞).
     On the hardness side, it was proven in 1981 by van Emde Boas [32] that it is NP-hard to solve SVP
exactly in the ℓ_∞ norm. The question of extending this result to other norms, and in particular to the
Euclidean norm ℓ_2, remained open until the breakthrough result by Ajtai [2] showing that exact SVP in
the ℓ_2 norm is NP-hard under randomized reductions. Then, Cai and Nerurkar [10] obtained hardness
of approximation to within 1 + n^{−ε} for any ε > 0. The first inapproximability result for SVP to within a
factor bounded away from 1 is that of Micciancio [21], who showed that under randomized reductions
SVP in the ℓ_p norm is NP-hard to approximate to within any factor smaller than 2^{1/p}. For the ℓ_∞ norm, a
considerably stronger result is known: Dinur [14] showed that SVP is NP-hard to approximate in the ℓ_∞
norm to within a factor of n^{c/log log n} for some constant c > 0.
     To date, the strongest hardness result known for SVP in the ℓ_p norm is due to Khot [19], who showed
NP-hardness of approximation to within arbitrarily large constants under randomized reductions for any
1 < p < ∞. Furthermore, under randomized quasipolynomial-time reductions (i. e., reductions that run
in time 2^{poly(log n)}), the hardness factor becomes 2^{(log n)^{1/2−ε}} for any ε > 0. Khot speculated there that it


might be possible to improve this to 2^{(log n)^{1−ε}}, as this is the hardness factor known for the analogous
problem in linear codes [16].
    Khot’s proof does not work for the ℓ_1 norm. However, it was shown in [30] that for lattice problems,
the ℓ_2 norm is the easiest in the following sense: for any 1 ≤ p ≤ ∞, there exists a randomized reduction
from lattice problems such as SVP and CVP in the ℓ_2 norm to the respective problem in the ℓ_p norm with
essentially the same approximation factor. In particular, this implies that Khot’s results also hold for the
ℓ_1 norm.
    Finally, we mention that a considerably stronger result is known for CVP, namely that for any
1 ≤ p ≤ ∞, it is NP-hard to approximate CVP in the ℓ_p norm to within n^{c/log log n} for some constant
c > 0 [15]. We also mention that in contrast to the above hardness results, it is known that for any c > 0,
SVP and CVP are unlikely to be NP-hard to approximate to within a √(cn/log n) factor, as this would
imply the collapse of the polynomial-time hierarchy [17, 1].

1.2   Our results
The main result of this paper improves the best NP-hardness factor known for SVP under randomized
quasipolynomial-time reductions. This and two additional hardness results are stated in the following
theorem. Here, RTIME is the randomized one-sided error analogue of DTIME. Namely, for a function
 f we denote by RTIME( f (n)) the class of problems having a probabilistic algorithm running in time
O( f (n)) on inputs of size n that accepts YES inputs with probability at least 2/3, and rejects NO inputs
with certainty.
Theorem 1.1. For every 1 ≤ p ≤ ∞ the following holds.
   1. For every constant c ≥ 1, there is no polynomial-time algorithm that approximates SVP in the ℓ_p
      norm to within a factor of c unless

                                          NP ⊆ RP = ⋃_{c≥1} RTIME(n^c) .


   2. For every ε > 0, there is no polynomial-time algorithm that approximates SVP on n-dimensional
      lattices in the ℓ_p norm to within a factor of 2^{(log n)^{1−ε}} unless

                                          NP ⊆ RTIME(2^{poly(log n)}) .

   3. There exists a c > 0 such that there is no polynomial-time algorithm that approximates SVP on
      n-dimensional lattices in the ℓ_p norm to within a factor of n^{c/log log n} unless

                                     NP ⊆ RSUBEXP = ⋂_{δ>0} RTIME(2^{n^δ}) .

     Theorem 1.1 improves on the best known hardness result for any p < ∞. For p = ∞, a better hardness
result is already known, namely that for some c > 0, approximating to within n^{c/log log n} is NP-hard [14].
Moreover, item 1 was already proved by Khot [19] and we provide an alternative proof. We remark that
all three items follow from a more general statement (see Theorem 3.1).


1.3   Techniques
A standard method to prove hardness of approximation for large constant or super-constant factors is to
first prove hardness for some fixed constant factor, and then amplify the constant using some polynomial-
time (or quasipolynomial-time) transformation. For example, the tensor product of linear codes is used
to amplify the NP-hardness of approximating the minimum distance in a linear code of block length n
to arbitrarily large constants under polynomial-time reductions and to 2^{(log n)^{1−ε}} (for any ε > 0) under
quasipolynomial-time reductions [16]. This example motivates one to use the tensor product of lattices to
increase the hardness factor known for approximating SVP. However, whereas the minimum distance of
the k-fold tensor product of a code C is simply the kth power of the minimum distance of C, the behavior
of the length of a shortest nonzero vector in a tensor product of lattices is more complicated and not so
well understood.
     Khot’s approach in [19] was to prove a constant hardness factor for SVP instances that have some
“code-like” properties. The rationale is that such lattices might behave in a more predictable way under the
tensor product. The construction of these “basic” SVP instances is ingenious, and is based on BCH codes
as well as a restriction into a random sublattice. However, even for these code-like lattices, the behavior
of the tensor product was not clear. To resolve this issue, Khot introduced a variant of the tensor product,
which he called the augmented tensor product, and using it he showed the hardness factor of 2^{(log n)^{1/2−ε}}. This
unusual hardness factor can be seen as a consequence of the augmented tensor product. In more detail, for the
augmented tensor product to work, the dimension of Khot’s basic SVP instances grows to n^{Θ(k)}, where
k denotes the number of times we intend to apply the augmented tensor product. After applying it, the
dimension grows to n^{Θ(k²)} while the hardness factor becomes only 2^{Θ(k)}. This limits the hardness factor as a
function of the dimension n to 2^{(log n)^{1/2−ε}}.
     Our main contribution is showing that Khot’s basic SVP instances do behave well under the (standard)
tensor product. The proof of this fact uses a new method to analyze vectors in the tensor product of
lattices, and is related to a technique used by de Shalit and Parzanchevski [13]. Theorem 1.1 now follows
easily: we start with (a minor modification of) Khot’s basic SVP instances, which are known to be hard
to approximate to within some constant. We then apply the k-fold tensor product for appropriately chosen
values of k and obtain instances of dimension n^{O(k)} with hardness factor 2^{Ω(k)}.
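To see why these parameters yield the bounds in Theorem 1.1, consider the following back-of-the-envelope calculation. It is our illustration, not part of the paper; the function name, the choice of constants C = c = 1, and the concrete values of n are ours.

```python
# Rough parameter check (our sketch): tensoring k times turns dimension n
# into N = n^(C*k) and a constant gap into 2^(c*k). Choosing
# k ~ (log n)^((1-eps)/eps) makes the achieved gap behave like
# 2^(Theta((log N)^(1-eps))), while the instance size n^(C*k) stays
# quasipolynomial in n.
from math import log

def gap_exponent_ratio(n, eps, C=1.0, c=1.0):
    """Ratio of the achieved gap exponent c*k to (log N)^(1-eps)."""
    k = log(n, 2) ** ((1 - eps) / eps)   # the tensoring depth we choose
    logN = C * k * log(n, 2)             # log2 of the output dimension
    return (c * k) / logN ** (1 - eps)

# The ratio stays constant as n grows, i.e., gap = 2^(Theta((log N)^(1-eps))):
for n in (2 ** 10, 2 ** 20, 2 ** 40):
    print(round(gap_exponent_ratio(n, eps=0.5), 6))   # 1.0 each time
```

For ε = 1/2 the choice is k = log n, so log N = (log n)² and the gap exponent k = (log N)^{1/2}; the same bookkeeping for general ε gives the exponent 1 − ε.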

1.4   Open questions
Some open problems remain. The most obvious is proving that SVP is hard to approximate to within
factors greater than n^{c/log log n} under some plausible complexity assumption. Such a result, however, is
not known for CVP nor for the minimum distance problem in linear codes, and most likely proving it
there first would be easier. An alternative goal is to improve on the O(√(n/log n)) upper bound beyond
which SVP is not believed to be NP-hard [17, 1].
     A second open question is whether our complexity assumptions can be weakened. For instance, our
n^{c/log log n} hardness result is based on the assumption that NP ⊄ RSUBEXP. For CVP, such a hardness
factor is known based solely on the assumption P ≠ NP [15]. Showing something similar for SVP would
be very interesting. In fact, coming up with a deterministic reduction (even for constant approximation
factors) already seems very challenging; all known hardness proofs for SVP in ℓ_p norms, p < ∞, use
randomized reductions. (We note, though, that [21] does describe a deterministic reduction based on a


certain number-theoretic conjecture.) Ideas appearing in the recent NP-hardness proofs of the minimum
distance problem in linear codes [11, 6] might be useful. Finally, we mention that a significant step
towards derandomization was recently made by Micciancio [23]: he strengthened our results by showing
reductions with only one-sided error.

1.5    Outline
The rest of the paper is organized as follows. In Section 2 we gather some background on lattices and on
the central tool in this paper—the tensor product of lattices. In Section 3 we prove Theorem 1.1. For
the sake of completeness, Section 4 provides a summary of Khot’s work [19] together with the minor
modifications that we need to introduce.


2     Preliminaries
2.1    Lattices
A lattice is a discrete additive subgroup of R^n. Equivalently, it is the set of all integer combinations

                            L(b1, . . . , bm) = { ∑_{i=1}^m x_i b_i : x_i ∈ Z for all 1 ≤ i ≤ m }

of m linearly independent vectors b1, . . . , bm in R^n (n ≥ m). If the rank m equals the dimension n, then
we say that the lattice is full-rank. The set {b1, . . . , bm} is called a basis of the lattice. Note that a lattice
has many possible bases. We often represent a basis by an n × m matrix B having the basis vectors as
columns, and we say that the basis B generates the lattice L. In such a case we write L = L(B). It is well
known and easy to verify that two bases B1 and B2 generate the same lattice if and only if B1 = B2U for
some unimodular matrix U ∈ Z^{m×m} (i. e., a matrix whose entries are all integers and whose determinant
is ±1). The determinant of a lattice generated by a basis B is det(L(B)) = √(det(B^T B)). It is easy to show
that the determinant of a lattice is independent of the choice of basis and is thus well-defined. A sublattice
of L is a lattice L(S) ⊆ L generated by some linearly independent lattice vectors S = {s1, . . . , sr} ⊆ L.
It is known that any integer matrix B can be written as [H 0]U, where H has full column rank and U is
unimodular. One way to achieve this is via the Hermite Normal Form (see, e. g., [12, Page 67]).
     For any 1 ≤ p < ∞, the ℓ_p norm of a vector x ∈ R^n is defined as ‖x‖_p = (∑_i |x_i|^p)^{1/p}, and its ℓ_∞ norm is
‖x‖_∞ = max_i |x_i|. One basic parameter of a lattice L, denoted by λ_1^{(p)}(L), is the ℓ_p norm of a shortest
nonzero vector in it. Equivalently, λ_1^{(p)}(L) is the minimum ℓ_p distance between two distinct points in
the lattice L. This definition can be generalized to define the ith successive minimum as the smallest r
such that B_p(r) contains i linearly independent lattice points, where B_p(r) denotes the ℓ_p ball of radius r
centered at the origin. More formally, for any 1 ≤ p ≤ ∞, we define

                              λ_i^{(p)}(L) = min { r : dim(span(L ∩ B_p(r))) ≥ i } .

We often omit the superscript in λ_i^{(p)} when p = 2.
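For intuition, λ_1 of a very small lattice can be found by brute force over bounded integer coefficient vectors. This is our sketch, not an algorithm from the paper; the function name and the coefficient bound are ours, and the search is exponential in the rank, so it is useful only in tiny dimensions.

```python
# Our brute-force sketch of the first successive minimum lambda_1:
# enumerate integer coefficient vectors x in a bounded box and take the
# shortest nonzero combination Bx.
from itertools import product
from math import sqrt, inf

def lambda1(basis, bound=5, p=2):
    """Shortest nonzero l_p norm over coefficients in [-bound, bound]^m.
    basis is given as rows, with the basis vectors as columns."""
    m = len(basis[0])        # number of basis vectors (columns)
    n = len(basis)           # ambient dimension (rows)
    best = inf
    for x in product(range(-bound, bound + 1), repeat=m):
        if all(c == 0 for c in x):
            continue         # lambda_1 ranges over nonzero vectors only
        v = [sum(basis[i][j] * x[j] for j in range(m)) for i in range(n)]
        if p == 2:
            norm = sqrt(sum(c * c for c in v))
        else:
            norm = sum(abs(c) ** p for c in v) ** (1 / p)
        best = min(best, norm)
    return best

# The integer lattice Z^2 has lambda_1 = 1 ...
print(lambda1([[1, 0], [0, 1]]))    # 1.0
# ... and a skewed basis of the same lattice Z^2 gives the same value,
# illustrating that lambda_1 is a lattice invariant, not a basis invariant.
print(lambda1([[1, 7], [0, 1]]))    # 1.0
```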


   In 1896, Hermann Minkowski [28] proved the following classical result, known as Minkowski’s First
Theorem. We consider here the `2 norm, although the result has an easy extension to other norms. For a
simple proof the reader is referred to [24, Chapter 1, Section 1.3].

Theorem 2.1 (Minkowski’s First Theorem). For any rank-r lattice L,

                                     det(L) ≥ (λ_1(L)/√r)^r .

    Our hardness of approximation results will be shown through the promise version GapSVP_γ^p, defined
for any 1 ≤ p ≤ ∞ and any approximation factor γ ≤ 1 as follows.

Definition 2.2 (Shortest Vector Problem). An instance of GapSVP_γ^p is a pair (B, s), where B is a lattice
basis and s is a number. In YES instances λ_1^{(p)}(L(B)) ≤ γ · s, and in NO instances λ_1^{(p)}(L(B)) > s.

2.2   Tensor product of lattices
A central tool in the proof of our results is the tensor product of lattices. Let us first recall some basic
definitions. For two column vectors u and v of dimensions n1 and n2 respectively, we define their tensor
product u ⊗ v as the n1n2-dimensional column vector

                                     (u_1 v, . . . , u_{n1} v) ,

obtained by stacking the n2-dimensional blocks u_1 v, . . . , u_{n1} v on top of each other.
If we think of the coordinates of u ⊗ v as arranged in an n1 × n2 matrix, we obtain the equivalent description
of u ⊗ v as the matrix u · v^T. More generally, any n1n2-dimensional vector w can be written as an n1 × n2
matrix W. To illustrate the use of this notation, notice that if W is the matrix corresponding to w then

                                         ‖w‖_2^2 = tr(W W^T) .                                      (2.1)
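These identifications are easy to verify directly; the following small check is our illustration (the variable names are ours):

```python
# Our check: u tensor v read as an n1 x n2 matrix is the outer product
# u v^T, and ||w||_2^2 equals tr(W W^T), the sum of squared entries of W.
def tensor(u, v):
    """(u_1*v, ..., u_n1*v) stacked into one flat list."""
    return [ui * vj for ui in u for vj in v]

u, v = [1, 2], [3, 4, 5]
w = tensor(u, v)
W = [[ui * vj for vj in v] for ui in u]        # the outer product u v^T
# w is exactly W read row by row:
print(w == [x for row in W for x in row])      # True
# ||w||_2^2 = tr(W W^T) = sum of squared entries of W:
print(sum(x * x for x in w) ==
      sum(x * x for row in W for x in row))    # True
```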

Finally, for an n1 × m1 matrix A and an n2 × m2 matrix B, one defines their tensor product A ⊗ B as the
n1n2 × m1m2 block matrix whose (i, j)th block is A_ij · B:

                                     [ A_11 B    · · ·   A_1m1 B  ]
                                     [   ..                ..     ]
                                     [ A_n1,1 B  · · ·   A_n1,m1 B ] .
    Let L1 be a lattice generated by the n1 × m1 matrix B1 and L2 be a lattice generated by the n2 × m2
matrix B2 . Then the tensor product of L1 and L2 is defined as the n1 n2 -dimensional lattice generated
by the n1 n2 × m1 m2 matrix B1 ⊗ B2 , and is denoted by L = L1 ⊗ L2 . Equivalently, L is generated by the
m1 m2 vectors obtained by taking the tensor product of two column vectors, one from B1 and one from B2 .
If we think of the vectors in L as n1 × n2 matrices, then we can also define it as

                                L = L1 ⊗ L2 = {B1 X B2^T : X ∈ Z^{m1×m2}} ,


with each entry in X corresponding to one of the m1 m2 generating vectors. We will mainly use this
definition in the proof of the main result.
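The equivalence between the two descriptions of L1 ⊗ L2 can be checked mechanically. The sketch below is ours (the concrete matrices and helper names are chosen for the example): feeding the row-major flattening of X to B1 ⊗ B2 produces exactly the matrix B1 X B2^T read row by row.

```python
# Our check of the two equivalent descriptions of L1 tensor L2:
# (B1 kron B2) * vec(X) is the lattice point B1 X B2^T, flattened row-major.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

def kron(A, B):
    """Kronecker product with the block convention used in the text."""
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

B1 = [[1, 2], [0, 3]]
B2 = [[2, 0], [1, 1]]
X = [[1, -1], [2, 0]]                         # integer coefficient matrix
vecX = [x for row in X for x in row]          # row-major flattening of X
w = [sum(row[j] * vecX[j] for j in range(len(vecX))) for row in kron(B1, B2)]
W = matmul(matmul(B1, X), transpose(B2))      # the matrix form B1 X B2^T
print(w == [x for row in W for x in row])     # True
```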
     As alluded to before, in the present paper we are interested in the behavior of the shortest nonzero
vector in a tensor product of lattices. It is easy to see that for any 1 ≤ p ≤ ∞ and any two lattices L1 and
L2 , we have
                              λ_1^{(p)}(L1 ⊗ L2) ≤ λ_1^{(p)}(L1) · λ_1^{(p)}(L2) .                    (2.2)

Indeed, any two vectors v1 and v2 satisfy ‖v1 ⊗ v2‖_p = ‖v1‖_p · ‖v2‖_p. Applying this to shortest nonzero
vectors of L1 and L2 implies inequality (2.2).
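The norm identity behind inequality (2.2) is easy to confirm numerically; this small check is our illustration (the vectors are chosen for the example):

```python
# Our check of the identity ||v1 tensor v2||_p = ||v1||_p * ||v2||_p,
# which gives the upper bound (2.2) when v1, v2 are shortest vectors.
def tensor(u, v):
    return [ui * vj for ui in u for vj in v]

def lp(x, p):
    return sum(abs(c) ** p for c in x) ** (1 / p)

v1, v2 = [3, -4], [1, 2, 2]
for p in (1, 2, 3):
    print(abs(lp(tensor(v1, v2), p) - lp(v1, p) * lp(v2, p)) < 1e-9)   # True
```

For p = 2 this is the familiar fact that the Frobenius norm of the outer product v1·v2^T is ‖v1‖_2·‖v2‖_2.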
    Inequality (2.2) has an analogue for linear codes, with λ_1^{(p)} replaced by the minimum distance of
the code under the Hamming metric. There, it is not too hard to show that the inequality is in fact an
equality: the minimum distance of the tensor product of two linear codes always equals the product
of their minimum distances. However, contrary to what one might expect, there exist lattices for which
inequality (2.2) is strict. More precisely, for any sufficiently large n there exist n-dimensional lattices L1
and L2 satisfying
                                      λ_1(L1 ⊗ L2) < λ_1(L1) · λ_1(L2) .
The following lemma due to Steinberg shows this fact. Although we do not use this fact later on, the
proof is instructive and helps motivate the need for a careful analysis of tensor products. To present this
proof we need the notion of a dual lattice. For a full-rank lattice L ⊆ R^n, its dual lattice L^∗ is defined as

                                L^∗ = {x ∈ R^n : ⟨x, y⟩ ∈ Z for all y ∈ L} .

A self-dual lattice is one that satisfies L = L^∗. It can be seen that for a full-rank lattice L generated by a
basis B, the basis (B^{−1})^T generates the dual lattice L^∗.

Lemma 2.3 ([27, Page 48]). For any n ≥ 1 there exists an n-dimensional self-dual lattice L satisfying
λ_1(L ⊗ L^∗) ≤ √n and λ_1(L) = λ_1(L^∗) = Ω(√n).

Proof. We first show that for any full-rank n-dimensional lattice L, λ_1(L ⊗ L^∗) ≤ √n. Let L be a lattice
generated by a basis B = (b1, . . . , bn). Let (B^{−1})^T = (b̃1, . . . , b̃n) be the basis generating its dual lattice
L^∗. Now consider the vector ∑_{i=1}^n b_i ⊗ b̃_i ∈ L ⊗ L^∗. Using our matrix notation, this vector can be written
as
                                  B I_n ((B^{−1})^T)^T = B B^{−1} = I_n ,

and clearly has ℓ_2 norm √n. To complete the proof, we need to use the (non-trivial) fact that for any
n ≥ 1 there exists a full-rank, n-dimensional, self-dual lattice whose shortest nonzero vector has norm
Ω(√n). This fact is due to Conway and Thompson; see [27, Page 46] for details.
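The matrix identity in the proof can be checked exactly for a concrete basis; the sketch below is our illustration (the 2 × 2 basis and helper names are ours), using rational arithmetic so that B·B^{−1} comes out as the identity with no rounding:

```python
# Our check of the identity in the proof of Lemma 2.3: with coefficient
# matrix X = I_n, the basis pair (B, (B^{-1})^T) yields the lattice point
# B * I * ((B^{-1})^T)^T = B * B^{-1} = I_n, whose l_2 norm is sqrt(n).
from fractions import Fraction
from math import sqrt

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(B):
    """Exact inverse of a 2x2 matrix with Fraction entries."""
    d = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    return [[B[1][1] / d, -B[0][1] / d], [-B[1][0] / d, B[0][0] / d]]

B = [[Fraction(2), Fraction(1)], [Fraction(1), Fraction(3)]]   # a full-rank basis
I = matmul(B, inv2(B))                    # = B * B^{-1}, exactly the identity
print(I == [[1, 0], [0, 1]])              # True
print(sqrt(sum(x * x for row in I for x in row)))   # sqrt(2), i.e., sqrt(n)
```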


3    Proof of results
The following is our main technical result. As we will show later, Theorem 1.1 follows easily by plugging
in appropriate values of k.


Theorem 3.1. For any 1 ≤ p ≤ ∞ there exist c, C > 0 such that the following holds. There exists a
randomized reduction that takes as input a SAT instance and an integer k ≥ 1 and outputs a GapSVP_γ^p
instance of dimension n^{Ck} with gap γ = 2^{−ck}, where n denotes the size of the SAT instance. The reduction
runs in time polynomial in n^{Ck} and has two-sided error, namely, given a YES (resp., NO) instance it
outputs a YES (resp., NO) instance with probability 9/10.
    In fact, we will only need to prove this theorem for the case p = 2 since, as is easy to see, the general
case follows from the following theorem (applied with, say, ε = 1/2).¹
Theorem 3.2 ([30]). For any ε > 0, γ < 1 and 1 ≤ p ≤ ∞ there exists a randomized polynomial-time
reduction from GapSVP_{γ′}^2 to GapSVP_γ^p, where γ′ = (1 − ε)γ.

3.1   Basic SVP
As already mentioned, our reduction is crucially based on a hardness result of a variant of SVP stemming
from Khot’s work [19]. Instances of this variant have properties that make it possible to amplify the gap
using the tensor product. The following theorem summarizes the hardness result on which our proof is
based. For a proof the reader is referred to Section 4.
Theorem 3.3 ([19]). There are a constant γ < 1 and a polynomial-time randomized reduction from SAT
to SVP outputting a lattice basis B, satisfying L(B) ⊆ Z^n for some integer n, and an integer d, such that:

   1. For any YES instance of SAT, with probability at least 9/10, λ_1(L(B)) ≤ γ · √d.
   2. For any NO instance of SAT, with probability at least 9/10, for every nonzero vector v ∈ L(B),
          • v has at least d nonzero coordinates, or
          • all coordinates of v are even and at least d/4 of them are nonzero, or
          • all coordinates of v are even and ‖v‖_2 ≥ d.

      In particular, λ_1(L(B)) ≥ √d.

3.2   Boosting the SVP hardness factor
As mentioned before, we boost the hardness factor using the tensor product of lattices. For a lattice L we
denote by L^{⊗k} the k-fold tensor product of L. An immediate corollary of inequality (2.2) is that if (B, d)
is a YES instance of the SVP variant in Theorem 3.3, and L = L(B), then

                                          λ_1(L^{⊗k}) ≤ γ^k d^{k/2} .                                 (3.1)

For the case in which (B, d) is a NO instance we will show that any nonzero vector of L^{⊗k} has norm at
least d^{k/2}, i. e.,
                                          λ_1(L^{⊗k}) ≥ d^{k/2} .                                 (3.2)

This yields a gap of γ^k between the two cases. Inequality (3.2) follows easily by induction from the
central lemma below, which shows that NO instances “tensor nicely.”
  ¹We note that our results can be shown directly for any 1 < p < ∞ without using Theorem 3.2 by essentially the same proof.




Lemma 3.4. Let (B, d) be a NO instance of the SVP variant given in Theorem 3.3, and denote by L1 the
lattice generated by the basis B. Then for any lattice L2,

                                       λ_1(L1 ⊗ L2) ≥ √d · λ_1(L2) .

    The proof of this lemma is based on some properties of sublattices of NO instances which are
established in the following claim.

Claim 3.5. Let (B, d) be a NO instance of the SVP variant given in Theorem 3.3, and let L ⊆ L(B) be a
sublattice of rank r > 0. Then at least one of the following properties holds:

   1. Every basis matrix of L has at least d nonzero rows (i. e., rows that are not all zero).

   2. Every basis matrix of L contains only even entries and has at least d/4 nonzero rows.

   3. det(L) ≥ d^{r/2}.

Proof. Assume that L has neither of the first two properties. Our goal is to show that the third
property holds. Since the first property does not hold, we have r < d and also that any vector in L has
fewer than d nonzero coordinates. By Theorem 3.3, this implies that L ⊆ 2 · Z^n. By the assumption
that the second property does not hold, there must exist a basis of L that has fewer than d/4 nonzero
rows. Therefore, all nonzero vectors in L have fewer than d/4 nonzero coordinates, and hence have
norm at least d, again by Theorem 3.3. We conclude that λ_1(L) ≥ d, and by Minkowski’s First Theorem
(Theorem 2.1) and r < d we have

                                  det(L) ≥ (λ_1(L)/√r)^r ≥ d^{r/2} .

Proof of Lemma 3.4. Let v be an arbitrary nonzero vector in L1 ⊗ L2. Our goal is to show that ‖v‖_2 ≥
√d · λ_1(L2). We can write v in matrix notation as B1 X B2^T, where the integer matrix B1 is a basis of
L1, B2 is a basis of L2, and X is an integer matrix of coefficients. Let U be a unimodular matrix for
which X = [H 0]U, where H is a matrix with full column rank. Thus, the vector v can be written as
B1 [H 0] (B2 U^T)^T. Since U^T is also unimodular, the matrices B2 and B2 U^T generate the same lattice. Now
remove from B2 U^T the columns corresponding to the zero columns in [H 0] and denote the resulting
matrix by B′2. Furthermore, denote the matrix B1 H by B′1. Observe that both of the matrices B′1 and B′2
are bases of the lattices they generate, i. e., they have full column rank. The vector v equals B′1 B′2^T, where
L′1 := L(B′1) ⊆ L1 and L′2 := L(B′2) ⊆ L2.
     Claim 3.5 guarantees that the lattice L′1 defined above has at least one of the three properties
mentioned in the claim. We show that ‖v‖_2 ≥ √d · λ_1(L′2) in each of these three cases. Then, by the fact
that λ_1(L′2) ≥ λ_1(L2), the lemma will follow.

Case 1: Assume that at least d of the rows in the basis matrix B′1 are nonzero. Then at least d of the
     rows of B′1 B′2^T are nonzero lattice points of L′2, and thus

                                          ‖v‖_2 ≥ √d · λ_1(L′2) .


Case 2: Assume that the basis matrix B′1 contains only even entries and has at least d/4 nonzero rows.
     Hence, at least d/4 of the rows of B′1 B′2^T are even multiples of nonzero lattice vectors from L′2.
     Therefore, every such row has ℓ_2 norm at least 2 · λ_1(L′2), and it follows that

                                 ‖v‖_2 ≥ √(d/4) · 2 · λ_1(L′2) = √d · λ_1(L′2) .

    The third case is based on the following central claim, which is similar to Proposition 1.1 in [13]. The
proof is based on an elementary matrix inequality relating the trace and the determinant of a symmetric
positive semidefinite matrix (see, e. g., [9, Page 47]).

Claim 3.6. Let L1 and L2 be two rank-r lattices generated by the bases U = (u1, . . . , ur) and W =
(w1, . . . , wr) respectively. Consider the vector v = ∑_{i=1}^r u_i ⊗ w_i in L1 ⊗ L2, which can be written as
U I_r W^T = U W^T in matrix notation. Then,

                                 ‖v‖_2 ≥ √r · (det(L1) · det(L2))^{1/r} .

Proof. Define the two r × r symmetric positive definite matrices G1 = U^T U and G2 = W^T W (known
as the Gram matrices of U and W ). By the fact that tr(AB) = tr(BA) for any matrices A and B and by
equation (2.1),

          ‖v‖₂² = tr((UW^T )(UW^T )^T ) = tr(G1 G2 ) = tr(G1 G2^{1/2} G2^{1/2} ) = tr(G2^{1/2} G1 G2^{1/2} ) ,

where G2^{1/2} is the positive square root of G2 . The matrix G = G2^{1/2} G1 G2^{1/2} is also symmetric and positive
definite, and as such it has r real and positive eigenvalues. We can thus apply the inequality of arithmetic
and geometric means to these eigenvalues to get

                         ‖v‖₂² = tr(G) ≥ r · det(G)^{1/r} = r · (det(G1 ) · det(G2 ))^{1/r} .

Taking the square root of both sides of this inequality completes the proof.
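As a quick numeric illustration of Claim 3.6 (a toy example of ours, not taken from the paper), the following sketch computes v = ∑ ui ⊗ wi for two small full-rank integer bases and compares ‖v‖₂ against √r · (det(L1 ) · det(L2 ))^{1/r}:

```python
import math

# Toy full-rank integer bases (columns are the basis vectors); these are
# illustrative choices of ours, not lattices from the paper.
U = [[2, 1],
     [0, 3]]
W = [[1, 1],
     [0, 2]]
r = 2

def col(M, j):
    return [row[j] for row in M]

def tensor(a, b):
    return [ai * bj for ai in a for bj in b]

# v = u1 (x) w1 + u2 (x) w2, i.e., the vectorization of U W^T
v = [0] * (len(U) * len(W))
for i in range(r):
    v = [x + y for x, y in zip(v, tensor(col(U, i), col(W, i)))]

norm_v = math.sqrt(sum(x * x for x in v))

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# For a full-rank lattice, det(L) = |det(basis matrix)|
bound = math.sqrt(r) * (abs(det2(U)) * abs(det2(W))) ** (1 / r)
assert norm_v >= bound
print(norm_v, bound)
```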

    Equipped with Claim 3.6 we turn to deal with the third case. In order to bound from below the norm
of v, we apply the claim to its matrix form B01 B02 T with the lattices L01 and L02 as above.

Case 3: Assume that the lattice L01 satisfies det(L01 ) ≥ d r/2 , where r denotes its rank. Combining
     Claim 3.6 and Minkowski’s First Theorem we have that

                           √                         1/r √                 λ1 (L0 ) √
                  kvk2 ≥    r · det(L01 ) · det(L02 )    ≥ r · (d r/2 )1/r · √ 2 = d · λ1 (L02 ) ,
                                                                                r

      and this completes the proof of the lemma.


3.3   Proof of the main theorem
Proof of Theorem 3.1. Recall that it suffices to prove the theorem for p = 2. Given a SAT instance of size
n, we apply the reduction from Theorem 3.3 and obtain in time poly(n) a pair (B, d) where B is a basis of
a poly(n)-dimensional lattice. We then output (B^{⊗k} , d^k ), where B^{⊗k} is the k-fold tensor product of B, i. e.,
a basis of the lattice L(B)^{⊗k} . The dimension of this lattice is poly(n^k ), and combining inequalities (3.1)
and (3.2) we infer a gap of 2^{ck} .
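The tensoring step itself is mechanical; a minimal sketch (with a toy basis of our own) shows how the k-fold Kronecker power of a basis matrix blows up the dimension from m to m^k while the threshold becomes d^k:

```python
# Illustrative sketch (toy basis of ours, not the paper's reduction): the
# k-fold tensor product of a lattice basis is the k-fold Kronecker power
# of its basis matrix.
def kron(A, B):
    """Kronecker product of two matrices given as lists of rows."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def tensor_power(B, k):
    out = B
    for _ in range(k - 1):
        out = kron(out, B)
    return out

B = [[1, 2],
     [0, 3]]          # toy 2-dimensional basis
B3 = tensor_power(B, 3)
print(len(B3), len(B3[0]))   # dimension grows from 2 to 2^3 = 8
```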

Proof of Theorem 1.1. For item 1, choose k to be a sufficiently large constant and apply Theorem 3.1.
This shows that any constant factor approximation algorithm to SVP implies a two-sided error algorithm
for SAT. Using known self-reducibility properties of SAT (see, e. g., [29, Chapter 11]), this also implies
a one-sided error polynomial-time algorithm for SAT. For item 2, apply Theorem 3.1 with k = (log n)^{1/ε}
(where n is the size of the input SAT instance) and let N = n^{Ck} be the dimension of the output lattice.
Since
                           k = ((log N)/C)^{1/(1+ε)} > ((log N)/C)^{1−ε} ,

the gap we obtain as a function of the dimension N is 2^{Ω((log N)^{1−ε})} . Therefore, an algorithm that
approximates SVP better than this gap implies a randomized SAT algorithm running in time 2^{poly(log n)} ,
and hence the desired containment NP ⊆ RTIME(2^{poly(log n)} ). Item 3 follows similarly by applying
Theorem 3.1 with k = n^δ for all δ > 0.


4     Proof of Theorem 3.3
In this section we prove Theorem 3.3. The proof is essentially the same as the one in [19] with minor
modifications.


4.1   Comparison with Khot’s theorem
For the reader familiar with Khot’s proof, we now describe how Theorem 3.3 differs from the one in [19].
First, our theorem is only stated for the `2 norm (since we use Theorem 3.2 to extend the result to other
norms). Second, the YES instances of Khot had another property that we do not need here (namely, that
the coefficient vector of the short lattice vector is also short). Third, as a result of the augmented tensor
product, Khot’s theorem includes an extra parameter k that specifies the number of times the lattice is
supposed to be tensored with itself. Since we do not use the augmented tensor product, we simply fix
k to be some constant. In more detail, we choose the number of columns in the BCH code to be d^{O(1)} ,
as opposed to d^{O(k)} . This eventually leads to our improved hardness factor. Finally, the third possibility
in our NO case is different from the one in Khot's theorem (which says that there exists a coordinate
with absolute value at least d^{O(k)} ). We note that coordinates with huge values are used several times in
Khot’s construction in order to effectively restrict a lattice to a subspace. We instead work directly with
the restricted lattice, making the reduction somewhat cleaner.


4.2     The proof
The proof of Theorem 3.3 proceeds in three steps. In the first, a variant of the Exact Set Cover problem,
which is known to be NP-hard, is reduced to a gap variant of CVP. In the second step we construct a basis
Bint of a lattice which, informally, contains many short vectors in the YES case, and few short vectors in
the NO case. Finally, in the third step we complete the reduction by taking a random sublattice.

4.2.1    Step 1
First, consider the following variant of Exact Set Cover. Let η > 0 be an arbitrarily small constant.
An instance of the problem is a pair (S, d), where S = {S1 , . . . , Sn0 } is a collection of subsets of some
universe [n00 ] = {1, . . . , n00 }, and d is a positive integer. In YES instances, there exists S0 ⊆ S of size ηd
that covers each element of the universe exactly once. In NO instances, there is no S0 ⊆ S of size less
than d that covers all elements of the universe. This problem is known to be NP-hard for an arbitrarily
small 0 < η < 1 and for n0 = O(d) [8]. Moreover, it is easy to see that the problem remains NP-hard if
we fix η to be any negative power of 2 and restrict d to be a power of 2. Thus, to prove Theorem 3.3, it
suffices to reduce from this problem.
    In the first step we use a well-known reduction from the above variant of Exact Set Cover to a variant
of CVP. For an instance (S, d) we identify S with the n00 × n0 matrix over {0, 1} whose columns are the
characteristic vectors of the sets in S. The reduction outputs an instance (BCVP , t), where BCVP is a basis
generating the lattice {y ∈ Z^{n0} : Sy = 0} and t is some integer vector satisfying St = −(1, 1, . . . , 1). (If no
such t exists, the reduction outputs an arbitrary NO instance.) We note that given S the basis BCVP can be
constructed in polynomial time (see, e. g., [22, Lemma 3.1]).
Lemma 4.1. If (S, d) is a YES instance of the above variant of Exact Set Cover, then there is a lattice
vector z ∈ L(BCVP ) such that z − t is a {0, 1} vector and has exactly ηd coordinates equal to 1. If (S, d)
is a NO instance, then for any lattice vector z ∈ L(BCVP ) and any nonzero integer j0 , the vector z + j0t
has at least d nonzero coordinates.
Proof. If (S, d) is a YES instance then there exists a vector y ∈ {0, 1}^{n0} with exactly ηd coordinates equal
to 1 for which Sy = (1, 1, . . . , 1). This implies that S(y+t) = 0, so z = y+t is the required lattice vector. On
the other hand, if (S, d) is a NO instance, then for any z ∈ L(BCVP ) we have S(z + j0t) = − j0 · (1, 1, . . . , 1).
This implies that the nonzero coordinates of z + j0t correspond to a cover S0 ⊆ S of all elements in [n00 ],
and hence their number must be at least d.
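To make the reduction concrete, here is a toy instance (our own, hypothetical) with universe {0, 1, 2} and sets {0,1}, {2}, {1,2}, {0}: the columns of S are the characteristic vectors, t satisfies St = −(1, 1, 1), and an exact cover yields a lattice vector z with z − t a {0, 1} vector:

```python
# Hypothetical toy instance of the Set Cover -> CVP step (not from the
# paper). Rows of S are universe elements; columns are the sets
# {0,1}, {2}, {1,2}, {0}.
S = [[1, 0, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 1, 0]]

def mul(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

t = [-1, -1, 0, 0]
assert mul(S, t) == [-1, -1, -1]       # St = -(1,1,1)

y = [0, 0, 1, 1]                       # exact cover by the sets {1,2} and {0}
assert mul(S, y) == [1, 1, 1]

z = [yi + ti for yi, ti in zip(y, t)]  # z = y + t
assert mul(S, z) == [0, 0, 0]          # z lies in the lattice {v : Sv = 0}
print(z)                               # z - t = y is a {0,1} vector
```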

4.2.2    Step 2
The second step of the reduction is based on BCH codes, as described in the following theorem.
Theorem 4.2 ([5, Page 255]). Let N, d, h be integers satisfying h = (d/2) log2 N. Then there exists an
efficiently constructible matrix PBCH of size h × N with {0, 1} entries such that the rows of the matrix are
linearly independent over GF(2) and any d columns of the matrix are linearly independent over GF(2).
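For intuition, the d = 2 case of such a matrix is easy to build by hand (a toy stand-in of ours, not an actual BCH code): over GF(2), any two columns are linearly independent exactly when all columns are distinct and nonzero, so the binary expansions of 1, . . . , N − 1 serve as columns, with h = (d/2) log2 N = log2 N rows:

```python
from itertools import combinations

# Toy parity-check matrix for d = 2, N = 8: columns are the binary
# expansions of 1..7 (the zero column is dropped), h = log2(8) = 3 rows.
h, N = 3, 8
P = [[(c >> bit) & 1 for c in range(1, N)] for bit in range(h)]

columns = [tuple(row[j] for row in P) for j in range(N - 1)]
# Any two columns are linearly independent over GF(2): distinct and nonzero.
assert all(c != (0,) * h for c in columns)
assert all(a != b for a, b in combinations(columns, 2))
print(len(columns))
```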
   Let BBCH be a basis of the lattice {y ∈ Z^N : PBCH y ≡ 0 (mod 2)}. Such a basis can be easily
constructed in polynomial time by duality (see the preliminaries in [25]). The next lemma states some
properties of this lattice.


Lemma 4.3. Every nonzero vector in L(BBCH ) either has at least d nonzero coordinates or all of its
coordinates are even. Also, for any r ≥ 1 it is possible to find in polynomial time with probability at least
99/100 a vector s ∈ {0, 1}^N , such that there are at least

                                          (1/(100 · 2^h )) · (N choose r)

distinct lattice vectors z ∈ L(BBCH ) satisfying that z − s is a {0, 1} vector with exactly r coordinates equal
to 1.

Proof. Let y ∈ L(BBCH ) be a nonzero lattice vector. Observe that if y has an odd coordinate then its odd
coordinates correspond to column vectors of PBCH that sum to the zero vector over GF(2). Therefore,
their number must be at least d. This proves the first statement.
    We now prove the second statement. Consider the set L(BBCH ) ∩ {0, 1}^N whose size is 2^{N−h} (since
as a subset of GF(2)^N it is the kernel of PBCH whose dimension is N − h). In order to choose s we first
uniformly pick a vector in this set and then we uniformly pick r of its coordinates and flip them. For a
vector s ∈ {0, 1}^N let As denote the number of ways in the above process to obtain s among the 2^{N−h} · (N choose r)
possible ways. The probability that the chosen s satisfies As ≤ (1/(100 · 2^h )) · (N choose r) is

              ∑ As / (2^{N−h} · (N choose r)) ≤ 2^N · ((N choose r)/(100 · 2^h )) / (2^{N−h} · (N choose r)) = 1/100 ,

where the sum is over all s such that As ≤ (1/(100 · 2^h )) · (N choose r). Thus with probability at least
99/100 we obtain an s that satisfies

                                          As > (1/(100 · 2^h )) · (N choose r) .

It remains to notice that such an s also satisfies the requirement in the statement of the lemma (since from
each vector in {0, 1}N of Hamming distance r from s we can obtain z ∈ L(BBCH ) as in the statement by
simply adding 2 to a subset of its coordinates).
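The last observation can be traced on a toy example (a hypothetical 4-coordinate parity code of ours, much weaker than a BCH code): a binary codeword c at Hamming distance r from s gives a lattice vector z such that z − s is a {0, 1} vector with exactly r ones, by adding 2 precisely where c = 0 and s = 1:

```python
# Toy walk-through of the final observation. The "code" here is the set
# of binary vectors of even weight (parity check [1,1,1,1]); the lattice
# is all integer vectors with even coordinate sum.
c = [1, 1, 0, 0]                         # a binary codeword
s = [1, 0, 1, 0]                         # differs from c in r = 2 coords
r = sum(ci != si for ci, si in zip(c, s))
z = [ci + 2 * (ci == 0 and si == 1) for ci, si in zip(c, s)]
diff = [zi - si for zi, si in zip(z, s)]
assert sum(z) % 2 == 0                   # z stays in the mod-2 lattice
assert all(x in (0, 1) for x in diff) and sum(diff) == r
print(z, diff)
```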

     We now construct the intermediate lattice generated by a basis matrix Bint (see Figure 1). Let η be a
sufficiently small constant, say 1/128. Let r = (3/4 + η)d and choose s as in Lemma 4.3. We choose the
parameters of BBCH to be N = d^{2/η} , d, and h = (d/2) log2 N. Consider a matrix whose upper left block
is 2 · BCVP , whose lower right block is BBCH , and whose other entries are zeros. Adding to this matrix the
column given by the concatenation of 2 · t and s, we obtain the basis matrix Bint of the intermediate lattice.
     The following two lemmas describe the properties of L(Bint ). The first one states that if the CVP
instance is a YES instance then L(Bint ) contains many short vectors. Define γ = (3/4 + 5η)^{1/2} < 1. A
nonzero lattice vector of L(Bint ) is called good if it has ℓ2 norm at most γ · √d, has {0, 1, 2} coordinates,
and has at least one coordinate equal to 1.

Lemma 4.4. If the CVP instance is a YES instance and the vector s has the property from Lemma 4.3,
then there are at least (1/(100 · 2^h )) · (N choose r) good lattice vectors in L(Bint ).





[Figure 1 shows the block structure of Bint : an upper-left block 2 · BCVP of n0 rows with appended last
column 2t, a lower-right block BBCH of N rows with appended last column s, and zeros elsewhere.]

                                                      Figure 1: Bint

Proof. Assume that the CVP instance is a YES instance. By Lemma 4.1, this implies that there exists y
such that BCVP y − t is a {0, 1} vector and has exactly ηd coordinates equal to 1. Let s be as in Lemma 4.3,
so there are at least (1/(100 · 2^h )) · (N choose r) distinct choices of x for which BBCH x − s is a {0, 1} vector with exactly r
coordinates equal to 1. For every such x, the lattice vector²

                             Bint (y ◦ x ◦ (−1)) = (2(BCVP y − t)) ◦ (BBCH x − s)

has {0, 1, 2} coordinates, has at least one coordinate equal to 1, and has ℓ2 norm √(4ηd + r) = γ · √d, as
required.

     The second lemma shows that if the CVP instance is a NO instance then L(Bint ) contains few vectors
that do not have the property from Theorem 3.3, Item 2. We call such vectors annoying. In more detail, a
lattice vector of L(Bint ) is annoying if it satisfies all of the following:
    • The number of its nonzero coordinates is smaller than d.

    • Either it contains an odd coordinate or the number of its nonzero coordinates is smaller than d/4.

    • Either it contains an odd coordinate or it has norm smaller than d.
                                                                                   0
Lemma 4.5. If the CVP instance is a NO instance, then there are at most d d/4 · N+n
                                                                                d/4 annoying lattice
vectors in L(Bint ).
Proof. Assume that the CVP instance is a NO instance and let Bint x be an annoying vector with coefficient
vector x = y ◦ z ◦ (j0 ). We have

                                 Bint x = (2(BCVP y + j0 t)) ◦ (BBCH z + j0 s) .

By Lemma 4.1, if j0 ≠ 0 then the vector BCVP y + j0 t has at least d nonzero coordinates, so it is not an
annoying vector. Thus we can assume that j0 = 0 and therefore Bint x = (2 BCVP y) ◦ (BBCH z).
  2 We use ◦ to denote concatenation of vectors.




    Since Bint x is annoying we know that it has fewer than d nonzero coordinates, so by Lemma 4.3 we
get that all coordinates of Bint x are even. Again by the definition of an annoying vector, we conclude that
fewer than d/4 of the coordinates of Bint x are nonzero and all of them have absolute value smaller than d.
Thus, we get a bound of d^{d/4} · (N+n0 choose d/4) on the number of possible choices for Bint x, and this completes the
proof of the lemma.

4.2.3   Step 3
In the third step we construct the final SVP instance as claimed in Theorem 3.3. By Lemma 4.4, the
number of good vectors in the YES case is at least

    (1/(100 · 2^h )) · (N choose r) = (1/(100 · 2^{(d log2 N)/2} )) · (N choose (3/4 + η)d)
                                    ≥ N^{(3/4+η)d} / (100 · d^d · N^{d/2} ) = N^{(1/4+η)d} / (100 · d^d ) =: G .

By Lemma 4.5, in the NO case there are at most A := d^{d/4} · (N+n0 choose d/4) annoying vectors. By our choice of N
and the fact that n0 = O(d), for sufficiently large d we have n0 ≤ N and hence

                                      A ≤ d^{d/4} · (2N)^{d/4} ≤ 10^{−5} · G .
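As a sanity check of the last inequality (one concrete, hypothetical parameter setting of ours, with n0 = 4d standing in for n0 = O(d)), exact integer arithmetic confirms A ≤ 10^{−5} · G for d = 128 and η = 1/128:

```python
from math import comb

# d and eta = 1/128 as in the construction; n0 = 4d is a hypothetical
# O(d) value. Then N = d^{2/eta} = d^256, G = N^{(1/4+eta)d}/(100 d^d)
# with exponent (1/4 + 1/128)*128 = 33, and A = d^{d/4} * C(N+n0, d/4).
d = 128
n0 = 4 * d
N = d ** 256
G = N ** 33 // (100 * d ** d)
A = d ** (d // 4) * comb(N + n0, d // 4)
assert 10 ** 5 * A <= G
print(len(str(A)), len(str(G)))  # both are integers with thousands of digits
```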
    Choose a prime q in the interval [100A, G/100] and let w ∈ Z^{n0+N} be a vector whose coordinates are
chosen randomly and uniformly from the range {0, . . . , q − 1}. The final output of the reduction is a basis
B of the lattice {x ∈ L(Bint ) : ⟨w, x⟩ ≡ 0 (mod q)}.

Lemma 4.6. If the CVP instance is a YES instance and the vector s has the property from Lemma 4.3,
then with probability at least 99/100 over the choice of the vector w, there exists a lattice vector in L(B)
with ℓ2 norm at most γ · √d.
Proof. If the CVP instance is a YES instance and s has the property from Lemma 4.3, then by Lemma 4.4
there are at least G good vectors in L(Bint ), i. e., vectors with ℓ2 norm at most γ · √d, coordinates
from {0, 1, 2}, and at least one coordinate equal to 1. For each good vector x, consider the event that
⟨w, x⟩ ≡ 0 (mod q). Since a good vector is nonzero, we clearly have that each such event occurs with
probability 1/q. Moreover, observe that these vectors are pairwise linearly independent modulo q and
therefore these events are pairwise independent. Therefore, using Chebyshev's Inequality, with probability
at least 1 − q/G ≥ 99/100, at least one of these events happens, and we are done.
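The 1/q probability used above can be verified exhaustively in a tiny case (a toy of ours, with q far smaller than the primes the reduction uses): for any fixed x that is nonzero modulo q, exactly a 1/q fraction of all w ∈ Z_q^n satisfy ⟨w, x⟩ ≡ 0 (mod q):

```python
# Exhaustive count for q = 5, n = 3 and a fixed x nonzero mod q.
q = 5
x = [1, 2, 0]
count = sum(
    (a * x[0] + b * x[1] + c * x[2]) % q == 0
    for a in range(q) for b in range(q) for c in range(q)
)
assert count == q ** 3 // q   # exactly 1/q of all q^3 choices of w
print(count)
```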

Lemma 4.7. If the CVP instance is a NO instance, then with probability at least 99/100 over the choice
of the vector w, for every nonzero lattice vector v ∈ L(B),
    • v has at least d nonzero coordinates, or
    • all coordinates of v are even and at least d/4 of them are nonzero, or
    • all coordinates of v are even and ‖v‖₂ ≥ d.
Proof. The probability that a nonzero lattice vector x ∈ L(Bint ) satisfies ⟨w, x⟩ ≡ 0 (mod q) is 1/q. By
the union bound, the probability that at least one of the annoying vectors of L(Bint ) belongs to L(B) is at
most A/q ≤ 1/100. Therefore, with probability at least 99/100, no lattice vector in L(B) is annoying,
and the lemma follows.


    Lemmas 4.6 and 4.7 imply Theorem 3.3.

Acknowledgements
We thank Daniele Micciancio and Mario Szegedy for useful comments. We also thank an anonymous
referee for suggesting to avoid the use of huge coordinates in Khot’s proof, which in turn made Claim 3.5
simpler.


References
 [1] D ORIT A HARONOV AND O DED R EGEV: Lattice problems in NP ∩ coNP. J. ACM, 52(5):749–765,
     2005. Preliminary version in FOCS’04. [doi:10.1145/1089023.1089025] 515, 516

 [2] M IKLÓS A JTAI: The shortest vector problem in L2 is NP-hard for randomized reductions. In Proc.
     30th STOC, pp. 10–19. ACM Press, 1998. ECCC. [doi:10.1145/276698.276705] 514

 [3] M IKLÓS A JTAI: Generating hard instances of lattice problems. In Complexity of Computations
     and Proofs, volume 13 of Quad. Mat., pp. 1–32. Dept. Math., Seconda Univ. Napoli, Caserta, 2004.
     Preliminary version in STOC’96. 514

 [4] M IKLÓS A JTAI , R AVI K UMAR , AND D. S IVAKUMAR: A sieve algorithm for the shortest lattice
     vector problem. In Proc. 33rd STOC, pp. 601–610. ACM Press, 2001. [doi:10.1145/380752.380857]
     514

 [5] N OGA A LON AND J OEL H. S PENCER: The Probabilistic Method. Wiley-Interscience Series
     in Discrete Mathematics and Optimization. Wiley-Interscience, New York, second edition, 2000.
     [doi:10.1002/0471722154] 524

 [6] P ER AUSTRIN AND S UBHASH K HOT: A simple deterministic reduction for the gap minimum
     distance of code problem. In Proc. 38th Internat. Colloq. on Automata, Languages and Programming
     (ICALP’11), pp. 474–485. Springer, 2011. [doi:10.1007/978-3-642-22006-7_40] 517

 [7] L ÁSZLÓ BABAI: On Lovász’ lattice reduction and the nearest lattice point problem. Combinatorica,
     6(1):1–13, 1986. Preliminary version in STACS’85. [doi:10.1007/BF02579403] 514

 [8] M IHIR B ELLARE , S HAFI G OLDWASSER , C ARSTEN L UND , AND A LEX RUSSELL: Efficient
     probabilistically checkable proofs and applications to approximations. In Proc. 25th STOC, pp.
     294–304. ACM Press, 1993. [doi:10.1145/167088.167174] 524

 [9] R AJENDRA B HATIA: Matrix Analysis. Springer, 1997. 522

[10] J IN -Y I C AI AND A JAY N ERURKAR: Approximating the SVP to within a factor (1+1/dimε ) is
     NP-hard under randomized reductions. J. Comput. System Sci., 59(2):221–239, 1999. Preliminary
     version at CCC’98. [doi:10.1006/jcss.1999.1649] 514


[11] Q I C HENG AND DAQING WAN: A deterministic reduction for the gap minimum distance problem.
     In Proc. 41st STOC, pp. 33–38. ACM Press, 2009. [doi:10.1145/1536414.1536421] 517

[12] H ENRI C OHEN: A Course in Computational Algebraic Number Theory. Volume 138 of Graduate
     Texts in Mathematics. Springer-Verlag, Berlin, 1993. 517

[13] E HUD DE S HALIT AND O RI PARZANCHEVSKI: On tensor products of semistable lattices, 2006.
     Preprint available at author’s home page. 516, 522

[14] I RIT D INUR: Approximating SVP∞ to within almost-polynomial factors is NP-hard. Theoret. Com-
     put. Sci., 285(1):55–71, 2002. Preliminary version in CIAC’00. [doi:10.1016/S0304-3975(01)00290-
     0] 514, 515

[15] I RIT D INUR , G UY K INDLER , R AN R AZ , AND S HMUEL S AFRA: Approximating CVP to within
     almost-polynomial factors is NP-hard. Combinatorica, 23(2):205–243, 2003. Preliminary version
     in FOCS’98. [doi:10.1007/s00493-003-0019-y] 515, 516

[16] I LYA D UMER , DANIELE M ICCIANCIO , AND M ADHU S UDAN: Hardness of approximating the
     minimum distance of a linear code. IEEE Trans. Inform. Theory, 49(1):22–37, 2003. Preliminary
     version in FOCS’99. [doi:10.1109/TIT.2002.806118] 515, 516

[17] O DED G OLDREICH AND S HAFI G OLDWASSER: On the limits of nonapproximability of lat-
     tice problems. J. Comput. System Sci., 60(3):540–563, 2000. Preliminary version in STOC’98.
     [doi:10.1006/jcss.1999.1686] 515, 516

[18] R AVI K ANNAN: Minkowski’s convex body theorem and integer programming. Math. Oper. Res.,
     12(3):415–440, 1987. [doi:10.1287/moor.12.3.415] 514

[19] S UBHASH K HOT: Hardness of approximating the shortest vector problem in lattices. J. ACM,
     52(5):789–808, 2005. Preliminary version in FOCS’04. [doi:10.1145/1089023.1089027] 514, 515,
     516, 517, 520, 523

[20] A RJEN K. L ENSTRA , H ENDRIK W. L ENSTRA , J R ., AND L ÁSZLÓ L OVÁSZ: Factoring
     polynomials with rational coefficients. Mathematische Annalen, 261(4):515–534, 1982.
     [doi:10.1007/BF01457454] 514

[21] DANIELE M ICCIANCIO: The shortest vector in a lattice is hard to approximate to within
     some constant. SIAM J. Comput., 30(6):2008–2035, 2001. Preliminary version in FOCS’98.
     [doi:10.1137/S0097539700373039] 514, 516

[22] DANIELE M ICCIANCIO: Efficient reductions among lattice problems.  In Proc. 19th
     Ann. ACM-SIAM Symp. on Discrete Algorithms (SODA’08), pp. 84–93. ACM Press, 2008.
     [ACM:1347082.1347092] 524

[23] DANIELE M ICCIANCIO: Inapproximability of the shortest vector problem: Toward a deterministic
     reduction. Theory of Computing, 8(22):487–512, 2012. ECCC. [doi:10.4086/toc.2012.v008a022]
     517


[24] DANIELE M ICCIANCIO AND S HAFI G OLDWASSER: Complexity of Lattice Problems: A Crypto-
     graphic Perspective. Volume 671 of The Kluwer International Series in Engineering and Computer
     Science. Kluwer Academic Publishers, Boston, MA, 2002. 518

[25] DANIELE M ICCIANCIO AND O DED R EGEV: Lattice-based cryptography. In D. J. B ERN -
     STEIN AND J. B UCHMANN , editors, Post-Quantum Cryptography, pp. 147–191. Springer, 2009.
     [doi:10.1007/978-3-540-88702-7_5] 524

[26] DANIELE M ICCIANCIO AND PANAGIOTIS VOULGARIS: A deterministic single exponential time
     algorithm for most lattice problems based on Voronoi cell computations. In Proc. 42nd STOC, pp.
     351–358. ACM Press, 2010. ECCC. [ACM:1806689.1806738] 514

[27] J OHN W. M ILNOR AND DALE H USEMÖLLER: Symmetric Bilinear Forms. Springer, Berlin, 1973.
     519

[28] H ERMANN M INKOWSKI: Geometrie der Zahlen. I. B. G. Teubner, Leipzig, 1896. 518

[29] C HRISTOS H. PAPADIMITRIOU: Computational Complexity. Addison Wesley Longman, 1994. 523

[30] O DED R EGEV AND R ICKY ROSEN: Lattice problems and norm embeddings. In Proc. 38th STOC,
     pp. 447–456. ACM Press, 2006. [doi:10.1145/1132516.1132581] 515, 520

[31] C LAUS -P ETER S CHNORR: A hierarchy of polynomial time lattice basis reduction algorithms.
     Theoret. Comput. Sci., 53(2-3):201–224, 1987. [doi:10.1016/0304-3975(87)90064-8] 514

[32] P ETER VAN E MDE B OAS: Another NP-complete partition problem and the complexity of computing
     short vectors in a lattice. Technical Report 81-04, Math Inst., University of Amsterdam, Amsterdam,
     1981. Available at author’s home page. 514


AUTHORS

      Ishay Haviv
      School of Computer Science, The Academic College of Tel Aviv—Yaffo
      Tel Aviv, Israel
      havivish tau ac il





    Oded Regev
    Professor
    Blavatnik School of Computer Science, Tel Aviv University,
      and
    CNRS, ENS Paris
    regev di ens fr
    http://www.cs.tau.ac.il/~odedr


ABOUT THE AUTHORS

    I SHAY H AVIV graduated from Tel Aviv University in 2011 under the supervision of Oded
        Regev. His research interests include computational aspects of lattices, coding theory,
        and other topics in theoretical computer science.


    O DED R EGEV graduated from Tel Aviv University in 2001 under the supervision of Yossi
       Azar. He spent two years as a postdoc at the Institute for Advanced Study, Princeton, and
       one year at the University of California, Berkeley. He is currently with the cryptography
       group at the École Normale Supérieure, Paris. His research interests include quantum
       computation, computational aspects of lattices, and other topics in theoretical computer
       science. He also enjoys photography, especially of his baby girl.



