Authors: Eli Ben-Sasson, Michael Viderman
License: CC-BY-3.0
THEORY OF COMPUTING, Volume 5 (2009), pp. 239–255
www.theoryofcomputing.org
Tensor Products of Weakly Smooth
Codes are Robust∗
Eli Ben-Sasson† Michael Viderman†
Received: October 2, 2008; published: December 3, 2009.
Abstract: We continue the study of robustly testable tensor codes and expand the class of
base codes that can be used as a starting point for the construction of locally testable codes
via robustly testable tensor products. In particular, we show that all unique-neighbor ex-
pander codes and all locally correctable codes, when tensored with any other good-distance
code, are robustly testable and hence can be used to construct locally testable codes. Pre-
vious work by Dinur et al. (2006) required stronger expansion properties to obtain locally
testable codes.
Our proofs follow by defining the notion of weakly smooth codes that generalize the
smooth codes of Dinur et al. We show that weakly smooth codes are sufficient for con-
structing robustly testable tensor codes. Using the weaker definition, we are able to expand
the family of base codes to include the aforementioned ones.
ACM Classification: E.4
AMS Classification: 68Q99
Key words and phrases: Linear code, tensor code, expander code
1 Introduction
A linear code over a finite field F is a linear subspace C ⊆ F n . A code is locally testable if given a word
x ∈ F n one can verify whether x ∈ C by reading only a few (randomly chosen) symbols from x. More
precisely such a code has a tester, which is a randomized algorithm with oracle access to the received
∗ A preliminary version of this paper appeared in the Proceedings of APPROX-RANDOM 2008 [3].
† Research of both authors supported in part by a European Community International Reintegration Grant, an Alon Fellow-
ship, and grants by the Israeli Science Foundation (grant number 679/06) and by the US-Israel Binational Science Foundation
(grant number 2006104).
© 2009 Eli Ben-Sasson and Michael Viderman
Licensed under a Creative Commons Attribution License. DOI: 10.4086/toc.2009.v005a012
word x. The tester reads at most q symbols from x and, based on this "local view," decides whether x ∈ C or not.
It should accept codewords with probability one, and reject words that are "far" (in Hamming distance) from the code
with "noticeable" probability.
Locally Testable Codes (LTCs) are intimately related to PCPs and were implicit already in [1] (cf. [9,
Sec. 2.4]). This connection was explicitly studied by Goldreich and Sudan [11]. Since then, several con-
structions of LTCs have been suggested. (See [9] for an extensive survey of those constructions.) All
known efficient constructions of LTCs, i. e., those which achieve subexponential (i. e., exp(o(n))) code
length, rely on some form of “composition” of two (or more) codes. One of the simplest ways to com-
pose codes for the construction of LTCs is by use of the tensor product, as suggested by Ben-Sasson and
Sudan [2]. They introduced the notion of robust LTCs: An LTC is called robustly testable if whenever
the received word is far from the code, the “view” of the tester is far from an accepting view with no-
ticeable probability (see Definition 2.1). Ben-Sasson and Sudan showed in [2] that a code obtained by
tensoring three or more codes is robustly testable when the distances of the codes are big enough, and
used this result to construct LTCs. Then they considered the tensor product of two codes. Given two
linear codes R,C their tensor product R ⊗C consists of all matrices whose rows are codewords of R and
whose columns are codewords of C. If R and C are locally testable, we would like R ⊗ C to be locally
testable. The authors of [2] suggested the following test for the tensor product R ⊗ C and asked
whether this test is robust.
Test for R ⊗C:
• flip a coin
• if “heads,” select a random row; else select a random column
• accept if the row (column) belongs to R (or C, respectively).
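In code, one run of this test can be sketched as follows; a toy sketch over F_2 where both base codes are 3-bit repetition codes and the coin flip and index are passed in explicitly (an illustration, not codes from the paper):

```python
# Toy base codes: R and C are both the 3-bit repetition code over F_2.
# Membership oracles stand in for the codes R and C of the paper.
def in_R(row):
    return row[0] == row[1] == row[2]

def in_C(col):
    return col[0] == col[1] == col[2]

def tensor_test(M, heads, index):
    """One run of the test for R tensor C: on "heads" check row `index`
    against R, otherwise check column `index` against C."""
    if heads:
        return in_R(M[index])
    return in_C([row[index] for row in M])

# A matrix in the tensor code passes every possible coin flip and index choice.
codeword = [[1, 1, 1]] * 3
assert all(tensor_test(codeword, h, i) for h in (True, False) for i in range(3))

# A single flipped entry is caught whenever the test lands on row 1 or column 1.
corrupted = [[1, 1, 1], [1, 0, 1], [1, 1, 1]]
assert not tensor_test(corrupted, True, 1)   # row 1 = [1, 0, 1] is not in R
assert not tensor_test(corrupted, False, 1)  # column 1 = [1, 0, 1] is not in C
```

A real tester flips the coin and picks the index uniformly at random; they are explicit arguments here only so the example is deterministic.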
Paul Valiant [16] showed a surprising example of two linear codes R and C for which the test above
is not robust, by exhibiting a word x that is far from R ⊗C but such that the rows of x are very close to
R and the columns of x are very close to C. Additional examples give a code whose tensor product with
itself is not robust [5] and two good codes (with constant rate) whose tensor product is not robust [10].
Despite these examples, Dinur et al. showed in [6] that the above test is robust as long as one of
the base codes is smooth, according to a definition of the term introduced there (see Definition 5.1).
The family of smooth codes includes locally testable codes and certain codes constructed from expander
graphs with very good expansion properties. In this work we continue this line of research and enlarge
the family of base codes that result in robustly testable tensor codes and do this by working with a weaker
definition of smoothness (Definition 3.4). Using the weaker definition, we still manage to get similar
results as in [6] and do this using the same proof strategy as there. We are not aware of codes that are
weakly smooth but not smooth, although we conjecture such codes do exist. However, our weaker defi-
nition allows us to argue—in what we view as the main technical contributions of this paper (Section 6
and Section 7)—that a larger family of codes is suitable for forming robustly testable tensor codes. One
notable example is that our definition allows us to argue that any expander code with unique-neighbor
expansion (i. e., with expansion parameter γ < 1/2 as per Definition 2.3) is also weakly smooth, hence
can be used to construct robustly testable tensor products. We stress that unique-neighbor expansion is
the minimal requirement in terms of expansion needed to argue an expander code has good (i. e., con-
stant relative) distance, using currently known techniques, so our work shows all “combinatorially good”
expander codes1 can be used for the construction of robustly testable tensor products. In comparison, [6]
required stronger expansion parameters (γ < 1/4) of the kind needed to ensure an expander code is not
merely good in terms of its distance, but can also be decoded in linear time [15].
Another family of codes shown here to result in robustly testable tensor products of pairs of codes is
the family of locally correctable codes (LCCs), see Definition 7.1.
We end this section by pointing out that recently, tensor codes have played a role in the combinatorial
construction by Meir [13] of locally testable codes of quasilinear length. Better base codes may result in
LTCs with improved rate, hence the importance of broadening the class of base codes that can be used
to construct robustly testable tensor codes.
Organization of the paper.
In the following section we provide the now-standard definitions regarding robustly testable tensor
codes. In Section 3 we formally define weakly smooth codes and state our main results. In Section 4
we prove that weakly smooth codes form robustly testable tensor codes. Section 5 shows the smooth
codes of [6] are also weakly smooth. The last two sections prove that unique-neighbor expander codes
and locally correctable codes are weakly smooth.
2 Preliminary Definitions
Throughout this paper, F is a finite field and C, R ⊆ F^n are linear codes over F, i.e., linear subspaces of F^n. For x ∈ F^n let supp(x) = {i | x_i ≠ 0} and wt(x) = |supp(x)|. We define the distance between two words x, y ∈ F^n to be d(x, y) = wt(x − y) and the relative distance to be δ(x, y) = d(x, y)/n. The distance of a code is denoted d(C) and defined to be the minimal value of d(x, y) over distinct codewords x, y ∈ C. Similarly, the relative distance of the code is denoted δ(C) = d(C)/n. For x ∈ F^n and C ⊆ F^n, let δ_C(x) = min_{y∈C} δ(x, y) denote the relative distance of x from the code C. We note that d(C) = min_{c∈C\{0}} wt(c). We let dim(C) denote the dimension of C. The vector inner product between u1 and u2 is denoted by ⟨u1, u2⟩. For a code C let

C⊥ = {u ∈ F^n | ∀c ∈ C : ⟨u, c⟩ = 0}

be its dual code and let

C⊥_t = {u ∈ C⊥ | wt(u) = t}.

In a similar way we define

C⊥_{<t} = {u ∈ C⊥ | wt(u) < t} and C⊥_{≤t} = {u ∈ C⊥ | wt(u) ≤ t}.

For w ∈ F^n and S = {j1, j2, . . . , jm} ⊆ [n], where j1 < j2 < . . . < jm, we let w|_S = (w_{j1}, w_{j2}, . . . , w_{jm}) be
the restriction of w to the subset S. We let C|_S = {c|_S | c ∈ C} denote the projection of the code C to the
coordinates corresponding to S.
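These notions translate directly into code; a minimal sketch over F_2, with words as 0/1 lists and names mirroring the paper's notation (supp, wt, d, w|_S):

```python
# Toy implementations of the basic coding-theory notions just defined.
def supp(x):
    return {i for i, xi in enumerate(x) if xi != 0}

def wt(x):
    return len(supp(x))

def dist(x, y):
    # d(x, y) = wt(x - y); over F_2, subtraction is coordinatewise XOR.
    return wt([(a - b) % 2 for a, b in zip(x, y)])

def restrict(w, S):
    # w|_S: restriction of w to the coordinate set S, in increasing order.
    return [w[j] for j in sorted(S)]

x, y = [1, 0, 1, 0], [1, 1, 1, 1]
assert supp(x) == {0, 2} and wt(x) == 2
assert dist(x, y) == 2              # x and y differ on coordinates 1 and 3
assert restrict(y, {1, 3}) == [1, 1]
```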
1 Clearly, there exist non-unique-neighbor expander codes with good distance. However, the distance of these codes cannot
be argued merely using the combinatorial structure of the underlying parity check matrix.
2.1 Tensor Product of Codes
The definitions appearing here are standard in the literature on tensor-based LTCs.
For x ∈ F^m and y ∈ F^n we let x ⊗ y denote the tensor product of x and y, i.e., the n × m matrix M with entries
M(i, j) = x_j · y_i where (i, j) ∈ [n] × [m]. Let R ⊆ F^m and C ⊆ F^n be linear codes. We define the tensor
product code R ⊗ C to be the linear subspace spanned by the words r ⊗ c ∈ F^{n×m} for r ∈ R and c ∈ C. Some
immediate facts:
• The code R ⊗C consists of all n × m matrices over F whose rows belong to R and whose columns
belong to C.
• dim(R ⊗C) = dim(R) · dim(C).
• δ (R ⊗C) = δ (R) · δ (C).
Note that F^m ⊗ F^n is the set of all n × m matrices over F. Let M ∈ F^m ⊗ F^n
and let δ(M) = δ_{R⊗C}(M). Let δ^row(M) = δ_{R⊗F^n}(M) denote the distance of M from the space of matrices
whose rows are codewords of R; this is the expected distance of a random row of M from R. Similarly,
let δ^col(M) = δ_{F^m⊗C}(M).
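The first of the "immediate facts" above can be checked directly on a toy pair of codes; a minimal sketch over F_2, with R the 2-bit and C the 3-bit repetition code (an illustration, not codes from the paper):

```python
# For every r in R and c in C, the n x m matrix r tensor c must have all
# of its rows in R and all of its columns in C.
R = {(0, 0), (1, 1)}             # R is a subset of F^m with m = 2
C = {(0, 0, 0), (1, 1, 1)}       # C is a subset of F^n with n = 3

def tensor(r, c):
    # (r tensor c)(i, j) = r_j * c_i, an n x m matrix over F_2.
    return [[(rj * ci) % 2 for rj in r] for ci in c]

for r in R:
    for c in C:
        M = tensor(r, c)
        assert all(tuple(row) in R for row in M)          # rows lie in R
        assert all(tuple(col) in C for col in zip(*M))    # columns lie in C
```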
2.2 Robust Locally Testable Codes
Definition 2.1 (Robustness). Let M be a candidate codeword for R ⊗ C. The robustness of M is defined
as ρ(M) = (δ^row(M) + δ^col(M))/2, i.e., it is the expected distance of the tester's "view" from an accepting view. The code
R ⊗ C is robustly testable if there exists a constant α > 0 such that ρ(M)/δ(M) ≥ α for every M ∉ R ⊗ C.
The robustness of a tester T is defined as

ρ^T = min_{M ∉ R⊗C} ρ(M)/δ_{R⊗C}(M).
For further information and the motivation for the notion of robustness see [2, Section 2].
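The quantities in Definition 2.1 can be computed explicitly for small codes; a toy sketch over F_2 with R and C both 3-bit repetition codes, chosen only for illustration (the tolerance in the final check merely avoids floating-point noise):

```python
# delta_row(M), delta_col(M) and rho(M) = (delta_row + delta_col) / 2
# as in Definition 2.1, for toy repetition codes.
R = C = [(0, 0, 0), (1, 1, 1)]

def rel_dist(x, y):
    return sum(a != b for a, b in zip(x, y)) / len(x)

def delta_row(M):
    # Average relative distance of the rows of M from R.
    return sum(min(rel_dist(row, r) for r in R) for row in M) / len(M)

def delta_col(M):
    cols = list(zip(*M))
    return sum(min(rel_dist(col, c) for c in C) for col in cols) / len(cols)

def rho(M):
    return (delta_row(M) + delta_col(M)) / 2

assert rho([(1, 1, 1)] * 3) == 0.0      # codewords of R tensor C have rho = 0

# One flipped entry: row 1 and column 1 each sit at distance 1/3 from the
# nearest codeword, all other rows and columns at distance 0, so rho(M) = 1/9.
M = [(1, 1, 1), (1, 0, 1), (1, 1, 1)]
assert abs(rho(M) - 1/9) < 1e-12
```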
2.3 Low Density Parity Check (LDPC) Codes
Binary as well as q-ary LDPC codes were introduced by Gallager more than four decades ago [7, 8].
They have been studied extensively in information theory (cf. [4]). Binary LDPC codes motivated Mar-
gulis’ explicit construction of graphs of large girth [12], the precursor of his construction of Ramanujan
graphs. The celebrated expander codes of Sipser and Spielman [14] are binary LDPC codes. In the
context of local testability, q-ary LDPC codes were studied in [6].
Definition 2.2 (Check graphs). A check graph ([n], [m], E, e) consists of a bipartite graph ([n], [m], E)
(E ⊆ [n] × [m] is the set of edges) and a function e : E → F \ {0}. This check graph defines the code
C ⊆ F^n via the rule that for all x ∈ F^n

x ∈ C ⇐⇒ (∀j ∈ [m]) ∑_{i∈N(j)} x_i · e(i, j) = 0,

where N(j) denotes the set of neighbors of j in the graph.
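The membership rule of Definition 2.2 can be sketched as follows; a toy check graph over F_2 with all edge labels e(i, j) = 1, whose code happens to be the 3-bit repetition code (a hypothetical instance, not one from the paper):

```python
# Membership in C decided by a check graph: n = 3 variables, m = 2 checks.
P = 2                                      # working over F_2
edges = {(0, 0), (1, 0),                   # check 0 sees variables 0 and 1
         (1, 1), (2, 1)}                   # check 1 sees variables 1 and 2
e = {edge: 1 for edge in edges}            # edge labels in F \ {0}

def neighbors(j):
    return [i for (i, jj) in edges if jj == j]

def in_code(x, m=2):
    # x in C  iff  for every check j: sum over i in N(j) of x_i * e(i, j) = 0.
    return all(sum(x[i] * e[(i, j)] for i in neighbors(j)) % P == 0
               for j in range(m))

assert in_code([0, 0, 0]) and in_code([1, 1, 1])
assert not in_code([1, 0, 1])              # check 0 sees x_0 + x_1 = 1, not 0
```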
Clearly, any linear code C ⊆ F^n has a corresponding check graph. The code C is called a "low-density
parity-check code" over F if C admits a "low-density" check graph. Note that if C⊥ = span(C⊥_{≤d}) then
without loss of generality every right-hand node j ∈ [m] has degree at most d, guaranteeing "low density"
if d is "small."
Definition 2.3 (Expander graphs). Let c, d ∈ N and let γ, δ ∈ (0, 1). Define a (c, d)-regular (γ, δ)-
expander to be a bipartite graph (L, R, E) with vertex sets L, R such that all vertices in L have degree c
and all vertices in R have degree d, with the additional property that every set L′ ⊂ L of vertices such that
|L′| ≤ δ|L| has at least (1 − γ)c|L′| neighbors.
We say that a code C is a (c, d, γ, δ)-expander code if it has a check graph that is a (c, d)-regular
(γ, δ)-expander.
It is well known that if γ < 1/2 then the graph has unique-neighbor expansion. Recall that unique-
neighbor expansion means that for every subset L′ ⊆ L such that 0 < |L′| ≤ δ|L| there exists a vertex
v ∈ R which is a neighbor of exactly one vertex in L′. Thus, from here on we refer to (γ, δ)-expanders,
where γ < 1/2, as unique-neighbor expanders. The following well-known observation (the proof of
which is included for the sake of completeness) shows that unique-neighbor expansion of G is sufficient
to guarantee that the code whose check graph is G has large distance.
Proposition 2.4. If C is a (c, d, γ, δ)-expander code over F and γ < 1/2 then δ(C) > δ.

Proof. We prove that every non-zero word in C must have weight more than δn. Indeed, let (L, R, E, e)
be a check graph of C that is a (c, d)-regular (γ, δ)-expander. The proposition follows by examining the
unique-neighbor structure of the graph. Assume toward a contradiction that there is x ∈ C with 0 < wt(x) ≤ δn,
and let L′ = supp(x) ⊆ L. Then L′ has at least (1 − γ)c|L′| > (c/2)|L′| neighbors in R, so at least one
neighbor j ∈ R sees exactly one element i ∈ L′ (if every neighbor saw at least two elements of L′, the number
of edges leaving L′ would exceed c|L′|). The constraint corresponding to j evaluates on x to x_i · e(i, j); since
x_i ≠ 0 and e(i, j) ≠ 0 we have x_i · e(i, j) ≠ 0, violating the constraint and contradicting x ∈ C.
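The unique-neighbor counting used in this proof can be sketched as a small routine; the bipartite graph below is a toy edge list, not one from the paper:

```python
from collections import Counter

# A right vertex j is a unique neighbor of a left-set L' if it sees exactly
# one element of L', as used in the proof of Proposition 2.4.
edges = [(0, 0), (0, 1), (1, 1), (1, 2), (2, 2), (2, 0)]   # (left, right)

def unique_neighbors(L_prime):
    counts = Counter(j for (i, j) in edges if i in L_prime)
    return {j for j, c in counts.items() if c == 1}

# For L' = {0}: right vertices 0 and 1 each see only left vertex 0.
assert unique_neighbors({0}) == {0, 1}
# For L' = {0, 1}: right vertex 1 sees both, so only 0 and 2 stay unique.
assert unique_neighbors({0, 1}) == {0, 2}
# For L' = {0, 1, 2}: every right vertex sees two elements; none is unique.
assert unique_neighbors({0, 1, 2}) == set()
```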
3 Main Results
Our first main result says that codes obtained by the tensor product of a code with constant relative
distance and a unique-neighbor expander code are robustly testable.
Theorem 3.1 (Unique-Neighbor Expander codes). Let R ⊆ F^m be a code of relative distance at least δR > 0.
Let C ⊆ F^n be a (c, d, γ, δ)-expander code for some c, d ∈ N, δ > 0, and 0 < γ < 1/2. Then

ρ^T ≥ (δ · δR)/(5.2 d∗),

where d ≤ d∗ < d^k and k = log_{0.5+γ}(0.05) + 1.
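The dependence of the exponent k (and hence of the bound d∗ < d^k) on the expansion parameter γ is easy to tabulate; illustrative arithmetic only:

```python
import math

# k = log base (0.5 + gamma) of 0.05, plus 1, as in Theorem 3.1.
def k(gamma):
    return math.log(0.05, 0.5 + gamma) + 1

# Weaker expansion (gamma closer to 1/2) forces a larger k, hence a weaker
# upper bound d^k on d* and a weaker robustness bound.
assert k(0.10) < k(0.25) < k(0.45)
assert 11 < k(0.25) < 12        # log base 0.75 of 0.05 is about 10.41
```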
The above theorem extends the result of [6] where a similar result was proved for expanders with the
stronger requirement γ < 1/6. Notice that the difference between γ < 1/6 and unique-neighbor expansion
(γ < 1/2) is qualitative, not merely quantitative. This is because expansion with γ < 1/4 is required to
guarantee efficient decoding algorithms, as shown by Sipser and Spielman in [15], whereas γ < 1/2 is
sufficient for claiming the code has large distance, but does not necessarily guarantee efficient decoding.
Our next result extends [6] in a different direction by showing that locally correctable codes can
also be used to construct robustly testable tensor codes. Informally, a locally correctable code allows one to
recover each entry of a codeword with high probability by reading only a few entries of the codeword,
even if a large fraction of it is adversarially corrupted (see Definition 7.1).
Theorem 3.2 (Locally correctable codes). Let R ⊆ F^m be a code of relative distance at least δR > 0. Let C ⊆ F^n
be an (ε, δ, q)-locally correctable code with ε > 0. Then

ρ^T ≥ (0.5δ · δR)/(2(q + 1)).
To prove both theorems we first define weakly smooth codes and prove that the tensor product of a weakly
smooth code and another code with constant relative distance is robustly testable. Then we show that
smooth codes are also weakly smooth. Finally, we show in Claims 6.6 and 7.2 that all unique-neighbor
expander codes (with γ < 1/2) and all locally correctable codes are weakly smooth. This proves
Theorems 3.1 and 3.2.
Remark 3.3. The proofs of Claims 6.6 and 7.2 are similar and rely on the following property shared
by both families of codes. For any small subset I ⊂ [n], most elements i ∈ I have a low-weight dual
constraint ui such that supp(ui ) ∩ I = {i}, i. e., a large fraction of I has unique neighbors.
3.1 Weakly Smooth codes
We now come to the central definition of the paper, that of a weakly smooth code. This defini-
tion allows us to generalize the work of [6] using essentially the same proof as there. In particular, in
Section 5 we show that every code that is smooth according to [6] is also weakly smooth as per Def-
inition 3.4. Furthermore, using our definition we get robustly testable tensor products from a broader
family of base codes.
Both the smooth codes of [6] and our weakly smooth codes require the code to retain large distance
even after a portion of its coordinates and constraints has been removed. There are, however, two subtle
differences between the two notions.
1. In the smooth codes setting an adversary is allowed to remove an arbitrary small fraction of
constraints. In the weakly smooth setting the adversary is further limited to removing a small
fraction of constraints that must touch a small fraction of indices. This extra limitation on the
sets of constraints that can be removed makes it much easier to prove that a given code is weakly
smooth. This difference also accounts for our ability to show that both unique-neighbor expander
codes and locally correctable codes are weakly smooth (neither of the two families of codes is
known to be smooth).
2. In the smooth codes setting we work with a predefined set of low-weight constraints coming from
a regular bipartite graph. Our Definition 3.4 does not assume any graph, nor does it require any
regularity of degrees. This slackness and nonregularity will be crucial in arguing that unique-
neighbor expanders are weakly smooth.
Definition 3.4 (Weakly smooth codes). Let 0 ≤ α1 ≤ α′1 < 1, 0 < α2 < 1, and d∗ be constants. The code C
is (α1, α′1, α2, d∗)-weakly smooth if for every subset I ⊆ [n] of size |I| < α1·n, letting

Constr(I) = {u ∈ C⊥_{≤d∗} | supp(u) ∩ I = ∅}

and C′ = (Constr(I))⊥, there exists I′ ⊂ [n] with I ⊆ I′ and |I′| < α′1·n such that d(C′|_{[n]\I′}) ≥ α2·n.
The following is the main technical lemma we use to show that weakly smooth codes can be used to
construct robustly testable tensor products. Its proof, which follows [6], appears in the next section.
Lemma 3.5 (Main Lemma). Let R ⊆ F^m and C ⊆ F^n be codes of relative distance δR and δC, respec-
tively. Assume C is (α1, α′1, α2, d∗)-weakly smooth, where α′1 < δC/2, and let M ∈ F^m ⊗ F^n. If

ρ(M) < min( α1·δR/(2d∗), δR·α2/2 )

then δ(M) ≤ 8ρ(M).
4 Weakly smooth codes—Proof of Lemma 3.5
We follow the proof of the Main Lemma in [6], attending to the modifications needed to carry
out the proof under the weaker smoothness requirement. The main place where we use the weakly
smooth property is Proposition 4.2.
Proof of Lemma 3.5. For row i ∈ [n], let r_i ∈ R denote a codeword of R closest to the i-th row of M. For
column j ∈ [m], let c^(j) ∈ C denote a codeword of C closest to the j-th column of M. Let M_R denote
the n × m matrix whose i-th row is r_i, and let M_C denote the matrix whose j-th column is c^(j). Let
E = M_R − M_C.

In what follows, the matrices M_R, M_C and (especially) E will be the central objects of attention. We
refer to E as the error matrix. Note that δ(M, M_R) = δ^row(M) and δ(M, M_C) = δ^col(M), and with some
abuse of notation let wt(E) be the relative weight of E, so

wt(E) = δ(M_R, M_C) ≤ δ(M, M_R) + δ(M, M_C) = δ^row(M) + δ^col(M) = 2ρ(M). (4.1)
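The objects M_R, M_C and E can be built explicitly for a toy instance; a sketch with R = C the 3-bit repetition code over F_2, checking the bound (4.1) on one matrix with a single corrupted entry:

```python
# Row-wise and column-wise decodings of M, the error matrix E = M_R - M_C,
# and the bound wt(E) <= 2 * rho(M) of (4.1), on a toy example.
R = C = [(0, 0, 0), (1, 1, 1)]

def rel_dist(x, y):
    return sum(a != b for a, b in zip(x, y)) / len(x)

def decode(word, code):
    # Closest codeword (nearest-neighbor decoding).
    return min(code, key=lambda c: rel_dist(word, c))

M = [(1, 1, 1), (1, 0, 1), (1, 1, 1)]              # one flipped entry
MR = [decode(row, R) for row in M]                 # row-wise decoding
MC_cols = [decode(col, C) for col in zip(*M)]      # column-wise decoding
MC = [tuple(col[i] for col in MC_cols) for i in range(3)]

E = [[(a - b) % 2 for a, b in zip(r1, r2)] for r1, r2 in zip(MR, MC)]
wt_E = sum(map(sum, E)) / 9                        # relative weight of E

d_row = sum(rel_dist(m, r) for m, r in zip(M, MR)) / 3
d_col = sum(rel_dist(c, w) for c, w in zip(zip(*M), MC_cols)) / 3
rho_M = (d_row + d_col) / 2

assert MR == MC == [(1, 1, 1)] * 3     # here both decodings agree, so E = 0
assert wt_E == 0.0 and wt_E <= 2 * rho_M
```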
Our proof strategy is to show that the error matrix E is actually very structured. We do this in two
steps. First we show that its columns satisfy most low-weight constraints of the column code. Then we
show that E contains a large submatrix which is all zeroes. Finally using this structure of E we show
that M is close to some codeword in R ⊗C. The following is from [6, Proposition 4]; we give the proof
for the sake of completeness.
Proposition 4.1. Let u ∈ C⊥_d be a constraint of C with supp(u) = {i_1, . . . , i_d}. Let e_i denote the i-th row
of E. Suppose wt(e_{i_j}) < δR/d for every j ∈ [d]. Then u^T · E = 0.
Proof. Note that for all c ∈ C we have ⟨c, u⟩ = 0. Let c_i denote the i-th row of the matrix M_C. (Recall that
the rows of M_C are not necessarily codewords of any nice code; it is only the columns of M_C that are
codewords of C.) For every column j, we have ⟨(M_C)_j, u⟩ = 0, since the columns of M_C are codewords
of C.

Thus we conclude that u^T · M_C = 0 as a vector. Clearly, u^T · M_R ∈ R since each one of the rows of
M_R is a codeword of R. But this implies

u^T · E = u^T · (M_R − M_C) = u^T · M_R − u^T · M_C = u^T · M_R − 0 ∈ R.

Now we use the fact that the rows e_{i_j} have small weight for i_j ∈ [d]. This implies that

wt(u^T · E) < wt(u) · (δR/d) = δR.

But R is a code of minimum distance δR, so the only word of weight less than δR in it is the zero codeword,
yielding u^T · E = 0.
Proposition 4.2. There exist subsets U ⊆ [m] and V ⊆ [n] with |U|/m < δR and |V|/n < δC/2 such that,
letting V̄ = [n] \ V and Ū = [m] \ U, we have E(i, j) = 0 for all i ∈ V̄, j ∈ Ū.
Proof. Let V_1 ⊆ [n] be the set of indices corresponding to rows of the error matrix E with weight at least
δR/d∗, i.e.,

V_1 = {i ∈ [n] | wt(e_i) ≥ δR/d∗}.

Clearly |V_1| < α1·n, since

(|V_1|/n) · (δR/d∗) ≤ wt(E) ≤ 2ρ(M)

and thus

|V_1|/n ≤ 2ρ(M)/(δR/d∗) < α1,

where the last inequality follows from the assumption ρ(M) < α1·δR/(2d∗). Let

Constr(V_1) = {u ∈ C⊥_{≤d∗} | supp(u) ∩ V_1 = ∅}

and let C′ = (Constr(V_1))⊥. Proposition 4.1 implies that for every u ∈ Constr(V_1) we have u^T · E = 0, i.e., every
column of E, denoted E_j, satisfies the constraint u, and therefore E_j ∈ C′.
Recall that C is (α1, α′1, α2, d∗)-weakly smooth, where α′1 < δC/2. Associate the set V_1 with I from
Definition 3.4. Following this definition, there exists a set I′ (let V = I′), |V| = |I′| < α′1·n, such that

d(C′|_{[n]\I′}) = d(C′|_{[n]\V}) ≥ α2·n.

We notice that for every column E_j of E we have (E_j)|_{[n]\V} ∈ C′|_{[n]\V}. Thus E_j is either zero
outside V or has at least α2·n non-zero elements outside V.

Let U be the set of indices corresponding to the "heavy" columns of E that have α2·n or more non-
zero elements in the rows outside V. We conclude that every column of E that is not zero outside V
is located in U. We argue that for each (i, j) ∈ V̄ × Ū we have E(i, j) = 0. This is true since after we
remove the rows corresponding to V, every column that is non-zero outside V has weight at least α2·n there,
and all such columns are located in U. Hence all entries of E in V̄ × Ū are zero.

Clearly |U|/m < δR, since

(|U|/m) · α2 ≤ wt(E) ≤ 2ρ(M)

and thus

|U|/m ≤ 2ρ(M)/α2 < δR,

where the last inequality follows from the assumption ρ(M) < δR·α2/2.
Proposition 6 of [6] asserts that under our conditions, M is close to R ⊗C. The proof first shows that
MR and MC are close to R ⊗C and then uses this to estimate the distance of M to R ⊗C. For the sake of
completeness we reproduce the proof from [6].
Proposition 4.3 ([6]). Assume there exist sets U ⊆ [m] and V ⊆ [n], |U|/m < δR and |V|/n < δC/2, such
that M_R(i, j) ≠ M_C(i, j) implies j ∈ U or i ∈ V. Then δ(M) ≤ 8ρ(M).
Proof. Let V̄ = [n] \ V and Ū = [m] \ U. First we note that there exists a matrix N ∈ R ⊗ C that agrees
with M_R and M_C on V̄ × Ū (see [2, Proposition 3]). Recall also that δ(M, M_R) = δ^row(M) ≤ 2ρ(M). So it
suffices to show δ(M_R, N) ≤ 6ρ(M). We do so in two steps. First we show that δ(M_R, N) ≤ 2ρ(M_R).
We then show that ρ(M_R) ≤ 3ρ(M), concluding the proof.

For the first part we start by noting that M_R and N agree on every row in V̄. This is the case since
both rows (call them r_1 and r_2) are codewords of R which may disagree only on entries from the columns of
U, i.e., supp(r_1 − r_2) ⊆ U; but |U| < δR·m, and thus d(r_1, r_2) < δR·m, which means r_1 = r_2. Next we claim
that for every column j ∈ [m] the closest codeword of C to M_R(·, j), the j-th column of M_R, is N(·, j),
the j-th column of N. This is true since M_R(i, j) ≠ N(i, j) implies i ∈ V, where |V| < δC·n/2, and so the
number of such i is less than δC·n/2. Thus for every j we have that N(·, j) is the (unique) decoding of the
j-th column of M_R.

Averaging over j, we get that δ^col(M_R) = δ(M_R, N). In turn this yields

ρ(M_R) ≥ δ^col(M_R)/2 = δ(M_R, N)/2.
This yields the first of the two desired inequalities.
Now, to bound ρ(M_R), note that for any pair of matrices M_1 and M_2 we have

ρ(M_1) ≤ ρ(M_2) + δ(M_1, M_2). (4.2)

Indeed, δ^row(M_1) ≤ δ^row(M_2) + δ(M_1, M_2) and δ^col(M_1) ≤ δ^col(M_2) + δ(M_1, M_2); combining
these two inequalities yields (4.2). Applying (4.2) to M_1 = M_R and M_2 = M we get

ρ(M_R) ≤ ρ(M) + δ(M_R, M) ≤ 3ρ(M).
This yields the second inequality and thus the Proposition.
The Main Lemma (Lemma 3.5) follows immediately from the last two propositions.
5 Smooth codes are also weakly smooth
We now show that our Definition 3.4 is indeed a generalization of the smooth codes of Dinur et al. [6]. In
what follows F_2 denotes the two-element field and C(R′) is the code defined by the constraints in R \ R′. We
recall the definition of smooth codes.
Definition 5.1 (Smooth Codes). A code C ⊆ F_2^n is (d, α, β, δ)-smooth if it has a parity check graph
B = (L, R, E) where all the right vertices have degree d, the left vertices have degree c = d|R|/|L|, and
for every set R′ ⊆ R such that |R′| ≤ α|R| there exists a set L′ ⊆ L, |L′| ≤ β|L|, such that the code
C(R′)|_{[n]\L′} has distance at least δn.
Claim 5.2. If C ⊆ F_2^n is a (d, α, β, δ)-smooth code then it is (α1, α′1, α2, d∗)-weakly smooth with α1 =
α/d, α′1 = β, α2 = δ, d∗ = d.
Proof. Let R be the set of constraints of degree d and let I ⊆ [n], |I| < α1·n = αn/d, be the index set from
Definition 3.4. Remove all d-constraints that touch at least one index in I, and let R′ ⊆ R be the set of removed
constraints. The left degree is c = d|R|/n, so we removed at most c · α1·n = d|R|α1 = α|R|
constraints. Let

Constr(I) = {u ∈ C⊥_d | supp(u) ∩ I = ∅}

be the set of constraints in R \ R′ (low-weight dual words). We notice that C(R′) = (Constr(I))⊥. Let
I′ ⊆ [n], |I′| < βn = α′1·n, be the index set from the definition of smooth codes (Definition 5.1) that needs
to be removed in order to maintain good distance, i.e.,

d(C(R′)|_{[n]\I′}) ≥ δn = α2·n.

Clearly I ⊆ I′, as otherwise d(C(R′)|_{[n]\I′}) = 1. Thus, from the definition of smoothness, letting

C′ = (Constr(I))⊥

we have d(C′|_{[n]\I′}) ≥ α2·n, which proves that C is (α1, α′1, α2, d∗)-weakly smooth.
6 Unique-Neighbor Expander Codes are weakly smooth
As explained in Section 3.1, Dinur et al. [6] showed that expander codes with γ < 1/6 are smooth and
thus result in robustly testable tensor products. In this section we show that it is possible to obtain
robustly testable tensor codes from expander codes under the weaker assumption γ < 1/2. We first
define the gap property (Definition 6.1) and prove that it implies weak smoothness. Then we show that
unique-neighbor expander codes have the gap property.
Definition 6.1 (Gap property). Code C has the (α, δ, d)-gap property if for all subsets J ⊆ [n], |J| < αn,
letting

Constr(J) = {u ∈ C⊥_{≤d} | supp(u) ∩ J = ∅} and C′ = (Constr(J))⊥,

we have that for all vectors c ∈ C′|_{[n]\J} either wt(c) < 0.1δn or wt(c) > 0.8δn.
The next claim generalizes an idea from the proof of [6, Lemma 3].
Claim 6.2. If C has the (α, δ , d)-gap property then it is (α, α + 0.3δ , 0.5δ , d)-weakly smooth.
Proof. Clearly, C has no codewords of weight between 0.1δn and 0.8δn. To see this, take J = ∅; then
the gap property implies that for every word w ∈ F^n with 0.1δn ≤ wt(w) ≤ 0.8δn we have ⟨w, u⟩ ≠ 0 for some
u ∈ C⊥_{≤d}.

For A ⊆ C let J_A = ∪_{c∈A} supp(c). Let

S = {c ∈ C | 0 < wt(c) < 0.1δn}

be the set of all non-zero low-weight codewords. We show that |J_S| < 0.3δn.

Assume the contrary, i.e., |J_S| ≥ 0.3δn. Then there exists S′ ⊆ S such that 0.2δn < |J_{S′}| < 0.3δn.
To see this, remove low-weight words one by one from S, each time decreasing |J_S| by at most 0.1δn.
Consider a random linear combination of the codewords in S′. The expected weight of this linear
combination is more than 0.1δn but cannot exceed 0.3δn; thus some linear combination
of low-weight codewords produces a codeword of weight more than 0.1δn but less than 0.3δn.
Contradiction.

Thus for the rest of the proof we assume |J_S| < 0.3δn. We are ready to show that the code C is
(α, α + 0.3δ, 0.5δ, d)-weakly smooth.
Let I ⊂ [n], |I| < αn, be an arbitrarily chosen set. Let

Constr(I) = {u ∈ C⊥_{≤d} | supp(u) ∩ I = ∅} and C′ = (Constr(I))⊥.

From the definition of the gap property and from the above it follows that for all c ∈ C′|_{[n]\I} either
wt(c) < 0.1δn, and thus supp(c) ⊆ J_S, or wt(c) > 0.8δn.

Let I′ = J_S ∪ I. We have |I′| ≤ |J_S| + |I| < αn + 0.3δn. We claim that

d(C′|_{[n]\(I∪J_S)}) = d(C′|_{[n]\I′}) ≥ 0.5δn.

To see this, assume c′ ∈ C′|_{[n]\I} and c″ = c′|_{[n]\(I∪J_S)} ∈ C′|_{[n]\(I∪J_S)} are such that 0 < wt(c″) < 0.5δn. But then

0 < wt(c″) ≤ wt(c′) ≤ |J_S| + wt(c″) < 0.8δn,

and thus c′ is a low-weight word. Therefore supp(c′) ⊆ J_S. Hence c″ = c′|_{[n]\(I∪J_S)} = 0, contradicting
wt(c″) > 0.
Proposition 6.3. Let C be a linear code over F. If u1 ∈ C⊥_{<f} and u2 ∈ C⊥_{<g} and i ∈ supp(u1) ∩ supp(u2),
then there exists u3 ∈ C⊥_{<f+g} such that supp(u3) ⊆ (supp(u1) ∪ supp(u2)) \ {i}.

Proof. Let a ∈ F be the i-th entry of u1 and b ∈ F the i-th entry of u2. Then u3 = a^{−1}u1 − b^{−1}u2 ∈
C⊥_{<f+g} has the desired properties: the i-th entry of u3 is a^{−1}a − b^{−1}b = 0.
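The cancellation step in this proof is easy to check concretely; a sketch over the prime field F_5, where the vectors u1 and u2 are hypothetical dual words chosen only to illustrate the linear combination:

```python
# Combining two words that share coordinate i so that i drops out of the
# support, as in Proposition 6.3.
p = 5

def inv(a):
    # Multiplicative inverse in F_p (p prime), via Fermat's little theorem.
    return pow(a, p - 2, p)

u1 = [2, 3, 0, 0, 1]      # supp(u1) = {0, 1, 4}
u2 = [0, 4, 1, 0, 0]      # supp(u2) = {1, 2}
i = 1                     # shared coordinate

a, b = u1[i], u2[i]
u3 = [(inv(a) * x - inv(b) * y) % p for x, y in zip(u1, u2)]

def support(u):
    return {j for j, v in enumerate(u) if v != 0}

assert i not in support(u3)
assert support(u3) <= (support(u1) | support(u2)) - {i}
```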
The next claim shows that expander codes with γ < 1/2 have a specific low-weight constraint struc-
ture. We use this claim later to argue that expander codes with γ < 1/2 have the gap property (Defini-
tion 6.1) and thus are weakly smooth.
Claim 6.4. Let C be a (c, d, γ, δ)-expander code over F with constant γ < 1/2. Let I ⊆ [n] be such
that 0 < |I| < δn. Then at least a 0.95-fraction of the indices i ∈ I have a constraint u_i ∈ C⊥_{<d∗} with
supp(u_i) ∩ I = {i}, where d∗ < d^k and k = log_{0.5+γ}(0.05) + 1.
Proof. Fix I ⊆ [n] with |I| < δn. Let (L, R, E, e) be a check graph of C that is a (c, d)-regular (γ, δ)-
expander; here L = [n] and each vertex in R corresponds to a constraint in C⊥_{≤d}. The Claim follows by
examining the unique-neighbor structure of the graph. For j = 1, . . . , k we construct, inductively, sets I_j
satisfying

• I_1 = I, I_{j+1} ⊂ I_j
• |I_{j+1}| ≤ (1/2 + γ)|I_j|
• for all i ∈ I_j \ I_{j+1} there exists u_i ∈ C⊥_{≤d^j} with supp(u_i) ∩ I = {i}.

We then conclude, since (1/2 + γ)^k < 0.05, that I_k ⊂ I, |I_k| < 0.05 · |I|, and for all i ∈ I \ I_k there exists
u_i ∈ C⊥_{<d^k} with supp(u_i) ∩ I = {i}. This will complete the proof of the Claim.
For the base case let I_1 = I. Since C is an expander and |I_1| ≤ δn, I_1 has at least

(1 − γ)c|I_1| = (c/2 + (0.5 − γ)c)|I_1|

neighbors in R. Each index i ∈ I_1 has c neighbors in R, so the number of constraints in R that involve
at least 2 indices from I_1 is bounded from above by (c/2)|I_1|. Hence there are at least (1/2 − γ)c|I_1|
unique neighbors in R. Since a single index cannot have more than c unique neighbors in R, the number
of indices in I_1 having a unique neighbor is at least (1/2 − γ)|I_1|. This means that at least a (1/2 − γ)-
fraction of all indices in I_1 have a unique neighbor, i.e., a constraint of weight at most d = d^1 whose
support meets I_1 exactly in the given index. Let I_2 ⊂ I_1 be the subset of all indices i ∈ I_1 that have no
unique neighbor of weight at most d^1. We constructed sets I_1, I_2 such that

• I_1 = I, I_2 ⊂ I_1
• |I_2| ≤ (1/2 + γ)|I_1|
• for all i ∈ I_1 \ I_2 there exists u_i ∈ C⊥_{≤d^1} with supp(u_i) ∩ I = {i}.

This completes the base case.
Assume correctness up to j − 1 and let us prove it for j. Consider I_j, where |I_j| ≤ |I_1| ≤ δn. We say that u ∈ C⊥_{≤d} is an I_j-restricted unique neighbor of the index i ∈ I_j if supp(u) ∩ I_j = {i}. By the unique-neighbor expansion argument above, at least a (1/2 − γ)-fraction of the indices i ∈ I_j have I_j-restricted unique neighbors. Let I_{j+1} ⊂ I_j be the set of indices i ∈ I_j that have no I_j-restricted unique neighbor. It follows that |I_{j+1}| ≤ (1/2 + γ)|I_j|.
Fix i ∈ I_j \ I_{j+1} arbitrarily. There exists u_i ∈ C⊥_{≤d} such that supp(u_i) ∩ I_j = {i}. Every index ℓ ∈ supp(u_i), ℓ ≠ i, is located either in [n] \ I_1 or in I_1 \ I_j. We handle all ℓ ∈ I_1 \ I_j by taking a linear combination according to Proposition 6.3, obtaining a constraint u′_i ∈ C⊥_{≤d^j} such that supp(u′_i) ∩ I = {i}. This is possible since every ℓ ∈ I_1 \ I_j is located in some I_f \ I_{f+1} for 1 ≤ f < j and therefore, by the inductive hypothesis, has a constraint u_ℓ ∈ C⊥_{≤d^{j−1}} with supp(u_ℓ) ∩ I = {ℓ}. Since wt(u_i) ≤ d we obtain u′_i ∈ C⊥_{≤d^{j−1}·d} = C⊥_{≤d^j} such that supp(u′_i) ∩ I = {i}. So we have shown
• I_{j+1} ⊂ I_j
• |I_{j+1}| ≤ (1/2 + γ)|I_j|
• for all i ∈ I_j \ I_{j+1} there exists u_i ∈ C⊥_{≤d^j} with supp(u_i) ∩ I = {i}.
This completes the induction and the proof of Claim 6.4.
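The unique-neighbor counting used throughout this proof can be illustrated with a small sketch. The bipartite check graph below is hypothetical toy data, not from the paper: each constraint is given by its support, a set of left indices, and a constraint meeting I in exactly one index is an I-restricted unique neighbor of that index.

```python
def indices_with_unique_neighbor(constraints, I):
    """Return the indices i in I that have an I-restricted unique
    neighbor, i.e., some constraint whose support meets I exactly in {i}."""
    covered = set()
    for supp in constraints:
        hit = supp & I
        if len(hit) == 1:  # constraint touches I in a single index
            covered |= hit
    return covered

# Toy check graph: constraint supports over left vertices 0..5.
constraints = [{0, 1}, {1, 2}, {2, 3}, {0, 4}, {3, 5}]
I = {0, 1, 2}
print(sorted(indices_with_unique_neighbor(constraints, I)))  # [0, 2]
```

In the proof's notation, I_{j+1} would be I_j minus the covered indices returned here.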
Corollary 6.5. If C is a (c, d, γ, δ)-expander code with γ < 1/2 then C has the (0.5δ, 0.5δ, d*)-gap property, where d* < d^k and k = (log_{0.5+γ}(0.05)) + 1.
Proof. Let J ⊂ [n], |J| < 0.5δn, be arbitrarily chosen. Let

Constr(J) = { u ∈ C⊥_{<d^k} | supp(u) ∩ J = ∅ }   and   C′ = (Constr(J))⊥.
Assume by contradiction that there exists w ∈ C′|_{[n]\J} such that

0 < 0.1·(0.5δ)n ≤ wt(w) ≤ 0.8·(0.5δ)n .
It follows that there is no u ∈ Constr(J) such that | supp(u) ∩ supp(w)| = 1.
Let I = J ∪ supp(w); then

|I| ≤ |J| + wt(w) < 0.5δn + 0.4δn < δn .
⊥ such
We notice that supp(w) ∩ J = 0/ and | supp(w)| > 0.05 · |I|. So, by Claim 6.4, there exists u ∈ C<d k
that | supp(u) ∩ supp(w)| = 1 and | supp(u) ∩ I| = | supp(u) ∩ supp(w)| = 1. Hence u ∈ Constr(J) and
| supp(u) ∩ supp(w)| = 1. Contradiction.
Claim 6.6. If C is a (c, d, γ, δ)-expander code with γ < 1/2 then C is (0.5δ, 0.65δ, 0.25δ, d*)-weakly smooth, where d* < d^k and k = (log_{0.5+γ}(0.05)) + 1.
Proof. Follows immediately from Corollary 6.5 and Claim 6.2. Corollary 6.5 implies that C has the (0.5δ, 0.5δ, d*)-gap property, where d* < d^k and k = (log_{0.5+γ}(0.05)) + 1. Claim 6.2 implies that C is (0.5δ, 0.5δ + 0.15δ, 0.25δ, d*)-weakly smooth.
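The recursion depth k = (log_{0.5+γ}(0.05)) + 1 that recurs in these statements can be sanity-checked numerically. This is an illustrative computation only, under the assumption that k is rounded up to a whole number of iterations; the sample values of γ are arbitrary.

```python
import math

def recursion_depth(gamma, target=0.05):
    """Smallest integer k with (1/2 + gamma)**(k - 1) <= target,
    i.e., k = ceil(log_{0.5+gamma}(target)) + 1 as in Claim 6.4."""
    return math.ceil(math.log(target, 0.5 + gamma)) + 1

for gamma in (0.05, 0.1, 0.25):
    k = recursion_depth(gamma)
    # after k - 1 shrinking steps, at most a 0.05-fraction of I remains
    print(gamma, k, (0.5 + gamma) ** (k - 1))
```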
Proof of Theorem 3.1. It is sufficient to prove that

ρ^T ≥ min{ δ·δ_R/(5.2d*), δ·δ_R/10.4, 1/8 }

because δ_R, δ ≤ 1 and d* ≥ d > 1, and thus

1/8 ≥ δ·δ_R/10.4 ≥ δ·δ_R/(5.2d*) .
Let R ⊆ F^m and C ⊆ F^n be codes of distance δ_R and δ_C, respectively. By Proposition 2.4, δ_C > δ. Clearly, C is a (c, d, γ, δ′)-expander code, where δ′ = 0.5δ/0.65. Let M ∈ F^m ⊗ F^n. Claim 6.6 implies that C is
(0.5δ′, 0.65δ′, 0.25δ′, d*)-weakly smooth, where 0.65δ′ = 0.5δ and d ≤ d* < d^k, k = (log_{0.5+γ}(0.05)) + 1. The Main Lemma (Lemma 3.5) implies that if

ρ(M) < min{ (0.5δ′)·δ_R/(2d*), δ_R·(0.25δ′)/2 }

then δ(M) ≤ 8ρ(M). We conclude that

ρ^T ≥ min{ δ·δ_R/(5.2d*), δ·δ_R/10.4, 1/8 } .
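The arithmetic behind "it is sufficient to prove" can be checked mechanically. The sketch below is an illustrative verification only; the parameter triples are arbitrary examples satisfying δ, δ_R ≤ 1 and d* ≥ 2.

```python
def bound_terms(delta, delta_R, d_star):
    """The three terms of the Theorem 3.1 bound. Since delta, delta_R <= 1
    and d_star >= 2, the first term is always the smallest."""
    return (delta * delta_R / (5.2 * d_star),
            delta * delta_R / 10.4,
            1 / 8)

for delta, delta_R, d_star in [(0.1, 0.2, 3), (1.0, 1.0, 2), (0.5, 0.9, 10)]:
    a, b, c = bound_terms(delta, delta_R, d_star)
    # 1/8 >= delta*delta_R/10.4 >= delta*delta_R/(5.2 d*)
    assert a <= b <= c
    print(a)
```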
7 Locally correctable codes are weakly smooth
Definition 7.1 (Locally Correctable Code). A code C ⊆ F^n is called a (q, ε, δ)-locally correctable code if there exists a randomized decoder D that reads at most q entries and the following holds.

• For all c ∈ C and i ∈ [n] we have Pr[D^c[i] = c_i] = 1.

• For all c ∈ C, i ∈ [n], and ĉ ∈ F^n such that d(c, ĉ) ≤ δn, we have Pr[D^ĉ[i] = c_i] ≥ 1/|F| + ε, i.e., with probability at least 1/|F| + ε the entry c_i is recovered correctly.
Without loss of generality we assume that given ĉ ∈ F^n, the “correction” of entry i (recovering the original c_i) is done by choosing an arbitrary u ∈ C⊥_{≤q+1} such that i ∈ supp(u). Formally, let u_i denote the i-th entry of u, and let

u_proj = u|_{[n]\{i}}   and   ĉ_proj = ĉ|_{[n]\{i}} .

Then c_i is recovered by setting

D^ĉ[i] = −⟨u_proj, ĉ_proj⟩ / u_i ,

since ⟨u, ĉ⟩ = u_i·ĉ_i + ⟨u_proj, ĉ_proj⟩ = 0 whenever ĉ ∈ C.
Notice that u_i ≠ 0. It can be readily verified that if the “correction” of entry i is not done in this way then there exists c ∈ C such that Pr[D^c[i] = c_i] < 1.
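Over a prime field this correction step can be sketched as follows. This is a minimal illustration, not the authors' implementation; the parity constraint and codeword are made-up toy data over F_5, and the minus sign reflects the fact that ⟨u, c⟩ = 0 for every codeword c.

```python
P = 5  # a small prime field F_5, chosen only for illustration

def correct_entry(u, c_hat, i, p=P):
    """Recover entry i from a dual constraint u with sum_j u_j c_j = 0 (mod p):
    c_i = -<u_proj, c_hat_proj> / u_i."""
    assert u[i] % p != 0
    s = sum(u[j] * c_hat[j] for j in range(len(u)) if j != i) % p
    return (-s * pow(u[i], p - 2, p)) % p  # division via Fermat inverse

u = [1, 1, 1]   # toy parity constraint c_0 + c_1 + c_2 = 0 over F_5
c = [1, 2, 2]   # a codeword: 1 + 2 + 2 = 5 = 0 (mod 5)
print(correct_entry(u, c, 0))  # recovers c_0 = 1
```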
The next claim holds for variable ε > 0 (e.g., ε = o(1)), whereas locally correctable codes are usually defined with a fixed ε.
Claim 7.2. If C is a (q, ε, δ)-locally correctable code with ε > 0 then it is (0.5δ, 0.5δ, 0.5δ, q + 1)-weakly smooth and its relative distance is greater than δ.
Proof. We first show that for every set I ⊆ [n] with |I| ≤ δn and every i ∈ I, there exists u_i ∈ C⊥_{≤q+1} with supp(u_i) ∩ I = {i}. Assume the contrary and fix I ⊆ [n], |I| ≤ δn, and i ∈ I such that every u_i ∈ C⊥_{≤q+1} with i ∈ supp(u_i) satisfies |supp(u_i) ∩ I| ≥ 2. Consider an adversary that takes c ∈ C and sets c_j to a random element of F for each j ∈ I, producing the vector ĉ. Clearly, the original value of c_i will be recovered with probability at most 1/|F|, since for every u^(i) ∈ C⊥_{≤q+1} such that i ∈ supp(u^(i)) the inner product

⟨(u^(i))|_{[n]\{i}}, ĉ|_{[n]\{i}}⟩

is a uniformly distributed random value in F, contradicting Pr[D^ĉ[i] = c_i] ≥ 1/|F| + ε.
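This uniformity can be checked exhaustively on a toy constraint (hypothetical data over F_5, for illustration only): if the decoder's constraint u touches the corrupted set in a second coordinate j ≠ i, then as the adversary's random value at j ranges over the field, the decoded value for entry i takes every value of F exactly once.

```python
P = 5  # toy prime field

def decoded_values(u, c_hat, i, j, p=P):
    """All decoder outputs for entry i as coordinate j (j != i, u[j] != 0)
    ranges over F_p, using c_i = -<u_proj, c_hat_proj> / u_i."""
    inv = pow(u[i], p - 2, p)
    out = []
    for r in range(p):
        c_hat = c_hat[:j] + [r] + c_hat[j + 1:]  # adversary's random entry
        s = sum(u[k] * c_hat[k] for k in range(len(u)) if k != i) % p
        out.append((-s * inv) % p)
    return out

u = [2, 3, 1]          # a dual constraint; u[0] and u[1] are nonzero
print(sorted(decoded_values(u, [0, 0, 4], i=0, j=1)))  # [0, 1, 2, 3, 4]
```

Each field value appears once, so the decoder recovers c_i with probability exactly 1/|F|.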
We next show that d(C) > δn. To see this, assume c ∈ C with 0 < wt(c) ≤ δn. Let I = supp(c), so |I| ≤ δn, and pick i ∈ I. There exists u ∈ C⊥_{≤q+1} with supp(u) ∩ supp(c) = {i}, and thus ⟨u, c⟩ ≠ 0, which implies c ∉ C. Contradiction.
We finally show the weak smoothness of C. Let I ⊂ [n], |I| < 0.5δn, be the set chosen by the adversary, and let I′ = I. Let

Constr(I) = { u ∈ C⊥_{≤q+1} | supp(u) ∩ I = ∅ }   and   C′ = (Constr(I))⊥.

We claim that d(C′|_{[n]\I}) ≥ 0.5δn. This is true since otherwise there exists c′ ∈ C′ with c′|_{[n]\I} ∈ C′|_{[n]\I} such that 0 < wt(c′|_{[n]\I}) < 0.5δn. But then 0 < wt(c′) < 0.5δn + |I| ≤ δn, and so there exists u ∈ Constr(I) such that |supp(u) ∩ supp(c′)| = 1, which implies ⟨u, c′⟩ ≠ 0 and c′ ∉ C′. Contradiction, proving that C is (0.5δ, 0.5δ, 0.5δ, q + 1)-weakly smooth.
Proof of Theorem 3.2. It is sufficient to show that

ρ^T ≥ min{ 0.5δ·δ_R/(2(q+1)), 1/8 }

because q ≥ 1, δ ≤ 1 and δ_R ≤ 1. Let R ⊆ F^m and C ⊆ F^n be linear codes such that δ(R) ≥ δ_R. Let M ∈ F^m ⊗ F^n. Claim 7.2 implies that C is (0.5δ, 0.5δ, 0.5δ, q + 1)-weakly smooth and δ(C) > δ. The Main Lemma (Lemma 3.5) implies that if

ρ(M) < min{ (0.5δ)·δ_R/(2(q+1)), δ_R·(0.5δ)/2 } = (0.5δ)·δ_R/(2(q+1))

then δ(M) ≤ 8ρ(M).
Acknowledgements. We thank Madhu Sudan for helpful discussions and the anonymous referees for their valuable comments.
References

[1] László Babai, Lance Fortnow, Leonid A. Levin, and Mario Szegedy: Checking computations in polylogarithmic time. In Proc. 23rd STOC, pp. 21–31. ACM Press, 1991. [STOC:10.1145/103418.103428].

[2] Eli Ben-Sasson and Madhu Sudan: Robust locally testable codes and products of codes. In Klaus Jansen, Sanjeev Khanna, José D. P. Rolim, and Dana Ron, editors, Proc. 8th Intern. Workshop on Randomization and Comput. (RANDOM'04), volume 3122 of LNCS, pp. 286–297. Springer, 2004. [doi:10.1007/b99805, ECCC:TR04-046].

[3] Eli Ben-Sasson and Michael Viderman: Tensor products of weakly smooth codes are robust. In Ashish Goel, Klaus Jansen, José D. P. Rolim, and Ronitt Rubinfeld, editors, Proc. 12th Intern. Workshop on Randomization and Comput. (RANDOM'08), volume 5171 of LNCS, pp. 290–302. Springer, 2008. [doi:10.1007/978-3-540-85363-3_24].

[4] Amir Bennatan and David Burshtein: On the application of LDPC codes to arbitrary discrete-memoryless channels. IEEE Trans. Inform. Theory, 50(3):417–438, 2004. [IEEE:10.1109/TIT.2004.824917].

[5] Don Coppersmith and Atri Rudra: On the robust testability of product of codes. Technical Report 104, Elect. Colloq. Comput. Complex. (ECCC), 2005. [ECCC:TR05-104].

[6] Irit Dinur, Madhu Sudan, and Avi Wigderson: Robust local testability of tensor products of LDPC codes. In Josep Díaz, Klaus Jansen, José D. P. Rolim, and Uri Zwick, editors, Proc. 10th Intern. Workshop on Randomization and Comput. (RANDOM'06), volume 4110 of LNCS, pp. 304–315. Springer, 2006. [doi:10.1007/11830924_29, ECCC:TR06-118].

[7] R. G. Gallager: Low-Density Parity-Check Codes. MIT Press, 1963.

[8] R. G. Gallager: Information Theory and Reliable Communication. Wiley, New York, 1968.

[9] Oded Goldreich: Short locally testable codes and proofs (survey). Technical Report 014, Elect. Colloq. Comput. Complex. (ECCC), 2005. [ECCC:TR05-014].

[10] Oded Goldreich and Or Meir: The tensor product of two good codes is not necessarily robustly testable. Technical Report 062, Elect. Colloq. Comput. Complex. (ECCC), 2007. [ECCC:TR07-062].

[11] Oded Goldreich and Madhu Sudan: Locally testable codes and PCPs of almost-linear length. In Proc. 43rd FOCS, pp. 13–22. IEEE Comp. Soc. Press, 2002. [FOCS:10.1109/SFCS.2002.1181878, ECCC:TR02-050].

[12] Grigoriĭ A. Margulis: Explicit constructions of graphs without short cycles and low density codes. Combinatorica, 2(1):71–78, 1982. [doi:10.1007/BF02579283].

[13] Or Meir: Combinatorial construction of locally testable codes. In Richard E. Ladner and Cynthia Dwork, editors, Proc. 40th STOC, pp. 285–294. ACM Press, 2008. [STOC:1374376.1374419].

[14] Michael Sipser and Daniel A. Spielman: Expander codes. IEEE Trans. Inform. Theory, 42(6):1710–1722, 1996. Preliminary version appeared in FOCS 1994. [IEEE:10.1109/18.556667].

[15] Daniel A. Spielman: Linear-time encodable and decodable error-correcting codes. IEEE Trans. Inform. Theory, 42(6):1723–1731, 1996. Preliminary version appeared in STOC 1995. [IEEE:10.1109/18.556668].

[16] Paul Valiant: The tensor product of two codes is not necessarily robustly testable. In Chandra Chekuri, Klaus Jansen, José D. P. Rolim, and Luca Trevisan, editors, Proc. 9th Intern. Workshop on Randomization and Comput. (RANDOM'05), volume 3624 of LNCS, pp. 472–481. Springer, 2005. [doi:10.1007/11538462_40].
AUTHORS
Eli Ben-Sasson
Computer Science Department
Technion – Israel Institute of Technology, Israel
eli cs technion ac il
http://www.cs.technion.ac.il/~eli
Michael Viderman
Computer Science Department
Technion – Israel Institute of Technology, Israel
viderman cs technion ac il
http://www.cs.technion.ac.il/people/viderman
ABOUT THE AUTHORS
Eli Ben-Sasson has too little time for his hobbies: skiing, hiking, and general outdoor activities. When not writing about himself (as he is doing right now), he enjoys spending time with his two daughters and son: Hallel, Nitzan, and Yair.

Michael Viderman is a Ph.D. student, and enjoys spending time in the gym and in the swimming pool with his friends.