Authors: Zeev Dvir, Amir Shpilka
License: CC-BY-3.0
Theory of Computing, Volume 7 (2011), pp. 1–18
www.theoryofcomputing.org
Noisy Interpolating Sets
for Low-Degree Polynomials
Zeev Dvir∗ Amir Shpilka†
Received: September 4, 2009; published: February 12, 2011.
Abstract: A Noisy Interpolating Set (NIS) for degree-d polynomials is a set S ⊆ Fn , where
F is a finite field, such that any degree-d polynomial q ∈ F[x1 , . . . , xn ] can be efficiently
interpolated from its values on S, even if an adversary corrupts a constant fraction of the
values. In this paper we construct an explicit NIS for every field F p of prime order p and
any degree d. Our sets are of size O(n^d) and have efficient interpolation algorithms that can
recover q in the presence of a fraction exp(−Ω(d)) of errors.
Our construction is based on a theorem which roughly states that if S is a NIS for
degree-1 polynomials then d · S = {a1 + · · · + ad | ai ∈ S} is a NIS for degree-d polynomials.
Furthermore, given an efficient interpolation algorithm for S, we show how to use it in a
black-box manner to build an efficient interpolation algorithm for d · S.
As a corollary we obtain an explicit family of punctured Reed-Muller codes (codes that
are restrictions of a Reed-Muller code to a subset of the coordinates) which are good codes
and have an efficient decoding algorithm against a constant fraction of errors. To the best of
our knowledge, even the existence of punctured Reed-Muller codes that are good codes was
not previously known.
ACM Classification: F.2.2
AMS Classification: 68P30, 68Q25
Key words and phrases: polynomial interpolation, error-correcting codes
∗ Research supported by a Binational Science Foundation (BSF) grant, an Israel Science Foundation (ISF) grant, and a
Minerva Foundation grant.
† Research partially supported by the Israel Science Foundation (grant number 339/10).
© 2011 Zeev Dvir and Amir Shpilka
Licensed under a Creative Commons Attribution License DOI: 10.4086/toc.2011.v007a001
1 Introduction
An interpolating set for degree-d polynomials is a set of points S ⊆ Fn such that if q(x1 , . . . , xn ) is a
degree-d polynomial over F and we are given the set of values (q(s))s∈S then we can reconstruct q. That
is, we can find all the coefficients of q. A set S is a noisy interpolating set for degree-d polynomials
if we can reconstruct q even if an adversary corrupts an ε fraction of the values in (q(s))s∈S . It is not
difficult to prove that a random set of size O(n^d) is a noisy interpolating set. In this paper we study the
problem of giving an explicit construction of a small noisy interpolating set for degree-d polynomials
that has an efficient interpolation algorithm against a constant fraction of errors. Besides being a very
natural problem, this question is related to two other interesting problems: giving explicit constructions
of punctured Reed-Muller codes that are good codes and the problem of constructing pseudorandom
generators against low-degree polynomials. Before stating our results we describe the connection to these
two problems.
Reed-Muller codes are an important family of error-correcting codes. They are obtained by evaluating
low-degree polynomials on all possible inputs taken from a finite field F. More precisely, in the code
RM(n, d) the messages correspond to polynomials q(x1 , . . . , xn ) over F of total degree d, and the encoding
is the vector of evaluations (q(ᾱ))ᾱ∈Fn . The well-known Hadamard code, obtained from evaluating
linear functions, is the code RM(n, 1). (For more information on Reed-Muller codes see [10].) Although
Reed-Muller codes are not good codes (they do not have constant rate) they play an important role both
in practice and in theory. For example Reed-Muller codes were used in the early proofs of the PCP
theorem [2, 1], in constructions of pseudorandom generators [3, 7, 12] and extractors [13, 11], and more.
It is thus an important goal to better understand these codes and their properties.
A puncturing of a code is the operation of throwing away some of the coordinates of every codeword.
For example, every binary linear code (without repeated coordinates) is a punctured Hadamard code. In
particular, a punctured Reed-Muller code is a mapping that sends a degree-d polynomial q to its vector of
values on some subset S ⊆ Fn . As Reed-Muller codes are not good codes themselves, it is an interesting
question to find a puncturing that yields good codes. As before, it is not difficult to see that a random
subset of Fn of size O(n^d) gives a good punctured Reed-Muller code. An even more interesting question
is to find a puncturing that yields good codes that have efficient encoding and decoding algorithms.
(For a random S we don’t have an efficient decoding algorithm.) It is not hard to see that S is a noisy
interpolating set (that has an efficient interpolation algorithm) if and only if the mapping q → (q(ᾱ))ᾱ∈S
is a code of linear minimal distance (that has an efficient decoding algorithm). Our noisy interpolating
sets are of size O(n^d) and therefore they give a construction of good punctured Reed-Muller codes that
can be decoded efficiently. To the best of our knowledge no such construction was known prior to our
result.
Another problem that is related to our results is the problem of constructing pseudorandom generators
for low-degree polynomials. A mapping G : Fs → Fn is an ε-Pseudorandom Generator (PRG) against
degree-d polynomials if it fools every degree d polynomial in n variables. That is, the two distributions
q(Un ), where Un is chosen uniformly from Fn , and q(G(Us )), where Us is chosen uniformly from Fs ,
have statistical distance at most ε, for every degree-d polynomial q. PRGs and low-degree polynomials
have many applications in theoretical computer science, and so it is an important question to give explicit
constructions of PRGs for low-degree polynomials that have good parameters (i.e., s as small as possible).
For the case of degree-1 polynomials, it is not hard to see that any noisy interpolating set implies a PRG
and vice versa. That is, if G : Fs → Fn is a PRG for polynomials of degree 1 then the image of G is a
noisy interpolating set. Similarly if S ⊆ Fn is a noisy interpolating set for degree-1 polynomials then any
one-to-one mapping whose image is S is a PRG for such polynomials. For larger values of d there is no
such strong correspondence between PRGs and noisy interpolating sets. As before, given a PRG we get
a noisy interpolating set (without an obvious efficient interpolation algorithm), however it is not clear
how to get a PRG from a noisy interpolating set. Constructions of PRGs against low degree polynomials
over very large fields were given in [5] using tools from algebraic geometry. Independently from our
work, Viola [14], building on the works of [6, 9], showed (over any field) that d · S is a PRG for degree-d
polynomials whenever S is a PRG for degree-1 polynomials. In particular this result implies that d · S is a
noisy interpolating set. However, [14] did not consider the problem of giving a noisy interpolating set
and in particular did not give an efficient interpolation algorithm. One can view our work as saying that
the construction analyzed in [6, 9, 14] with respect to minimal distance also has an efficient decoding
algorithm.
In the next section we consider two types of noisy interpolating sets. The first one allows for repetition
of points. That is, we allow S to be a multiset. This corresponds to repeating coordinates in an error-
correcting code. In the second variant (proper NIS) we do not allow such repetitions, and require S
to be a simple set (and not a multiset). This is what allows us to get punctured Reed-Muller codes.
For each of these types we prove a composition theorem, saying that the sumset of a NIS for degree-1
polynomials is a NIS for higher-degree polynomials. (The non-multiset version of this theorem requires
an additional condition on the initial NIS.) We then combine these theorems with known constructions of
error-correcting codes to obtain our final results.
1.1 Our results
In the following F will always denote a finite field of prime order.1 We denote by F[x1 , . . . , xn ] the space
of all polynomials in the variables x1 , . . . , xn with coefficients in F. We also denote by Fd [x1 , . . . , xn ] the
set of all polynomials of degree ≤ d in F[x1 , . . . , xn ]. Since we are interested in polynomials only as
functions over Fn we will assume in the following that the individual degree of each variable is smaller
than p = |F|. This is without loss of generality since for every a ∈ F we have a^p = a. For example, over
F2 , the field of two elements, we will only consider multilinear polynomials. We denote the Hamming
distance between two strings x and y as dist(x, y).
Definition 1.1 (Noisy Interpolating Set (NIS)). Let S = (a1 , . . . , am ) ∈ (Fn )m be a list of points (not
necessarily distinct) in Fn . We say that S is an (n, d, ε)-Noisy Interpolating Set (NIS) if there exists an
algorithm AS such that for every q ∈ Fd [x1 , . . . , xn ] and for every vector e = (e1 , . . . , em ) ∈ Fm such that
|{i ∈ [m] | ei ≠ 0}| ≤ ε · m ,
the algorithm AS , when given as input the list of values (q(a1 ) + e1 , . . . , q(am ) + em ), outputs the polyno-
mial q (as a list of its coefficients). We say that S is a proper NIS if the points a1 , . . . , am are distinct. If S
is a proper NIS we can treat it as a subset S ⊆ Fn .
1 It is possible to extend our results to other finite fields (see Section 5); however, it requires more notation and can easily be
obtained from the results for fields of prime order.
Let S = {S^(n)}_{n∈N} be a sequence such that for every n ∈ N we have that S^(n) is an (n, d, ε)-NIS (d and
ε are fixed for all n). We say that S has an efficient interpolation algorithm if there exists a polynomial
time algorithm M(n, L) that takes as input an integer n and a list L of values in F such that the restriction
M(n, ·) has the same behavior as the algorithm AS(n) described above (in places where there is no risk of
confusion we will omit the superscript (n) and simply say that S has an efficient interpolation algorithm).
1.1.1 Multiset NIS
Our first theorem shows how to use a NIS for degree-1 polynomials in order to create a NIS for higher-
degree polynomials. In order to state the theorem we need the following notation. For two lists2 of points
S = (a1 , . . . , am ) ∈ (Fn )m and T = (b1 , . . . , bℓ ) ∈ (Fn )ℓ we define the sum S ⊕ T to be the list
(ai + bj )_{i∈[m], j∈[ℓ]} ∈ (Fn )^{m·ℓ} .
The reason for the notation is to make a clear distinction between sets and multi-sets. In particular we
shall use the notation S + T to denote set addition. That is, for two sets S and T we define
S + T = {s + t | s ∈ S, t ∈ T } .
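To make the distinction concrete, here is a minimal Python sketch (ours, not from the paper) contrasting the two operations over a small prime field; the prime p = 5 and the sample points are arbitrary choices for illustration.

```python
def list_sum(S, T, p):
    """The list (multiset) sum: one entry per pair (a_i, b_j); repetitions are kept."""
    return [tuple((a + b) % p for a, b in zip(s, t)) for s in S for t in T]

def set_sum(S, T, p):
    """Set addition S + T = {s + t | s in S, t in T}: repetitions collapse."""
    return set(list_sum(S, T, p))

S = [(0, 1), (1, 0)]
T = [(1, 1), (0, 2)]
print(len(list_sum(S, T, 5)))  # 4 entries, one per pair
print(len(set_sum(S, T, 5)))   # 3 distinct points: (0,1)+(1,1) equals (1,0)+(0,2)
```

The collision in the last line is exactly why the two notions of NIS (multiset versus proper) must be treated separately.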
Theorem 1.2. Let 0 < ε1 < 1/2 be a real number. Let S1 be an (n, 1, ε1 )-NIS and for each d > 1 let
Sd = Sd−1 ⊕ S1 . Then for every d > 1 the set Sd is an (n, d, εd )-NIS with εd = (ε1 /2)^d . Moreover, if S1
has an efficient interpolation algorithm, then so does Sd .
Combining Theorem 1.2 with known constructions of error-correcting codes (in order to define S1 )
we get the following corollary:
Corollary 1.3. For every field F of prime order and for every d > 0, there exists an ε > 0 and a
family S = {S^(n)}_{n∈N} such that for every n ∈ N, S^(n) is an (n, d, ε)-NIS, and such that S has an efficient
interpolation algorithm. Moreover, for each n ∈ N we have |S^(n)| = O(n^d) and it is possible to generate
the set S^(n) in time poly(n).
1.1.2 Proper NIS
We are able to prove an analog of Theorem 1.2 for proper NIS. We would like to show that if S1 is a
proper (n, 1, ε1 )-NIS then the sets Sd = Sd−1 + S1 (this time we use the operation of set addition) are
proper (n, d, εd )-NISs. To show this we will require the initial set S1 to satisfy a certain natural condition
on the intersections of different ‘shifts’ of the sets Sd−1 . Roughly speaking, this condition will guarantee
that there is no small set of errors in Sd = Sd−1 + S1 that has a large intersection with many shifts of Sd−1
(one can see from the proof of Theorem 1.2 why this is useful). The operation of set addition is defined
as follows: Let A and B be two subsets of Fn . We define A + B = {a + b | a ∈ A, b ∈ B}. For an element
c ∈ Fn and for a subset A ⊆ Fn we denote A + c = {a + c | a ∈ A}.
2 Instead of a list one can have a multi-set in mind, however it is slightly easier to describe and prove our results using the
notion of lists.
Definition 1.4 (Condition³ ⋆k ). Let S ⊆ Fn . Let S0 = {0} and for each d ≥ 1 let Sd = Sd−1 + S. Let
k > 0 be an integer. For x ∈ Sd we let Nd (x) = |{b ∈ S | x ∈ Sd−1 + b}|. We say that S satisfies condition
⋆k if for every 0 < d ≤ k we have
|{x ∈ Sd | Nd (x) > d}| ≤ |Sd−2 | .
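For small parameters this condition can be checked by brute force. The following Python sketch (our illustration, not part of the paper) builds the layers Sd, computes the multiplicities Nd(x) directly, and reads the bound for d = 1 as 0; the examples over F5 are arbitrary.

```python
def sumset(A, B, p):
    """Set addition over F_p^n, componentwise mod p."""
    return {tuple((x + y) % p for x, y in zip(a, b)) for a in A for b in B}

def satisfies_star_k(S, k, p):
    """Brute-force check of the condition for a set S of points in F_p^n."""
    n = len(next(iter(S)))
    layers = [{tuple([0] * n)}]            # layers[d] = S_d = S_{d-1} + S
    for _ in range(k):
        layers.append(sumset(layers[-1], S, p))
    for d in range(1, k + 1):
        # N_d(x) = |{b in S : x - b in S_{d-1}}|
        heavy = [x for x in layers[d]
                 if sum(tuple((xi - bi) % p for xi, bi in zip(x, b)) in layers[d - 1]
                        for b in S) > d]
        bound = len(layers[d - 2]) if d >= 2 else 0
        if len(heavy) > bound:
            return False
    return True

print(satisfies_star_k({(1,), (2,)}, 2, 5))            # True
print(satisfies_star_k({(a,) for a in range(5)}, 2, 5))  # False: every x in S_2 has N_2(x) = 5 > 2
```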
We are now ready to state the analog of Theorem 1.2 for proper NISs.
Theorem 1.5. Let 0 < ε1 < 1/2 be a real number and let k > 0 be an integer. There exists a constant C0 ,
depending only on ε1 and k, such that for all n > C0 the following holds: Let S1 be a proper (n, 1, ε1 )-NIS
and for each d > 1 let Sd = Sd−1 + S1 . Suppose S1 satisfies condition ⋆k . Then, for every 1 < d ≤ k the
set Sd is a (proper) (n, d, εd )-NIS with
εd = (1/d!) · (ε1 /3)^d .
Moreover, if S1 has an efficient interpolation algorithm, then so does Sd .
As before, combining Theorem 1.5 with known constructions of error-correcting codes gives the
following corollary:
Corollary 1.6. For every field F of prime order and for every d > 0, there exists an ε > 0 and a family
S = {S^(n)}_{n∈N} such that for every n ∈ N, S^(n) is a proper (n, d, ε)-NIS with an efficient interpolation
algorithm. Moreover, for each n ∈ N we have |S^(n)| = O(n^d) and it is possible to generate the set S^(n) in
time poly(n).
It is easy to see that any (n, d, ε)-proper NIS induces a puncturing of the Reed-Muller code RM(n, d)
that has an efficient decoding from a fraction ε of errors. (We keep the coordinates of the RM-code
corresponding to the points in the NIS.) The following corollary is therefore immediate. (See Section 2.2
for the definition of a good family of codes.)
Corollary 1.7. For every field F of prime order and for every d > 0, there exists an ε > 0 and a family
{Cn } of good codes such that Cn is a punctured RM(n, d)-code. Furthermore, the family {Cn } has
efficient encoding and decoding algorithms that work against a constant fraction of errors.
1.2 Organization
In Section 2 we give preliminary results that we will use in the rest of the paper. In Section 3 we prove
Theorem 1.2 and Corollary 1.3. In Section 4, we prove Theorem 1.5 and Corollary 1.6. Section 5 contains
a sketch of an argument showing how to extend our results from fields of prime order to any finite field.
Section 6 describes a construction of a family of error-correcting codes with certain special properties
that is used in Section 4.
3 Pronounced “Star k.”
2 Preliminaries
Let q ∈ F[x1 , . . . , xn ] be a polynomial. We denote by deg(q) the total degree of q. The i-th homogeneous
part of q is the sum of all monomials in q of degree i, and is denoted by qi . We denote q≤i = ∑_{j=0}^{i} qj .
We now give some useful facts and definitions regarding partial derivatives of polynomials and error-
correcting codes.
2.1 Partial and directional derivatives
In this section we define partial and directional derivatives of polynomials over finite fields and discuss
some of their properties.
We start by defining the partial derivatives of a monomial. Let M(x) = x1^c1 · . . . · xn^cn be a monomial in
n variables, so that ci ≥ 0 for all i. The partial derivative of M(x) with respect to xi is zero if ci = 0 and
∂M/∂xi (x) = ci · x1^c1 · . . . · xi^(ci−1) · . . . · xn^cn
if ci > 0. The definition extends to general polynomials by linearity. It is easy to verify that if deg(q) = d
then all of the partial derivatives of q are of degree at most d − 1. Also, since we assume that the individual
degrees are smaller than the characteristic of the field, we have that ∂q/∂xi (x) ≡ 0 iff xi does not appear in
q(x). We denote by
∆q (x) = ( ∂q/∂x1 (x), . . . , ∂q/∂xn (x) )
the partial derivative vector of q. The directional derivative of a polynomial q ∈ F[x1 , . . . , xn ] in direction
a ∈ Fn is denoted by ∂q (x, a) and is defined as
∂q (x, a) = ∑_{i=1}^{n} ai · ∂q/∂xi (x) .
That is, ∂q (x, a) is the inner product of a and ∆q (x).
The following lemma gives a more “geometric” view of the partial derivatives of q by connecting
them to differences of q along certain directions.
Lemma 2.1. Let q ∈ Fd [x1 , . . . , xn ] and let qd be its homogeneous part of degree d. Let a, b ∈ Fn . Then
q(x + a) − q(x + b) = ∂qd (x, a − b) + E(x) ,
where deg(E) ≤ d − 2 (we use the convention that deg(0) = −∞). That is, the directional derivative of qd
in direction a − b is given by the homogeneous part of degree d − 1 in the difference q(x + a) − q(x + b).
Proof. It is enough to prove the lemma for the case that q is a monomial of degree d. The result will then
follow from linearity and from the fact that the derivatives of all monomials in q of degree smaller than
T HEORY OF C OMPUTING, Volume 7 (2011), pp. 1–18 6
N OISY I NTERPOLATING S ETS FOR L OW-D EGREE P OLYNOMIALS
d are of degree at most d − 2. Let M(x) = x1^c1 · . . . · xn^cn be a monomial of degree d. We observe that the
expression M(x + a) can be written as
M(x + a) = ∏_{i=1}^{n} (xi + ai )^ci = M(x) + ∑_{i=1}^{n} ai · ∂M/∂xi (x) + E1 (x) ,
where deg(E1 ) ≤ d − 2. Now, subtracting M(x + b) from M(x + a), the term M(x) cancels out and we are
left with the desired expression.
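The identity of Lemma 2.1 can be checked mechanically. Below is a small Python sketch (our own, with an arbitrary example over F7) that represents polynomials as dictionaries from exponent tuples to coefficients and verifies that the degree-(d − 1) part of q(x + a) − q(x + b) equals the directional derivative of qd in direction a − b.

```python
from math import comb
from collections import defaultdict

def shift(q, a, p):
    """Compute q(x + a) by expanding each (x_i + a_i)^{c_i} with the binomial theorem."""
    n = len(a)
    out = defaultdict(int)
    for exps, c in q.items():
        partial = {tuple([0] * n): c}
        for i, ci in enumerate(exps):
            nxt = defaultdict(int)
            for e, coef in partial.items():
                for k in range(ci + 1):
                    e2 = list(e); e2[i] = k
                    nxt[tuple(e2)] = (nxt[tuple(e2)]
                                      + coef * comb(ci, k) * pow(a[i], ci - k, p)) % p
            partial = nxt
        for e, coef in partial.items():
            out[e] = (out[e] + coef) % p
    return {e: c for e, c in out.items() if c}

def sub(f, g, p):
    """f - g over F_p."""
    out = defaultdict(int, f)
    for e, c in g.items():
        out[e] = (out[e] - c) % p
    return {e: c for e, c in out.items() if c}

def hom_part(q, d):
    """The degree-d homogeneous part of q."""
    return {e: c for e, c in q.items() if sum(e) == d}

def dir_deriv(q, v, p):
    """Directional derivative: sum_i v_i * dq/dx_i."""
    out = defaultdict(int)
    for e, c in q.items():
        for i, ci in enumerate(e):
            if ci:
                e2 = list(e); e2[i] -= 1
                out[tuple(e2)] = (out[tuple(e2)] + v[i] * ci * c) % p
    return {e: c for e, c in out.items() if c}

p, d = 7, 3
q = {(2, 1): 1}                 # q(x1, x2) = x1^2 * x2, homogeneous of degree 3
a, b = (1, 2), (3, 1)
diff = sub(shift(q, a, p), shift(q, b, p), p)
v = tuple((ai - bi) % p for ai, bi in zip(a, b))
print(hom_part(diff, d - 1) == dir_deriv(hom_part(q, d), v, p))  # True
```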
The next easy lemma shows that a homogeneous polynomial q can be reconstructed from its partial
derivatives.
Lemma 2.2. Let q ∈ Fd [x1 , . . . , xn ] be a homogeneous polynomial of degree at least 1. Given the vector
of partial derivatives ∆q (x), it is possible to reconstruct q in polynomial time.
Proof. We go over all monomials of degree ≤ d and find out the coefficient they have in q using the
following method. For a monomial M let i be the first index such that xi appears in the monomial M with
positive degree. Consider ∂q/∂xi (x) and check whether the coefficient of the derivative of that monomial is
zero or not. For example, if we want to “decode” the monomial x1^5 · x2^3 we will look at the coefficient of
x1^4 · x2^3 in ∂q/∂x1 (x). To get the coefficient in q we have to divide it by the degree of xi in the monomial, that
is by 5 (remember that the individual degrees are smaller than p = |F| = char(F)).
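The procedure in this proof is easy to implement. A Python sketch (ours, representing polynomials as dictionaries from exponent tuples to coefficients; the coefficient 2 in the example is arbitrary):

```python
def partial_derivs(q, n, p):
    """All n partial derivatives of q over F_p."""
    ds = [{} for _ in range(n)]
    for e, c in q.items():
        for i, ci in enumerate(e):
            if ci:
                e2 = list(e); e2[i] -= 1
                ds[i][tuple(e2)] = (ds[i].get(tuple(e2), 0) + ci * c) % p
    return ds

def reconstruct(derivs, p):
    """Lemma 2.2 sketch: read each coefficient off the derivative with respect to the
    monomial's first variable of positive degree, then divide by that degree
    (invertible in F_p since individual degrees are below p)."""
    q = {}
    for i, di in enumerate(derivs):
        for e, c in di.items():
            if any(e[j] for j in range(i)):
                continue          # this monomial is handled by an earlier derivative
            m = list(e); m[i] += 1
            q[tuple(m)] = (c * pow(m[i], p - 2, p)) % p
    return q

q = {(5, 3): 2}                   # 2 * x1^5 * x2^3 over F_7, as in the proof's example
print(reconstruct(partial_derivs(q, 2, 7), 7) == q)  # True
```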
2.2 Error-correcting codes
We start with the basic definitions of error-correcting codes. Let F be a field and let E : Fn → Fm . Denote
by C = E(Fn ) the image of E. Then C is called an [m, n, k]-code if for any two codewords E(v), E(u) ∈ C,
where u ≠ v, we have that dist(E(u), E(v)) ≥ k (“dist” denotes the Hamming distance). We refer to m as
the block length of C and to k as the minimal distance of C. We denote by R = n/m the relative rate of C
and by δ = k/m the relative minimal distance of C, and say that C is an [R, δ ]-code. When E is a linear
mapping we say that C is a linear code and call the m × n matrix computing E the generator matrix of C.
A map D : Fm → Fn can correct t errors (with respect to E) if for any v ∈ Fn and any w ∈ Fm such that
dist(E(v), w) ≤ t we have that D(w) = v. Such a D is called a decoding algorithm for C.
A family of codes {Ci }, where Ci is an [Ri , δi ]-code of block length mi , has constant rate if there
exists a constant R > 0 such that for all codes in the family it holds that Ri ≥ R. The family has a linear
distance if there exists a constant δ > 0 such that for all codes in the family we have δi ≥ δ . In such a
case we say that the family is a family of [R, δ ]-codes. If a family of codes as above has limi→∞ mi = ∞, a
constant rate and a linear minimal distance then we say that the family is a family of good codes and that
the codes in the family are good. Similarly, we say that the family of codes has a decoding algorithm for
a fraction τ of errors if for each Ci there is a decoding algorithm Di that can correct a τ fraction of errors.
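As a toy illustration of these definitions (not a construction used in the paper), the r-fold repetition code over F2 is an [rn, n, r]-code whose majority decoder corrects ⌊(r − 1)/2⌋ errors per block:

```python
from collections import Counter

def encode_rep(v, r=3):
    """Repeat each symbol r times: an [r*len(v), len(v), r]-code."""
    return [x for x in v for _ in range(r)]

def decode_rep(w, r=3):
    """Majority vote inside each block of r repeats."""
    return [Counter(w[i * r:(i + 1) * r]).most_common(1)[0][0]
            for i in range(len(w) // r)]

w = encode_rep([1, 0, 1])
w[0] ^= 1              # corrupt one coordinate
print(decode_rep(w))   # [1, 0, 1]
```

The repetition code has terrible rate (R = 1/r); the constructions used later in the paper need good families, which combine constant rate with linear distance.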
When C is a linear code we define the dual of C in the following way:
C⊥ := {y ∈ Fm : ⟨y, x⟩ = 0 for all x ∈ C} .
Lemma 2.3. Let C be an [m, n, k]-code over F such that C has an efficient decoding algorithm that can
correct an α fraction of errors. For i ∈ [m] let ai ∈ Fn be the i-th row of the generator matrix of C. Then,
1. Let S0 = (a1 , . . . , am , 0̄, . . . , 0̄) ∈ (Fn )2m . Then S0 is an (n, 1, α/2)-NIS with an efficient interpolation
algorithm.
2. Let S = (a1 , . . . , am ) ∈ (Fn )m and suppose that the maximal Hamming weight of a codeword in C is
smaller than (1 − 2α) · m. Then S is an (n, 1, α)-NIS with an efficient interpolation algorithm.
Proof.
1. Let q be a degree-1 polynomial. The interpolation algorithm for S0 will work as follows: we first
look at the values of q(x) on the last m points (the zeros). The majority of these values will be q(0)
which will give us the constant term in q. Let q1 (x) = q(x)−q(0) be the linear part of q. Subtracting
q(0) from the values in the first m coordinates we reduce ourselves to the problem of recovering
the homogeneous linear function q1 from its values on (a1 , . . . , am ) with at most α · m errors. This
task is achieved using the decoding algorithm for C, since the vector (q1 (a1 ), . . . , q1 (am )) is just
the encoding of the vector of coefficients of q1 with the code C.
2. Here we cannot decipher the constant term of q(x) so easily. Let (v1 , . . . , vm ) ∈ Fm be the list of
input values given to us (vi = q(ai ) for at least a 1 − α fraction of the indices i). We will go over
all p = |F| possible choices for q(0) and for each “guess” c ∈ F do the following:
Subtract c from the values (v1 , . . . , vm ) and then use the decoding algorithm of C to decode the
vector Vc = (v1 − c, . . . , vm − c). It is clear that for c = q(0), this procedure will give us the list of
coefficients of q(x) (without the constant term). Our task is then to figure out which invocation of
the decoding algorithm is the correct one.
Say that the decoding algorithm, on input Vc , returned a linear polynomial qc (x) (there is no constant
term). We can then check whether qc (ai ) + c is indeed equal to vi for a 1 − α fraction of the
indices i. If we can show that this test succeeds only for a single c ∈ F then we are done and
the lemma follows. Suppose for contradiction that there are two linear polynomials qc (x) and
qc′ (x) such that both agree with a fraction 1 − α of the input values. This means that there exist
two codewords Wc , Wc′ ∈ Fm (the encodings of qc and qc′ ) in C such that dist(Vc , Wc ) ≤ α · m and
dist(Vc′ , Wc′ ) ≤ α · m. Therefore
dist(Vc − Vc′ , Wc − Wc′ ) ≤ 2α · m .
The vector Vc − Vc′ has the value c′ − c ≠ 0 in all of its coordinates and so we get that the Hamming
weight of the codeword Wc − Wc′ is at least (1 − 2α) · m, contradicting the properties of C.
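The guess-and-check loop in part 2 is easy to sketch in Python. Here `decode` stands for an assumed black-box decoder for the code C (returning the coefficient vector of a homogeneous linear polynomial, or None on failure); everything below is our illustration, not code from the paper.

```python
def interpolate_linear(values, points, decode, alpha, p):
    """Try every guess c for q(0); decode the shifted values; accept the guess
    whose decoded polynomial agrees with >= (1 - alpha) of the inputs."""
    m = len(values)
    for c in range(p):
        coeffs = decode([(v - c) % p for v in values])
        if coeffs is None:
            continue
        agree = sum((sum(ci * xi for ci, xi in zip(coeffs, a)) + c) % p == v
                    for a, v in zip(points, values))
        if agree >= (1 - alpha) * m:
            return coeffs, c
    return None
```

In a test one can plug in a brute-force nearest-codeword search as `decode`; the lemma's condition on maximal codeword weight is what guarantees, in general, that at most one guess passes the agreement test.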
Lemma 2.4. Let C be an [m, n, k]-code over F such that the dual C⊥ has minimal distance > r. Let
S ⊆ Fn be the set of points corresponding to the rows of the generator matrix of C. Then S satisfies
condition ⋆s for every s ≤ r/2.
Proof. Let 0 < d ≤ r/2 be an integer and let Sd = S + · · · + S (d times). We will prove the lemma by
showing that
{x ∈ Sd | Nd (x) > d} ⊆ Sd−2 ,
(the reader is encouraged to review Definition 1.4 for the definition of Nd (x)). Let x ∈ Sd . We can thus
write
x = a1 + · · · + ad ,
with a1 , . . . , ad ∈ S. Suppose now that Nd (x) > d. This means that there exists a way to write x as a sum
of elements in S as follows
x = b1 + · · · + bd−1 + c ,
with c 6∈ {a1 , . . . , ad }. Combining the two ways to write x we get
a1 + · · · + ad − b1 − · · · − bd−1 − c = 0 .
Since the distance of the dual is larger than 2d, the above equality must be a trivial one. That is, every
vector must appear with a coefficient that is a multiple of p (the characteristic). Since c ∉ {a1 , . . . , ad },
we infer that c must appear as one of the values bi (otherwise it would have coefficient −1). This means
that x is in Sd−2 .
3 Constructing a NIS
In this section we prove Theorem 1.2 and Corollary 1.3. We start with the proof of Corollary 1.3, which
is a simple application of known results in coding theory.
Proof of Corollary 1.3. To prove the corollary, all we need to do is to construct, for all n ∈ N, an (n, 1, ε)-
NIS S1^(n) with an efficient interpolation algorithm and ε which does not depend on n. The corollary will
then follow by applying Theorem 1.2.
To construct S1^(n) we take a good family of linear codes {Cn }, where Cn is an [mn , n, kn ]-code over F
(see Section 2.2) that has an efficient decoding algorithm that can decode a constant fraction of errors and
such that the generator matrix of Cn can be found in polynomial time (such families of codes are known
to exist, see [10, Chap. 10]). Let a1 , . . . , amn ∈ Fn be the rows of its generator matrix. We define S1^(n) to
be the list of points (a1 , . . . , amn , b1 , . . . , bmn ), where for each j ∈ [mn ] we set bj = 0. That is, S1^(n) contains
the rows of the generator matrix of a good code, together with the point 0, taken with multiplicity mn .
Lemma 2.3 now shows that S1^(n) satisfies all the required conditions.
3.1 Proof of Theorem 1.2
Let S1 = (a1 , . . . , am ) ∈ (Fn )m be an (n, 1, ε1 )-NIS of size |S1 | = m. Let Sd−1 = (b1 , . . . , br ) ∈ (Fn )r be
an (n, d − 1, εd−1 )-NIS of size |Sd−1 | = r. Let Ad−1 and A1 be interpolation algorithms for Sd−1 and S1
respectively, as described in Definition 1.1. Let Sd = Sd−1 ⊕ S1 . We will prove the theorem by showing
that Sd has an interpolation algorithm that makes at most a polynomial number of calls to Ad−1 and to A1
and can “correct” a fraction
εd = (ε1 · εd−1 )/2
of errors. We will describe the algorithm Ad and prove its correctness simultaneously (the number of calls
to Ad−1 and A1 will clearly be polynomial and so we will not count them explicitly).
Fix q ∈ Fd [x1 , . . . , xn ] to be some degree-d polynomial and let qd be its homogeneous part of degree
d. Let us denote Sd = (c1 , . . . , cmr ), where each ci is in Fn . We also denote by e = (e1 , . . . , emr ) ∈ Fmr the
list of “errors,” so that |{i ∈ [mr] | ei 6= 0}| ≤ εd · mr. The list Sd can be partitioned in a natural way into
all the “shifts” of the list Sd−1 . Let us define for each i ∈ [m] the list T (i) = (b1 + ai , . . . , br + ai ) ∈ (Fn )r .
We thus have that Sd is the concatenation of T (1) , . . . , T (m) (the order does not matter here). We can also
partition the list of errors in a similar way into m lists e(1) , . . . , e(m) , each of length r, such that e(i) is the
list of errors corresponding to the points in T (i) .
We say that an index i ∈ [m] is good if the fraction of errors in T (i) is at most εd−1 /2; otherwise we
say that i is bad. That is, i is good if
|{ j ∈ [r] | ej^(i) ≠ 0}| ≤ (εd−1 /2) · |T (i) | = (εd−1 /2) · r .
Let E = {i ∈ [m] | i is bad}. From the bound on the total number of errors we get that
|E| ≤ ε1 · m . (3.1)
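For completeness, the counting behind (3.1): each bad index contributes more than (εd−1/2) · r errors, while the total number of errors is at most εd · mr, so (sketched in LaTeX)

```latex
|E| \cdot \frac{\varepsilon_{d-1}}{2} \cdot r
  \;<\; \#\{\text{errors in all } T^{(i)},\ i \in E\}
  \;\le\; \varepsilon_d \cdot m r
  \;=\; \frac{\varepsilon_1 \, \varepsilon_{d-1}}{2} \cdot m r
  \quad\Longrightarrow\quad
  |E| \;\le\; \varepsilon_1 \cdot m .
```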
Overview The algorithm is divided into three steps. In the first step we look at all pairs (T (i) , T ( j) ) and
from each one attempt to reconstruct, using Ad−1 , the directional derivative ∂qd (x, ai − a j ). We will claim
that this step gives the correct output for most pairs (T (i) , T ( j) ) (namely, for all pairs in which both i and j
are good). In the second step we take all the directional derivatives obtained in the previous step (some of
them erroneous) and from them reconstruct, using A1 , the vector of derivatives ∆qd (x) and so also recover
qd (x). In the last step of the algorithm we recover the polynomial q≤d−1 (x) = q(x) − qd (x), again using
Ad−1 . This will give us q(x) = q≤d−1 (x) + qd (x).
Step I: Let i ≠ j ∈ [m] be two good indices as defined above. We will show how to reconstruct
∂qd (x, ai − aj ) from the values in T (i) , T (j) . Recall that we have at our disposal two lists of values
Li = ( q(b1 + ai ) + e1^(i) , . . . , q(br + ai ) + er^(i) )
and
Lj = ( q(b1 + aj ) + e1^(j) , . . . , q(br + aj ) + er^(j) ) .
Taking the difference of the two lists we obtain the list
Lij = ( q(b1 + ai ) − q(b1 + aj ) + e1^(i) − e1^(j) , . . . , q(br + ai ) − q(br + aj ) + er^(i) − er^(j) ) .
Observe that since i and j are both good, the term eℓ^(i) − eℓ^(j) is nonzero for at most
εd−1 · r values of ℓ ∈ [r]. Therefore, we can use algorithm Ad−1 to recover the degree-(d − 1) polynomial
Qij (x) = q(x + ai ) − q(x + aj ) from the list Lij . From Lemma 2.1 we see that throwing away all monomials
of degree less than d − 1 in Qij leaves us with ∂qd (x, ai − aj ).
We carry out this first step for all pairs (T (i) , T (j) ) and obtain m^2 homogeneous polynomials of
degree d − 1; let us denote them by Rij (x). We know that if i and j are both good then
Rij (x) = ∂qd (x, ai − aj ) .
If either i or j is bad, we do not know anything about Rij (x) (we can assume without loss of generality
that it is homogeneous of degree d − 1, since if it is not then clearly the pair is bad and we can modify it
to any polynomial we wish without increasing the total number of errors).
Step II: In this step we take the polynomials Rij obtained in the first step and recover from them
the vector of derivatives ∆qd (x) (then, using Lemma 2.2, we will also have qd (x)). We start by setting up
some notation: The set of degree-(d − 1) monomials is indexed by the set
Id−1 = {(α1 , . . . , αn ) | 0 ≤ αi ≤ p − 1 , α1 + · · · + αn = d − 1} .
We write x^α = ∏i xi^αi and denote by coef(x^α , h) the coefficient of the monomial x^α in a
polynomial h(x). Let α ∈ Id−1 and define the degree-1 polynomial
Uα (y1 , . . . , yn ) = ∑_{ℓ=1}^{n} coef( x^α , ∂qd /∂xℓ ) · yℓ .
Observe that
∂qd (x, ai − aj ) = ∑_{ℓ=1}^{n} (ai − aj )ℓ · ∂qd /∂xℓ (x) = ∑_{α∈Id−1} x^α · Uα (ai − aj ) .
Therefore, for each pair i, j such that both i and j are good we can get the (correct) values Uα (ai − a j ) for
all α ∈ Id−1 by observing the coefficients of Ri j .
Fix some α ∈ Id−1 . Using the procedure implied above for all pairs i ≠ j ∈ [m], we get m^2 values
uij ∈ F such that if i and j are good then
uij = Uα (ai − aj ) .
We now show how to recover Uα from the values ui j . Repeating this procedure for all α ∈ Id−1 will give
us ∆qd (x).
Since we fixed α, let us denote U(y) = Uα (y). We have at our disposal a list of values (uij )_{i, j∈[m]}
such that there exists a set E = {i ∈ [m] | i is bad} of size |E| ≤ ε1 · m such that if i and j are not
in E then uij = U(ai − aj ). Let us partition the list (uij ) according to the index j into m disjoint
lists: Bj = (u1j , . . . , umj ). If j ∉ E then the list Bj contains the values of the degree-1 polynomial
Uj (y) = U(y − aj ) on the set S1 with at most ε1 · m errors (the errors will correspond to indices i ∈ E).
Therefore, we can use A1 in order to reconstruct U j , and from it U. The “problem” is that we do not
know which values j are good. This problem can be easily solved by applying the above procedure for all
j ∈ [m] and then taking the majority vote. Since all the good values of j will return the correct U(y) we
will have a clear majority of at least a 1 − ε1 fraction. Combining all of the above gives us the polynomial
qd (x) and so the second step is complete.
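The shift-decode-and-vote procedure of this step can be sketched as follows (our Python illustration over F5 with n = 1; `decode_affine_bruteforce` is a toy stand-in for A1, not the paper's algorithm).

```python
from collections import Counter

def decode_affine_bruteforce(vals, pts, p, max_err):
    """Toy stand-in for A_1: nearest degree-1 polynomial w*y + c in one variable."""
    best = min(((w, c) for w in range(p) for c in range(p)),
               key=lambda wc: sum((wc[0] * y + wc[1]) % p != v
                                  for y, v in zip(pts, vals)))
    errs = sum((best[0] * y + best[1]) % p != v for y, v in zip(pts, vals))
    return best if errs <= max_err else None

def recover_U(u, pts, p, max_err):
    """Column B_j = (u[i][j])_i holds noisy values of U_j(y) = U(y - a_j);
    decode each column, shift back by a_j, and take a majority vote."""
    votes = Counter()
    for j, aj in enumerate(pts):
        res = decode_affine_bruteforce([u[i][j] for i in range(len(pts))],
                                       pts, p, max_err)
        if res is not None:
            w, c = res
            votes[(w, (c + w * aj) % p)] += 1    # U(y) = U_j(y + a_j)
    return votes.most_common(1)[0][0]

p, pts = 5, [0, 1, 2, 3, 4]
u = [[(2 * (i - j)) % p for j in range(5)] for i in range(5)]  # U(y) = 2y
u[4] = [(v + 1) % p for v in u[4]]                             # one bad row of values
print(recover_U(u, pts, p, 1))  # (2, 0): slope 2, zero constant term
```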
Step III: Having recovered q_d in the last step, we now subtract the value q_d(c_i) from the input list of
values (these are the values of q(x) on S_d, with at most an ε_d fraction of errors). This reduces us to the problem of
recovering the degree-(d − 1) polynomial q_{≤d−1}(x) = q(x) − q_d(x) from its values on S_d with a fraction
ε_d of errors. This is easily done by running the algorithm A_{d−1} on the values in each list T(j)
(defined at the beginning of this section) and taking the majority. Since for a good index j the list
T(j) contains at most ε_{d−1} · r errors, and since more than half of the indices j are good, we will get a clear
majority and so be able to recover q_{≤d−1}.
4 Constructing a proper NIS
In this section we prove Theorem 1.5 and Corollary 1.6. As before, we start with the proof of Corollary 1.6
and then go on to prove the theorem.
Proof of Corollary 1.6. This will be similar to the proof of Corollary 1.3. All we need to do is find, for
each n, a proper (n, 1, ε)-NIS that satisfies condition Fd . In order to do so we will need a good family of
codes {Cn }, such that each Cn is an [mn , n, kn ]-code, which satisfies the following three conditions:
1. {Cn } has an efficient decoding algorithm that can decode a constant fraction α = Ω(1) of errors.
2. The minimal distance of the dual Cn⊥ is larger than 2 · d.
3. Each codeword in Cn has Hamming weight at most (1 − 2α) · mn .
Such a family of codes can be constructed explicitly using standard techniques (see Section 6).
The points in S1 are given by the rows of the generator matrix of the code Cn . The rows are all distinct
since otherwise we would have a codeword of weight 2 in the dual. From Section 2.4 we get that S_1
satisfies condition F_d, and from item 2 of Section 2.3 we get that S_1 is a proper (n, 1, α)-NIS with an
efficient interpolation algorithm.
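The link between degree-1 interpolation on S_1 and decoding of C_n is worth seeing concretely: evaluating a homogeneous linear polynomial q(x) = ⟨c, x⟩ on the rows of the generator matrix is exactly encoding the coefficient vector c, so noisy interpolation on S_1 amounts to decoding. A minimal sketch over F_2, using the binary [7, 3] simplex code as a stand-in for C_n (an illustration only; this tiny code is not claimed to satisfy all the conditions listed above):

```python
from itertools import product

P = 2
# Rows of a generator matrix for the binary [7,3] simplex code; these seven
# distinct vectors play the role of the points of S1.
G = [(0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]

def evaluate_on_S1(c):
    """Values of the linear polynomial q(x) = <c, x> on the points of S1.
    This list is precisely the encoding of the message c by the code."""
    return [sum(ci * gi for ci, gi in zip(c, g)) % P for g in G]

def interpolate(noisy_values):
    """Noisy interpolation of q on S1 = nearest-codeword decoding (brute
    force over the 2^3 messages; the simplex code has distance 4, so a
    single corrupted evaluation is tolerated)."""
    return min(product(range(P), repeat=3),
               key=lambda c: sum(a != b
                                 for a, b in zip(evaluate_on_S1(c), noisy_values)))
```

Flipping any single evaluation still leaves the true coefficient vector as the unique nearest codeword.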
4.1 Proof of Theorem 1.5
The proof of Theorem 1.5 uses the same arguments as the proof of Theorem 1.2. The only difference
is that the “shifts” of Sd−1 (denoted by Ti , i = 1, . . . , m) might intersect. However, using the fact that S1
satisfies condition F_k for k ≥ d, we will be able to control the effect of these intersections. To be more
precise, the only place where the proof differs from the proof of Theorem 1.2 is where we claim that
there are not too many "bad" indices i ∈ [m] (see below for details). The rest of the proof is identical,
and so we will not repeat the parts that we do not have to.
We start by setting up the notation for the proof. Let S_1 ⊆ F^n be a proper (n, 1, ε_1)-NIS such that
|S_1| = m and let S_{d−1} ⊆ F^n be a proper (n, d − 1, ε_{d−1})-NIS obtained by summing S_1 with itself d − 1
times. Let A_{d−1} and A_1 be interpolation algorithms for S_{d−1} and S_1, respectively. Let S_d = S_{d−1} + S_1.
As before, the theorem will follow by giving an interpolation algorithm for S_d that makes at most a
polynomial number of calls to A_{d−1} and to A_1 and can "correct" a fraction

ε_d = (ε_{d−1} · ε_1) / (3d)    (4.1)
of errors (the expression for εd appearing in the theorem can be deduced from this equation by a simple
induction).
Fix a degree-d polynomial q ∈ F_d[x_1, ..., x_n] and let q_d be its homogeneous part of degree d. Write
S_1 = {a_1, ..., a_m} (the a_i are distinct). The set S_d can be divided into m (possibly overlapping) sets
T_1, ..., T_m such that

T_i = S_{d−1} + a_i    and    S_d = ∪_{i∈[m]} T_i .
Our algorithm is given a list of values of q(x) on the set Sd with at most εd · |Sd | errors. Let B ⊆ Sd be the
set of points in Sd such that the value of q(x) on these points contains an error. We therefore have
|B| ≤ εd · |Sd | . (4.2)
We say that an index i ∈ [m] is bad if

|T_i ∩ B| > (ε_{d−1}/2) · |T_i| = (ε_{d−1}/2) · |S_{d−1}| .
The next claim bounds the number of bad values i.
Claim 4.1. Let E = {i ∈ [m] | i is bad}. Then
|E| ≤ ε1 · m .
If we can prove Claim 4.1 then the rest of the proof of Theorem 1.5 will be identical to the proof of
Theorem 1.2 (starting from the part titled “Step I”), as the reader can easily verify. Indeed Claim 4.1 is
the analog of Equation (3.1) for the case of proper sets. We therefore end this section with a proof of
Claim 4.1.
4.1.1 Proof of Claim 4.1
In order to prove Claim 4.1 we will require the following auxiliary claim on the growth rate of the sumsets
of S.
Claim 4.2. Let S ⊆ Fn be of size m such that S satisfies condition Fk and let the sets S0 , S1 , . . . , Sk be
as in Definition 1.4 (the sumsets of S). Then there exists a constant C1 depending only on k such that if
|S| > C_1 then for every d ≤ k we have

|S_d| ≥ (1/(2d)) · |S_{d−1}| · |S| .    (4.3)
Proof. The proof is by induction on d. For d = 1 the bound in (4.3) is trivial. Suppose the bound holds
for d − 1 and let us prove it for d. Denote S = {a_1, ..., a_m} and T_i = S_{d−1} + a_i. Clearly S_d = ∪_{i=1}^{m} T_i.
We will lower bound the number of points in S_d in the following way: for each i we take all points
x ∈ T_i such that N_d(x) ≤ d and then divide the number of points we get by d. Since there are at most
|S_{d−2}| elements x ∈ S_d with N_d(x) > d, we get the bound

|S_d| ≥ (1/d) · ∑_{i=1}^{m} (|T_i| − |S_{d−2}|)
      = (1/d) · |S| · |S_{d−1}| − (1/d) · |S| · |S_{d−2}| .
Using the inductive hypothesis on the second term yields

|S_d| ≥ (1/d) · |S| · |S_{d−1}| − ((2d − 2)/d) · |S_{d−1}|
      ≥ (1/(2d)) · |S| · |S_{d−1}| ,

where the last inequality holds for sufficiently large |S| (as a function of d).
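The growth bound of Claim 4.2 can be sanity-checked numerically for a concrete set (an illustration, of course, not a proof). Here S is taken, as an arbitrary assumption of the sketch, to be the standard basis of F_5^8:

```python
def sumsets(S, k, p):
    """Return [S_0, ..., S_k] where S_0 = {0} and S_d = S_{d-1} + S in F_p^n."""
    n = len(next(iter(S)))
    sets = [{(0,) * n}]
    for _ in range(k):
        sets.append({tuple((x + y) % p for x, y in zip(a, b))
                     for a in sets[-1] for b in S})
    return sets

# One concrete check: S = the standard basis of F_5^8 (illustration only).
p, n, k = 5, 8, 3
S = {tuple(int(i == j) for i in range(n)) for j in range(n)}
sets = sumsets(S, k, p)
for d in range(1, k + 1):
    # the growth bound of Claim 4.2: |S_d| >= |S_{d-1}| * |S| / (2d)
    assert 2 * d * len(sets[d]) >= len(sets[d - 1]) * len(S)
```

For this S the sumset sizes are the numbers of multisets of basis vectors, which comfortably satisfy the bound for every d ≤ 3.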
Proof of Claim 4.1. Let t = |E| and suppose toward a contradiction that t > ε_1 · m = ε_1 · |S|. This means that
there are at least t sets T_{i_1}, ..., T_{i_t} such that |B ∩ T_{i_j}| ≥ (ε_{d−1}/2) · |S_{d−1}|. From condition F_d we know
that, apart from a set of size at most |S_{d−2}|, each "bad" element appears in at most d of the sets T_{i_j}. Therefore

ε_d · |S_d| ≥ |B| ≥ (1/d) · ε_1 · |S| · ( (ε_{d−1}/2) · |S_{d−1}| − |S_{d−2}| )
            = ((ε_{d−1} · ε_1)/(2d)) · |S_{d−1}| · |S| − (ε_1/d) · |S| · |S_{d−2}|
            ≥ ((ε_{d−1} · ε_1)/(2d)) · |S_d| − (ε_1/d) · |S| · |S_{d−2}| .
Using Claim 4.2 twice we get that

|S| · |S_{d−2}| ≤ 2(d − 1) · |S_{d−1}| ≤ 4d(d − 1) · |S_d| / |S| .

Plugging this inequality into the previous one we get that

ε_d · |S_d| ≥ ((ε_{d−1} · ε_1)/(2d)) · |S_d| − ε_1 · 4(d − 1) · |S_d| / |S| ,

which after division by |S_d| gives

ε_d ≥ (ε_{d−1} · ε_1)/(2d) − ε_1 · 4(d − 1)/|S| > (ε_{d−1} · ε_1)/(3d)

(when m = |S| is sufficiently large, as a function of ε_1 and d), contradicting the definition of ε_d in Equation (4.1).
5 Other finite fields
We briefly sketch how one can extend Theorems 1.2 and 1.5 to non-prime finite fields.
The main difference is that over extension fields one has to consider the weight-degree instead of the
degree. Specifically, let F be a field of order p^r for a prime p. Given an integer d < p^r, let

d = ∑_{i=0}^{r−1} a_i p^i ,  0 ≤ a_i < p,

be its base-p expansion. The weight of d, denoted wt(d), is defined as wt(d) = ∑_{i=0}^{r−1} a_i. For
example, the weight of 14 over F_27 is 4, as 14 = 1 · 9 + 1 · 3 + 2 and 1 + 1 + 2 = 4. The weight-degree of a
monomial x^d is simply the weight of d. Similarly, the weight-degree of a multivariate monomial ∏_{i=1}^{n} x_i^{d_i}
is ∑_{i=1}^{n} wt(d_i). The weight-degree of a polynomial q(x_1, ..., x_n) is the maximum of the weight-degrees of
its monomials. Intuitively, the weight-degree is the correct notion of degree, as the polynomial x^{p^i} is in
fact an F_p-linear map on F, and so x^{a_i p^i} "behaves" like a polynomial of degree a_i.
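The weight function is just the digit sum of the base-p expansion; a minimal sketch:

```python
def wt(d, p):
    """Weight of d: the digit sum of the base-p expansion of d."""
    s = 0
    while d:
        s, d = s + d % p, d // p
    return s
```

For the example in the text, wt(14, 3) returns 1 + 1 + 2 = 4.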
It is not hard to see that polynomials of weight-degree d in F_{p^r}[x_1, ..., x_n] that map (F_{p^r})^n to F_p
correspond to degree-d polynomials in nr variables over F_p, and vice versa (see, e.g., [4]).
Now, given a polynomial q of weight-degree d over F_{p^r} and any α ∈ F_{p^r}, let q_α(x_1, ..., x_n) be defined
as q_α = Tr(α · q(x_1, ..., x_n)), where Tr is the trace operator; Tr(x) = ∑_{i=0}^{r−1} x^{p^i}. It is easy to see that q_α is
of weight-degree at most d and that q_α : (F_{p^r})^n → F_p. It is also not hard to see that if one can learn all the
values q_α then one can easily reconstruct q (by a simple linear transformation).
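To make the trace operator concrete, here is a toy computation in F_9, represented for this sketch as F_3[t]/(t² + 1) (the choice of irreducible polynomial is an assumption of the illustration): Tr(x) = x + x³ sends every element into the prime field F_3.

```python
# F_9 represented as F_3[t]/(t^2 + 1); the element a + b*t is the pair (a, b).
def mul(x, y):
    a, b = x
    c, d = y
    # (a + bt)(c + dt) = ac + (ad + bc)t + bd*t^2, with t^2 = -1 here
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

def trace(x):
    # Tr(x) = x + x^p = x + x^3 over F_{3^2}
    x3 = mul(mul(x, x), x)
    return ((x[0] + x3[0]) % 3, (x[1] + x3[1]) % 3)

# Every trace value has zero t-coordinate, i.e., Tr maps F_9 into F_3.
assert all(trace((a, b))[1] == 0 for a in range(3) for b in range(3))
```

Since the Frobenius map sends a + bt to a − bt here, the trace of a + bt is simply 2a, which indeed lies in F_3.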
The argument above shows that in order to construct a NIS for polynomials of weight-degree d over
F_{p^r} it is sufficient to construct a NIS for polynomials of degree d in nr variables over F_p. This explains how
Theorems 1.2 and 1.5 can be extended to non-prime finite fields. We thus obtain the following theorem,
which enables an immediate translation of the results stated in Theorems 1.2 and 1.5 and Corollaries 1.3,
1.6 and 1.7 to the context of fields of prime power order.
Theorem 5.1. Let p be a prime number, n, r and d integers, and ε > 0 a real number. Fix a basis for F_{p^r}
over F_p. If S_d is an (nr, d, ε)-NIS over F_p then, when viewed as a collection of n-tuples over F_{p^r} according
to the fixed basis, S_d is an (n, d, ε)-NIS over F_{p^r} (i.e., it is a NIS for polynomials of weight-degree d
over F_{p^r}). Moreover, if S_d has an interpolation algorithm that runs in time T over F_p then it has such
an algorithm also when viewed as a NIS over F_{p^r}. The algorithm can handle the same fraction of errors
and its running time is O(rT) + poly(n). In addition, if S_d is a proper NIS over F_p then it is a proper NIS
over F_{p^r}.
We leave the exact details of the proof to the reader.
6 A family of codes with special properties
In the proof of Corollary 1.6 we used a family of good linear error-correcting codes {C_n}, over a field F
of prime order p, such that C_n has message length n and such that the following three properties hold:
1. The family {C_n} has efficient encoding and decoding algorithms, where the decoder corrects a constant fraction of errors.
2. The dual distance of C_n is ω(1).
3. If the block length of C_n is m_n then every codeword in C_n has Hamming weight at most (1 − δ) · m_n
for some constant δ > 0.
Efficient codes with good dual distance are known to exist over any finite field. Results of this type can
be found, e.g., in [10]. It is possible that some of these codes already possess property (3) above; however,
we could not find a proof of this anywhere. Since we only require the dual distance to be super-constant,
we can give a simple self-contained construction satisfying all three conditions above. We sketch the
argument below without giving a complete proof.
Say we wish to construct a code as above for messages of length n. We pick an integer ℓ such that
ℓ · p^ℓ = Θ(n) (more accurately, we pick ℓ such that, say, ℓ · p^ℓ is at most 0.9n and at least 0.8n/p). Notice
that ℓ < log(n). We construct a field E of order p^ℓ (this can be done deterministically in time poly(n)). We
now take a Reed-Solomon code R over E, of block length p^ℓ, whose messages are univariate polynomials
of degree α · p^ℓ, for some constant α. Notice that this code has minimal distance (1 − α) · p^ℓ (over
E), relative rate α, and its dual distance is at least α · p^ℓ. Let ϕ : E → F^ℓ be the natural representation
of E as ℓ-tuples over F. We can extend ϕ in the natural way to a mapping from E^{p^ℓ} to (F^ℓ)^{p^ℓ}. In
particular, we can think of ϕ(R) as a code over F of the same relative rate and of at least the same minimal
distance. It is not hard to see that there exists an invertible linear transformation A : F^ℓ → F^ℓ such that
v = (v_1, ..., v_{p^ℓ}) ∈ R^⊥ if and only if⁴ (Aϕ(v_1), ..., Aϕ(v_{p^ℓ})) ∈ ϕ(R)^⊥ (there is a slight abuse of notation
here, but the meaning is clear). In particular we get that the minimal distance of ϕ(R)^⊥ is the same as the
minimal distance of R^⊥.
Now, for every 1 ≤ i ≤ p^ℓ we pick a code C̃_i : F^ℓ → F^{m_i}, such that m_i = O(ℓ) and such that ∑_{i=1}^{p^ℓ} m_i = n.
Moreover, we would like the codes C̃_i to satisfy the three properties stated above. It is not hard to play
a little with the parameters to see that such C̃_i and m_i exist. We also note that the C̃_i can be constructed
recursively. We now "concatenate"⁵ the code ϕ(R) with the C̃_i so that C̃_i is applied to the i-th block, of
length ℓ, of ϕ(R). Denote the resulting code by C_n. It is now easy to verify that the block length
of C_n is indeed n, that it has constant rate and linear minimal distance, and that C_n^⊥ has minimal distance ω(1).
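The parameter choice for ℓ is a simple search. The helper `pick_ell` below is hypothetical (not from the paper): it finds the smallest ℓ with ℓ · p^ℓ ≥ 0.8n/p, which for the ranges of n and p checked here also stays below 0.9n.

```python
def pick_ell(n, p):
    """Smallest ell with ell * p**ell >= 0.8 * n / p.  For the ranges of n
    relevant here this also keeps ell * p**ell at most 0.9 * n."""
    ell = 1
    while ell * p ** ell < 0.8 * n / p:
        ell += 1
    return ell
```

For instance, pick_ell(10000, 2) gives ℓ = 9, with ℓ · 2^ℓ = 4608 inside the target range [4000, 9000].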
Acknowledgement
We would like to thank Parikshit Gopalan for bringing the problem to our attention, Emanuele Viola for
sharing the manuscript of [14] with us, and Shachar Lovett and Chris Umans for helpful discussions. We
also thank the anonymous reviewers for comments that improved the presentation of the results. We are
thankful to Laci Babai for helpful comments.
References
[1] Sanjeev Arora, Carsten Lund, Rajeev Motwani, Madhu Sudan, and Mario Szegedy: Proof verification and the hardness of approximation problems. J. ACM, 45(3):501–555, 1998. [doi:10.1145/278298.278306]
[2] Sanjeev Arora and Shmuel Safra: Probabilistic checking of proofs: A new characterization of NP. J. ACM, 45(1):70–122, 1998. [doi:10.1145/273865.273901]
[3] László Babai, Lance Fortnow, Noam Nisan, and Avi Wigderson: BPP has subexponential time simulations unless EXPTIME has publishable proofs. Comput. Complexity, 3(4):307–318, 1993. [doi:10.1007/BF01275486]
[4] Eli Ben-Sasson and Madhu Sudan: Limits on the rate of locally testable affine-invariant codes. Electron. Colloq. on Comput. Complexity (ECCC), 17:108, 2010. [ECCC:TR10-108]
⁴ We would like to have ϕ(R)^⊥ = ϕ(R^⊥); however, this is not true, so we have A that "fixes" this.
⁵ This is not the usual definition of code concatenation; however, it is similar in spirit to the construction of Justesen codes [8].
[5] Andrej Bogdanov: Pseudorandom generators for low degree polynomials. In Proc. 37th STOC, pp. 21–30. ACM Press, 2005. [doi:10.1145/1060590.1060594]
[6] Andrej Bogdanov and Emanuele Viola: Pseudorandom bits for polynomials. SIAM J. Comput., 39(6):2464–2486, 2010. [doi:10.1137/070712109]
[7] Russell Impagliazzo and Avi Wigderson: P=BPP unless E has subexponential circuits: Derandomizing the XOR lemma. In Proc. 29th STOC, pp. 220–229. ACM Press, 1997. [doi:10.1145/258533.258590]
[8] J. Justesen: A class of constructive asymptotically good algebraic codes. IEEE Trans. Inform. Theory, 18(5):652–656, 1972. [doi:10.1109/TIT.1972.1054893]
[9] Shachar Lovett: Unconditional pseudorandom generators for low degree polynomials. Theory of Computing, 5(1):69–82, 2009. [doi:10.4086/toc.2009.v005a003]
[10] F. J. MacWilliams and N. J. A. Sloane: The Theory of Error-Correcting Codes, Part II. North-Holland, 1977.
[11] Ronen Shaltiel and Christopher Umans: Simple extractors for all min-entropies and a new pseudorandom generator. J. ACM, 52(2):172–216, 2005. [doi:10.1145/1059513.1059516]
[12] Madhu Sudan, Luca Trevisan, and Salil Vadhan: Pseudorandom generators without the XOR lemma. J. Comput. System Sci., 62(2):236–266, 2001. [doi:10.1006/jcss.2000.1730]
[13] Amnon Ta-Shma, David Zuckerman, and Shmuel Safra: Extractors from Reed-Muller codes. J. Comput. System Sci., 72(5):786–812, 2006. [doi:10.1016/j.jcss.2005.05.010]
[14] Emanuele Viola: The sum of d small-bias generators fools polynomials of degree d. Comput. Complexity, 18(2):209–217, 2009. [doi:10.1007/s00037-009-0273-5]
AUTHORS
Zeev Dvir
post-doctoral researcher
Princeton University, Princeton, NJ
zeev dvir gmail com
http://www.cs.princeton.edu/~zdvir/
Amir Shpilka
associate professor
Technion – Israel Institute of Technology, Haifa, Israel
shpilka cs technion ac il
http://www.cs.technion.ac.il/~shpilka/
ABOUT THE AUTHORS
Zeev Dvir is currently a postdoc in the Computer Science department at Princeton University.
He received his Ph.D. from the Weizmann Institute in Israel in 2008. His advisors
were Ran Raz and Amir Shpilka. He is interested mainly in questions in computational
complexity that have a connection with classical mathematics.
Amir Shpilka obtained his Ph.D. in Computer Science and Mathematics from the Hebrew
University in Jerusalem in 2001 under the supervision of Avi Wigderson. As of 2005 he
is a CS faculty member at the Technion. He is married to Carmit and has two children.
His research interests lie in Complexity Theory, mainly in Arithmetic Circuit Complexity.
When not working or enjoying his family he likes to read and play chess.