Authors: Dmitry Gavinsky, Alexander A. Sherstov
License: CC-BY-3.0
Theory of Computing, Volume 6 (2010), pp. 227–245
www.theoryofcomputing.org
A Separation of NP and coNP in Multiparty
Communication Complexity
Dmitry Gavinsky Alexander A. Sherstov
Received: January 4, 2010; published: November 16, 2010.
Abstract: We prove that coNP ⊄ MA in the number-on-forehead model of multiparty communication complexity for up to k = (1 − ε) log n players, where ε > 0 is any constant. Specifically, we construct an explicit function F : ({0,1}^n)^k → {0,1} with co-nondeterministic complexity O(log n) and Merlin-Arthur complexity n^{Ω(1)}. The problem was open for k ⩾ 3. As a corollary, we obtain an explicit separation of NP and coNP for up to k = (1 − ε) log n players, complementing an independent result by Beame et al. (2010), who separate these classes nonconstructively for up to k = 2^{(1−ε)n} players.
ACM Classification: F.1.3, F.2.3
AMS Classification: 68Q17, 68Q15
Key words and phrases: multiparty communication complexity, nondeterminism, Merlin-Arthur com-
putations, separations and lower bounds
1 Introduction
The number-on-forehead model of multiparty communication complexity, introduced by Chandra, Furst, and Lipton [5], features k communicating players whose goal is to compute a given distributed function. More precisely, one considers a Boolean function F : ({0,1}^n)^k → {−1,+1} whose arguments x_1, …, x_k ∈ {0,1}^n are placed on the foreheads of players 1 through k, respectively. Thus, player i sees all the arguments except for x_i. The players communicate by writing bits on a shared blackboard, visible to all. Their goal is to compute F(x_1, …, x_k) with minimum communication. The multiparty model has found a variety of applications, including circuit complexity, pseudorandomness, and proof complexity [2, 30, 13, 25, 4]. This model draws its richness from the overlap in the players' inputs, which makes it challenging to prove lower bounds for explicit functions. Several fundamental questions in the multiparty model remain open despite much research.
© 2010 Dmitry Gavinsky and Alexander A. Sherstov
Licensed under a Creative Commons Attribution License. DOI: 10.4086/toc.2010.v006a010
1.1 Previous work and our results
In a seminal paper, Babai, Frankl, and Simon [1] defined analogues of computational complexity classes in
communication and initiated their systematic study. In particular, the k-party number-on-forehead model gives rise to the complexity classes NP^cc_k, coNP^cc_k, BPP^cc_k, and MA^cc_k, corresponding to communication problems F : ({0,1}^n)^k → {−1,+1} with efficient nondeterministic, co-nondeterministic, randomized, and Merlin-Arthur protocols, respectively. An efficient protocol is one with communication cost log^{O(1)} n.
Determining the exact relationships among these classes is a natural goal in complexity theory.
For example, it had been open to show that nondeterministic protocols can be more powerful than randomized ones for k ⩾ 3 players. This problem was recently solved by Lee and Shraibman [18] and Chattopadhyay and Ada [7] for up to k = (1 − o(1)) log₂ log₂ n players, and later strengthened by David and Pitassi [10] to k = (1 − ε) log₂ n players, where ε > 0 is any given constant. An explicit separation for the latter case was obtained by David, Pitassi, and Viola [11].
The contribution of this paper is to relate the power of nondeterministic, co-nondeterministic, and
Merlin-Arthur protocols. For k = 2 players, the relations among these models are well understood:
Papadimitriou and Sipser [20] showed that coNP^cc_2 ≠ NP^cc_2, and Klauck [16] proved that additionally coNP^cc_2 ⊄ MA^cc_2. Starting at k = 3, however, it has been open even to separate NP^cc_k and coNP^cc_k. Our main result is that coNP^cc_k ⊄ MA^cc_k for up to k = (1 − ε) log₂ n players, where ε > 0 is an arbitrary constant. The separation is by an explicitly given function. In particular, our work shows that NP^cc_k ≠ coNP^cc_k and also subsumes the separation in [10, 11], since NP^cc_k ⊆ MA^cc_k and BPP^cc_k ⊆ MA^cc_k. Let the symbols N(F), N(−F), and MA(F) denote the nondeterministic, co-nondeterministic, and Merlin-Arthur complexity of F in the k-party number-on-forehead model.
Theorem 1.1 (Main Result). Let k ⩽ (1 − ε) log₂ n, where ε > 0 is any given constant. Then there is an (explicitly given) function F : ({0,1}^n)^k → {−1,+1} with

N(−F) = O(log n)

and

MA(F) = n^{Ω(1)}.

In particular, coNP^cc_k ⊄ MA^cc_k and NP^cc_k ≠ coNP^cc_k.
Independently of our work, Beame, David, Pitassi, and Woelfel [3] proved nonconstructively that NP^cc_k ≠ coNP^cc_k for k ⩽ 2^{(1−ε)n}. An advantage of Theorem 1.1 is that it gives an explicit separation and additionally applies to Merlin-Arthur complexity. Theorem 1.1 is state-of-the-art with respect to the number of players: Babai, Nisan, and Szegedy [2] obtained the first strong lower bounds for multiparty communication complexity with up to k = (1 − ε) log₂ n players, and it has since been an open problem to exhibit a function with nontrivial multiparty complexity for k ⩾ log₂ n.
The proof of Theorem 1.1, described below, is based on Sherstov’s pattern matrix method [28, 27]
and its multiparty generalization in [10, 11]. In the final section of this paper, we revisit several other
multiparty generalizations [6, 18, 7] of the pattern matrix method. By applying our techniques in these
other settings, we are able to obtain similar exponential separations for smaller k, by functions as simple
as constant-depth circuits.
1.2 Previous techniques
Perhaps the best-known method for lower bounds on communication complexity, both in the number-on-
forehead multiparty model and various two-party models, is the discrepancy method. To our knowledge,
this technique was introduced by Chor and Goldreich [8] in the context of two-party communication
and later generalized to multiple parties by Babai, Nisan, and Szegedy [2]; see [17, pp. 36–38] for a
detailed overview. The discrepancy method consists in exhibiting a distribution P with respect to which
the function F of interest has negligible discrepancy, in other words, has negligible correlation with all
low-cost protocols. A more powerful technique is the generalized discrepancy method, introduced by
Klauck [15] and Razborov [24]. This method consists in exhibiting a distribution P and a function H
such that, on the one hand, the function F of interest is well-correlated with H with respect to P, but on
the other hand, H has negligible discrepancy with respect to P.
In practice, considerable effort is required to find suitable P and H and to analyze the resulting
discrepancies. In particular, no strong bounds were available on the discrepancy or generalized discrepancy of constant-depth (AC^0) circuits. The pattern matrix method, introduced recently in [28, 27], solves this problem for AC^0 and a large family of other matrices. More specifically, the method uses standard
analytic properties of Boolean functions (such as approximate degree or threshold degree) to determine
the discrepancy and generalized discrepancy of the associated communication problems.
Originally formulated in [28, 27] for the two-party model, the pattern matrix method has been adapted
to the multiparty model by several authors [6, 18, 7, 10, 11]. The first adaptation of the method to the
multiparty model gave improved lower bounds for the multiparty disjointness function [18, 7]. This line
of work was combined in [10, 11] with probabilistic arguments to separate the classes NP^cc_k and BPP^cc_k for up to k = (1 − ε) log₂ n players, by an explicit function. Further details on this body of research and
on other duality-based approaches [29] can be found in the survey article [26].
1.3 Our approach
To obtain our main result, we combine the work in [10, 11] with several new ideas. First, we derive a
new criterion for high nondeterministic communication complexity, inspired by the Klauck-Razborov
generalized discrepancy method [15, 24]. Similar to Klauck-Razborov, we also look for a hard function
H that is well-correlated with the function F of interest, but we additionally quantify the agreement of H and F on the set F^{−1}(−1). This agreement ensures that F^{−1}(−1) does not have a small cover by cylinder intersections, thus placing F outside NP^cc_k. To handle the more powerful Merlin-Arthur model,
we combine this development with an earlier technique due to Klauck [16] for proving lower bounds
against two-party Merlin-Arthur protocols.
In keeping with the philosophy of the pattern matrix method, we then reformulate the agreement
requirement for H and F as a suitable analytic property of the underlying Boolean function f and prove
this property directly, using linear programming duality. The function f in question happens to be OR.
Finally, we apply our program to the specific function F constructed in [11] for the purpose of
separating NP^cc_k and BPP^cc_k. Since F has small nondeterministic complexity by design, the proof of our
main result is complete once we apply our machinery to −F and derive a lower bound on MA(−F).
1.4 Organization
We start in Section 2 with relevant technical preliminaries and standard background on multiparty
communication complexity. In Section 3, we review the original discrepancy method, the generalized
discrepancy method, and the pattern matrix method. In Section 4, we derive the new criterion for high
nondeterministic and Merlin-Arthur communication complexity. The proof of Theorem 1.1 comes next,
in Section 5. In the final section of the paper, we explore some implications of this work in the light of
other multiparty papers [6, 18, 7].
2 Preliminaries
We view Boolean functions as mappings X → {−1,+1}, where X is a finite set such as X = {0,1}^n or X = {0,1}^n × {0,1}^n. We identify −1 and +1 with "true" and "false," respectively. The notation [n] stands for the set {1, 2, …, n}. For integers N, n with N ⩾ n, the symbol \binom{[N]}{n} denotes the family of all size-n subsets of {1, 2, …, N}. For a string x ∈ {−1,+1}^N and a set S ∈ \binom{[N]}{n}, we define x|_S = (x_{i_1}, x_{i_2}, …, x_{i_n}) ∈ {−1,+1}^n, where i_1 < i_2 < ⋯ < i_n are the elements of S. For x ∈ {0,1}^n, we write |x| = x_1 + ⋯ + x_n. Throughout this manuscript, "log" refers to the logarithm to base 2. For a function f : X → R, where X is an arbitrary finite set, we write ‖f‖_∞ = max_{x∈X} |f(x)|.
We will need the following observation regarding discrete probability distributions on the hypercube,
cf. [28].
Proposition 2.1. Let µ(x) be a probability distribution on {0,1}^n. Fix i_1, …, i_n ∈ {1, 2, …, n}. Then

∑_{x∈{0,1}^n} µ(x_{i_1}, …, x_{i_n}) ⩽ 2^{n−|{i_1,…,i_n}|}.
For functions f, g : X_1 × ⋯ × X_k → R (where X_i is a finite set, i = 1, 2, …, k), we define ⟨f, g⟩ = ∑_{(x_1,…,x_k)} f(x_1, …, x_k) g(x_1, …, x_k). When f and g are vectors or matrices, this is the standard definition of inner product. The Hadamard product of f and g is the tensor f ∘ g : X_1 × ⋯ × X_k → R given by (f ∘ g)(x_1, …, x_k) = f(x_1, …, x_k) g(x_1, …, x_k).
The symbol R^{m×n} refers to the family of all m × n matrices with real entries. The (i, j)th entry of a matrix A is denoted by A_{ij}. In most matrices that arise in this work, the exact ordering of the columns (and rows) is irrelevant. In such cases, we describe a matrix using the notation [F(i, j)]_{i∈I, j∈J}, where I and J are some index sets.
We conclude with a review of the Fourier transform over Z_2^n (cf. [12] for more details). Consider the vector space of functions {0,1}^n → R. For S ⊆ [n], define χ_S : {0,1}^n → {−1,+1} by χ_S(x) = (−1)^{∑_{i∈S} x_i}. Then every function f : {0,1}^n → R has a unique representation of the form f = ∑_{S⊆[n]} f̂(S) χ_S, where f̂(S) = 2^{−n} ∑_{x∈{0,1}^n} f(x) χ_S(x). The reals f̂(S) are called the Fourier coefficients of f.
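To make the transform concrete, here is a minimal sketch in Python (ours, not from the paper; numpy is assumed) that computes the coefficients f̂(S) by brute force and checks the inversion formula on a small example.

```python
# Compute the Fourier coefficients fhat(S) = 2^{-n} sum_x f(x) chi_S(x) of a
# function on {0,1}^n and verify the representation f = sum_S fhat(S) chi_S.
import numpy as np

n = 3
# Example: f = chi_{[n]}, the parity of all n bits, as a +/-1 function.
f = np.array([(-1) ** bin(x).count("1") for x in range(2 ** n)])

def chi(S, x):
    # Characters chi_S(x) = (-1)^{sum_{i in S} x_i}; S and x encoded as bitmasks.
    return (-1) ** bin(S & x).count("1")

fhat = {S: sum(f[x] * chi(S, x) for x in range(2 ** n)) / 2 ** n for S in range(2 ** n)}
recon = [sum(fhat[S] * chi(S, x) for S in range(2 ** n)) for x in range(2 ** n)]
assert np.allclose(recon, f)                  # the representation is unique
print({S: c for S, c in fhat.items() if c})   # only S = {1,...,n} survives
```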
Communication complexity
An excellent reference on communication complexity is the monograph by Kushilevitz and Nisan [17].
In this overview, we will limit ourselves to key definitions and notation. The simplest model of commu-
nication in this work is the two-party randomized model. Consider a function F : X ×Y → {−1, +1},
where X and Y are finite sets. Alice receives an input x ∈ X, Bob receives y ∈ Y , and their objective
is to predict F(x, y) with high accuracy. To this end, Alice and Bob share a communication channel
and have an unlimited supply of shared random bits. Alice and Bob’s protocol is said to have error ε if
on every input (x, y), the computed output differs from the correct answer F(x, y) with probability no
greater than ε. The cost of a given protocol is the maximum number of bits exchanged on any input. The
randomized communication complexity of F, denoted R_ε(F), is the least cost of an ε-error protocol for F. It is standard practice to use the shorthand R(F) = R_{1/3}(F). Recall that the error probability of a protocol can be decreased from 1/3 to any other positive constant at the expense of increasing the communication cost by a constant factor. We will use this fact in our proofs without further mention.
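The repetition fact admits a quick numeric illustration. The sketch below (ours, not from the text) tallies the exact majority-vote error of t independent runs of a 1/3-error protocol; the decay is exponential in t, which is why any constant target error costs only a constant factor.

```python
# Majority vote over t independent runs of a 1/3-error protocol: the error
# decays exponentially in t.
from math import comb

def majority_error(eps, t):
    # Probability that at least half of t independent runs err (t odd below).
    return sum(comb(t, i) * eps ** i * (1 - eps) ** (t - i)
               for i in range((t + 1) // 2, t + 1))

for t in (1, 5, 15, 45):
    print(t, majority_error(1 / 3, t))
```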
A generalization of two-party communication is the multiparty number-on-forehead model of com-
munication. Here one considers a function F : X1 × · · · × Xk → {−1, +1} for some finite sets X1 , . . . , Xk .
There are k players. A given input (x1 , . . . , xk ) ∈ X1 × · · · × Xk is distributed among the players by placing
xi on the forehead of player i (for i = 1, . . . , k). In other words, player i knows x1 , . . . , xi−1 , xi+1 , . . . , xk but
not xi . The players communicate by writing bits on a shared blackboard, visible to all. They additionally
have access to a shared source of random bits. Their goal is to devise a communication protocol that will
allow them to accurately predict the value of F on every input. Analogous to the two-party case, the randomized communication complexity R_ε(F) is the least cost of an ε-error communication protocol for F in this model, and R(F) = R_{1/3}(F).
Another model in this paper is the number-on-forehead nondeterministic model. As before, one
considers a function F : X1 × · · · × Xk → {−1, +1} for some finite sets X1 , . . . , Xk . An input from
X_1 × ⋯ × X_k is distributed among the k players as before. At the start of the protocol, c_1 nondeterministic bits appear on the shared blackboard. Given the values of those bits, the players behave deterministically, exchanging an additional c_2 bits by writing them on the blackboard. A nondeterministic protocol for F must output the correct answer for at least one nondeterministic choice of the c_1 bits when F(x_1, …, x_k) = −1 and for all possible choices when F(x_1, …, x_k) = +1. The cost of a nondeterministic protocol is defined as c_1 + c_2. The nondeterministic communication complexity of F, denoted N(F), is the least cost of a nondeterministic protocol for F. The co-nondeterministic communication complexity of F is the quantity N(−F).
The number-on-forehead Merlin-Arthur model combines the power of randomized and nondeter-
ministic models. Similar to the nondeterministic case, the protocol starts with a nondeterministic guess of c_1 bits, followed by c_2 bits of communication. However, the communication can be randomized, and the requirement is that the error probability be at most ε for at least one nondeterministic choice when F(x_1, …, x_k) = −1 and for all possible nondeterministic choices when F(x_1, …, x_k) = +1. The cost of a protocol is defined as c_1 + c_2. The Merlin-Arthur communication complexity of F, denoted MA_ε(F), is the least cost of an ε-error Merlin-Arthur protocol for F. We put MA(F) = MA_{1/3}(F). Clearly, MA(F) ⩽ min{N(F), R(F)} for every F.
Babai, Frankl, and Simon [1] defined analogues of computational complexity classes in commu-
nication. We will only study a few of these communication classes, namely, those corresponding to
efficient randomized, nondeterministic, co-nondeterministic, and Merlin-Arthur protocols. For a given
number of players k = k(n), fix a family of k-party communication problems F_n : ({0,1}^n)^k → {−1,+1}, n = 1, 2, 3, …. The family {F_n} is said to belong to the class BPP^cc_k if the randomized communication complexity of F_n is bounded by (log n)^c for some constant c > 1 and all n > c. Analogously, the family {F_n} is said to belong to NP^cc_k, coNP^cc_k, MA^cc_k if the communication complexity of F_n in the nondeterministic, co-nondeterministic, and Merlin-Arthur model, respectively, is at most (log n)^c for some constant c > 1 and all n > c.
3 Generalized discrepancy and pattern matrices
A common tool for proving communication lower bounds is the discrepancy method. Given a function
F : X × Y → {−1,+1} and a distribution µ on X × Y, the discrepancy of F with respect to µ is defined as

disc_µ(F) = max_{S⊆X, T⊆Y} | ∑_{x∈S} ∑_{y∈T} µ(x, y) F(x, y) |.

This definition generalizes to the multiparty case as follows. Consider a function F : X_1 × ⋯ × X_k → {−1,+1} and a distribution µ on X_1 × ⋯ × X_k. The discrepancy of F with respect to µ is defined as

disc_µ(F) = max_χ | ∑_{(x_1,…,x_k)∈X_1×⋯×X_k} µ(x_1, …, x_k) F(x_1, …, x_k) χ(x_1, …, x_k) |,

where the maximum ranges over functions χ : X_1 × ⋯ × X_k → {0,1} of the form

χ(x_1, …, x_k) = ∏_{i=1}^k φ_i(x_1, …, x_{i−1}, x_{i+1}, …, x_k) (3.1)

for some φ_i : X_1 × ⋯ × X_{i−1} × X_{i+1} × ⋯ × X_k → {0,1}, i = 1, 2, …, k. A function χ of the form (3.1) is called a rectangle for k = 2 and a cylinder intersection for k ⩾ 3, the latter notion introduced by Babai, Nisan, and Szegedy [2]. Note that for k = 2, the multiparty definition of discrepancy agrees with the one given earlier for the two-party model. Put

disc(F) = min_µ disc_µ(F).
Discrepancy is difficult to analyze as defined. Typically, one uses the following estimate from the
pioneering work in [2], derived by repeated applications of the Cauchy-Schwarz inequality.
Theorem 3.1 ([2, 9, 22]). Fix F : X_1 × ⋯ × X_k → {−1,+1} and a distribution µ on X_1 × ⋯ × X_k. Put ψ(x_1, …, x_k) = F(x_1, …, x_k) µ(x_1, …, x_k). Then

( disc_µ(F) / (|X_1| ⋯ |X_k|) )^{2^{k−1}} ⩽ E_{x_1^0, x_1^1 ∈ X_1} ⋯ E_{x_{k−1}^0, x_{k−1}^1 ∈ X_{k−1}} | E_{x_k ∈ X_k} ∏_{z∈{0,1}^{k−1}} ψ(x_1^{z_1}, …, x_{k−1}^{z_{k−1}}, x_k) |.
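As a sanity check, the following sketch (ours; only numpy is assumed) brute-forces disc_µ(F) over all rectangles for a small random two-party instance and verifies the k = 2 case of the estimate above.

```python
# Verify the k = 2 case of Theorem 3.1 numerically:
# (disc_mu(F)/(|X||Y|))^2 <= E_{x0,x1} |E_y psi(x0,y) psi(x1,y)|, psi = F * mu.
from itertools import product
import numpy as np

def rect_discrepancy(psi):
    # Max over rectangles S x T of |sum_{x in S, y in T} psi[x, y]|.
    best = 0.0
    for S in product([0, 1], repeat=psi.shape[0]):
        rows = psi[np.array(S, dtype=bool), :].sum(axis=0)
        # For a fixed S, the optimal T takes all columns of one sign.
        best = max(best, rows[rows > 0].sum(), -rows[rows < 0].sum())
    return best

rng = np.random.default_rng(0)
X = Y = 8
F = rng.choice([-1, 1], size=(X, Y))
mu = rng.random((X, Y)); mu /= mu.sum()   # a random distribution on X x Y
psi = F * mu
lhs = (rect_discrepancy(psi) / (X * Y)) ** 2
rhs = np.mean([abs(np.mean(psi[x0] * psi[x1]))
               for x0 in range(X) for x1 in range(X)])
print(lhs <= rhs, lhs, rhs)               # the inequality holds
```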
To our knowledge, no alternate technique has been discovered for bounding the discrepancy of explicit
multiparty functions. In the case of k = 2 parties, there are other ways to estimate the discrepancy,
including the spectral norm of a matrix (see for example [27]).
For a function F : X_1 × ⋯ × X_k → {−1,+1} and a distribution µ over X_1 × ⋯ × X_k, let D^µ_ε(F) denote the least cost of a deterministic protocol for F whose probability of error with respect to µ is at most ε. This quantity is known as the µ-distributional complexity of F. Since a randomized protocol can be viewed as a probability distribution over deterministic protocols, we immediately have that R_ε(F) ⩾ max_µ D^µ_ε(F). We are now ready to state the discrepancy method, which was introduced by Chor and Goldreich [8] in the context of two-party communication and generalized to multiple parties by Babai, Nisan, and Szegedy [2].
Theorem 3.2 (Discrepancy method [8, 2]; see also [17, pp. 36–38]). For every F : X_1 × ⋯ × X_k → {−1,+1}, every distribution µ on X_1 × ⋯ × X_k, and 0 < γ ⩽ 1,

R_{1/2−γ/2}(F) ⩾ D^µ_{1/2−γ/2}(F) ⩾ log( γ / disc_µ(F) ).

In words, a function with small discrepancy is hard to compute to any nontrivial advantage over random guessing, let alone to high accuracy.
3.1 Generalized discrepancy method
The discrepancy method is particularly strong in that it gives communication lower bounds not only
for bounded-error protocols but also for protocols with error vanishingly close to 1/2. This strength
of the discrepancy method is at once a weakness. For example, the disjointness function DISJ(x, y) = ⋁_{i=1}^n (x_i ∧ y_i) has a randomized protocol with error 1/2 − Ω(1/n) and communication O(log n). As a result, the disjointness function has high discrepancy, and no strong lower bounds can be obtained for it via the discrepancy method. Yet it is well known that DISJ and ¬DISJ have communication complexity Θ(n) in the randomized model [14, 23], and ¬DISJ additionally has complexity Ω(√n) in the Merlin-Arthur model [16].
The generalized discrepancy method is an extension of the traditional discrepancy method that avoids
the difficulty just cited. This technique was first applied by Klauck [15] and reformulated in its current
form by Razborov [24]. The development in [15, 24] takes place in the quantum model of communication.
However, the same idea works in a variety of models, as illustrated in [27]. The version of the generalized
discrepancy method for the two-party randomized model is as follows.
Theorem 3.3 ([27, §2.4]). Fix a function F : X × Y → {−1,+1} and 0 ⩽ ε < 1/2. Then for all functions H : X × Y → {−1,+1} and all probability distributions P on X × Y,

R_ε(F) ⩾ log( (⟨F, H ∘ P⟩ − 2ε) / disc_P(H) ).
The usefulness of Theorem 3.3 stems from its applicability to functions that have efficient protocols
with error close to random guessing, such as 1/2 − Ω (1/n) for the disjointness function. Note that one
recovers Theorem 3.2, the ordinary discrepancy method, by setting H = F in Theorem 3.3.
Proof of Theorem 3.3 (adapted from [27], pp. 88–89). Put c = R_ε(F). A public-coin protocol with cost c can be thought of as a probability distribution on deterministic protocols with cost at most c. In particular, there are random variables χ_1, χ_2, …, χ_{2^c} : X × Y → {0,1}, each a rectangle, as well as random variables σ_1, σ_2, …, σ_{2^c} ∈ {−1,+1}, such that

‖ F − E[∑ σ_i χ_i] ‖_∞ ⩽ 2ε.

Therefore,

⟨ F − E[∑ σ_i χ_i], H ∘ P ⟩ ⩽ 2ε.

On the other hand,

⟨ F − E[∑ σ_i χ_i], H ∘ P ⟩ ⩾ ⟨F, H ∘ P⟩ − 2^c disc_P(H)

by the definition of discrepancy. The theorem follows at once from the last two inequalities.
Theorem 3.3 extends word-for-word to the multiparty model, as follows.

Theorem 3.4 ([18, 7]). Fix a function F : X → {−1,+1} and ε ∈ [0, 1/2), where X = X_1 × ⋯ × X_k. Then for all functions H : X → {−1,+1} and all probability distributions P on X,

R_ε(F) ⩾ log( (⟨F, H ∘ P⟩ − 2ε) / disc_P(H) ).

Proof. Identical to the two-party case (Theorem 3.3), with the word "rectangles" replaced by "cylinder intersections."
3.2 Pattern matrix method
To apply the generalized discrepancy method to a given Boolean function F, one needs to identify a Boolean function H that is well correlated with F under some distribution P but has low discrepancy with respect to P. The pattern matrix method [28, 27] is a systematic technique for finding such H and P. To simplify the exposition of our main results, we will now review this method and sketch its proof.

Recall that the ε-approximate degree of a function f : {0,1}^n → R, denoted deg_ε(f), is the least degree of a polynomial p with ‖f − p‖_∞ ⩽ ε. A starting point in the pattern matrix method is the following dual formulation of the approximate degree.
Fact 3.5. Fix ε ⩾ 0. Let f : {0,1}^n → R be given with d = deg_ε(f) ⩾ 1. Then there is a function ψ : {0,1}^n → R such that:

ψ̂(S) = 0 for |S| < d,

∑_{z∈{0,1}^n} |ψ(z)| = 1,

∑_{z∈{0,1}^n} ψ(z) f(z) > ε.

See [27] for a proof of this fact using linear programming duality.
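Fact 3.5 is effective: for small n, the dual witness ψ can be computed outright by linear programming. The sketch below (ours; it assumes scipy and numpy, and the parameters are arbitrary) maximizes the correlation ∑_z ψ(z) f(z) subject to the stated constraints, splitting ψ into positive and negative parts to linearize the ℓ₁ condition.

```python
# Compute a dual witness psi for Fact 3.5 by LP: maximize sum_z psi(z) f(z)
# subject to sum_z |psi(z)| = 1 and psihat(S) = 0 for all |S| < d.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def dual_witness(f, n, d):
    # f: array of f(z) in {-1,+1}, indexed by z in {0, ..., 2^n - 1}.
    N = 1 << n
    low = [S for r in range(d) for S in combinations(range(n), r)]
    chi = np.array([[(-1) ** bin(z & sum(1 << i for i in S)).count("1")
                     for z in range(N)] for S in low])
    # Write psi = p - q with p, q >= 0; then sum(p + q) = 1 linearizes |psi|_1.
    A_eq = np.vstack([np.ones(2 * N), np.hstack([chi, -chi])])
    b_eq = np.concatenate([[1.0], np.zeros(len(low))])
    c = np.concatenate([-f.astype(float), f.astype(float)])  # minimize -<f, psi>
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.x[:N] - res.x[N:], -res.fun

n, d = 4, 2
f = np.array([1 if z == 0 else -1 for z in range(1 << n)])  # OR, as in Section 5
psi, corr = dual_witness(f, n, d)
print(corr)  # exceeds eps exactly when deg_eps(f) >= d, by LP duality
```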
The crux of the method is the following theorem.

Theorem 3.6 ([28]). Fix a function h : {0,1}^n → {−1,+1} and a probability distribution µ on {0,1}^n such that

\widehat{h ∘ µ}(S) = 0 for |S| < d.

Let N be a given integer. Define

H = [h(x|_V)]_{x,V},    P = 2^{−N+n} \binom{N}{n}^{−1} [µ(x|_V)]_{x,V},

where the rows are indexed by x ∈ {0,1}^N and the columns by V ∈ \binom{[N]}{n}. Then

disc_P(H) ⩽ ( 4en² / (Nd) )^{d/2}.
At last, we are ready to state the (two-party) pattern matrix method.
Theorem 3.7 ([27]). Let f : {0,1}^n → {−1,+1} be a given function, d = deg_{1/3}(f). Let N be a given integer. Define F = [f(x|_V)]_{x,V}, where the rows are indexed by x ∈ {0,1}^N and the columns by V ∈ \binom{[N]}{n}. If N ⩾ 16en²/d, then

R(F) = Ω( d log( Nd / (4en²) ) ).
Proof (adapted from [27]). Let ε = 1/10. By Fact 3.5, there exist a function h : {0,1}^n → {−1,+1} and a probability distribution µ on {0,1}^n such that

\widehat{h ∘ µ}(S) = 0 for |S| < d, (3.2)

and

∑_{z∈{0,1}^n} f(z) µ(z) h(z) > 1/3. (3.3)
Letting H = [h(x|_V)]_{x,V} and P = 2^{−N+n} \binom{N}{n}^{−1} [µ(x|_V)]_{x,V}, we obtain from (3.2) and Theorem 3.6 that

disc_P(H) ⩽ ( 4en² / (Nd) )^{d/2}. (3.4)

At the same time, one sees from (3.3) that

⟨F, H ∘ P⟩ > 1/3. (3.5)

The theorem now follows from (3.4) and (3.5) in view of the generalized discrepancy method, Theorem 3.3.
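For intuition, the matrix F of Theorem 3.7 can be materialized outright when N and n are tiny. The sketch below (ours; the helper name and parameters are our own) does exactly that.

```python
# Build the pattern matrix F = [f(x|_V)], rows indexed by x in {0,1}^N and
# columns by the n-element subsets V of [N], as in Theorem 3.7.
from itertools import combinations
import numpy as np

def pattern_matrix(f, N, n):
    cols = list(combinations(range(N), n))           # V ranges over ([N] choose n)
    M = np.empty((1 << N, len(cols)), dtype=int)
    for x in range(1 << N):
        bits = [(x >> i) & 1 for i in range(N)]
        for j, V in enumerate(cols):
            M[x, j] = f(tuple(bits[i] for i in V))   # the restriction x|_V
    return M

# f = OR in the paper's convention: +1 ("false") iff the restriction is all-zero.
f = lambda z: 1 if not any(z) else -1
F = pattern_matrix(f, N=6, n=2)
print(F.shape)  # (64, 15)
```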
Remark. Presented above is a weaker, combinatorial version of the pattern matrix method. The communi-
cation lower bounds in Theorems 3.6 and 3.7 were improved to optimal in [27] using matrix-analytic
techniques. Unlike the combinatorial argument above, however, the matrix-analytic proof is not known to
extend to the multiparty model and is not used in the follow-up multiparty papers [6, 18, 7, 10, 11] or our
work.
An alternate technique based on Fact 3.5 is the block-composition method of Shi and Zhu [29],
developed independently of the pattern matrix method. See [26, §5.3] for a comparative discussion.
4 A new criterion for nondeterministic and Merlin-Arthur complexity
In this section, we derive a new criterion for high communication complexity in the nondeterministic and
Merlin-Arthur models. This criterion, inspired by the generalized discrepancy method, will allow us to
obtain our main result.
Theorem 4.1. Let F : X → {−1,+1} be given, where X = X_1 × ⋯ × X_k. Fix a function H : X → {−1,+1} and a probability distribution P on X. Put

α = P(F^{−1}(−1) ∩ H^{−1}(−1)),
β = P(F^{−1}(−1) ∩ H^{−1}(+1)),
Q = log( α / (β + disc_P(H)) ).

Then

N(F) ⩾ Q (4.1)

and

MA(F) ⩾ min{ Ω(√Q), Ω( Q / log(2/α) ) }. (4.2)
Proof. Put c = N(F). It is well known [2, 17] that there is a cover of F^{−1}(−1) by 2^c cylinder intersections, each contained in F^{−1}(−1). Fix one such cover, χ_1, χ_2, …, χ_{2^c} : X → {0,1}. By the definition of discrepancy,

⟨∑ χ_i, −H ∘ P⟩ ⩽ 2^c disc_P(H).

On the other hand, ∑ χ_i ranges between 1 and 2^c on F^{−1}(−1) and vanishes on F^{−1}(+1). Therefore,

⟨∑ χ_i, −H ∘ P⟩ ⩾ α − 2^c β.

These two inequalities force (4.1).

We now turn to the Merlin-Arthur model. Let c = MA(F) and δ = α2^{−c−1}. The first step is to improve the error probability of the Merlin-Arthur protocol by repetition from 1/3 to δ. Specifically, following Klauck [16] we observe that there exist randomized protocols F_1, …, F_{2^c} : X → {0,1}, each a random variable of the coin tosses and each having communication cost c′ = O(c log(1/δ)), such that the sum ∑ E[F_i] ranges in [1 − δ, 2^c] on F^{−1}(−1) and in [0, δ2^c] on F^{−1}(+1). As a result,

⟨∑ E[F_i], −H ∘ P⟩ ⩾ α(1 − δ) − β2^c − (1 − α − β)δ2^c. (4.3)

At the same time,

⟨∑ E[F_i], −H ∘ P⟩ ⩽ ∑_{i=1}^{2^c} 2^{c′} disc_P(H) = 2^{c+c′} disc_P(H). (4.4)

Since δ2^c = α/2 (and αδ ⩽ α/4 for c ⩾ 1), the bounds in (4.3) and (4.4) combine to give 2^{c+c′}(β + disc_P(H)) ⩾ α/4, whence c + c′ ⩾ Q − 2; substituting c′ = O(c log(1/δ)) = O(c(c + log(2/α))) forces (4.2).
Since sign tensors H and −H have the same discrepancy under any given distribution, we have the
following alternate form of Theorem 4.1.
Corollary 4.2. Let F : X → {−1,+1} be given, where X = X_1 × ⋯ × X_k. Fix a function H : X → {−1,+1} and a probability distribution P on X. Put

α = P(F^{−1}(+1) ∩ H^{−1}(+1)),
β = P(F^{−1}(+1) ∩ H^{−1}(−1)),
Q = log( α / (β + disc_P(H)) ).

Then

N(−F) ⩾ Q,
MA(−F) ⩾ min{ Ω(√Q), Ω( Q / log(2/α) ) }.
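To see the criterion at work on a toy scale, the sketch below (ours; the instance is arbitrary and far too small to be representative) brute-forces the rectangle discrepancy for a two-party example with H = F and P uniform, and prints the resulting bound Q on N(−F).

```python
# A toy evaluation of Corollary 4.2 for k = 2: compute alpha, beta, and a
# brute-force rectangle discrepancy, then the lower bound Q on N(-F).
from itertools import product
from math import log2
import numpy as np

def rect_discrepancy(M, P):
    W = P * M
    best = 0.0
    for S in product([0, 1], repeat=W.shape[0]):
        rows = W[np.array(S, dtype=bool), :].sum(axis=0)
        best = max(best, rows[rows > 0].sum(), -rows[rows < 0].sum())
    return best

n = 4  # each party holds two bits
F = np.array([[(-1) ** bin(x & y).count("1") for y in range(n)] for x in range(n)])
H = F.copy()                      # taking H = F recovers plain discrepancy
P = np.full((n, n), 1 / n ** 2)   # uniform distribution
alpha = P[(F == 1) & (H == 1)].sum()
beta = P[(F == 1) & (H == -1)].sum()
Q = log2(alpha / (beta + rect_discrepancy(H, P)))
print(alpha, beta, Q)             # a modest but positive bound on N(-F)
```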
At first glance, it is unclear how the nondeterministic bound of Theorem 4.1 and its counterpart
Corollary 4.2 relate to the generalized discrepancy method. We now pause to make this relationship quite
explicit. Nondeterminism and randomized computation are related in that a nondeterministic protocol with cost c for a function F gives a cost-c randomized protocol with error probability at most 1 − 2^{−c} on F^{−1}(−1) and error probability 0 everywhere else. This is the setting of Theorem 4.1. The generalized
discrepancy method, on the other hand, has a single error parameter ε for all inputs. To best convey
this distinction between the two methods, we formulate a more general criterion yet, which allows for
different errors on each input.
Theorem 4.3. Let F : X → {−1,+1} be given, where X = X_1 × ⋯ × X_k. Let c be the least cost of a public-coin protocol for F with error probability E(x) on x ∈ X, for some E : X → [0,1]. Then for all functions H : X → {−1,+1} and all probability distributions P on X,

2^c ⩾ ( ⟨F, H ∘ P⟩ − 2⟨P, E⟩ ) / disc_P(H).
Proof. A public-coin protocol with cost c is a probability distribution on deterministic protocols with cost at most c. Then by hypothesis, there are random variables χ_1, χ_2, …, χ_{2^c} : X → {0,1}, each a cylinder intersection, and random variables σ_1, σ_2, …, σ_{2^c} ∈ {−1,+1}, such that

| F(x) − E[∑ σ_i χ_i(x)] | ⩽ 2E(x) for x ∈ X.

Therefore,

⟨ F − E[∑ σ_i χ_i], H ∘ P ⟩ ⩽ 2⟨P, E⟩.

On the other hand,

⟨ F − E[∑ σ_i χ_i], H ∘ P ⟩ ⩾ ⟨F, H ∘ P⟩ − 2^c disc_P(H)

by the definition of discrepancy. The theorem follows at once from the last two inequalities.
5 Main result
We now prove the claimed separations of nondeterministic, co-nondeterministic, and Merlin-Arthur
communication complexity. It will be easier to first obtain these separations by a probabilistic argument
and only then sketch an explicit construction.
We start by deriving a suitable analytic property of the OR function.
Theorem 5.1. There is a function ψ : {0,1}^m → R such that:

∑_{z∈{0,1}^m} |ψ(z)| = 1, (5.1)

ψ̂(S) = 0 for |S| ⩽ Θ(√m), (5.2)

ψ(0^m) > 1/6. (5.3)
Proof. Let f : {0,1}^m → {−1,+1} be given by f(z) = 1 ⇔ z = 0^m. It is well known [19, 21] that deg_{1/3}(f) ⩾ Ω(√m). By Fact 3.5, there is a function ψ : {0,1}^m → R that obeys (5.1), (5.2), and additionally satisfies

∑_{z∈{0,1}^m} ψ(z) f(z) > 1/3.

Finally,

2ψ(0^m) = ∑_{z∈{0,1}^m} ψ(z){f(z) + 1} = ∑_{z∈{0,1}^m} ψ(z) f(z) > 1/3,

where the second equality follows from ψ̂(∅) = 0.
For the remainder of this section, it will be convenient to establish some additional notation following David and Pitassi [10]. Fix integers n, m with n > m. Let ψ : {0,1}^m → R be a given function with ∑_{z∈{0,1}^m} |ψ(z)| = 1. Let d denote the least order of a nonzero Fourier coefficient of ψ. Fix a Boolean function h : {0,1}^m → {−1,+1} and the distribution µ on {0,1}^m such that ψ(z) ≡ h(z)µ(z). For a mapping α : ({0,1}^n)^k → \binom{[n]}{m}, define a (k+1)-party communication problem H_α : ({0,1}^n)^{k+1} → {−1,+1} by H_α(x, y_1, …, y_k) = h(x|_{α(y_1,…,y_k)}). Define a distribution P_α on ({0,1}^n)^{k+1} by P_α(x, y_1, …, y_k) = 2^{−(k+1)n+m} µ(x|_{α(y_1,…,y_k)}). The following theorem combines the pattern matrix method with a probabilistic argument.
Theorem 5.2 ([10]). Assume that n ⩾ 16em²2^k. Then for a uniformly random choice of α : ({0,1}^n)^k → \binom{[n]}{m},

E_α[ disc_{P_α}(H_α)^{2^k} ] ⩽ 2^{−n/2} + 2^{−d2^k+1}.
For completeness, we include a detailed proof of this result.
Proof (reproduced from the survey article [26], pp. 88–89). By Theorem 3.1,

disc_{P_α}(H_α)^{2^k} ⩽ 2^{m2^k} E_Y |Γ(Y)|, (5.4)

where we put Y = (y_1^0, y_1^1, …, y_k^0, y_k^1) ∈ ({0,1}^n)^{2k} and

Γ(Y) = E_x[ ∏_{z∈{0,1}^k} ψ( x|_{α(y_1^{z_1}, y_2^{z_2}, …, y_k^{z_k})} ) ].
For a fixed choice of α and Y, we will use the shorthand S_z = α(y_1^{z_1}, …, y_k^{z_k}). To analyze Γ(Y), one proves two key claims analogous to those in the two-party Theorem 3.6 (see [28, 26] for more detail).

Claim 5.3. Assume that | ⋃_{z∈{0,1}^k} S_z | > m2^k − d2^{k−1}. Then Γ(Y) = 0.
Proof. If |⋃ S_z| > m2^k − d2^{k−1}, then some S_z must feature more than m − d elements that do not occur in ⋃_{u≠z} S_u. Since the Fourier transform of ψ is supported on characters of order d and higher, we conclude that the 2^k-fold product in the definition of Γ(Y) has zero for its constant Fourier coefficient. This conclusion is of course equivalent to Γ(Y) = 0.
Claim 5.4. For every Y, |Γ(Y)| ⩽ 2^{−|⋃ S_z|}.

Proof. Immediate from Proposition 2.1, by considering the distribution µ × ⋯ × µ on ({0,1}^m)^{2^k}.
In view of (5.4) and Claims 5.3 and 5.4, we have

E_α[ disc_{P_α}(H_α)^{2^k} ] ⩽ ∑_{i=d2^{k−1}}^{m2^k−m} 2^i P_{Y,α}[ |⋃ S_z| = m2^k − i ].
It remains to bound the probabilities in the last expression. With probability at least 1 − k2^{−n} over the choice of Y, we have y_j^0 ≠ y_j^1 for each j = 1, 2, …, k. Conditioning on this event, the fact that α is chosen uniformly at random means that the 2^k sets S_z are distributed independently and uniformly over \binom{[n]}{m}. A calculation now reveals that

P_{Y,α}[ |⋃ S_z| = m2^k − i ] ⩽ k2^{−n} + \binom{m2^k}{i} ( m2^k / n )^i ⩽ k2^{−n} + 8^{−i},

which completes the proof after summing over i.
We are ready to prove our main result. It may be helpful to contrast the proof to follow with the proof
of the pattern matrix method (Theorem 3.7).
Theorem 5.5. Let k ⩽ (1 − ε) log n, where ε > 0 is any given constant. Then there exists a function F_α : ({0,1}^n)^{k+1} → {−1,+1} such that:

N(F_α) = O(log n) (5.5)

and

MA(−F_α) = n^{Ω(1)}. (5.6)

In particular, coNP^cc_k ⊄ MA^cc_k and NP^cc_k ≠ coNP^cc_k.
Proof. Let m = ⌊n^δ⌋ for a sufficiently small constant δ = δ(ε) > 0. As usual, define OR_m : {0,1}^m → {−1,+1} by OR_m(z) = 1 ⇔ z = 0^m. Let ψ : {0,1}^m → R be as guaranteed by Theorem 5.1. For a mapping α : ({0,1}^n)^k → \binom{[n]}{m}, let H_α and P_α be defined in terms of ψ as described earlier in this section. Then Theorem 5.2 shows the existence of α such that

disc_{P_α}(H_α) ⩽ 2^{−Ω(√m)}. (5.7)
Define F_α : ({0,1}^n)^{k+1} → {−1,+1} by F_α(x, y_1, …, y_k) = OR_m(x|_{α(y_1,…,y_k)}). It is immediate from the properties of ψ that

P_α(F_α^{−1}(+1) ∩ H_α^{−1}(+1)) > 1/6, (5.8)

P_α(F_α^{−1}(+1) ∩ H_α^{−1}(−1)) = 0. (5.9)

The sought lower bound in (5.6) now follows from (5.7)–(5.9) and Corollary 4.2.
On the other hand, as observed in [10], the function F_α has an efficient nondeterministic protocol. Namely, player 1 (who knows y_1, …, y_k) nondeterministically selects an element i ∈ α(y_1, …, y_k) and writes i on the shared blackboard. Player 2 (who knows x) then announces x_i as the output of the protocol. This yields the desired upper bound in (5.5).
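This protocol is easily simulated end to end. The toy sketch below (ours; α is a fixed arbitrary mapping rather than the pseudorandom one used in the construction) checks that a certificate exists exactly on F_α^{−1}(−1).

```python
# Toy simulation of the O(log n) nondeterministic protocol for F_alpha:
# player 1 guesses i in alpha(y_1,...,y_k) (log n bits), player 2 reveals x_i,
# and the protocol outputs -1 ("true") iff x_i = 1.
def F_alpha(alpha, x, ys):
    S = alpha(ys)                                 # the m selected coordinates
    return -1 if any(x[i] for i in S) else 1      # OR_m applied to x|_S

def certificate_exists(alpha, x, ys):
    # Some guess i with x_i = 1 convinces player 2.
    return any(x[i] for i in alpha(ys))

# Example: n = 8, k = 2, m = 3, with one fixed (arbitrary) choice of alpha.
alpha = lambda ys: (1, 4, 6)
x = (0, 0, 0, 1, 0, 0, 1, 0)
ys = ((0,) * 8, (1,) * 8)
assert (F_alpha(alpha, x, ys) == -1) == certificate_exists(alpha, x, ys)
print(F_alpha(alpha, x, ys))  # -1, since x_6 = 1 lies in the selected set
```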
As promised, we will now sketch an explicit construction of the function whose existence has just been proven. For this, it suffices to invoke previous work by David, Pitassi, and Viola [11], who derandomized the choice of α in Theorem 5.2. More precisely, instead of working with a family {H_α} of functions, each given by H_α(x, y_1, …, y_k) = h(x|_{α(y_1,…,y_k)}), the authors of [11] posited a single function H(α, x, y_1, …, y_k) = h(x|_{α(y_1,…,y_k)}), where the new argument α is known to all players and ranges over a small, explicitly given subset A of all mappings ({0,1}^n)^k → \binom{[n]}{m}. By choosing A to be pseudorandom, the authors of [11] forced the same qualitative conclusion in Theorem 5.2. This development carries over unchanged to our setting, and we obtain our main result.
Theorem 1.1 (Restated from Section 1.1). Let k ⩽ (1 − ε) log₂ n, where ε > 0 is any given constant. Then there is an (explicitly given) function F : ({0,1}^n)^k → {−1,+1} with

N(−F) = O(log n)

and

MA(F) = n^{Ω(1)}.

In particular, coNP^cc_k ⊄ MA^cc_k and NP^cc_k ≠ coNP^cc_k.
Proof. Identical to Theorem 5.5, with the described derandomization of α.
6 A separation by the disjointness function
In this section, we revisit recent multiparty analyses of the disjointness function [6, 18, 7]. We will see
that the program of the previous sections applies here essentially unchanged.
We start with some notation. Fix a function φ : {0,1}^m → R and an integer N with m | N. Define the (k, N, m, φ)-pattern tensor as the k-argument function A : {0,1}^{m(N/m)^{k−1}} × [N/m]^m × ⋯ × [N/m]^m → R given by A(x, V_1, …, V_{k−1}) = φ(x|_{V_1,…,V_{k−1}}), where

x|_{V_1,…,V_{k−1}} = ( x_{1,V_1[1],…,V_{k−1}[1]}, …, x_{m,V_1[m],…,V_{k−1}[m]} ) ∈ {0,1}^m

and V_j[i] denotes the ith element of the m-dimensional vector V_j. (Note that we index the string x by viewing it as a k-dimensional array of m × (N/m) × ⋯ × (N/m) = m(N/m)^{k−1} bits.) This definition extends pattern matrices [28, 27] to higher dimensions. The two-party Theorem 3.6 has been adapted to k ⩾ 3 players as Theorem 6.1 below; first, a short illustration of the indexing.
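A sketch (ours; numpy assumed, parameters arbitrary): x is stored as an m × (N/m) × ⋯ × (N/m) array and each V_j selects one coordinate per row i.

```python
# The pattern-tensor restriction x|_{V_1,...,V_{k-1}}: entry i of the result is
# x[i, V_1[i], ..., V_{k-1}[i]].
import numpy as np

def restrict(x, Vs):
    # x: array of shape (m, N/m, ..., N/m); Vs: k-1 index vectors of length m.
    m = x.shape[0]
    return np.array([x[(i,) + tuple(V[i] for V in Vs)] for i in range(m)])

# Example with k = 3, m = 2, N = 4 (so N/m = 2): x is a 2 x 2 x 2 bit array.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(2, 2, 2))
V1, V2 = [0, 1], [1, 0]
print(restrict(x, [V1, V2]))  # the m-bit string handed to phi, e.g. OR_m
```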
Theorem 6.1 ([6, 18, 7]). Fix a function h : {0,1}^m → {−1,+1} and a probability distribution µ on {0,1}^m such that

\widehat{h ∘ µ}(S) = 0 for |S| < d.

Let N be a given integer, m | N. Let P be the (k, N, m, 2^{−m(N/m)^{k−1}+m} (N/m)^{−m(k−1)} µ)-pattern tensor. Let H be the (k, N, m, h)-pattern tensor. If N ⩾ 4em²(k − 1)2^{2^{k−1}}/d, then

disc_P(H) ⩽ 2^{−d/2^{k−1}}.
A proof of this exact formulation is available in the survey article [26], pp. 85–86. We are now prepared
to apply our techniques to the disjointness function.
Theorem 6.2. Let N be a given integer, m | N. Let F be the (k, N, m, OR_m)-pattern tensor. If N ⩾ 4em²(k − 1)2^{2^{k−1}}/d, where d = Θ(√m) is the order guaranteed by Theorem 5.1, then

N(−F) ⩾ Ω( √m / 2^k ),    MA(−F) ⩾ Ω( m^{1/4} / 2^{k/2} ).
Proof. Let ψ : {0,1}^m → R be as guaranteed by Theorem 5.1. Fix a function h : {0,1}^m → {−1,+1} and a distribution µ on {0,1}^m such that ψ(z) ≡ h(z)µ(z). Let H be the (k, N, m, h)-pattern tensor. Let P be the (k, N, m, 2^{−m(N/m)^{k−1}+m} (N/m)^{−m(k−1)} µ)-pattern tensor, which is a probability distribution. Then by Theorem 6.1,

disc_P(H) ⩽ 2^{−Ω(√m/2^k)}. (6.1)

On the other hand, it is clear from the properties of ψ that

P(F^{−1}(+1) ∩ H^{−1}(+1)) > 1/6, (6.2)

P(F^{−1}(+1) ∩ H^{−1}(−1)) = 0. (6.3)

In view of (6.1)–(6.3) and Corollary 4.2, the proof is complete.
The function F in Theorem 6.2 is a subfunction of the multiparty disjointness function DISJ : ({0,1}^n)^k → {−1,+1}, where n = m(N/m)^{k−1} and

DISJ(x_1, …, x_k) = ⋁_{j=1}^n ⋀_{i=1}^k x_{ij}.

Recall that disjointness has trivial nondeterministic complexity, O(log n). In particular, Theorem 6.2 shows that the disjointness function separates NP^cc_k from coNP^cc_k and witnesses that coNP^cc_k ⊄ MA^cc_k for up to k = Θ(log log n) players.
References

[1] László Babai, Péter Frankl, and Janos Simon: Complexity classes in communication complexity theory. In Proc. 27th FOCS, pp. 337–347. IEEE Comp. Soc. Press, 1986. [doi:10.1109/SFCS.1986.15]

[2] László Babai, Noam Nisan, and Márió Szegedy: Multiparty protocols, pseudorandom generators for logspace, and time-space trade-offs. J. Computer and System Sciences, 45(2):204–232, 1992. [doi:10.1016/0022-0000(92)90047-M]

[3] Paul Beame, Matei David, Toniann Pitassi, and Philipp Woelfel: Separating deterministic from randomized multiparty communication complexity. Theory of Computing, 6:201–225, 2010. [doi:10.4086/toc.2010.v006a009]

[4] Paul Beame, Toniann Pitassi, and Nathan Segerlind: Lower bounds for Lovász-Schrijver systems and beyond follow from multiparty communication complexity. SIAM J. Comput., 37(3):845–869, 2007. [doi:10.1137/060654645]

[5] Ashok K. Chandra, Merrick L. Furst, and Richard J. Lipton: Multi-party protocols. In Proc. 15th STOC, pp. 94–99. ACM Press, 1983. [doi:10.1145/800061.808737]

[6] Arkadev Chattopadhyay: Discrepancy and the power of bottom fan-in in depth-three circuits. In Proc. 48th FOCS, pp. 449–458. IEEE Comp. Soc. Press, 2007. [doi:10.1109/FOCS.2007.30]

[7] Arkadev Chattopadhyay and Anil Ada: Multiparty communication complexity of disjointness. In Electron. Colloq. on Comput. Complexity (ECCC), January 2008. Report TR08-002. [ECCC:TR08-002]

[8] Benny Chor and Oded Goldreich: Unbiased bits from sources of weak randomness and probabilistic communication complexity. SIAM J. Comput., 17(2):230–261, 1988. [doi:10.1137/0217015]

[9] Fan R. K. Chung and Prasad Tetali: Communication complexity and quasi randomness. SIAM J. Discrete Math., 6(1):110–123, 1993. [doi:10.1137/0406009]

[10] Matei David and Toniann Pitassi: Separating NOF communication complexity classes RP and NP. In Electron. Colloq. on Comput. Complexity (ECCC), February 2008. Report TR08-014. [ECCC:TR08-014]

[11] Matei David, Toniann Pitassi, and Emanuele Viola: Improved separations between nondeterministic and randomized multiparty communication. ACM Trans. Comput. Log., 1(2), 2009. [doi:10.1145/1595391.1595392]

[12] Ronald de Wolf: A Brief Introduction to Fourier Analysis on the Boolean Cube. Number 1 in Graduate Surveys. Theory of Computing Library, 2008. [doi:10.4086/toc.gs.2008.001]

[13] Johan Håstad and Mikael Goldmann: On the power of small-depth threshold circuits. Comput. Complexity, 1(2):113–129, 1991. [doi:10.1007/BF01272517]

[14] Bala Kalyanasundaram and Georg Schnitger: The probabilistic communication complexity of set intersection. SIAM J. Discrete Math., 5(4):545–557, 1992. [doi:10.1137/0405044]

[15] Hartmut Klauck: Lower bounds for quantum communication complexity. In Proc. 42nd FOCS, pp. 288–297. IEEE Comp. Soc. Press, 2001. [doi:10.1109/SFCS.2001.959903]

[16] Hartmut Klauck: Rectangle size bounds and threshold covers in communication complexity. In Proc. 18th Conf. Comput. Complexity (CCC), pp. 118–134. IEEE Comp. Soc. Press, 2003. [doi:10.1109/CCC.2003.1214415]

[17] Eyal Kushilevitz and Noam Nisan: Communication Complexity. Cambridge University Press, New York, 1997.

[18] Troy Lee and Adi Shraibman: Disjointness is hard in the multi-party number-on-the-forehead model. In Proc. 23rd Conf. Comput. Complexity (CCC), pp. 81–91. IEEE Comp. Soc. Press, 2008. [doi:10.1109/CCC.2008.29]

[19] Noam Nisan and Mario Szegedy: On the degree of Boolean functions as real polynomials. Comput. Complexity, 4(4):301–313, 1994. [doi:10.1007/BF01263419]

[20] Christos H. Papadimitriou and Michael Sipser: Communication complexity. In Proc. 14th STOC, pp. 196–200. ACM Press, 1982. [doi:10.1145/800070.802192]

[21] Ramamohan Paturi: On the degree of polynomials that approximate symmetric Boolean functions. In Proc. 24th STOC, pp. 468–474. ACM Press, 1992. [doi:10.1145/129712.129758]

[22] Ran Raz: The BNS-Chung criterion for multi-party communication complexity. Comput. Complexity, 9(2):113–122, 2000. [doi:10.1007/PL00001602]

[23] A. A. Razborov: On the distributional complexity of disjointness. Theoret. Comput. Sci., 106(2):385–390, 1992. [doi:10.1016/0304-3975(92)90260-M]

[24] A. A. Razborov: Quantum communication complexity of symmetric predicates. Izv. Math., 67(1):145–159, 2003. [doi:10.1070/IM2003v067n01ABEH000422]

[25] Alexander Razborov and Avi Wigderson: n^{Ω(log n)} lower bounds on the size of depth-3 threshold circuits with AND gates at the bottom. Inform. Process. Lett., 45(6):303–307, 1993. [doi:10.1016/0020-0190(93)90041-7]

[26] Alexander A. Sherstov: Communication lower bounds using dual polynomials. Bull. Eur. Assoc. Theor. Comput. Sci. (EATCS), 95:59–93, 2008.

[27] Alexander A. Sherstov: The pattern matrix method for lower bounds on quantum communication. In Proc. 40th STOC, pp. 85–94. ACM Press, 2008. [doi:10.1145/1374376.1374392]

[28] Alexander A. Sherstov: Separating AC^0 from depth-2 majority circuits. SIAM J. Comput., 38(6):2113–2129, 2009. Preliminary version in 39th STOC, 2007. [doi:10.1137/08071421X]

[29] Yaoyun Shi and Yufan Zhu: Quantum communication complexity of block-composed functions. Quantum Inf. Comput., 9(5–6):444–460, 2009.

[30] Andrew Chi-Chih Yao: On ACC and threshold circuits. In Proc. 31st FOCS, pp. 619–627. IEEE Comp. Soc. Press, 1990. [doi:10.1109/FSCS.1990.89583]
AUTHORS
Dmitry Gavinsky
NEC Laboratories America Inc.
4 Independence Way, Suite 200
Princeton, NJ 08540
dmitry gavinsky gmail com
http://www.nec-labs.com/~dmitry
Alexander A. Sherstov
Microsoft Research
Cambridge, MA 02142
sherstov cs utexas edu
http://www.cs.utexas.edu/~sherstov
ABOUT THE AUTHORS
Dmitry Gavinsky is a research staff member of the Quantum IT group at NEC Laboratories America. He completed his Ph.D. at the University of Calgary under the supervision of John Watrous and Richard Cleve. His research interests include quantum computing and computational complexity theory.

Alexander Sherstov recently completed his Ph.D. at the University of Texas at Austin under the supervision of Adam Klivans. His research interests include computational complexity theory, computational learning theory, and quantum computing. He is currently a postdoctoral researcher at Microsoft Research.