Author: Alexander A. Sherstov
License: CC-BY 3.0
Theory of Computing, Volume 14 (22), 2018, pp. 1–17
www.theoryofcomputing.org
On Multiparty Communication with
Large versus Unbounded Error
Alexander A. Sherstov∗
Received September 3, 2016; Revised March 24, 2017; Published December 28, 2018
Abstract: The unbounded-error communication complexity of a Boolean function F is the
limit of the ε-error randomized complexity of F as ε → 1/2. Communication complexity
with weakly unbounded error is defined similarly but with an additive penalty term that de-
pends on 1/2 − ε. Explicit functions are known whose two-party communication complexity
with unbounded error is logarithmic compared to their complexity with weakly unbounded
error. Chattopadhyay and Mande (ECCC Report TR16-095, Theory of Computing 2018)
recently generalized this exponential separation to the number-on-the-forehead multiparty
model. We show how to derive such an exponential separation from known two-party work,
achieving a quantitative improvement along the way. We present several proofs here, some
as short as half a page.
In more detail, we construct a k-party communication problem F : ({0,1}^n)^k → {0,1}
that has complexity O(log n) with unbounded error and Ω(√n/4^k) with weakly unbounded
error, reproducing the bounds of Chattopadhyay and Mande. In addition, we prove a quadratically
stronger separation of O(log n) versus Ω(n/4^k) using a nonconstructive argument.
ACM Classification: F.1.3, F.2.3
AMS Classification: 68Q17, 68Q15
Key words and phrases: lower bounds, separation of complexity classes, multiparty communication
complexity, unbounded-error communication complexity, PP, UPP
A preliminary version of this paper appeared in ECCC TR16-138.
∗ The author was supported in part by NSF CAREER award CCF-1149018 and an Alfred P. Sloan Foundation Research
Fellowship.
© 2018 Alexander A. Sherstov
Licensed under a Creative Commons Attribution License (CC-BY) DOI: 10.4086/toc.2018.v014a022
1 Introduction
The number-on-the-forehead model, due to Chandra et al. [9], is the most powerful model of multiparty
communication. The model features k communicating players and a Boolean function
F : X1 × X2 × · · · × Xk → {−1, +1}
with k arguments. An input (x1 , x2 , . . . , xk ) is distributed among the k players by giving the i-th player
the arguments x1 , . . . , xi−1 , xi+1 , . . . , xk but not xi . This arrangement can be visualized as having the k
players seated in a circle with xi written on the i-th player’s forehead, whence the name of the model.
Number-on-the-forehead is the canonical model in the area because any other way of assigning arguments
to the players results in a less powerful model—provided of course that one does not assign all the
arguments to some player, in which case there is never a need to communicate.
The players communicate according to a protocol agreed upon in advance. The communication
occurs in the form of broadcasts, with a message sent by any given player instantly reaching everyone
else. The players’ objective is to compute F on any given input with minimal communication. To this
end, each player privately holds an unbounded supply of uniformly random bits which he can use in
deciding what message to send at any given point in the protocol. The cost of a protocol is the total bit
length of all the messages broadcast in a worst-case execution. The ε-error randomized communication
complexity Rε (F) of a given function F is the least cost of a protocol that computes F with probability of
error at most ε on every input. Number-on-the-forehead communication complexity is a natural subject
of study in its own right, in addition to its applications to circuit complexity, pseudorandomness, and
proof complexity [2, 32, 17, 23, 5].
Our interest in this paper is in communication protocols that compute a given function F with error
probability close to that of random guessing, 1/2. There are two standard ways to define the complexity
of F in this setting, both inspired by probabilistic polynomial time for Turing machines:
UPP(F) = inf_{0⩽ε<1/2} Rε(F)
and
PP(F) = inf_{0⩽ε<1/2} { Rε(F) + log₂(1/(1/2 − ε)) } .
The former quantity, introduced by Paturi and Simon [21], is called the communication complexity
of F with unbounded error, in reference to the fact that the error probability can be arbitrarily close
to 1/2. The latter quantity, proposed by Babai et al. [1], includes an additional penalty term that
depends on the error probability. We refer to PP(F) as the communication complexity of F with weakly
unbounded error. Both of these complexity measures give rise to complexity classes in communication
complexity theory [1]. Formally, UPPk is the class of families {Fn,k}_{n=1}^∞ of k-party communication
problems Fn,k : ({0,1}^n)^k → {−1,+1} whose unbounded-error communication complexity is at most
polylogarithmic in n. Its counterpart PPk is defined analogously for the complexity measure PP. The
authors of [21] and [1] studied two-party communication (k = 2). In the generalization just described,
k = k(n) can be an arbitrary constant or a growing function of n.
1.1 Previous work
Large-error communication is by definition no more powerful than unbounded-error communication, and
for twenty years it was unknown whether this containment is proper. Buhrman et al. [8] and the author [24]
answered this question for two-party communication, independently and with unrelated techniques.
These papers exhibited functions F : {0, 1}n × {0, 1}n → {−1, +1} with an exponential gap between
communication complexity with unbounded error versus weakly unbounded error: UPP(F) = O(log n)
in both works, versus PP(F) = Ω(n^{1/3}) in [8] and PP(F) = Ω(√n) in [24]. In complexity-theoretic
notation, these results show that PP₂ ⊊ UPP₂.
The analyses by Buhrman et al. [8] and the author [24] were quite specialized. The former was
based on a subtle lemma from Razborov’s quantum lower bound [22] for set disjointness, whereas
the latter was built around an earlier result of Goldmann et al. [16] on the discrepancy of a low-
degree polynomial threshold function. In subsequent work, the author developed a general tech-
nique called the pattern matrix method [25, 26], which makes it possible to obtain communication
lower bounds from simpler, approximation-theoretic complexity measures of Boolean functions. We
used the pattern matrix method in [26] to give a simple alternate proof of the separation due to
Buhrman et al. [8]. Following up, Thaler [31] used the pattern matrix method to exhibit a commu-
nication problem F : {0, 1}n × {0, 1}n → {−1, +1} computable in AC0 with communication complexity
UPP(F) = O(log n) and PP(F) = Ω((n/log n)^{2/5}). Thaler's result improves on the previous separation by
an AC0 function, due to Buhrman et al. [8].
The surveyed work on communication complexity with unbounded and weakly unbounded error
considered the two-party model. The past few years saw a resurgence of interest in multiparty communi-
cation complexity classes, with numerous separations established over the past decade [19, 11, 14, 3, 15].
Recently, Chattopadhyay and Mande [12] revisited the unbounded versus weakly unbounded question
in the multiparty setting. They generalized the original two-party separation [24] to k ⩾ 3 parties,
exhibiting a k-party communication problem F : ({0,1}^n)^k → {−1,+1} with UPP(F) = O(log n) and
PP(F) = Ω(√n/4^k − log n − k). Thus, the proper containment PPk ⊊ UPPk continues to hold for up to
k ≈ 0.25 log₂ n players.
1.2 Our results
The purpose of our paper is to show how to derive the proper containment PPk ⊊ UPPk in an almost
trivial manner from previous two-party work. The key is to use the pattern matrix-based approach
to the problem [26, 31], as opposed to the earlier two-party work [16, 24] which forms the basis for
Chattopadhyay and Mande’s result.
In more detail, we present three short proofs separating multiparty communication complexity with
unbounded versus weakly unbounded error, all of which work by applying the pattern matrix method to a
result from polynomial approximation. To start with, we give a half-page proof that PPk ⊊ UPPk for
up to k ≈ 0.5 log2 n players, by constructing an explicit function with an exponential gap between the
complexity with unbounded error versus weakly unbounded error. The proof of this qualitative result is
presented in Section 3.1 and is virtually identical to the previous analyses in the two-party setting [26, 31].
By applying the pattern matrix method to more recent work in approximation theory, we are able to give
quantitatively improved separations. Our strongest result is the following nonconstructive theorem.
Theorem 1.1 (Nonconstructive separation). There is a k-party communication problem H : ({0,1}^n)^k →
{−1,+1} with
UPP(H) = O(log n) and PP(H) = Ω(n/4^k) .
Moreover,
H(x) = sgn( 1/2 + Σ_{i=1}^{n} w_i x_{1,i} x_{2,i} · · · x_{k,i} )   (1.1)
for some fixed w₁, w₂, . . . , wₙ ∈ {0, ±1, ±2, . . . , ±(2^n − 1)}.
By using additional input bits, it is straightforward to obtain an explicit function that contains every
function of the form (1.1) as a subfunction. This yields our third result, which is our main constructive
separation.
Corollary 1.2. Let F : {0,1}^{n+√n} × ({0,1}^{√n})^{k−1} → {−1,+1} be the k-party communication problem
given by
F(x) = sgn( 1/2 + Σ_{i=1}^{√n} (−1)^{x_{1,i,√n}} Σ_{j=0}^{√n−1} 2^j x_{1,i,j} x_{2,i} x_{3,i} · · · x_{k,i} ) .
Then
UPP(F) = O(log n) and PP(F) = Ω(√n/4^k) .
Ignoring cosmetic differences, the function in this corollary is the same as the functions used in our
original two-party separation [24] and in the proof of Chattopadhyay and Mande [12]. Quantitatively,
Corollary 1.2 reproduces the bounds proved by Chattopadhyay and Mande, whereas Theorem 1.1 gives a
quadratically stronger nonconstructive result.
1.3 Paper organization
The remainder of this paper is organized as follows. Section 2 gives a leisurely review of the technical
preliminaries, which the expert reader may wish to skim or skip altogether. We then prove our three
separations in Sections 3.1–3.3.
2 Preliminaries
There are two common arithmetic encodings for the Boolean values: the traditional encoding false ↔
0, true ↔ 1, and the more recent Fourier-inspired encoding false ↔ 1, true ↔ −1. Throughout this
manuscript, we use the former encoding for the domain of a Boolean function and the latter for the
range. In particular, Boolean functions for us are mappings {0, 1}n → {−1, +1} for some n. For Boolean
functions f : {0, 1}n → {−1, +1} and g : {0, 1}m → {−1, +1}, we let f ◦ g denote the coordinatewise
composition of f with g. Formally, f ◦ g : ({0, 1}m )n → {−1, +1} is given by
( f ◦ g)(x₁, x₂, . . . , xₙ) = f( (1 − g(x₁))/2, (1 − g(x₂))/2, . . . , (1 − g(xₙ))/2 ) ,   (2.1)
where the linear map on the right-hand side serves the purpose of switching between the distinct
arithmetizations for the domain versus range. A partial function f on a set X is a function whose
domain of definition, denoted dom f , is a nonempty proper subset of X. We generalize coordinatewise
composition f ◦ g to partial Boolean functions f and g in the natural way. Specifically, f ◦ g is the
Boolean function given by (2.1), with domain the set of all inputs (. . . , xi , . . . ) ∈ (dom g)n for which
(. . . , (1 − g(xi ))/2, . . . ) ∈ dom f .
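The composition (2.1) is easy to mirror in code. A minimal sketch for total Boolean functions, with the helper names ours rather than the paper's:

```python
# Coordinatewise composition (f o g) per (2.1): g's {-1,+1} outputs are mapped
# through x -> (1 - x)/2 back into the {0,1} encoding used on f's domain.
def compose(f, g):
    def fg(blocks):  # blocks = (x_1, ..., x_n), each x_i in {0,1}^m
        return f(tuple((1 - g(x)) // 2 for x in blocks))
    return fg

# Example: f = parity on 2 bits, g = OR on 2 bits (both as +-1-valued functions).
f = lambda z: -1 if (z[0] + z[1]) % 2 == 1 else 1
g = lambda x: -1 if (x[0] or x[1]) else 1
fg = compose(f, g)
# OR((0,1)) is true -> bit 1; OR((0,0)) is false -> bit 0; parity of (1,0) is odd.
print(fg(((0, 1), (0, 0))))  # -> -1
```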
The analytic notation that we use is entirely standard. For a function f : X → R on an arbitrary
finite set X, we let k f k∞ = maxx∈X | f (x)| denote the infinity norm of f . Euler’s number is denoted
e = 2.7182 . . . . The sign function is given, as usual, by
sgn x = −1 if x < 0;  0 if x = 0;  +1 if x > 0.
For a subset X ⊆ R, we let sgn |X denote the restriction of the sign function to X. In other words,
sgn|X : X → {−1, 0, +1} is the mapping that sends x ↦ sgn x. We let log x stand for the logarithm of x to
base 2.
2.1 Approximation by polynomials
Recall that the total degree of a multivariate real polynomial p : Rn → R, denoted deg p, is the largest
degree of any monomial of p. We use the terms “degree” and “total degree” interchangeably in this paper.
Let f : X → R be a given function, for a finite subset X ⊂ Rn . For any d ⩾ 0, define
E( f , d) = min k f − pk∞ ,
p
where the minimum is over real polynomials p of degree at most d. In words, E( f , d) is the minimum
error in a pointwise approximation of f by a polynomial of degree no greater than d. The ε-approximate
degree of f , denoted degε ( f ), is the least degree of a real polynomial p such that k f − pk∞ ⩽ ε. Any
polynomial p with this property is said to be a uniform approximant, or pointwise approximant, to f with
error ε. Observe that
degε ( f ) = min{d : E( f , d) ⩽ ε} .
A common choice of error parameter is ε = 1/3, an aesthetically motivated constant that is replaceable
by any other in (0, 1) without the theory changing in any significant way. Specifically, the following
result of Buhrman et al. [7, p. 384] gives an efficient way to reduce the error in a pointwise approximation
of a Boolean function at the expense of a modest multiplicative increase in the degree of the approximant.
Fact 2.1 (Buhrman et al.). For any function f : X → {−1,+1} on any finite subset X ⊂ R^n,
degδ( f ) ⩽ O( (1/(1 − ε)²) · log(2/δ) · degε( f ) ) ,   0 < δ < ε < 1.
Proof (adapted from Buhrman et al.). Consider the degree-d univariate polynomial
Bd(t) = Σ_{i=⌈d/2⌉}^{d} (d choose i) t^i (1 − t)^{d−i} .
In words, Bd (t) is the probability of observing more heads than tails in a sequence of d independent coin
flips, each coming up heads with probability t. By the Chernoff bound, for a sufficiently large
d = O( (1/(1 − ε)²) · log(2/δ) ) ,
Bd sends
[0, ε/(1 + ε)] → [0, δ/2]   and similarly   [1 − ε/(1 + ε), 1] → [1 − δ/2, 1] .
In particular, if a given Boolean function f (x) is approximated pointwise within ε by a polynomial p(x),
then f (x) is approximated pointwise within δ by
2Bd( p(x)/(2 + 2ε) + 1/2 ) − 1 .
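The amplification step can be checked numerically. Bd(t) is, in words, the probability of observing more heads than tails among d flips of a t-biased coin; the sketch below uses illustrative parameter choices of ours:

```python
# Numerical sanity check of the amplification polynomial B_d: the probability
# of strictly more heads than tails in d flips of a t-biased coin.
from math import comb

def B(d, t):
    # i ranges over ceil(d/2), ..., d, matching the sum in the proof
    return sum(comb(d, i) * t**i * (1 - t)**(d - i) for i in range((d + 1) // 2, d + 1))

# With eps = 1/3 and a modest degree d, B_d pushes [0, eps/(1+eps)] toward 0
# and [1 - eps/(1+eps), 1] toward 1, which is what drives Fact 2.1.
eps, d = 1 / 3, 101
lo, hi = eps / (1 + eps), 1 - eps / (1 + eps)  # 0.25 and 0.75 here
print(B(d, lo) < 0.01 and B(d, hi) > 0.99)  # -> True
```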
2.2 Approximation of specific functions
Among the first findings in this line of work was Paturi’s tight lower bound [20] for the constant-error
approximation of the sign function. Specifically, Paturi showed that approximating the sign function on
{±1, ±2, ±3, . . . , ±n} pointwise to within 1/3 requires a polynomial of linear degree.
Theorem 2.2 (Paturi).
deg1/3 (sgn |{±1,±2,±3,...,±n} ) = Ω(n).
Paturi in fact proved the stronger result that the majority function on n bits has 1/3-approximate degree
Ω(n), but Theorem 2.2 will suffice for our purposes. On the large-error side, Beigel [6] constructed
the following function in his seminal work on perceptrons. Its remarkable feature is that low-degree
polynomials can represent it in sign but cannot approximate it uniformly except with error exponentially
close to 1.
Theorem 2.3 (Beigel). Let fn : {0,1}^n → {−1,+1} be given by
fn(x) = sgn( 1 + Σ_{i=1}^{n} (−2)^i x_i ) .
Then for all 1 ⩽ d ⩽ √n,
E( fn, d) > 1 − exp(−Ω(n/d²)) .
To be precise, Beigel phrased his proof in terms of a related approximation-theoretic quantity known as
threshold weight. A proof of Theorem 2.3 as it is stated here is available, e. g., in Thaler [31, Section 1.2.2].
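Beigel's function is often called ODD-MAX-BIT: since the highest-index set bit dominates the sum, fn(x) depends only on the parity of that index. A quick exhaustive check of this folklore description (our observation, not stated in the paper in this form):

```python
# Sanity check on Beigel's f_n from Theorem 2.3: the term (-2)^i of the
# highest-index set bit outweighs all lower terms (their total is at most
# 2^i - 2), so f_n(x) = +1 iff the largest i with x_i = 1 is even (or x = 0).
from itertools import product

def sgn(v):
    return (v > 0) - (v < 0)

def f(x):  # x = (x_1, ..., x_n), indexed from 1 as in the theorem
    return sgn(1 + sum((-2)**i * xi for i, xi in enumerate(x, start=1)))

def odd_max_bit(x):
    top = max((i for i, xi in enumerate(x, start=1) if xi), default=0)
    return -1 if top % 2 == 1 else 1

print(all(f(x) == odd_max_bit(x) for x in product((0, 1), repeat=8)))  # -> True
```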
2.3 Multiparty communication
An excellent reference on communication complexity is the monograph by Kushilevitz and Nisan [18]. In
this overview, we will limit ourselves to key definitions and notation. We adopt the randomized number-
on-the-forehead model, due to Chandra et al. [9]. The model features k communicating players, tasked
with computing a (possibly partial) Boolean function F on the Cartesian product X1 × X2 × · · · × Xk of
some finite sets X1 , X2 , . . . , Xk . A given input (x1 , x2 , . . . , xk ) ∈ X1 × X2 × · · · × Xk is distributed among the
players by placing xi , figuratively speaking, on the forehead of the i-th player (for i = 1, 2, . . . , k). In other
words, the i-th player knows the arguments x1 , . . . , xi−1 , xi+1 , . . . , xk but not xi . The players communicate
by sending broadcast messages, taking turns according to a protocol agreed upon in advance. Each of them
privately holds an unlimited supply of uniformly random bits, which he can use along with his available
arguments when deciding what message to send at any given point in the protocol. The protocol’s purpose
is to allow accurate computation of F everywhere on the domain of F. An ε-error protocol for F is
one which, on every input (x1 , x2 , . . . , xk ) ∈ dom F, produces the correct answer F(x1 , x2 , . . . , xk ) with
probability at least 1 − ε. The cost of a protocol is the total bit length of the messages broadcast by all the
players in the worst case.1 The ε-error randomized communication complexity of F, denoted Rε (F), is
the least cost of an ε-error randomized protocol for F.
2.4 Communication with unbounded and weakly unbounded error
We focus on randomized protocols with probability of error close to that of random guessing, 1/2. There
are two natural ways to define the communication complexity of a multiparty problem F in this setting.
The communication complexity of F with unbounded error, introduced by Paturi and Simon [21], is the
quantity
UPP(F) = inf_{0⩽ε<1/2} Rε(F) .
The error probability in this formalism is “unbounded” in the sense that it can be arbitrarily close to 1/2.
Babai et al. [1] proposed an alternate quantity, which includes an additive penalty term that depends on
the error probability:
PP(F) = inf_{0⩽ε<1/2} { Rε(F) + log(1/(1/2 − ε)) } .
We refer to PP(F) as the communication complexity of F with weakly unbounded error. These two com-
plexity measures naturally give rise to corresponding classes UPPk and PPk in multiparty communication
complexity [1], both inspired by the Turing machine class PP. Formally, let {Fn,k}_{n=1}^∞ be a family of
k-party communication problems Fn,k : ({0,1}^n)^k → {−1,+1}, where k = k(n) is either a constant or a
growing function. Then {Fn,k}_{n=1}^∞ ∈ UPPk if and only if UPP(Fn,k) ⩽ log^c n for some constant c and all
n ⩾ c. Analogously, {Fn,k}_{n=1}^∞ ∈ PPk if and only if PP(Fn,k) ⩽ log^c n for some constant c and all n ⩾ c.
By definition,
PPk ⊆ UPPk .
The following result, proved in the setting of k = 2 parties by Paturi and Simon [21, Theorem 2], gives a
large class of communication problems that are efficiently computable with unbounded error.
1 The contribution of a b-bit broadcast to the protocol cost is b rather than k · b.
Fact 2.4 (Paturi and Simon). Let F : ({0,1}^n)^k → {−1,+1} be a k-party communication problem such
that F(x) = sgn p(x) for some polynomial p with ℓ monomials. Then
UPP(F) ⩽ ⌈log ℓ⌉ + 2 .
For the reader’s convenience, we include a proof of this result.
Proof of Fact 2.4. For a subset S ⊆ {1, 2, . . . , n} × {1, 2, . . . , k}, let xS denote the product of the variables
indexed by S. By hypothesis,
F(x) = sgn( Σ_{i=1}^{ℓ} a_{S_i} x_{S_i} )
for some subsets S₁, S₂, . . . , S_ℓ and some reals a_{S₁}, a_{S₂}, . . . , a_{S_ℓ}. Consider the following communication
protocol, which involves only two of the k players. The first player chooses a random index i according to
the probability distribution |a_{S_i}|/(|a_{S₁}| + |a_{S₂}| + · · · + |a_{S_ℓ}|), and broadcasts i. He then collaborates with
the second player to output a random element of {−1,+1} with expected value sgn(a_{S_i} x_{S_i}).
It is straightforward to verify that this protocol can be implemented using at most ⌈log ℓ⌉ + 2 bits of
communication. For correctness, the output on a given input x has expected value
(1/(|a_{S₁}| + |a_{S₂}| + · · · + |a_{S_ℓ}|)) Σ_{i=1}^{ℓ} a_{S_i} x_{S_i} ,
which agrees in sign with F(x). Therefore, the protocol computes F(x) correctly with probability greater
than 1/2.
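The protocol in this proof admits a compact simulation. The sketch below (function and variable names ours) computes the protocol's expected output exactly rather than by sampling:

```python
# Exact expected output of the Paturi-Simon protocol from the proof of
# Fact 2.4. Index i is drawn with probability |a_i| / sum_j |a_j|, and the
# players output a +-1 bit with expected value sgn(a_i * x_{S_i}); overall
# the expectation is sum_i a_i x_{S_i} / sum_j |a_j|, which has the sign of
# F(x), so the error probability is strictly below 1/2.
def protocol_bias(coeffs, monomial_values):
    """coeffs[i] = a_{S_i}; monomial_values[i] = x_{S_i} in {0, 1}."""
    total = sum(abs(a) for a in coeffs)
    return sum(a * v for a, v in zip(coeffs, monomial_values)) / total

# F(x) = sgn(3*x1 - 2*x2): on x = (1, 1) the true value is +1, and the
# protocol's expected output is (3 - 2)/5 = 0.2 > 0.
print(protocol_bias([3, -2], [1, 1]))  # -> 0.2
```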
2.5 Discrepancy
Our main result involves proving, for communication problems F of interest, an upper bound on UPP(F)
and a lower bound on PP(F). The former is done using Fact 2.4; the latter relies on technical machinery
which we now review. A k-dimensional cylinder intersection is a function χ : X1 × X2 × · · · × Xk → {0, 1}
of the form
χ(x₁, x₂, . . . , x_k) = ∏_{i=1}^{k} χ_i(x₁, . . . , x_{i−1}, x_{i+1}, . . . , x_k) ,
where χi : X1 ×· · ·×Xi−1 ×Xi+1 ×· · ·×Xk → {0, 1}. In other words, a k-dimensional cylinder intersection
is the product of k functions with range {0, 1}, where the i-th function does not depend on the i-th
coordinate but may depend arbitrarily on the other k − 1 coordinates. Introduced by Babai et al. [2],
cylinder intersections are the fundamental building blocks of communication protocols and for that reason
play a central role in the theory. For a (possibly partial) Boolean function F on X1 × X2 × · · · × Xk and a
probability distribution P on X1 × X2 × · · · × Xk , the discrepancy of F with respect to P is given by
discP(F) = Σ_{x∉dom F} P(x) + max_χ | Σ_{x∈dom F} F(x)P(x)χ(x) | ,
where the maximum is over cylinder intersections χ. The minimum discrepancy over all distributions is
denoted
disc(F) = min_P discP(F) .
Upper bounds on the discrepancy give lower bounds on randomized communication complexity, a classic
technique known as the discrepancy method [13, 2, 18].
Theorem 2.5 (Discrepancy method). Let F be a (possibly partial) Boolean function on X1 × X2 × · · · × Xk .
Then
2^{Rε(F)} ⩾ (1 − 2ε) / disc(F) .
A proof of Theorem 2.5 in the stated generality is available in [29, Theorem 2.9]. Combining this theorem
with the definition of PP(F) gives the following corollary.
Corollary 2.6. Let F be a (possibly partial) Boolean function on X1 × X2 × · · · × Xk . Then
PP(F) ⩾ log( 2 / disc(F) ) .
2.6 Pattern matrix method
Theorem 2.5 and Corollary 2.6 highlight the role of discrepancy in proving lower bounds on randomized
communication complexity. Apart from a few canonical examples [18], discrepancy is a challenging
quantity to analyze. The pattern matrix method is a technique that gives tight bounds on the discrepancy
and communication complexity for a class of communication problems. The technique was developed
in [25, 26] for two-party communication complexity and has since been generalized by several authors to
the multiparty setting, e. g., [19, 11, 14, 4, 10, 29, 28]. We now review the strongest form [29, 28] of the
pattern matrix method, focusing our discussion on discrepancy bounds.
Set disjointness is the k-party communication problem of determining whether k given subsets of the
universe {1, 2, . . . , n} have empty intersection, where, as usual, the i-th party knows all the sets except for
the i-th. Identifying the sets with their characteristic vectors, set disjointness corresponds to the Boolean
function DISJn,k : ({0,1}^n)^k → {−1,+1} given by
DISJn,k(x₁, x₂, . . . , x_k) = ¬ ⋁_{i=1}^{n} ( x_{1,i} ∧ x_{2,i} ∧ · · · ∧ x_{k,i} ) .   (2.2)
The partial function UDISJn,k on ({0, 1}n )k , called unique set disjointness, is defined as the restriction
of DISJn,k to inputs x ∈ ({0, 1}n )k such that x1,i ∧ x2,i ∧ · · · ∧ xk,i = 1 for at most one coordinate i. In
set-theoretic terms, this restriction corresponds to requiring that the k sets either have empty intersection
or intersect in a unique element.
The pattern matrix method pertains to the communication complexity of composed communication
problems. Specifically, let G be a (possibly partial) Boolean function on X1 × X2 × · · · × Xk , representing
a k-party communication problem, and let f : {0, 1}n → {−1, +1} be given. The coordinatewise compo-
sition f ◦ G is then a k-party communication problem on X1n × X2n × · · · × Xkn . We are now in a position to
state the pattern matrix method for discrepancy bounds [29, Theorem 5.7].
Theorem 2.7 (Sherstov). For every Boolean function f : {0,1}^n → {−1,+1}, all positive integers m
and k, and all reals 0 < γ < 1,
disc( f ◦ UDISJm,k) ⩽ ( (e · 2^k · n) / (deg_{1−γ}( f ) · √m) + γ )^{deg_{1−γ}( f )} .
This theorem makes it possible to prove communication lower bounds by leveraging the existing literature
on polynomial approximation. In follow-up work, the author improved Theorem 2.7 to an essentially
tight upper bound [28, Theorem 5.7]. However, we will not need this sharper version.
3 Main results
We are now in a position to establish the proper containment PPk ⊊ UPPk for up to k ≈ 0.5 log n
players. We present three proofs for this separation. All of them apply the pattern matrix method
to a relevant result on polynomial approximation, in a manner closely analogous to the two-party
work [26, 31]. The key new element is the observation that the unique set disjointness function has an
exact representation on its domain as a polynomial with a small number of monomials. Specifically,
define UDISJ*m,k : ({0,1}^m)^k → R by
UDISJ*m,k(x) = −1 + 2 Σ_{i=1}^{m} x_{1,i} x_{2,i} · · · x_{k,i} .
Then
UDISJm,k (x) = UDISJ∗m,k (x) for all x ∈ dom UDISJm,k . (3.1)
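The identity (3.1) is easy to confirm exhaustively for small parameters. In the sketch below (names ours), "disjoint" is encoded as −1, i.e., as true, per the paper's conventions:

```python
# Brute-force check of (3.1): on the domain of unique set disjointness
# (at most one all-ones column), the sparse polynomial
# -1 + 2 * sum_i x_{1,i} x_{2,i} ... x_{k,i} computes UDISJ_{m,k} exactly.
from itertools import product

def disj(cols):  # cols[i] = (x_{1,i}, ..., x_{k,i}); -1 iff intersection empty
    return 1 if any(all(c) for c in cols) else -1

def udisj_star(cols):
    return -1 + 2 * sum(1 for c in cols if all(c))

def check(m, k):
    """True iff (3.1) holds on all of dom UDISJ_{m,k}."""
    for bits in product((0, 1), repeat=m * k):
        cols = [bits[i * k:(i + 1) * k] for i in range(m)]
        if sum(1 for c in cols if all(c)) <= 1 and udisj_star(cols) != disj(cols):
            return False
    return True

print(check(3, 3))  # -> True
```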
Section 3.1 presents our simplest and shortest proof of PPk ⊊ UPPk. We follow up in Sections 3.2 and 3.3
with quantitatively stronger separations, settling the main constructive and nonconstructive results of this
paper.
3.1 A qualitative separation
The literature on the polynomial approximation of Boolean functions spans a broad spectrum of technical
sophistication. Here, we combine the pattern matrix method with a particularly well-known and basic
result on polynomial approximation. Our proof is virtually identical to the previous proofs in the two-party
setting, e. g., [26, Section 10] and [31, Section 4.2.3]. The only point of departure, (3.1), is to check that
the inner gadget remains a sparse polynomial as the number of parties grows.
Theorem 3.1. For all positive integers n and k, there is an (explicitly given) k-party communication
problem Fn,k : ({0,1}^n)^k → {−1,+1} such that
UPP(Fn,k) = O(log n) and   (3.2)
PP(Fn,k) = Ω( (n/4^k)^{1/7} ) .   (3.3)
Moreover,
Fn,k(x) = sgn( w₀ + Σ_{i=1}^{n} w_i x_{1,i} x_{2,i} · · · x_{k,i} )   (3.4)
for fixed reals w₀, w₁, . . . , wₙ.
Proof. Let fn : {0,1}^n → {−1,+1} be the function defined in Theorem 2.3. Then fn(x) = sgn p(x) for a
linear polynomial p : {0,1}^n → R, and
deg_{1−exp(−cn^{1/3})}( fn) ⩾ n^{1/3}   (3.5)
for some constant c > 0. Abbreviate m = ⌈2^{k+1} e n^{2/3}⌉² and consider the k-party communication problem
F′n,k : ({0,1}^{nm})^k → {−1,+1} given by
F′n,k = sgn p( (1 − UDISJ*m,k)/2 , (1 − UDISJ*m,k)/2 , . . . , (1 − UDISJ*m,k)/2 ),
where the right-hand side features the coordinatewise composition of p with n independent copies of
UDISJ*m,k. The identity (3.1) implies that F′n,k coincides with fn ◦ UDISJm,k on the domain of the latter.
Therefore,
PP(F′n,k) ⩾ PP( fn ◦ UDISJm,k)
⩾ log( 2 / disc( fn ◦ UDISJm,k) )
⩾ log( 2 / (2^{−n^{1/3}} + exp(−cn^{1/3})) )
= Ω(n^{1/3}) ,
where the second step uses Corollary 2.6 and the third step follows from (3.5) by the pattern matrix
method (Theorem 2.7). Now (3.3) and (3.4) are immediate by letting
Fn,k = F′_{⌊(n/4^{k+4})^{3/7}⌋, k} ,
whereas (3.2) follows from (3.4) by Fact 2.4.
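The role of the parameter m in this proof can be checked numerically. Assuming the reading m = ⌈2^{k+1} e n^{2/3}⌉² (our reconstruction), this choice makes the first term of the base in Theorem 2.7 at most 1/2 when the degree is n^{1/3}:

```python
# Sanity check on the parameter choice m = ceil(2^(k+1) * e * n^(2/3))^2,
# as read here: with degree d = n^(1/3), the first term of the base in
# Theorem 2.7 is e * 2^k * n / (d * sqrt(m)) = e * 2^k * n^(2/3) / sqrt(m),
# which the ceiling makes at most 1/2.
from math import e, ceil, sqrt

def base_term(n, k):
    m = ceil(2**(k + 1) * e * n**(2 / 3))**2
    d = n**(1 / 3)  # the degree bound from (3.5)
    return e * 2**k * n / (d * sqrt(m))

print(all(base_term(n, k) <= 0.5 for n in (10, 1000, 10**6) for k in (2, 5, 10)))  # -> True
```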
Theorem 3.1 is not as strong as our main results, to be established shortly. It is nevertheless of qualitative
interest because of its corollary.
Corollary 3.2. Let ε > 0 be an arbitrary constant. Then for k ⩽ (0.5 − ε) log n,
PPk ⊊ UPPk .
3.2 The nonconstructive separation
We now turn to a nonconstructive separation of PPk and UPPk , which quantitatively is our strongest.
The main new ingredient here is an approximation-theoretic result [27, Theorem 5.1] for halfspaces.
Informally, it states that approximating a certain halfspace in n Boolean variables is at least as hard as
approximating the sign function on an exponentially larger domain, {±1, ±2, ±3, . . . , ± exp(Ω(n))}.
Theorem 3.3 (Sherstov). Let 0 < α < 1 be any sufficiently small absolute constant. Then there exist inte-
gers w₁, w₂, . . . , wₙ ∈ {0, 1, 2, . . . , 2^n − 1} such that the Boolean function fn : {0,1}^n × {0, 1, 2, . . . , n} →
{−1,+1} given by
fn(x, t) = sgn( 1/2 + Σ_{i=1}^{n} w_i x_i − 2^{⌊αn⌋+1} t )   (3.6)
obeys
E( fn, d) ⩾ E( sgn|_{{±1,±2,±3,...,±2^{⌊αn⌋}}} , d ) ,   d = 0, 1, . . . , ⌊αn⌋ .
This result was proved in [27] in the greater generality of approximation by rational functions. The
special case of polynomial approximation, stated above, corresponds to fixing q = 1 in the proof of [27,
Theorem 5.1].
Corollary 3.4. There exist integers w₁, w₂, . . . , w_{n+1} ∈ {0, 1, 2, . . . , 2^n − 1} such that the Boolean function
hn : {0,1}^{2n} → {−1,+1} given by
hn(x) = sgn( 1/2 + Σ_{i=1}^{n} w_i x_i − w_{n+1} Σ_{i=n+1}^{2n} x_i )   (3.7)
obeys
E(hn, cn) > 1 − exp(−cn) ,
where c > 0 is an absolute constant.
Proof. Let 0 < α < 1 be a sufficiently small absolute constant from Theorem 3.3, and abbreviate
S = sgn|_{{±1, ±2, ±3, . . . , ±2^{⌊αn⌋}}} .
Theorem 2.2 and Fact 2.1 imply that
Ω(2^{⌊αn⌋}) ⩽ deg_{1/3}(S) ⩽ O( (1/(1 − E(S, ⌊αn⌋))²) · ⌊αn⌋ ) ,
whence
E(S, ⌊αn⌋) ⩾ 1 − O( (⌊αn⌋ / 2^{⌊αn⌋})^{1/2} ) .   (3.8)
Now fix w₁, w₂, . . . , wₙ whose existence is guaranteed by Theorem 3.3, and set w_{n+1} = 2^{⌊αn⌋+1}. Let fn
and hn be given by (3.6) and (3.7). Then
E(hn, ⌊αn⌋) = E( fn, ⌊αn⌋) ⩾ E(S, ⌊αn⌋) ,
where the first step holds by a standard symmetrization argument (see, e. g., [27, Proposition 2.6]) and the
second step is immediate from Theorem 3.3. This completes the proof in view of (3.8).
We are now in a position to prove our nonconstructive separation, stated as Theorem 1.1 in the
introduction.
Theorem 3.5 (restatement of Theorem 1.1). There is a k-party communication problem
Hn,k : ({0,1}^n)^k → {−1,+1}
with
UPP(Hn,k) = O(log n) and   (3.9)
PP(Hn,k) = Ω(n/4^k) .   (3.10)
Moreover,
Hn,k(x) = sgn( 1/2 + Σ_{i=1}^{n} w_i x_{1,i} x_{2,i} · · · x_{k,i} )   (3.11)
for some fixed w₁, w₂, . . . , wₙ ∈ {0, ±1, ±2, . . . , ±(2^n − 1)}.
Proof. Let hn : {0,1}^{2n} → {−1,+1} be the function whose existence is assured by Corollary 3.4. Then
deg_{1−exp(−cn)}(hn) ⩾ cn   (3.12)
for some constant c > 0, and moreover hn(x) = sgn p(x) for a linear polynomial p : {0,1}^{2n} → R with
constant term 1/2 and all other coefficients integers bounded in absolute value by 2^n − 1. Abbreviate
m = ⌈2^{k+2} e/c⌉² and consider the k-party communication problem H′n,k : ({0,1}^{2nm})^k → {−1,+1} given
by
H′n,k = sgn p( (1 − UDISJ*m,k)/2 , (1 − UDISJ*m,k)/2 , . . . , (1 − UDISJ*m,k)/2 ),
where the right-hand side features the coordinatewise composition of p with 2n independent copies of
UDISJ*m,k. The identity (3.1) implies that H′n,k coincides with hn ◦ UDISJm,k on the domain of the latter.
Therefore,
PP(H′n,k) ⩾ PP(hn ◦ UDISJm,k)
⩾ log( 2 / disc(hn ◦ UDISJm,k) )
⩾ log( 2 / (2^{−cn} + exp(−cn)) )
= Ω(n) ,
where the second step uses Corollary 2.6 and the third step follows from (3.12) by the pattern matrix
method (Theorem 2.7). A moment's thought shows that H′_{⌊n/(4m)⌋, k} is a subfunction of some Hn,k in
the theorem statement, whence (3.10) and (3.11). The remaining property (3.9) follows from (3.11) by
Fact 2.4.
3.3 The constructive separation
Despite being nonconstructive, Theorem 3.5 implies the following constructive result, which is our
strongest constructive separation of PPk and UPPk .
Corollary 3.6 (restatement of Corollary 1.2). Let Fn,k : {0,1}^{n+√n} × ({0,1}^{√n})^{k−1} → {−1,+1} be the
k-party communication problem given by
Fn,k(x) = sgn( 1/2 + Σ_{i=1}^{√n} (−1)^{x_{1,i,√n}} Σ_{j=0}^{√n−1} 2^j x_{1,i,j} x_{2,i} x_{3,i} · · · x_{k,i} ) .
Then
UPP(Fn,k) = O(log n) and   (3.13)
PP(Fn,k) = Ω(√n/4^k) .   (3.14)
Proof. In the notation of Theorem 3.5, every $H_{\sqrt{n},k}$ is a restriction of $F_{n,k}$. Indeed, every $H_{\sqrt{n},k}$ can be obtained from $F_{n,k}$ by appropriately fixing $x_{1,i,0},x_{1,i,1},\dots,x_{1,i,\sqrt{n}-1}\in\{0,x_{1,i}\}$ and $x_{1,i,\sqrt{n}}\in\{0,1\}$. As a result, (3.14) follows from (3.10), whereas (3.13) is immediate by Fact 2.4.
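The restriction hinges on the fact that party 1's extra bits encode an arbitrary signed weight in binary: substituting $x_{1,i,j}:=x_{1,i}$ at the positions $j$ where the binary digit of $|w_i|$ is 1 (and 0 elsewhere), and fixing the sign bit $x_{1,i,\sqrt{n}}$, turns the inner sum of $F_{n,k}$ into $w_i x_{1,i}$. A minimal Python sketch of this encoding step (function names are illustrative, not from the paper):

```python
def encode_weight(w, bits):
    """Return (sign_bit, digits) for an integer w with |w| < 2**bits,
    mirroring how the inner sum of F_{n,k} represents a signed weight."""
    assert abs(w) < 2 ** bits
    sign_bit = 1 if w < 0 else 0
    digits = [(abs(w) >> j) & 1 for j in range(bits)]
    return sign_bit, digits

def decode_weight(sign_bit, digits):
    """Evaluate (-1)**sign_bit * sum_j 2**j * digits[j], as in F_{n,k}."""
    return (-1) ** sign_bit * sum(2 ** j * d for j, d in enumerate(digits))
```

Round-tripping any weight through `encode_weight` and `decode_weight` recovers it exactly, which is all the restriction argument needs.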
Acknowledgments
The author is thankful to Arkadev Chattopadhyay and Nikhil Mande for a stimulating discussion and
helpful feedback on an earlier version of this manuscript.
References
[1] László Babai, Peter Frankl, and János Simon: Complexity classes in communication complexity theory. In Proc. 27th FOCS, pp. 337–347. IEEE Comp. Soc. Press, 1986. [doi:10.1109/SFCS.1986.15]
[2] László Babai, Noam Nisan, and Mario Szegedy: Multiparty protocols, pseudorandom generators for logspace, and time-space trade-offs. J. Comput. System Sci., 45(2):204–232, 1992. Preliminary version in STOC'89. [doi:10.1016/0022-0000(92)90047-M]
[3] Paul Beame, Matei David, Toniann Pitassi, and Philipp Woelfel: Separating deterministic from randomized multiparty communication complexity. Theory of Computing, 6(1):201–225, 2010. Preliminary version in ICALP'07. [doi:10.4086/toc.2010.v006a009]
[4] Paul Beame and Dang-Trinh Huynh-Ngoc: Multiparty communication complexity and threshold circuit size of AC^0. SIAM J. Comput., 41(3):484–518, 2012. Preliminary version in FOCS'09. [doi:10.1137/100792779]
[5] Paul Beame, Toniann Pitassi, and Nathan Segerlind: Lower bounds for Lovász–Schrijver systems and beyond follow from multiparty communication complexity. SIAM J. Comput., 37(3):845–869, 2007. Preliminary version in ICALP'05. [doi:10.1137/060654645]
[6] Richard Beigel: Perceptrons, PP, and the polynomial hierarchy. Comput. Complexity, 4(4):339–349, 1994. Preliminary version in SCT'92. [doi:10.1007/BF01263422]
[7] Harry Buhrman, Ilan Newman, Hein Röhrig, and Ronald de Wolf: Robust polynomials and quantum algorithms. Theory Comput. Syst., 40(4):379–395, 2007. Preliminary version in STACS'05. [doi:10.1007/s00224-006-1313-z, arXiv:quant-ph/0309220]
[8] Harry Buhrman, Nikolai K. Vereshchagin, and Ronald de Wolf: On computation and communication with small bias. In Proc. 22nd IEEE Conf. on Computational Complexity (CCC'07), pp. 24–32. IEEE Comp. Soc. Press, 2007. [doi:10.1109/CCC.2007.18]
[9] Ashok K. Chandra, Merrick L. Furst, and Richard J. Lipton: Multi-party protocols. In Proc. 15th STOC, pp. 94–99. ACM Press, 1983. [doi:10.1145/800061.808737]
[10] Arkadev Chattopadhyay: Circuits, Communication, and Polynomials. Ph. D. thesis, McGill University, 2008. eScholarship@McGill.
[11] Arkadev Chattopadhyay and Anil Ada: Multiparty communication complexity of disjointness. Electron. Colloq. on Comput. Complexity (ECCC), January 2008. [ECCC:TR08-002]
[12] Arkadev Chattopadhyay and Nikhil Mande: Separation of unbounded-error models in multi-party communication complexity. Theory of Computing, 14(21), 2018. Preliminary version ECCC TR16-095. [doi:10.4086/toc.2018.v014a021]
[13] Benny Chor and Oded Goldreich: Unbiased bits from sources of weak randomness and probabilistic communication complexity. SIAM J. Comput., 17(2):230–261, 1988. Preliminary version in FOCS'85. [doi:10.1137/0217015]
[14] Matei David, Toniann Pitassi, and Emanuele Viola: Improved separations between nondeterministic and randomized multiparty communication. ACM Trans. Comput. Theory, 1(2/5), 2009. Preliminary version in RANDOM'08. [doi:10.1145/1595391.1595392]
[15] Dmitry Gavinsky and Alexander A. Sherstov: A separation of NP and coNP in multiparty communication complexity. Theory of Computing, 6(10):227–245, 2010. [doi:10.4086/toc.2010.v006a010, arXiv:1004.0817]
[16] Mikael Goldmann, Johan Håstad, and Alexander A. Razborov: Majority gates vs. general weighted threshold gates. Comput. Complexity, 2(4):277–300, 1992. Preliminary version in SCT'92. [doi:10.1007/BF01200426]
[17] Johan Håstad and Mikael Goldmann: On the power of small-depth threshold circuits. Comput. Complexity, 1(2):113–129, 1991. Preliminary version in FOCS'90. [doi:10.1007/BF01272517]
[18] Eyal Kushilevitz and Noam Nisan: Communication Complexity. Cambridge University Press, 1997.
[19] Troy Lee and Adi Shraibman: Disjointness is hard in the multiparty number-on-the-forehead model. Comput. Complexity, 18(2):309–336, 2009. Preliminary version in CCC'08. [doi:10.1007/s00037-009-0276-2]
[20] Ramamohan Paturi: On the degree of polynomials that approximate symmetric Boolean functions. In Proc. 24th STOC, pp. 468–474. ACM Press, 1992. [doi:10.1145/129712.129758]
[21] Ramamohan Paturi and János Simon: Probabilistic communication complexity. J. Comput. System Sci., 33(1):106–123, 1986. Preliminary version in FOCS'84. [doi:10.1016/0022-0000(86)90046-2]
[22] Alexander A. Razborov: Quantum communication complexity of symmetric predicates. Izv. Math., 67(1):145–159, 2003. [doi:10.1070/IM2003v067n01ABEH000422, arXiv:quant-ph/0204025]
[23] Alexander A. Razborov and Avi Wigderson: $n^{\Omega(\log n)}$ lower bounds on the size of depth-3 threshold circuits with AND gates at the bottom. Inform. Process. Lett., 45(6):303–307, 1993. Version available at CiteSeer. [doi:10.1016/0020-0190(93)90041-7]
[24] Alexander A. Sherstov: Halfspace matrices. Comput. Complexity, 17(2):149–178, 2008. Preliminary version in CCC'07. [doi:10.1007/s00037-008-0242-4]
[25] Alexander A. Sherstov: Separating AC^0 from depth-2 majority circuits. SIAM J. Comput., 38(6):2113–2129, 2009. Preliminary version in STOC'07. [doi:10.1137/08071421X]
[26] Alexander A. Sherstov: The pattern matrix method. SIAM J. Comput., 40(6):1969–2000, 2011. Preliminary version in STOC'08. [doi:10.1137/080733644]
[27] Alexander A. Sherstov: Optimal bounds for sign-representing the intersection of two halfspaces by polynomials. Combinatorica, 33(1):73–96, 2013. Preliminary version in STOC'10. [doi:10.1007/s00493-013-2759-7]
[28] Alexander A. Sherstov: Communication lower bounds using directional derivatives. J. ACM, 61(6):1–71, 2014. Preliminary version in STOC'13. [doi:10.1145/2629334]
[29] Alexander A. Sherstov: The multiparty communication complexity of set disjointness. SIAM J. Comput., 45(4):1450–1489, 2016. Preliminary version in STOC'12. [doi:10.1137/120891587]
[30] Alexander A. Sherstov: On multiparty communication with large versus unbounded error. Electron. Colloq. on Comput. Complexity (ECCC), 23:138, 2016. [ECCC:TR16-138]
[31] Justin Thaler: Lower bounds for the approximate degree of block-composed functions. In Proc. 43rd Internat. Colloq. on Automata, Languages and Programming (ICALP'16), pp. 17:1–17:15. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2016. [doi:10.4230/LIPIcs.ICALP.2016.17, ECCC:TR14-150]
[32] Andrew Chi-Chih Yao: On ACC and threshold circuits. In Proc. 31st FOCS, pp. 619–627. IEEE Comp. Soc. Press, 1990. [doi:10.1109/FSCS.1990.89583]
AUTHOR
Alexander A. Sherstov
Associate professor
University of California, Los Angeles
sherstov cs ucla edu
http://www.cs.ucla.edu/~sherstov
ABOUT THE AUTHOR
Alexander A. Sherstov, known to his friends and colleagues as Sasha, is an associate professor of computer science at the University of California, Los Angeles. Prior to joining UCLA, Sasha obtained his Ph. D. at the University of Texas at Austin under the direction of Adam Klivans and spent two years as a postdoctoral scholar at Microsoft Research. He has broad research interests in theoretical computer science, including computational complexity theory, computational learning theory, and quantum computing.