Authors: Justin Gilmer, Michal Koucký, Michael Saks
License: CC-BY-3.0
Theory of Computing, Volume 13 (7), 2017, pp. 1–18
www.theoryofcomputing.org
A Communication Game Related to the
Sensitivity Conjecture
Justin Gilmer∗ Michal Koucký† Michael Saks‡
Received July 7, 2015; Revised September 12, 2016; Published September 4, 2017
Abstract: One of the major outstanding foundational problems about Boolean functions is
the sensitivity conjecture, which asserts that the degree of a Boolean function is bounded
above by some fixed power of its sensitivity. We propose an attack on the sensitivity
conjecture in terms of a novel two-player communication game. A lower bound of the form
n^{Ω(1)} on the cost of this game would imply the sensitivity conjecture.
To investigate the problem of bounding the cost of the game, three natural (stronger)
variants of the question are considered. For two of these variants, protocols are presented
that show that the hoped-for lower bound does not hold. These protocols satisfy a certain
monotonicity property, and we show that the cost of any monotone protocol satisfies a strong
lower bound in the original variant.
There is an easy upper bound of √n on the cost of the game. We also improve slightly
on this upper bound. This game and its connection to the sensitivity conjecture were independently
discovered by Andy Drucker (arXiv:1706.07890).
ACM Classification: F.1.3
AMS Classification: 94C10, 68Q17, 68Q15
Key words and phrases: complexity theory, Boolean functions, sensitivity conjecture, sensitivity, degree
of Boolean functions, decision tree, communication complexity
A preliminary version of this paper appeared in the Proceedings of the 6th Innovations in Theoretical Computer Science
conference, 2015 [7].
∗ Supported by NSF grant CCF 083727.
† The research leading to these results has received funding from the European Research Council under the European
Union’s Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 616787. Partially supported by the project
14-10003S of GA ČR and a grant from Neuron Fund for Support of Science.
‡ Supported by NSF grants CCF-083727, CCF-1218711 and by Simons Foundation award 332622.
© 2017 Justin Gilmer, Michal Koucký, and Michael Saks
Licensed under a Creative Commons Attribution License (CC-BY). DOI: 10.4086/toc.2017.v013a007
1 Introduction
1.1 A communication game
The focus of this paper is a somewhat unusual cooperative two-player communication game. The game is
parameterized by a positive integer n and is denoted Gn . Alice receives a permutation σ = (σ1 , . . . , σn )
of [n] = {1, . . . , n} and a bit b ∈ {0, 1} and sends Bob a message (which is restricted in a way that will be
described momentarily). Bob receives the message from Alice and outputs a subset J of [n] that must
include σn , the last element of the permutation. The cost to Alice and Bob is the size of the set |J|.
The message sent by Alice is constrained as follows: Alice constructs an array v consisting of n cells,
which we will refer to as locations, where each location v_ℓ is initially empty, denoted by v_ℓ = ∗. Alice
receives the input as a data stream σ1, . . . , σn, b and is required to fill the cells of v in the order specified by
σ. After receiving σi for i < n, Alice fills location σi with 0 or 1; once written, this cannot be changed.
Upon receiving σn and b, Alice writes b in location σn. The message Alice sends to Bob is the completed
array in {0, 1}^n.
A protocol Π is specified by Alice’s algorithm for filling in the array, and Bob’s function mapping the
received array to the set J.
Definition 1.1. The cost c(Π) of a protocol Π is the maximum of the output size |J| over all inputs
σ1, . . . , σn, b.
For example, consider the following protocol. Let k = ⌈√n⌉. Alice and Bob fix a partition of the
locations of v into k blocks, each of size at most k. Alice fills v as follows: when σi arrives, if σi is the
last location of its block to arrive, then she fills the entry with 1; otherwise she fills it with 0.
Notice that if b = 1 then the final array v will have a single 1 in each block. If b = 0 then v will have
a unique all-0 block.
Bob chooses J as follows: if there is an all-0 block, then J is set to be that block, and otherwise J is
set to be the set of locations containing 1's. It is clear that σn ∈ J, so this is a valid protocol. In all
cases the size of J is at most k, so the cost of the protocol is ⌈√n⌉. We will refer to this protocol
as the AND-OR protocol. In Section 2.1 we remark on this protocol's connection to the Boolean function

AND-OR(x) = ⋀_{i=1}^{√n} ⋁_{j=1}^{√n} x_{ij}.
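As a sanity check, the AND-OR protocol can be simulated directly. The sketch below is our own illustration, not from the paper (the function names and the 0-indexed locations are ours): it streams a permutation to Alice, applies Bob's rule, and lets one verify on any run that σn ∈ J and |J| ≤ ⌈√n⌉.

```python
import math
import random

def make_blocks(n):
    # Partition locations 0..n-1 into blocks of size at most ceil(sqrt(n)).
    k = math.ceil(math.sqrt(n))
    blocks = {}
    for loc in range(n):
        blocks.setdefault(loc // k, []).append(loc)
    return blocks

def run_and_or_protocol(n, sigma, b):
    """One run of the AND-OR protocol; returns Bob's set J."""
    blocks = make_blocks(n)
    which = {loc: j for j, locs in blocks.items() for loc in locs}
    v = [None] * n
    for i, loc in enumerate(sigma):
        if i == n - 1:
            v[loc] = b                      # Alice must write the final bit b
        else:
            others = [m for m in blocks[which[loc]] if m != loc]
            # 1 iff loc is the last location of its block to arrive
            v[loc] = 1 if all(v[m] is not None for m in others) else 0
    # Bob: output the unique all-0 block if one exists, else the 1-locations.
    for locs in blocks.values():
        if all(v[m] == 0 for m in locs):
            return set(locs)
    return {m for m in range(n) if v[m] == 1}
```

On every run the output contains the last streamed location and has size at most ⌈√n⌉, matching the cost bound above.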
Definition 1.2. The function C(n) is defined to be the minimum cost of any protocol for Gn .
We are interested in the growth rate of C(n) as a function of n. In particular, we propose:
Question 1.3. Is there a δ > 0 such that C(n) = Ω(n^δ)?
1.2 Connection to the sensitivity conjecture
Why consider such a strange game? The motivation is that the game provides a possible approach to the
well known sensitivity conjecture from Boolean function complexity, which we define now.
Definition 1.4. The sensitivity of an n-variate Boolean function f at an input x, denoted s_x(f), is the
number of locations ℓ such that if we flip the bit of x in location ℓ then the value of the function changes.
Alternatively, s_x(f) is the number of neighbors of x in the Hamming graph whose f-value is different
from f(x).
Definition 1.5. The sensitivity of f, denoted s(f), is the maximum of s_x(f) over all Boolean inputs x.
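For small n, Definitions 1.4 and 1.5 can be checked by brute force. A minimal sketch (exponential in n, and our own illustration rather than anything from the paper):

```python
from itertools import product

def sensitivity(f, n):
    """s(f) for a Boolean function f on {0,1}^n, by exhaustive search:
    for each input x, count the coordinates whose flip changes f(x)."""
    best = 0
    for x in product((0, 1), repeat=n):
        sx = sum(1 for l in range(n)
                 if f(x[:l] + (1 - x[l],) + x[l + 1:]) != f(x))
        best = max(best, sx)
    return best
```

For example, OR on n variables has sensitivity n (witnessed at the all-0 input), and parity has sensitivity n at every input.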
Definition 1.6. The degree of a function f, deg(f), is the smallest degree of a (real) polynomial p in
variables x1, . . . , xn such that f(x) = p(x) for all x ∈ {0, 1}^n.
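The degree of Definition 1.6 is also computable by brute force: the unique multilinear polynomial agreeing with f has coefficient c_S = ∑_{T⊆S} (−1)^{|S|−|T|} f(1_T) on the monomial ∏_{i∈S} x_i (Möbius inversion), so deg(f) is the largest |S| with c_S ≠ 0. A sketch for small n (ours, not the paper's):

```python
from itertools import combinations

def degree(f, n):
    """deg(f) for f on {0,1}^n, via the multilinear (Mobius) coefficients."""
    def point(S):
        # f evaluated at the indicator input 1_S
        return f(tuple(1 if i in S else 0 for i in range(n)))
    for size in range(n, 0, -1):
        for S in combinations(range(n), size):
            # coefficient of the monomial prod_{i in S} x_i
            c = sum((-1) ** (size - r) * point(T)
                    for r in range(size + 1)
                    for T in combinations(S, r))
            if c != 0:
                return size
    return 0
```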
Conjecture 1.7 (The Sensitivity Conjecture). There is a δ > 0 such that for any Boolean function f ,
s(f) = Ω(deg(f)^δ).
An easy argument (given in Section 2) connects the cost function C(n) of the game Gn to the sensitivity
conjecture:
Proposition 1.8. For any Boolean function f on n variables, s(f) ≥ C(deg(f)).
In particular, an affirmative answer to Question 1.3 would imply the sensitivity conjecture.
We note that Andy Drucker [6] independently formulated the above communication game, observed
its connection to the sensitivity conjecture, and formulated Question 3.4 given below.
1.3 Background on the sensitivity conjecture
Sensitivity and degree belong to a large class of complexity measures for Boolean functions that seek to
quantify, for each function f , the amount of knowledge about individual variables needed to evaluate
f . Other such measures include decision tree complexity and its randomized and quantum variants,
certificate complexity, and block sensitivity. The value of such a measure is at most the number of
variables. There is a long line of research aimed at bounding one such measure in terms of another. For
measures a and b, let us write a ≤_r b if there are constants C1, C2 such that for every total Boolean function
f, a(f) ≤ C1·b(f)^r + C2. For example, the decision tree complexity of f, D(f), is at least its degree
deg(f), and thus deg ≤_1 D. It is also known [12] that D ≤_3 deg. We say that a is polynomially bounded by
b if a ≤r b for some r > 0 and that a and b are polynomially equivalent if each is polynomially bounded
by the other.
The measures mentioned above, with the notable exception of sensitivity, are known to be polynomially
equivalent. For example, Nisan and Szegedy [14] proved bs(f) ≤_2 deg(f), and also proved a result
in the other direction, which was improved in [3] to deg(f) ≤_3 bs(f). For a survey of such results, see [4]
and [9]; some recent results include [1, 8].
The sensitivity conjecture, posed as a question by Nisan [13], asserts that s(f) is polynomially equivalent
to at least one (and therefore all) of the other measures mentioned. There are many reformulations
and related conjectures; see [9] for a survey.
The sensitivity conjecture perhaps more commonly appears as a question on the relationship between
sensitivity and block sensitivity. For example, Nisan and Szegedy [14] asked specifically whether bs(f) =
O(s²(f)) for all functions, and as of this writing no counterexample has been given. The best known
bound relating sensitivity to block sensitivity was given by Ambainis et al. [2], who showed that for any
Boolean function f, bs(f) ≤ C(f) ≤ 2^{s(f)−1} s(f) (improving on the bs(f) ≤ e^{s(f)+o(1)} bound of Kenyon
and Kutin [10]).
1.4 Outline of the paper
In Section 2 we prove that a positive answer to Question 1.3 would imply the sensitivity conjecture. We
show that adversary arguments for proving that Boolean functions are evasive (that is, have decision tree
complexity D( f ) = n) provide strategies for the communication game. We also prove that it suffices to
answer Question 1.3 for the restricted class of order-oblivious protocols.
In Section 3 we present three stronger variants of Question 1.3. We exhibit protocols that show that
two of these variants have negative answers. One might then expect that variants of one of these protocols
might lead to a negative answer to Question 1.3. However, we observe that these protocols satisfy a
property called monotonicity, and in Section 4 we prove a ⌊√n⌋ lower bound on the cost of any monotone
protocol. Thus a protocol that gives a negative answer to Question 1.3 must look quite different from
the two protocols that refuted the strengthenings. We also prove a rather weak lower bound for a special
class of protocols called assignment-oblivious protocols. Finally, in Section 5 we construct a protocol
with cost √(0.99n), thus beating the AND-OR protocol by a constant factor. Let r(k) = log(C(k))/log(k).
After a preliminary version of our paper appeared, Szegedy [15] showed that for any k, C(n) = O(n^{r(k)}).
Our example shows that there is a k for which r(k) < 1/2, and so it follows that C(n) = O(n^{1/2−δ}) for
some δ > 0. Szegedy further showed that C(30) ≤ 5, which gives the best currently known upper bound
C(n) = O(n^{0.4732}).
2 Connection between the sensitivity conjecture and the game
In this section we prove Proposition 1.8, which connects the sensitivity conjecture with the two-player
game described in the introduction.
We use e_ℓ to denote the assignment in {0, 1}^n that is 1 in location ℓ and 0 elsewhere. Given
v, w ∈ {0, 1}^n, v ⊕ w denotes their bitwise mod-2 sum.
Definition 2.1. The Hamming graph Hn is the undirected graph whose vertex set is {0, 1}^n, where a pair
of vertices forms an edge if they differ in exactly one coordinate.
Alice’s strategy maps the permutation-bit pair (σ , b) to a Boolean array v and Bob’s strategy maps the
array v to a subset of [n]. We now show that for each strategy for Alice there is a canonical best strategy
for Bob. For a permutation σ , let ΠA (σ ) denote the array Alice writes while receiving σ1 , . . . , σn−1 (so
location σn is labeled ∗). Note that ΠA (σ ) can be viewed as an edge in Hn .
Definition 2.2. The edge set E(Π) of a protocol Π is the set of edges ΠA (σ ) over all permutations σ .
Definition 2.3. Given a Boolean function f, let E(f) denote the set of sensitive edges of f, i. e., the edges (x, x ⊕ e_ℓ) of Hn with f(x) ≠ f(x ⊕ e_ℓ).
Given Alice’s output v, the possible values for σn are precisely those locations ` that satisfy (v, v ⊕ e` )
is an edge in E(Π). Thus the best strategy for Bob is to output this set of locations. It follows that c(Π) is
equal to the maximum vertex degree of the graph E(Π).
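Since Bob's canonical strategy is determined by Alice's, the cost of a protocol can be computed for small n by enumerating all permutations and taking the maximum degree of the graph E(Π). A brute-force sketch (ours; the callback signature is our own device):

```python
from itertools import permutations

def edge_set_and_cost(n, alice_fill):
    """Enumerate E(Pi) for the game G_n and return (edges, cost), where
    alice_fill(v, loc, history) -> bit gives Alice's strategy and the
    cost is the maximum vertex degree of the graph E(Pi)."""
    edges = set()
    for sigma in permutations(range(n)):
        v = [None] * n
        for i, loc in enumerate(sigma[:-1]):
            v[loc] = alice_fill(v, loc, sigma[:i])
        last = sigma[-1]
        # Pi_A(sigma) leaves location sigma_n starred: a hypercube edge.
        lo = tuple(0 if m == last else v[m] for m in range(n))
        hi = tuple(1 if m == last else v[m] for m in range(n))
        edges.add((lo, hi))
    degree = {}
    for lo, hi in edges:
        degree[lo] = degree.get(lo, 0) + 1
        degree[hi] = degree.get(hi, 0) + 1
    return edges, max(degree.values())
```

For the always-write-0 strategy (the adversary for OR discussed in Section 3), E(Π) is the star at 0^n, so the cost is n.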
Proposition 1.8 will therefore follow by showing the following: Given a Boolean function with degree
d and sensitivity s, there is a strategy Π for Alice for the game Gd such that the graph E(Π) has maximum
degree at most s.
We need a few preliminaries. A subfunction of a Boolean function f is a function g obtained from f
by fixing some of the variables of f to 0 or 1. For a subfunction g of f , s( f ) ≥ s(g). We say a function
has full degree if deg( f ) is equal to the number of variables of f . We start by recalling some well known
facts.
Lemma 2.4. For any Boolean function f there exists a subfunction g on deg( f ) variables that has full
degree.
Proof. If p is the (unique) multilinear real polynomial that agrees with f on the Boolean cube, then p
contains a monomial ∏_{ℓ∈S} x_ℓ with |S| = deg(f). Let g be the function obtained by fixing the variables
in [n] \ S to 0. Then g is a function on deg(f) variables that has full degree.
Lemma 2.5. Given a function f with full degree and a location ℓ, there exists a bit b such that the
function obtained from f by fixing x_ℓ = b is also of full degree.
Proof. The polynomial (viewed as a function from {0, 1}^n to {0, 1}) for f may be written in the form

p1(x1, . . . , x_{ℓ−1}, x_{ℓ+1}, . . . , xn) + x_ℓ · p2(x1, . . . , x_{ℓ−1}, x_{ℓ+1}, . . . , xn).

If p1 has a nonzero coefficient on the monomial ∏_{k≠ℓ} x_k, then we set x_ℓ = 0 and the resulting function
will have full degree. For the other case, note that p2 must have a nonzero coefficient on ∏_{k≠ℓ} x_k because f
has full degree. Thus, setting x_ℓ = 1 will work.
The proof of this lemma is essentially the same as the standard argument that the decision tree
complexity of any function f is at least deg( f ).
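Lemma 2.5 is effective: writing f = p1 + x_ℓ·p2 as in its proof, one only needs to test whether p1 has a nonzero top coefficient. A small sketch of this choice rule (the function names and the dictionary-based inputs are our own illustration):

```python
from itertools import combinations

def top_coefficient(f, variables):
    """Coefficient of prod_{i in variables} x_i in the multilinear
    polynomial for f, by Mobius inversion; f takes a dict var -> bit."""
    n = len(variables)
    return sum((-1) ** (n - r) * f({i: (1 if i in T else 0) for i in variables})
               for r in range(n + 1)
               for T in combinations(variables, r))

def fix_preserving_full_degree(f, variables, l):
    """Lemma 2.5 as code: f has full degree on `variables`; return a bit b
    such that fixing x_l = b leaves a full-degree function on the rest."""
    rest = [v for v in variables if v != l]
    p1 = lambda x: f({**x, l: 0})      # the restriction x_l = 0
    # If p1's top coefficient vanishes, p2's cannot (f has full degree).
    return 0 if top_coefficient(p1, rest) != 0 else 1
```

For AND on three variables the rule returns 1 (fixing any x_ℓ = 0 kills the function), while for parity it returns 0.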
We are now ready to prove Proposition 1.8.
Proof. Given f, let g be a subfunction of f on deg(f) variables with full degree. We construct a protocol Π
that satisfies E(Π) ⊆ E(g). As Alice receives σ1, σ2, . . . , σn, she fills in v so that the restriction of g to
each partial assignment remains a full-degree function, which is possible by Lemma 2.5. After Alice fills
location σ_{n−1}, the function g restricted to v is a non-constant function of one variable, and so the edge
ΠA(σ) is a sensitive edge for g. This implies that E(Π) ⊆ E(g). We conclude that c(Π) ≤ s(g) ≤ s(f).
The proof shows that a degree-n Boolean function having sensitivity s can be converted into a strategy
for Alice for the game Gn of cost at most s. We don’t know whether this connection goes the other
way, i. e., we can’t rule out the possibility that the answer to Question 1.3 is negative but the sensitivity
conjecture is still true.
2.1 Connection to decision tree complexity
An n-variate Boolean function is evasive if its decision tree complexity is n. It is folklore that every
evasive Boolean function admits an adversary argument. View the problem of evaluating the function by
a decision tree as a game between the querier who wishes to evaluate the function and who decides which
variable to read next, and the adversary who decides the value of the variable. A function is evasive if
there is a strategy for the adversary that forces the querier to ask all n questions. A common technique for
proving a function is evasive is to exhibit an adversary argument. For example, to prove that
AND-OR(x) = ⋀_{i=1}^{√n} ⋁_{j=1}^{√n} x_{ij}
is evasive, the adversary can use the strategy: answer 0 to every variable unless the variable is the last
variable in its ⋁-block, in which case answer 1. This adversary is exactly Alice's part of the AND-OR
protocol described in the introduction. For more examples of adversary arguments see [11].
Adversary arguments correspond to protocols Π for Alice. In fact a function f is evasive if and only
if there exists a protocol Π for which E(Π) ⊆ E( f ). This work explores whether we can use the structure
of an arbitrary adversary (or protocol) to exhibit a lower bound on sensitivity.
2.2 Order-oblivious protocols
In the game Gn , at each step i < n, the value written by Alice at location σi may depend on her knowledge
up to that step, which includes both the sequence σ1 , . . . , σi and the partial assignment already made to v
at locations σ1 , . . . , σi−1 . In this section we consider a natural restriction of Alice’s strategy.
Definition 2.6. A protocol Π is order-oblivious if, for each i, the bit Alice writes in location σi depends
only on σi and the current partial assignment written on v but not on the order in which σ1 , . . . , σi−1
arrived.
The following easy proposition shows that it suffices to answer Question 1.3 for order-oblivious
protocols.
Proposition 2.7. Given any protocol Π, there exists an order-oblivious protocol Π′ such that E(Π′) ⊆
E(Π). In particular, c(Π′) ≤ c(Π).
Proof. First some notation. Given a permutation σ, let σ≤k denote the first k elements of σ. We let
ΠA(σ≤k) denote the partial assignment written on v after Alice has been streamed σ1, . . . , σk.
Given Π we define an order-oblivious protocol Π′ of cost at most that of Π. We define Π′ in steps,
where in step i Alice receives σi and writes a bit in that location. Given k ≥ 0, we assume that Π′ has been
defined up through step k and has the property that for every permutation σ, there is a permutation τ of
σ1, . . . , σk so that ΠA(τ) = Π′A(σ≤k).
Suppose σ_{k+1} arrives and the current state of the array is v := Π′A(σ≤k). From v Alice can deduce the
set {σ1, . . . , σk} (the set of locations not labeled ∗). Let τ∗ be the lexicographically smallest permutation
of σ1, . . . , σk such that ΠA(τ∗) = Π′A(σ≤k). Alice then writes in location σ_{k+1} according to what Π does
after τ∗. Note that the bit written in location σ_{k+1} does not depend on the relative order of σ1, σ2, . . . , σk.
By construction, Π′ is order-oblivious. Also, for any permutation σ there is a permutation τ for which
ΠA(τ) = Π′A(σ). This implies that E(Π′) ⊆ E(Π).
3 Stronger variants of Question 1.3
We now present three natural variants of Question 1.3: Questions 3.1, 3.4 and 3.9. We refute the latter
two by exhibiting and analyzing some specific protocols.
The cost function c(Π) of a protocol is the worst-case cost over all choices of σ1, . . . , σn, b. Alternatively,
we can consider the average size, with respect to uniformly random¹ σ and b, of the set Bob outputs.
We call this the expected cost of Π and denote it by c̃(Π). Let C̃(n) denote the minimum expected cost
of a protocol for Gn.

Question 3.1. Is there a δ > 0 such that C̃(n) = Ω(n^δ)?
An affirmative answer to this question would give an affirmative answer to Question 1.3.
It is well known that the natural probabilistic version of the sensitivity conjecture, where sensitivity
is replaced by average sensitivity, is trivially false (for example, for the OR function). However, there
is apparently no connection between average sensitivity and average protocol cost. For example, the
protocol induced by the decision-tree adversary for OR has Alice write a 0 at each step. Note that E(Π)
is exactly the set of sensitive edges of the OR function. However, the average cost c̃(Π) is (n + 1)/2,
whereas the average sensitivity of the OR function is o(1).
We also remark that an analog of Proposition 2.7 holds for the cost function c̃(Π), and therefore it
suffices to answer the question for order-oblivious protocols. The proof of the analog is similar to the
proof of Proposition 2.7, except that τ∗ is not selected to be the lexicographically smallest permutation
in the indicated set, but rather the permutation in the indicated set that minimizes the expected cost
conditioned on the first k steps.
There is another natural information-theoretic variant of Question 1.3. We first formally define some
information-theoretic concepts. For a detailed introduction to information theory see [5].
Definition 3.2. The entropy of a random variable X with probability mass function p(x) is

H(X) = −∑_x p(x) log2 p(x).
Definition 3.3. Given random variables X and Y, the conditional entropy of Y given X is

H(Y | X) = ∑_x p(x) H(Y | X = x).
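These two quantities are easy to compute empirically. A small helper sketch (our own, using sample frequencies as the mass function p):

```python
import math
from collections import Counter

def entropy(samples):
    """Empirical Shannon entropy H(X) (Definition 3.2) of a list of outcomes."""
    counts = Counter(samples)
    total = len(samples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def conditional_entropy(pairs):
    """Empirical H(Y | X) (Definition 3.3) from a list of (x, y) pairs:
    the p(x)-weighted average of the entropies of Y within each x-class."""
    by_x = {}
    for x, y in pairs:
        by_x.setdefault(x, []).append(y)
    total = len(pairs)
    return sum((len(ys) / total) * entropy(ys) for ys in by_x.values())
```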
When we run a fixed protocol Π on a uniformly random permutation σ and bit b, we can view the
array v produced by Alice as a random variable. Let h̃(Π) = H(σn | v). Intuitively this measures the
average number of bits of uncertainty that Bob has about σn after seeing v. It is easy to show that this is
bounded above by log(c(Π)). Let H̃(n) be the minimum of h̃(Π) over all protocols Π for Gn. We can
now state the information-theoretic analog of Question 1.3, which was independently posed by Andy
Drucker [6].
¹Here we restrict attention to the uniform distribution on σ, b; see the remark following the proof of Theorem 3.5.
Question 3.4. Is there a positive constant δ such that H̃(n) ≥ δ log(n)?
An affirmative answer to this would have implied an affirmative answer to Question 1.3, but the
answer to this new question turns out to be negative.
Theorem 3.5. There is an order-oblivious protocol Π for Gn such that h̃(Π) ≤ 3 + ⌈log log(n)⌉.
Remark 3.6. It might seem that this could be proved by giving a protocol Π that is not order-oblivious
and converting it into an order-oblivious protocol as described earlier. However, while we know that
this can be done without increasing the worst-case cost or average cost, it is possible that h̃ may increase.
Therefore, we construct the desired order-oblivious protocol directly.
Proof. Let k = ⌈log(n)⌉ and associate each location ℓ ∈ [n] with its binary expansion, viewed as a vector
b(ℓ) ∈ F_2^k. Note that 0 ∉ [n], and thus each vector b(ℓ) is nonzero. For an array v ∈ {0, 1}^n we define
Γ(v) := ∑_{ℓ : v_ℓ = 1} b(ℓ), i. e., the vector in F_2^k obtained by summing the vectors corresponding to the 1 entries
of v. Say that an array v ∈ {0, 1, ∗}^n is admissible if there is a way of filling in the ∗'s so that for the
resulting array w we have Γ(w) = 0^k, where 0^k is the all-0 vector in F_2^k. We call such a w a completion of
v. For an admissible array v, let v̂ be the unique completion of v such that

1. Γ(v̂) = 0^k,
2. the number r of 0's in v̂ is minimum,
3. the ordered sequence ℓ1 < · · · < ℓr of locations of the 0's in v̂ is lexicographically minimum subject
to conditions (1) and (2). That is, for each j ∈ [r], ℓ_j is the minimum possible given ℓ1, . . . , ℓ_{j−1}.

We now describe the protocol. Let t > k be an integer (which we'll choose to be ⌈log(n)⌉²). Alice
writes 0 for the first n − t steps. The resulting array u has n − t 0's and t ∗'s. Since u can be completed to
the all-0 array, u is admissible. Alice fills in the remaining positions to agree with û.
This strategy is order-oblivious: A simple induction argument shows that for each array w reached
under the above strategy, w is admissible and ŵ = û.
We also make the following claim.
Claim 3.7. Let S = {b(ℓ) | û_ℓ = 0 and u_ℓ = ∗}. Then |S| ≤ k.
Proof. Suppose for contradiction that |S| ≥ k + 1. Then there must be a nonempty subset of vectors in S
that sums to 0^k, since any k + 1 vectors in F_2^k are linearly dependent. This means we can set the
corresponding coordinates of û to 1 and still satisfy Γ(û) = 0^k, contradicting the minimality of the
number of 0's in û.
Let v denote the array in {0, 1}^n received by Bob. We now obtain an upper bound on the conditional
entropy of σn given v. Let u = u(σ) be the array obtained after the first n − t steps and let T(σ) be the
set of positions of ∗'s in u. Let S(σ) be the subset of T(σ) consisting of those positions set to 0 in û.
Let L be the random variable that is 1 if σn ∈ S(σ) and 0 otherwise. Since S(σ) depends only on the
set T(σ) and not on the order of the last t locations, the probability that L = 1 is at most
max_σ |S(σ)|/|T(σ)| ≤ ⌈log(n)⌉/⌈log(n)⌉² = 1/⌈log(n)⌉, where the inequality uses Claim 3.7. We have:
H(σn | v) ≤ H(σn, L | v)
= H(L | v) + H(σn | v, L)
≤ 1 + H(σn | v, L)
= 1 + H(σn | v, L = 1) Pr[L = 1] + H(σn | v, L = 0) Pr[L = 0]
≤ 1 + (1/⌈log(n)⌉) H(σn) + H(σn | v, L = 0).
We bound the final expression. H(σn) = log(n), so the second term is at most 1. For the third term, we condition
further on the value of the final bit b:

H(σn | v, L = 0) ≤ H(b) + (1/2)(H(σn | v, L = 0, b = 1) + H(σn | v, L = 0, b = 0)).
Of course, H(b) = 1. Given L = 0, we have σn ∈ T(σ) − S(σ). If b = 1, then σn is one of at most t
positions set to 1, and so the conditional entropy of σn is at most log(t) ≤ 2⌈log log(n)⌉. If b = 0, then
Γ(v) = b(σn) (since σn is the unique location that, if set to 1, would make the vectors corresponding to the
locations of 1's sum to 0^k). The conditional entropy in this case is 0.
Summing up all of the conditional entropy contributions gives H(σn | v) ≤ 3 + ⌈log log(n)⌉, which
concludes the proof.
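The b = 0 recovery step is the heart of the protocol and is easy to see in code: whenever Γ(w) = 0^k and a single 1-entry at location ℓ is cleared, the Γ-value of the damaged array is exactly b(ℓ). A minimal sketch (ours, not from the paper), identifying F_2^k vectors with integers under XOR:

```python
def gamma(v):
    """Gamma(v): the F_2^k-sum (XOR) of the binary expansions b(l) of the
    locations l holding a 1; locations are 1-indexed, so b(l) != 0."""
    g = 0
    for l, bit in enumerate(v, start=1):
        if bit == 1:
            g ^= l
    return g
```

Starting from any array with Γ = 0^k and zeroing one 1-location leaves that location's index as the Γ-value, which is how Bob pins down σn when b = 0.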
Remark 3.8. In Questions 3.1 and 3.4 we assume that the permutation σ is chosen uniformly at random.
Other natural variants arise if we allow the adversary to pick an arbitrary distribution. Also, one can
consider variants in which Alice and Bob use a randomized protocol and the selected permutation is
worst case. We leave these variants as the subject of possible future investigations.
For our last variant of Question 1.3, suppose Alice can communicate with Bob using a w-ary alphabet
instead of a binary alphabet. Thus, Alice is streamed a permutation σ, and when σi arrives she may write
any of the symbols {1, . . . , w} in location σi of v. At the last step b ∈ {1, . . . , w} arrives and Alice must
write it in location σn. Bob sees v and has to output a set J that contains σn. The cost of the protocol is
the maximum size of J over all σ and b. For w ≥ 2, let C^{(w)}(n) denote the minimum cost of any protocol Π
on n variables in which Alice uses the w-ary alphabet. This leads us to:
Question 3.9. For each w ≥ 2, is there a δ(w) > 0 such that C^{(w)}(n) = Ω(n^{δ(w)})?
An affirmative answer to this question would have implied the sensitivity conjecture. Indeed, instead
of considering functions from {0, 1}n to {0, 1} one can consider functions from {1, . . . , w}n to {0, 1}. All
the complexity measures naturally generalize to this setting, and one can also generalize the sensitivity
conjecture. The sensitivity conjecture over w-ary alphabet (w ≥ 2) is equivalent to the original sensitivity
conjecture, and an affirmative answer to Question 3.9 would have implied the sensitivity conjecture over
w-ary alphabet.
We will show that Question 3.9 has a negative answer for w > 2; in fact, C^{(2j+1)}(n) is bounded above
by a j-th iterated logarithm of n.
Define the function τ : N → N by τ(k) = ⌈4 log2(k)⌉. Let k0 = 2^128 and observe that for k ≥ k0 we
have 32 ≤ τ(k) ≤ k/32. The functions (t_j : N → N : j ≥ 0) are defined by:
• t0(n) = n for all n.
• For j ≥ 1, t_j(n) = t_{j−1}(τ(n)) if n ≥ k0, and t_j(n) = n for n < k0.
Thus t_j is essentially the base-r iterated log function for r = 2^{1/4}.
The main result of this part is:
Theorem 3.10. For each j ≥ 0 there is a protocol Π_j using the alphabet {1, . . . , 2j + 1} whose cost is at
most t_j(n).
To describe Π_j we'll need a specialized variant of error-correcting codes. For T ⊆ [n] and s ≤ |T|,
(T choose s) denotes the set of s-element subsets of T. For sets S, S′, S△S′ denotes the symmetric difference
(S − S′) ∪ (S′ − S). A (T, s)-map over alphabet Σ is a family α = (α_S : S ∈ (T choose s)) where each α_S is a
function from S to Σ. We say α is binary if |Σ| = 2, and k-robust if:

For S1, S2 ∈ (T choose s) satisfying |S1△S2| = 2, there are at most s − k − 1 indices j ∈ S1 ∩ S2 such
that α_{S1}(j) = α_{S2}(j).
Lemma 3.11. For any subset T of [n] with t = |T| and t ≥ s ≥ τ(t), there is a binary (T, s)-map that is
⌊s/32⌋-robust.
(The reader may wish to skip the routine proof and go on to the proof of Theorem 3.10.) The proof
of the lemma uses codes that correct deletion errors. For a string s, a deletion is the removal of some
symbol from s (shrinking its length by 1). We say that a code C ⊆ Σ^k corrects t deletions provided that
for all v ≠ w ∈ C, if v′ is obtained from v by t deletions and w′ is obtained from w by t deletions, then
v′ ≠ w′. (Thus any codeword can be recovered uniquely even after t deletions.)
Proposition 3.12. For all k, there is a binary code C_k ⊆ {0, 1}^k with |C_k| ≥ 2^{k/2} that corrects ⌊k/32⌋
deletions.
Proof. Let G_k be the graph on {0, 1}^k in which two strings x and y are joined if they have a common
subsequence z (not necessarily contiguous) of length (31/32)k. Any independent set of G_k corrects ⌊k/32⌋
deletions. Let C_k be a maximal independent set. Letting ∆_k denote the maximum degree of a vertex in
G_k, then (recalling a standard argument) every vertex of V(G_k) − C_k has at least one edge to C_k, and so
|V(G_k)| − |C_k| ≤ ∆_k |C_k|, which implies |C_k| ≥ 2^k/(∆_k + 1). Thus, it suffices to show ∆_k < 2^{k/2}.
Note that

∆_k ≤ (k choose ⌊k/32⌋)² · 2^{⌊k/32⌋},
since for a fixed string x, each neighbor y can be specified by selecting the subsets of ⌊k/32⌋ positions to
delete from x and from y, together with the values of the bits deleted from y. Using the standard bound
(n choose k) ≤ 2^{h(k/n)n}, where h(λ) = λ log2(1/λ) + (1 − λ) log2(1/(1 − λ)), the upper bound on ∆_k is less than 2^{k/2}.
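The deletion-correcting condition has a clean equivalent form: two length-k codewords collide after t deletions each exactly when they share a common subsequence of length k − t. So checking that a given set corrects t deletions (and, with t = ⌊k/32⌋, that it is independent in the graph G_k above) reduces to longest-common-subsequence computations. A sketch of this check (ours, not from the paper):

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of strings a and b (classic DP)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def corrects_deletions(code, t):
    """True iff no two distinct codewords of length k share a common
    subsequence of length k - t, i.e., the code corrects t deletions."""
    words = list(code)
    k = len(words[0])
    return all(lcs_len(v, w) < k - t
               for i, v in enumerate(words) for w in words[i + 1:])
```

For instance, {0000, 1111} corrects 3 deletions (the two words share no common subsequence), while {0101, 0011} does not even correct one.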
Proof of Lemma 3.11. For i ∈ T, let r_T(i) denote the rank of i in T, i. e., the position of i when T is
arranged in increasing order. For S ⊆ T, let R(S) = ∑_{i∈S} r_T(i). We have R(S) ≤ t² ≤ 2^{s/2}, since s ≥ 4 log(t).
The code C_s given by Proposition 3.12 has size at least 2^{s/2}, so we can assign each h ∈ [t²] a unique
codeword w_s(h) (which is a binary string of length s).
For S ∈ (T choose s), define α_S so that it maps the j-th smallest index in S to the j-th bit of w_s(R(S)).
To check the conclusion, suppose S1, S2 ∈ (T choose s) and |S1△S2| = 2, which implies R(S1) ≠ R(S2). Let

J = {j ∈ S1 ∩ S2 : α_{S1}(j) = α_{S2}(j)}.

The codewords w_s(R(S1)) and w_s(R(S2)) can be made equal by deleting s − |J| symbols from each
codeword. Since C_s corrects ⌊s/32⌋ deletions, |J| ≤ s − ⌊s/32⌋ − 1.
Proof of Theorem 3.10. If n < k0 , then t j (n) = n and the protocol is trivial: Alice writes 0 in every
position and Bob outputs [n]. So assume n ≥ k0 .
We'll first present the proof for j = 1, and then generalize. In Π1, Alice processes the streamed
permutation in two phases, numbered 0 and 1. In phase 0, Alice writes 0 in the first n − τ(n) locations
of the streamed permutation. Since n ≥ k0 (so that τ(n) ≥ 32), Lemma 3.11 implies that there is a
1-robust binary ([n], τ(n))-map α over the alphabet {1, 2}. The set L1 of locations that are unfilled after
phase 0 has size τ(n). During phase 1, Alice receives the next τ(n) − 1 locations and for each such
location q writes α_{L1}(q). The final location σn is filled by the adversary with 0, 1 or 2.
Bob outputs his set as follows. Let M0 be the set of locations in v that hold a 0, and M1 the set
of locations that hold a 1 or 2. If |M0| = n − τ(n), then the adversary wrote 1 or 2 in the final location
and Bob can output M1, which has size t1(n). Otherwise |M0| = n − τ(n) + 1 and the adversary wrote 0
in location σn. For q ∈ M0, let M1(q) = M1 ∪ {q}, so L1 = M1(σn). The values written in locations M1
agree with α_{L1}. Since α is a 1-robust ([n], τ(n))-map and |M1| = |L1| − 1, it follows that L1 is determined
(among the sets M1(q)) by the values written on M1. Thus Bob knows M1 and L1 = M1 ∪ {σn}, and can
output {σn}. This completes the case j = 1.
For the general case, given n ≥ k0, consider the infinite sequence t0(n), t1(n), t2(n), . . .. Let h be the
minimum of j and the first index i for which t_i(n) < k0. Let u0, u1, . . . , u_{h+1} be the sequence with u_i = t_i(n)
for i ≤ h and u_{h+1} = 1.
Alice's strategy consists of h + 1 phases numbered 0 to h. During phase i, Alice processes the next
u_i − u_{i+1} elements of the permutation and fills in the corresponding locations (as described below). The
unique unfilled location after phase h is labeled by the adversary.
Define the sets L0 ⊇ · · · ⊇ L_{h+1}, where L0 = [n] and L_i is the set of unfilled locations after phase i − 1.
Thus |L_i| = u_i for i ∈ {0, . . . , h + 1}. The set of locations filled during phase i is L_i − L_{i+1}.
During phase 0, Alice writes 0 in each received location. For 0 < i ≤ h, during phase i, Alice writes
symbols from Σi = {2i − 1, 2i}. If ui < k0 (which is only possible for i = h), Alice writes 2h in each
received location. Otherwise ui ≥ k0 and Alice uses Corollary 3.11 to construct a (Li−1 , ui )-map α i over
alphabet Σi that is bui /32c-robust. In phase i, for each q ∈ Li − Li+1 , Alice writes αLi i (q) on location q.
(Since Alice knows Li−1 and Li at the start of phase i she can construct αLi i .)
Theory of Computing, Volume 13 (7), 2017, pp. 1–18
Justin Gilmer, Michal Koucký, and Michael Saks
Bob must then select J ⊆ [n]. We will ensure that |J| = uh or |J| = 1. Since uh < k0 ≤ t j (n) or
uh = t j (n), the protocol cost is at most t j (n).
Alice writes exactly di = ui − ui+1 symbols from Σi . Let i∗ be the index such that the final symbol
(chosen by the adversary) lies in Σi∗. Let Mi be the set of locations whose label is in Σi. For i ≠ i∗, |Mi| = di, and |Mi∗| = di∗ + 1. Bob knows the numbers di and the sets Mi, and can determine i∗.
If i∗ = h, then Bob outputs Mh , which has size uh = 1 + dh and must contain σn . So assume i∗ < h.
Let K = ⋃_{j ≥ i∗+1} Mj. For q ∈ Mi∗, let K(q) = K ∪ {q}. Bob knows K and Mi∗ and knows that σn ∈ Mi∗, Li∗ = Mi∗ ∪ K, and Li∗+1 = K(σn). We claim that Bob can reconstruct the set Li∗+1. If so, he can then output {σn} = Li∗+1 − K, which costs 1.
By Alice's protocol, the symbols written on Mi∗+1 agree with α^{i∗+1}_{K(σn)}. So it is enough to show that for each q ∈ Mi∗ with q ≠ σn, α^{i∗+1}_{K(q)} disagrees with α^{i∗+1}_{K(σn)} on some index in Mi∗+1.
Recall that α^{i∗+1} is a (Li∗, ui∗+1)-map that is ⌊ui∗+1/32⌋-robust. We claim that ⌊ui∗+1/32⌋ ≥ ui∗+2. If i∗ + 1 < h then ui∗+1 ≥ k0 and ui∗+2 = τ(ui∗+1) ≤ ui∗+1/32 by the choice of k0. If i∗ + 1 = h, then ui∗+2 = 1, so it suffices that ui∗+1 ≥ 32, which follows from ui∗+1 = τ(ui∗) ≥ τ(k0) ≥ 32.
Therefore α^{i∗+1} is ui∗+2-robust. Since |K(q) △ K(σn)| = 2, α^{i∗+1}_{K(q)} and α^{i∗+1}_{K(σn)} can agree on at most ui∗+1 − ui∗+2 − 1 indices, while |Mi∗+1| = ui∗+1 − ui∗+2, so they differ on some location of Mi∗+1, as required to complete the proof.
4 Lower bounds for restricted protocols
In the previous section, two stronger variants of Question 1.3 turned out to have negative answers, which
may suggest that Question 1.3 also has a negative answer. In this section however, we prove a lower
bound which implies that any counterexample to Question 1.3 will need to look quite different from the
two protocols provided in the last section.
Monotone Protocols. An order-oblivious protocol can be specified by a sequence of maps A1 , . . . , An
where each Ai maps partial assignments on the set [n] to a single bit. When location σi arrives, the bit
Alice writes is Aσi (v). For partial assignments α and β , we say that β is an extension of α, denoted as
β ≥ α, if β is obtained from α by fixing additional variables. An order-oblivious protocol is monotone if
each of the maps A1, . . . , An is monotone with respect to the extension partial order. That is, if β ≥ α are partial assignments, then Ai(β) ≥ Ai(α) for each i. For simplicity we assume that these maps are defined
on all partial assignments, even though the resulting protocol may never reach every partial assignment.
Both the AND-OR protocol described in the introduction and the protocol constructed in Theorem 3.5
are examples of monotone protocols. Monotonicity generalizes to protocols on w-ary alphabets, and the w-ary protocol of Theorem 3.10 is monotone (if we order the alphabet symbols in reverse: 2j + 1 < 2j < · · · < 1). Our main result in this section is that monotone protocols on binary alphabets have cost at least ⌊√n⌋. In particular, Question 1.3 is true for such protocols. For the rest of the paper, all protocols will be on binary alphabets.
Theorem 4.1. All monotone protocols have cost at least ⌊√n⌋.

Proof. Let Π be a monotone protocol. We show that E(Π) has a vertex of degree at least ⌊√n⌋.
For a permutation σ denote by bumpk (σ ) the permutation obtained from σ by “bumping” the
element k to the end of σ and maintaining the same relative order for the rest of σ . For example,
bump1 (321654) = 326541.
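The bump operation is simple to state in code. A minimal sketch (the helper name `bump` is ours; permutations are written as lists of elements, as in the example above):

```python
def bump(sigma, k):
    """Move element k to the end of the permutation sigma,
    preserving the relative order of all other elements."""
    return [x for x in sigma if x != k] + [k]

# The example from the text: bump_1(321654) = 326541.
print(bump([3, 2, 1, 6, 5, 4], 1))  # [3, 2, 6, 5, 4, 1]
```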
We let w(σ ) denote the array ΠA (σ ) with the entries sorted in σ order. In other words, w(σ ) is the
array defined by w(σ )i = ΠA (σ )σi .
Claim 4.2. Let σ be any permutation and let τ be obtained from σ by performing some sequence of
bumps on σ . Suppose that τ and m < n satisfy the following:
• The elements τ1 , τ2 , . . . , τm were never bumped.
• Alice originally wrote a 0 on the locations τ1 , . . . , τm , that is ΠA (σ )τi = 0 for all i ≤ m.
Then ΠA (τ)τi = 0 for all i ≤ m, i. e., w(τ) begins with m 0’s.
Proof. The claim follows easily by induction on i. Suppose we have already shown that w(τ) begins with
(i − 1) 0’s. Let v(σ , k) denote the partial assignment written on v just before Alice receives the index k
(here the reader should take care to distinguish this from the partial assignment just before Alice receives
σk ). Consider the partial assignment v(τ, τi ). It follows from the first assumption and the inductive
hypothesis that v(σ , τi ) is an extension of v(τ, τi ). Thus, since Alice originally wrote a 0 on location τi , by
monotonicity she continues to write a 0 on that location when being streamed τ (that is ΠA (τ)τi = 0).
Let σ be the permutation such that w(σ ) is lexicographically minimum.
Claim 4.3. w(σ ) consists of a block of 0’s, followed by a block of 1’s, followed by a single *.
Proof. The result is trivial if there are no 1’s. Let j be the location of the first 1, and let k be the last
position in the block of 1’s beginning at j. We claim k = n − 1. Suppose k < n − 1. Then there is a 0 in
position k + 1. Let τ be obtained from σ by bumping σ j , . . . , σk . By Claim 4.2, w(τ) begins with j 0’s,
contradicting the lexicographic minimality of σ .
Let n − t be the number of initial 0’s in w(σ ) so the number of 1’s is t − 1. Let T = {σn−t+1 , . . . , σn }
and let x be the vector that is 1 in those positions and 0 elsewhere. For k between 1 and n, let τ (k) =
bumpk (σ ), so τ (σn ) = σ .
The vectors of the form ΠA (φ ) and w(φ ) have a single *. For b ∈ {0, 1} we write ΠA (φ , b) and
w(φ , b) for the vectors obtained by replacing the * by b.
Claim 4.4. The vertices ΠA (τ (k) , 1) for k ∈ T are all equal to x. Therefore x belongs to an edge in
direction k for each k ∈ T and so has degree at least t in E(Π).
Proof. Let k ∈ T. Clearly w(τ (k), 1) has the first n − t bits 0, and so by the choice of σ the remaining bits are 1. This implies ΠA(τ (k)) has 1's in the positions indexed by the last t elements of τ (k), which is the set T.
We will now find an assignment y that has degree at least (n − t)/(t + 1) in the graph E(Π).
Claim 4.5. For k among the first n − t elements of σ , w(τ (k) ) has the first n − t − 1 bits equal to 0, and
has at most one 0 among the next t bits (and last bit *).
Proof. Claim 4.2 immediately implies that the first n − t − 1 bits of w(τ (k) ) are 0. Now take all of the
locations that are labeled 1 in ΠA (τ (k) ) and bump them to the end and let this new permutation be ρ.
Claim 4.2 implies that all 0's remain 0. By the lexicographic minimality of w(σ), w(ρ) has at most n − t 0's, which implies that there was at most a single 0 in w(τ (k)) in positions n − t + 1 or higher.
Now classify each of the first n − t elements of σ into sets Sn−t , . . . , Sn . Element k ∈ Sn if w(τ (k) )
has t 1’s. Otherwise k ∈ S j where j is the location of the unique 0 of w(τ (k) ) in locations n − t to n − 1.
Choose j∗ so that |S j∗ | is maximum and let m = |S j∗ |, which is at least (n − t)/(t + 1). For k ∈ S j∗ , let
y(k) = ΠA (τ (k) , 0). Let u = σ j∗ +1 and let y be the vector that is 1 on the positions of T − {u} and 0
elsewhere.
Claim 4.6. The assignments y(k) for k ∈ S j∗ are all equal to y, and thus y has degree at least m in E(Π).
Proof. By the definition of the bump operation the sequence of elements appearing in positions n −
t, . . . , n − 1 in τ (k) is σn−t+1 , . . . , σn and the element in position j∗ of τ (k) is σ j∗ +1 = u. Thus y(k) is 1 on
the elements of T − {u} and 0 elsewhere.
We thus have a point x of degree at least t and a point y of degree at least (n − t)/(t + 1) in E(Π). This implies that the cost of Π is at least max(t, (n − t)/(t + 1)) > √n − 1 and is thus at least ⌊√n⌋. This proves Theorem 4.1.
As demonstrated by the AND-OR protocol, Theorem 4.1 is tight up to a constant factor. We remark
that the monotone protocols we consider here seem to have no general connection to the class of monotone
Boolean functions, and our result for monotone protocols seems to be unrelated to the easy and well
known fact that the sensitivity conjecture is true for monotone functions.
We conclude this section with a lower bound for a second class of protocols. Although the lower
bound is only logarithmic, proving a logarithmic lower bound for all protocols with a large enough
constant would improve on the best known bounds relating degree and sensitivity.
Assignment-Oblivious Protocols. For a permutation σ, we write ℓ <σ k to denote that the element ℓ comes before the element k in σ. Let Sk(σ) = {ℓ : ℓ <σ k}. For example, if σ = 321654 then S1(σ) = {2, 3}. We say a protocol is assignment-oblivious if the bit written by Alice in location k only depends on the set Sk(σ) (and not on the assignment of bits to that set). Such protocols can be described by a collection of n hypergraphs H1, H2, . . . , Hn, where each Hℓ is a hypergraph with vertex set [n] \ {ℓ}. When k arrives, Alice writes a 1 if and only if the set Sk(σ) is in Hk.
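The quantities in this definition can be sketched as follows (a toy illustration; the names `S_k` and `alice_bit` are ours, and the hypergraph Hk is represented abstractly as a predicate on sets):

```python
def S_k(sigma, k):
    """The set of elements that appear before element k in sigma."""
    return set(sigma[:sigma.index(k)])

def alice_bit(sigma, k, H):
    """Assignment-oblivious rule: the bit written at location k
    depends only on S_k(sigma).  H(k, S) plays the role of the
    membership test 'S is an edge of the hypergraph H_k'."""
    return 1 if H(k, frozenset(S_k(sigma, k))) else 0

# Example from the text: sigma = 321654, so S_1(sigma) = {2, 3}.
print(S_k([3, 2, 1, 6, 5, 4], 1))  # {2, 3}
```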
Theorem 4.7. Every assignment-oblivious protocol Π has c(Π) ≥ log2 (n)/2.
Proof. First some definitions. Recall that an edge e ∈ E(Π) may be written as an array in {0, 1, ∗}^n for which eℓ = ∗ on exactly one location ℓ. We call this location ℓ the free location of that edge. We say two edges e, e′ collide if eℓ = e′ℓ for every ℓ that is not a free location of either edge. Equivalently, two edges collide if they share at least one vertex (each edge collides with itself).
Let Π be an assignment-oblivious protocol. Our proof follows by finding an edge e ∈ E(Π) which collides with log2(n) distinct edges. This will imply that one of the vertices in e has degree at least log2(n)/2.
Given a permutation σ = σ1 σ2 . . . σn and k ∈ [n] we define swapk (σ ) to be the permutation obtained
by swapping the positions of the elements k and σn within σ and keeping every other element in the same
place. For example, swap3(654321) = 654123. The theorem will follow by constructing a permutation σ such that ΠA(σ) and ΠA(swapk(σ)) collide for each k ∈ {σn−1, . . . , σn−⌈log2(n)⌉}.
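The swap operation, like bump, is easy to make concrete (the helper name `swap` is ours; permutations are lists of elements):

```python
def swap(sigma, k):
    """Exchange the positions of element k and the last element of
    sigma, keeping every other element in place."""
    tau = list(sigma)
    i, j = tau.index(k), len(tau) - 1
    tau[i], tau[j] = tau[j], tau[i]
    return tau

# The example from the text: swap_3(654321) = 654123.
print(swap([6, 5, 4, 3, 2, 1], 3))  # [6, 5, 4, 1, 2, 3]
```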
We build up such a σ in a greedy manner. We start by setting σn−1 = 1. With σn−1 fixed, the bit Alice writes in location 1 is completely determined by σn (and does not depend on the values we later choose for σ1, . . . , σn−2). This holds by the assignment-oblivious property and because S1(σ) = {ℓ : ℓ ≠ 1, σn}.
Let R1 be the locations ℓ for which setting σn = ℓ results in Alice writing a 1 in location 1. At least one of |R1|, |R1^c| has size at least ⌈(n − 1)/2⌉; let T1 be that set. Now we fix σn−2 to be any element of T1.
Having fixed σn−1 and σn−2 , the bit Alice writes on location σn−2 also only depends on the value of
σn. Now let R2 be the subset of indices j in T1 such that setting σn = j would cause Alice to write a 1 in location σn−2. At least one of |R2|, |R2^c| has size at least ⌈(|T1| − 1)/2⌉; let T2 ⊆ T1 be that set. This process
is iteratively repeated. At step i we set σn−i to be an arbitrary element of Ti−1 . With σn−1 , . . . , σn−i now
fixed, the value written in location σn−i depends only on the value of σn . The set Ri is defined to be
all such values of σn that result in Alice writing a 1 in location σn−i and Ti ⊆ Ti−1 is defined to be the
larger of Ri and Ri^c. We proceed until the set Ti has only one element, in which case we assign σn to be that element. This process takes at least ⌈log2(n)⌉ steps. We then assign the remaining elements to σ1, . . . , σn−i−1 in an arbitrary order.
We now claim that ΠA(σ) and ΠA(swapk(σ)) collide for k = σn, σn−1, . . . , σn−⌈log2(n)⌉.
Claim 4.8. Let i < ⌈log2(n)⌉, and let k = σn−i. Then ΠA(σ)ℓ = ΠA(swapk(σ))ℓ for all ℓ ≠ k, σn.
Proof. Let σ′ = swapk(σ). If ℓ <σ k then Sℓ(σ) = Sℓ(σ′), and so Alice writes the same bit to location ℓ under both permutations.
Suppose that ℓ >σ k. Let j be such that σn−j = ℓ. Note that σ′n−1 = σn−1, . . . , σ′n−j = σn−j. Recall that, holding σn−1, . . . , σn−j fixed, the bit Alice writes at location ℓ depends only on the value of σn, and furthermore that bit is the same for all settings of σn ∈ Tj. Since both σn and σ′n = k are in the set Tj, it follows that ΠA(σ)ℓ = ΠA(σ′)ℓ.
By the above claim, ΠA(σ) collides with ΠA(swapk(σ)) for at least ⌈log2(n)⌉ values of k. Thus, at least one of the vertices in ΠA(σ) has degree at least ⌈log2(n)/2⌉. This concludes the proof.
5 A protocol with lower cost than the AND-OR protocol

The AND-OR protocol has cost ⌈√n⌉, which matches our lower bound for monotone protocols (within ±1). In this section we prove the existence of a non-monotone protocol with cost (1 − ε)√n. After a preliminary version of this paper appeared, Mario Szegedy [15] gave a protocol of cost O(n^{0.4732}).
Definition 5.1. An (n, k)-proper code is an indexed family
$$\left\{ x_S \in \{0,1\}^n \;\middle|\; S \in \binom{[n]}{k^2} \right\}$$
of vectors such that
1. for each S ∈ $\binom{[n]}{k^2}$, the support of xS is a subset of S,
2. for each S ≠ T, the Hamming distance of xS and xT is at least 2k + 1.
Lemma 5.2. For n sufficiently large and n ≥ k² ≥ 0.99n there exists an (n, k)-proper code.
Proof. We prove existence through a random construction. For each S ∈ $\binom{[n]}{k^2}$ define xS to be a random vector supported on S. Call a pair of sets S, T ∈ $\binom{[n]}{k^2}$ bad if xS and xT differ in fewer than 2k + 1 positions. The number of coordinates on which xS and xT differ is at least the number of coordinates in S on which they differ. Holding xT fixed, the probability that S, T is bad is at most the probability of obtaining fewer than 2k + 1 heads from k² unbiased coin tosses, which is 2^{−k²(1−o(1))}. Taking a union bound over all pairs of sets in $\binom{[n]}{k^2}$ we get that the probability that there is a bad pair is at most
$$\binom{n}{0.99n}^2 2^{-0.99n(1-o(1))} = o(1),$$
and so with positive probability there are no bad pairs, and so the desired code exists.
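The two conditions of Definition 5.1 and the random construction above are easy to sketch in code (the helper names `is_proper` and `random_family` are ours; note that for small n one random draw will usually violate the distance condition, since the union bound only takes hold for large n):

```python
import itertools
import random

def is_proper(code, n, k):
    """Check Definition 5.1 for a candidate family code: a dict
    mapping a frozenset S to a 0/1 vector x_S of length n."""
    for S, x in code.items():
        if any(x[i] for i in range(n) if i not in S):
            return False  # support of x_S must lie inside S
    for (S, x), (T, y) in itertools.combinations(code.items(), 2):
        if sum(a != b for a, b in zip(x, y)) < 2 * k + 1:
            return False  # a "bad" pair in the sense of the proof
    return True

def random_family(n, k):
    """One draw of the random construction: for each set S of size
    k^2, a uniformly random vector supported on S."""
    code = {}
    for S in itertools.combinations(range(n), k * k):
        x = [random.randint(0, 1) if i in S else 0 for i in range(n)]
        code[frozenset(S)] = x
    return code
```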
Theorem 5.3. For some ε > 0 and all sufficiently large n there is a protocol Π with c(Π) ≤ (1 − ε)√n.

Proof. The construction is a variant of the AND-OR protocol. Let k = ⌈√(0.99n)⌉. By Lemma 5.2 there exists an (n, k)-proper code {xS | S ∈ $\binom{[n]}{k^2}$}. Fix such a code.
The protocol Π is as follows: Alice writes 0 in the first n − k² locations. Let S be the set of remaining k² locations. View S as split into k blocks, where each successive block consists of the smallest k unassigned indices in S. For the last k² elements of the permutation, when index j arrives Alice writes xS,j unless j is the final element of its block to arrive, in which case Alice writes 1 − xS,j.
The word received by Bob differs from xS in at most k places (one for each block) and so by the
distance property of the code, Bob can deduce the set S. If there is a block of S such that the received
vector agrees with xS on the entire block then Bob outputs that block (since that block must include σn );
otherwise Bob outputs the set of positions (one per block) in which the received vector disagrees with xS
(which again must include σn ).
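To make Bob's block-by-block decoding concrete, here is a toy simulation of one run of Π (our sketch, not the authors' code). To sidestep constructing a full proper code, Bob is simply handed the set S and its codeword x_S, so only the block logic is exercised; the function and parameter names are ours.

```python
def run_protocol(sigma, x_S, S, n, k, adversary_bit):
    """Simulate one run of the protocol of Theorem 5.3, assuming the
    last k^2 locations of the stream are exactly S and that Bob is
    given S and x_S directly (the code's distance property is what
    lets him deduce S in the real protocol)."""
    v = [None] * n
    # Alice writes 0 on the first n - k^2 locations to arrive.
    for q in sigma[: n - k * k]:
        v[q] = 0
    # Split S into k blocks of k consecutive indices.
    S_sorted = sorted(S)
    blocks = [set(S_sorted[i * k:(i + 1) * k]) for i in range(k)]
    block_of = {q: b for b in range(k) for q in blocks[b]}
    # Alice writes x_S[j], flipping the bit on the last arrival of
    # each block; the final location sigma[-1] is set by the adversary.
    remaining = {b: set(blocks[b]) for b in range(k)}
    for j in sigma[n - k * k: -1]:
        remaining[block_of[j]].discard(j)
        flip = not remaining[block_of[j]]  # j is its block's last arrival
        v[j] = (1 - x_S[j]) if flip else x_S[j]
    v[sigma[-1]] = adversary_bit
    # Bob: a block agreeing with x_S entirely must contain sigma[-1];
    # otherwise he outputs the one disagreement per block.
    for b in range(k):
        if all(v[q] == x_S[q] for q in blocks[b]):
            return blocks[b]
    return {q for q in S if v[q] != x_S[q]}
```

In both branches the returned set has size at most k and contains the last-streamed location, matching the cost analysis in the proof.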
Acknowledgements
We thank Ran Raz for helpful discussions. We are grateful to the anonymous referees whose comments
helped us to improve the presentation of the paper.
References

[1] Andris Ambainis, Kaspars Balodis, Aleksandrs Belovs, Troy Lee, Miklos Santha, and Juris Smotrovs: Separations in query complexity based on pointer functions. In Proc. 48th STOC, pp. 800–813. ACM Press, 2016. Version at ECCC. [doi:10.1145/2897518.2897524, arXiv:1506.04719]

[2] Andris Ambainis, Mohammad Bavarian, Yihan Gao, Jieming Mao, Xiaoming Sun, and Song Zuo: Tighter relations between sensitivity and other complexity measures. In Proc. 41st Internat. Colloq. on Automata, Languages and Programming (ICALP'14), volume 8572 of LNCS, pp. 101–113. Springer, 2014. Version at ECCC. [doi:10.1007/978-3-662-43948-7_9, arXiv:1411.3419]

[3] Robert Beals, Harry Buhrman, Richard Cleve, Michele Mosca, and Ronald de Wolf: Quantum lower bounds by polynomials. J. ACM, 48(4):778–797, 2001. Preliminary version in FOCS'98. [doi:10.1145/502090.502097, arXiv:quant-ph/9802049]

[4] Harry Buhrman and Ronald de Wolf: Complexity measures and decision tree complexity: A survey. Theoret. Comput. Sci., 288(1):21–43, 2002. [doi:10.1016/S0304-3975(01)00144-X]

[5] Thomas Cover and Joy Thomas: Elements of Information Theory. John Wiley & Sons, Inc., 2012.

[6] Andrew Drucker: A note on a communication game, 2017. [arXiv:1706.07890]

[7] Justin Gilmer, Michal Koucký, and Michael Saks: A new approach to the sensitivity conjecture. In Proc. 6th Innovations in Theoretical Computer Science Conf. (ITCS'15), pp. 247–254. ACM Press, 2015. [doi:10.1145/2688073.2688096]

[8] Justin Gilmer, Michael Saks, and Srikanth Srinivasan: Composition limits and separating examples for some Boolean function complexity measures. Combinatorica, 36(3):265–311, 2016. Preliminary version in CCC'13. [doi:10.1007/s00493-014-3189-x, arXiv:1306.0630]

[9] Pooya Hatami, Raghav Kulkarni, and Denis Pankratov: Variations on the Sensitivity Conjecture. Number 4 in Graduate Surveys. Theory of Computing Library, 2011, pp. 1–27. [doi:10.4086/toc.gs.2011.004]

[10] Claire Kenyon and Samuel Kutin: Sensitivity, block sensitivity, and ℓ-block sensitivity of Boolean functions. Inform. and Comput., 189(1):43–53, 2004. [doi:10.1016/j.ic.2002.12.001]

[11] László Lovász and Neal E. Young: Lecture notes on evasiveness of graph properties, 2002. [arXiv:cs/0205031]

[12] Gatis Midrijanis: Exact quantum query complexity for total Boolean functions, 2004. [arXiv:quant-ph/0403168]

[13] Noam Nisan: CREW PRAMs and decision trees. SIAM J. Comput., 20(6):999–1007, 1991. Preliminary version in STOC'89. [doi:10.1137/0220062]

[14] Noam Nisan and Mario Szegedy: On the degree of Boolean functions as real polynomials. Comput. Complexity, 4(4):301–313, 1994. Preliminary version in STOC'92. [doi:10.1007/BF01263419]

[15] Mario Szegedy: An O(n^{0.4732}) upper bound on the complexity of the GKS communication game, 2015. [arXiv:1506.06456]
AUTHORS
Justin Gilmer
Google
gilmer google com
Michal Koucký
Associate professor
Charles University, Prague, Czech Republic
koucky iuuk mff cuni cz
http://iuuk.mff.cuni.cz/~koucky/
Mike Saks
Distinguished professor of mathematics
Rutgers University, Piscataway, NJ
saks math rutgers edu
https://www.math.rutgers.edu/~saks/
ABOUT THE AUTHORS
Justin Gilmer graduated from Rutgers in 2015; his advisor was Michael Saks. He lives in Mountain View, California and is a researcher in Machine Learning at Google. He loves hiking, biking, and the free food at work.
Michal Koucký received his Ph.D. in Computer Science from Rutgers in 2003, where he was advised by Eric Allender. After having a great time as a postdoc at McGill in Montréal and CWI in Amsterdam, he moved back to his home town, Prague. He spent almost a decade at the Institute of Mathematics of the Czech Academy of Sciences, and then came back to his alma mater, Charles University. He is interested in various aspects of theoretical computer science but he also loves programming. He is always amazed by the beauty that can be found in math and nature.
Mike Saks received his Ph.D. in Mathematics from M.I.T. in 1980, where he was advised by Daniel Kleitman. He was a postdoctoral fellow at UCLA, and held positions at Bell Communications Research and the Computer Science and Engineering Department at UCSD. He has worked in a variety of areas in the theory of computing and discrete mathematics: lower bounds for data structures, circuits, communication complexity, branching programs and decision trees, streaming algorithms, sublinear algorithms, satisfiability algorithms, online algorithms, distributed computing, and derandomization, as well as extremal problems for graphs, hypergraphs and partially ordered sets.