Authors: Yuval Filmus, Or Meir, Avishay Tal
License: CC-BY-3.0
THEORY OF COMPUTING, Volume 19 (7), 2023, pp. 1–51
www.theoryofcomputing.org
Shrinkage under Random Projections,
and Cubic Formula Lower Bounds
for AC^0
Yuval Filmus* Or Meir† Avishay Tal
Received February 19, 2021; Revised December 28, 2021; Published December 5, 2023
Abstract. Håstad showed that any De Morgan formula (composed of AND, OR
and NOT gates) shrinks by a factor of O(p^2) under a random restriction that leaves
each variable alive independently with probability p [SICOMP, 1998]. Using this
result, he gave an Ω̃(n^3) formula size lower bound for the Andreev function, which,
up to lower order improvements, remains the state-of-the-art lower bound for any
explicit function.
In this paper, we extend the shrinkage result of Håstad to hold under a far wider
family of random restrictions and their generalization: random projections. Based
on our shrinkage results, we obtain an Ω̃(n^3) formula size lower bound for an explicit
function computable in AC^0. This improves upon the best known formula size lower
bounds for AC^0, which were only quadratic prior to our work. In addition, we prove
that the KRW conjecture [Karchmer et al., Computational Complexity 5(3/4), 1995]
holds for inner functions for which the unweighted quantum adversary bound is
tight. In particular, this holds for inner functions with a tight Khrapchenko bound.
Our random projections are tailor-made to the function's structure so that the
function maintains structure even under projection; using such projections is
necessary, as standard random restrictions simplify AC^0 circuits. In contrast, we
show that any De Morgan formula shrinks by a quadratic factor under our random
projections, allowing us to prove the cubic lower bound.
An extended abstract of this article appeared in the Proceedings of the 12th Innovations in Theoretical Computer
Science Conference (ITCS'21).
* Taub Fellow, supported by the Taub Foundations. The research was funded by ISF grant 1337/16.
† Partially supported by the Israel Science Foundation (grant No. 1445/16).
© 2023 Yuval Filmus, Or Meir, and Avishay Tal
Licensed under a Creative Commons Attribution License (CC-BY) DOI: 10.4086/toc.2023.v019a007
Our proof techniques build on Håstad's proof for the simpler case of balanced
formulas. This allows for a significantly simpler proof at the cost of slightly worse
parameters. As such, when specialized to the case of p-random restrictions, our
proof can be used as an exposition of Håstad's result.
ACM Classification: F.1.1
AMS Classification: 68Q06
Key words and phrases: circuit complexity, lower bounds, shrinkage, formulas, formula
complexity, KRW conjecture
1 Introduction
1.1 Background
Is there an efficiently solvable computational task that does not admit an efficient parallel
algorithm? A formalization of this question asks: is P ⊈ NC^1? The answer is still unknown. The
question can be rephrased as follows (ignoring issues of uniformity): is there a Boolean function
in P that does not have a (De Morgan) formula of polynomial size? (We define De Morgan
formulas formally in Section 2; informally, these are formulas composed of AND, OR and NOT
gates.)
The history of formula lower bounds for functions in P goes back to the 1960s, with the
seminal result of Subbotovskaya [55] that introduced the technique of random restrictions.
Subbotovskaya showed that the Parity function on n variables requires formulas of size at least
Ω(n^{1.5}). Khrapchenko [33], using a different proof technique, showed that in fact the Parity
function on n variables requires formulas of size Θ(n^2). Later, Andreev [4] came up with a
new explicit function (now known as the Andreev function) for which he was able to obtain an
Ω(n^{2.5}) size lower bound. This lower bound was subsequently improved in [29, 44, 23, 56] to
n^{3−o(1)}.
The line of work initiated by Subbotovskaya and Andreev relies on the shrinkage of formulas
under p-random restrictions. A p-random restriction is a randomly chosen partial assignment
to the inputs of a function. Set a parameter p ∈ (0, 1). We fix each variable independently with
probability 1 − p to a uniformly random bit, and we keep the variable alive with probability p.
Under such a restriction, formulas shrink (in expectation) by a factor smaller than p.
Subbotovskaya showed that formulas shrink to at most p^{1.5} times their original size, whereas
subsequent work [44, 29] improved the bound to p^{1.55} and then to p^{1.63}. Finally, Håstad [23]
showed that the shrinkage exponent of formulas is 2, or in other words, that formulas shrink by
a factor of p^{2−o(1)} under p-random restrictions. Tal [56] improved the shrinkage factor to O(p^2),
obtaining a tight result, as exhibited by the Parity function.
In a nutshell, shrinkage results are useful for proving lower bounds as long as the explicit
function being analyzed maintains structure under such restrictions and does not trivialize. For
example, the Parity function does not become constant as long as at least one variable remains
alive. Thus any formula F that computes Parity must be of at least quadratic size, or else the
formula F under a restriction keeping each variable alive with probability 100/n would likely
become a constant function, whereas Parity would not. Andreev's idea is similar, though he
manages to construct a function such that under a random restriction keeping only Θ(log n) of
the variables, the formula size should be at least Ω̃(n) (in expectation). This ultimately gives the
nearly cubic lower bound.
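As a side illustration (ours, not part of the paper's argument), the contrast between Parity and a shallow function under p-random restrictions is easy to observe empirically. The following Python sketch restricts n = 1000 variables with p = 100/n and counts how often restricted Parity and restricted OR become constant; Parity is constant only if no variable survives, while OR collapses as soon as some variable is fixed to 1:

```python
import random

def p_random_restriction(n, p, rng):
    # Each variable independently: alive (None) with prob p,
    # otherwise fixed to a uniformly random bit.
    return [None if rng.random() < p else rng.randint(0, 1) for _ in range(n)]

def parity_is_constant(rho):
    # Restricted Parity is constant iff no variable stays alive.
    return all(v is not None for v in rho)

def or_is_constant(rho):
    # Restricted OR is constant iff some variable is fixed to 1,
    # or every variable is fixed to 0.
    return any(v == 1 for v in rho) or all(v == 0 for v in rho)

rng = random.Random(0)
n, trials = 1000, 500
p = 100 / n
par_const = sum(parity_is_constant(p_random_restriction(n, p, rng))
                for _ in range(trials))
or_const = sum(or_is_constant(p_random_restriction(n, p, rng))
               for _ in range(trials))
print(par_const, or_const)  # Parity stays non-constant; OR collapses in every trial
```

The probability that Parity trivializes is (1 − p)^n ≈ e^{−100}, so in practice it never does, while OR trivializes in essentially every trial.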
1.1.1 The KRW Conjecture
Despite much effort, proving P ⊈ NC^1, and even just breaking the cubic barrier in formula lower
bounds, have remained a challenge for more than two decades. An approach to solving the P
versus NC^1 problem was suggested by Karchmer, Raz and Wigderson [31]. They conjectured
that when composing two Boolean functions, f and g, the formula size of the resulting function,
f ∘ g, is (roughly) the product of the formula sizes of f and g.^1 We will refer to this conjecture
as the "KRW conjecture". Under the KRW conjecture (and even under weaker variants of it),
[31] constructed a function in P with no polynomial-size formulas. It remains a major open
challenge to settle the KRW conjecture.
A few special cases of the KRW conjecture are known to be true. The conjecture holds when
either f or g is the AND or the OR function. Håstad's result [23] and its improvement [56]
show that the conjecture holds when the inner function g is the Parity function and the outer
function f is any function. This gives an alternative explanation of the n^{3−o(1)} lower bound for
the Andreev function. Indeed, the Andreev function is at least as hard as the composition of
a maximally hard function f on log n bits and g = Parity_{n/log n}, where the formula size of f is
Ω̃(n) and the formula size of Parity_{n/log n} is Θ(n^2/log^2 n). Since the KRW conjecture holds for
this special case, the formula size of the Andreev function is at least Ω̃(n^3). In other words, the
state-of-the-art formula size lower bounds for explicit functions follow from a special case of the
KRW conjecture: the case in which g is the Parity function. Moreover, this special case follows
from the shrinkage of formulas under p-random restrictions.
1.1.2 Bottom-up versus top-down techniques
Whereas random restrictions are a "bottom-up" proof technique [24], a different line of work
suggested a "top-down" approach using the language of communication complexity. The
connection between formula size and communication complexity was introduced in the seminal
paper by Karchmer and Wigderson [32]. They defined for any Boolean function f a two-party
communication problem KW_f: Alice gets an input x such that f(x) = 1, and Bob gets an input y
such that f(y) = 0. Their goal is to identify a coordinate i on which x_i ≠ y_i, while minimizing
their communication. It turns out that there is a one-to-one correspondence between protocol
trees solving KW_f and formulas computing the function f. Since protocols naturally
traverse the tree from root to leaf, lower bounds on their size or depth are usually proved
in a top-down fashion. This framework has proven to be very useful in proving formula lower
1More precisely, the original KRW conjecture [31] concerns depth complexity rather than formula complexity.
The variant of the conjecture for formula complexity, which is discussed above, was posed in [19].
bounds in the monotone setting (see, e. g., [32, 20, 47, 31, 46, 21, 45]) and in studying the KRW
conjecture (see, e. g., [31, 14, 26, 19, 13, 34, 41, 12, 42]). Moreover, in a recent paper [13], Dinur
and Meir were able to reprove Håstad's cubic lower bound using the framework of Karchmer
and Wigderson. As Dinur and Meir's proof showed that top-down techniques can replicate
Håstad's cubic lower bound, a natural question (which motivated this project) arose:
Are top-down techniques superior to bottom-up techniques?
Towards that end, we focused on a candidate problem: prove a cubic lower bound for an explicit
function in AC^0.^2 Based on the work of Dinur and Meir [13], we suspected that such a lower
bound could be achieved using top-down techniques. We were also certain that the problem
cannot be solved using the random restriction technique. Indeed, in order to prove a lower
bound on a function f using random restrictions, one should argue that f remains hard under a
random restriction; however, it is well-known that functions in AC^0 trivialize under p-random
restrictions [1, 16, 57, 22]. Based on this intuition, surely random restrictions cannot show that a
function in AC^0 requires cubic size. Our intuition turned out to be false.
1.2 Our results
In this article, we construct an explicit function in AC^0 which requires formulas of size n^{3−o(1)}.
Surprisingly, our proof is conducted via the bottom-up technique of random projections, which
is a generalization of random restrictions (more details below).
Theorem 1.1. There exists a family of Boolean functions h_n : {0, 1}^n → {0, 1} for n ∈ ℕ such that
1. h_n can be computed by uniform depth-4 unbounded fan-in formulas of size O(n^3).
2. The formula size of h_n is at least n^{3−o(1)}.
Prior to our work, the best formula size lower bounds for an explicit function in AC^0 were
only quadratic [43, 10, 30, 6].
Our hard function is a variant of the Andreev function. More specifically, recall that the
Andreev function is based on the composition f ∘ g, where f is a maximally hard function and
g is the Parity function. Since Parity is not in AC^0, we cannot take g to be the Parity function in
our construction. Instead, our hard function is obtained by replacing the Parity function with
the Surjectivity function of Beame and Machmouchi [6].
As in the case of the Andreev function, we establish the hardness of our function by proving
an appropriate special case of the KRW conjecture. To this end, we introduce a generalization of
the unweighted adversary method [2], called the soft-adversary bound, which we denote Adv_s(g)
(see Section 7.1). We prove the KRW conjecture for the special case in which the outer function f
is any function, and g is a function whose formula complexity is bounded tightly by the soft-
adversary bound. We then obtain Theorem 1.1 by applying this version of the KRW conjecture
to the case where g is the Surjectivity function. We note that our KRW result holds in particular
^2 Recall that AC^0 is the class of functions computed by constant-depth polynomial-size circuits composed of AND
and OR gates of unbounded fan-in, with variables or their negations at the leaves.
for functions g with a large Khrapchenko bound, which in turn implies the known lower bounds
in the cases where g is the Parity function [23] and the Majority function [17].
Our proof of the special case of the KRW conjecture follows the methodology of Håstad [23],
who proved the special case in which g is Parity on m variables. Håstad proved that formulas
shrink by a factor of (roughly) p^2 under p-random restrictions. Choosing p = 1/m shrinks a
formula for f ∘ g by a factor of roughly m^2, which coincides with the formula complexity of g.
On the other hand, on average each copy of g simplifies to a single input variable, and so f ∘ g
simplifies to f. This shows that L(f ∘ g) ≳ L(f) · L(g).
Our main technical contribution is a new shrinkage theorem that works in a far wider range
of scenarios than just p-random restrictions. A random projection is a generalization of a random
restriction in which each of the input variables x_1, …, x_n may either be fixed to a constant or be
replaced with a literal from the set y_1, ¬y_1, …, y_m, ¬y_m, where y_1, …, y_m are new variables.
Given a function g with soft-adversary bound Adv_s(g), we construct a random projection which,
on the one hand, shrinks formulas by a factor of Adv_s(g), and on the other hand, simplifies f ∘ g
to f. We thus show that L(f ∘ g) ≳ L(f) · Adv_s(g), and in particular, if Adv_s(g) ≈ L(g), then
L(f ∘ g) ≳ L(f) · L(g), just as in Håstad's proof. Our random projections are tailored specifically
to the structure of the function f ∘ g, ensuring that f ∘ g simplifies to f under projection. This
enables us to overcome the aforementioned difficulty. In contrast, p-random restrictions that do
not respect the structure of f ∘ g would likely result in a restricted function that is much simpler
than f, and in fact would be a constant function with high probability.
Our shrinkage theorem applies more generally to two types of random projections, which
we call fixing projections and hiding projections. Roughly speaking, fixing projections are random
projections in which substituting a constant for one of the variables y_1, …, y_m results in a
projection that is much more probable. Hiding projections are random projections in which
substituting a constant for a variable y_j hides which of the input variables x_1, …, x_n were
mapped to {y_j, ¬y_j} before the substitution. We note that our shrinkage theorem for fixing
projections captures Håstad's result for p-random restrictions as a special case.
The proof of our shrinkage theorem is based on Håstad's proof [23], but also simplifies it. In
particular, we take the simpler argument that Håstad uses for the special case of completely
balanced trees, and adapt it to the general case. As such, our proof avoids a complicated case
analysis, at the cost of slightly worse bounds. Using our bounds, it is nevertheless easy to obtain
the n^{3−o(1)} lower bound for the Andreev function. Therefore, one can see the specialization of
our shrinkage result to p-random restrictions as an exposition of Håstad's cubic lower bound.
1.2.1 An example: our techniques when specialized to f ∘ Majority_m
To illustrate our choice of random projections, we present its instantiation in the special case of
f ∘ g, where f : {0, 1}^k → {0, 1} is non-constant and g = Majority_m for some odd integer m. In this
case, the input variables to f ∘ g are composed of k disjoint blocks, B_1, …, B_k, each containing
m variables. We use the random projection that, for each block B_i = {x_{m(i−1)+1}, …, x_{mi}}, picks
one variable in the block B_i uniformly at random, projects this variable to the new variable y_i,
and fixes the rest of the variables in the block in a balanced way so that the number of zeros and
ones in the block is equal (i. e., we have exactly (m − 1)/2 zeros and (m − 1)/2 ones). It is not hard
to see that under this choice, f ∘ g simplifies to f. On the other hand, we show that this choice
of random projections shrinks the formula complexity by a factor of ≈ 1/m^2. Combining the
two together, we get that L(f ∘ Majority_m) ≳ L(f) · m^2. Note that in this distribution of random
projections, the different coordinates are not independent of one another, and this feature allows
us to maintain structure.
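The claim that f ∘ Majority_m simplifies to f under this projection can be verified by brute force for small parameters: in each block, the (m − 1)/2 fixed ones and (m − 1)/2 fixed zeros cancel, so the block's Majority equals the surviving variable. A Python sketch (function names and encoding are ours):

```python
import itertools, random

def majority(bits):
    return int(sum(bits) > len(bits) // 2)

def sample_block_projection(k, m, rng):
    """For each of the k blocks of m variables, pick one position to keep
    (it becomes y_i) and fix the other m-1 positions with exactly
    (m-1)//2 ones and (m-1)//2 zeros, in a random order."""
    blocks = []
    for _ in range(k):
        pos = rng.randrange(m)
        fill = [0] * ((m - 1) // 2) + [1] * ((m - 1) // 2)
        rng.shuffle(fill)
        blocks.append((pos, fill))
    return blocks

def apply_projection(f, blocks, y):
    # Evaluate (f o Majority_m)|_rho on input y = (y_1, ..., y_k).
    inputs = []
    for (pos, fill), y_i in zip(blocks, y):
        block = fill[:]
        block.insert(pos, y_i)   # the surviving variable reads y_i
        inputs.append(majority(block))
    return f(inputs)

rng = random.Random(1)
k, m = 3, 5
f = lambda z: z[0] ^ (z[1] & z[2])          # an arbitrary non-constant f
for _ in range(20):                          # 20 random projections
    blocks = sample_block_projection(k, m, rng)
    for y in itertools.product([0, 1], repeat=k):
        # The balanced fill makes each block's Majority equal y_i.
        assert apply_projection(f, blocks, y) == f(list(y))
print("f o Majority_m collapses to f under every sampled projection")
```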
1.3 Related work
Our technique of using tailor-made random projections was inspired by the celebrated result of
Rossman, Servedio, and Tan [25] that proved an average-case depth hierarchy. In fact, the idea
to use tailor-made random restrictions goes back to Håstad's thesis [27, Chapter 6.2]. Similarly to
our case, in [27, 25], p-random restrictions are too crude to separate depth-d from depth-(d + 1)
circuits. Given a circuit C of depth d + 1, the main challenge is to construct a distribution of
random restrictions or projections (tailored to the circuit C) that on the one hand maintains
structure for C, but on the other hand simplifies any depth-d circuit C′.
1.4 Paper outline
The paper starts with brief preliminaries in Section 2. Then, in Section 3, we define the notions of
fixing and hiding projections, state the corresponding shrinkage theorems, and sketch how they
can be used to prove a cubic formula lower bound for AC^0. We prove the shrinkage theorems
for fixing and hiding projections in Sections 4 and 5, respectively. In Section 6 we provide a brief
interlude on the join operation for projections. The quantum adversary bound, the Khrapchenko
bound, and their relation to hiding projections are discussed in Section 7. Finally, Section 8
contains a proof of Theorem 1.1, as a corollary of a more general result which is a special case of
the KRW conjecture. In the same section we also rederive the cubic lower bound on Andreev's
function, and the cubic lower bound on the Majority-based variant considered in [17].
2 Preliminaries
Throughout the paper, we use bold letters to denote random variables. For any n ∈ ℕ, we
denote by [n] the set {1, …, n}. Given a bit b ∈ {0, 1}, we denote its negation by ¬b. We
assume familiarity with the basic definitions of communication complexity (see, e. g., [37]). All
logarithms in this paper are base 2.
Definition 2.1. A (De Morgan) formula (with bounded fan-in) is a binary tree whose leaves are
labeled with literals from the set {x_1, ¬x_1, …, x_n, ¬x_n}, and whose internal vertices are labeled as
AND (∧) or OR (∨) gates. The size of a formula φ, denoted size(φ), is the number of leaves in the
tree. The depth of the formula is the depth of the tree. A formula with unbounded fan-in is defined
similarly, but every internal vertex in the tree can have any number of children. Unless stated
explicitly otherwise, whenever we say "formula" we refer to a formula with bounded fan-in.
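To make the definition concrete, here is a minimal Python sketch (the encoding is ours, not from the paper) of formulas as binary trees, with size, depth, and evaluation; as a sanity check, it computes x_0 ⊕ x_1 with a formula of 4 leaves, matching the quadratic formula complexity of Parity on 2 bits:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Formula:
    gate: str                        # 'AND', 'OR', or 'LEAF'
    var: Optional[int] = None        # leaf: variable index i (for x_i)
    negated: bool = False            # leaf: True for the literal ~x_i
    left: Optional['Formula'] = None
    right: Optional['Formula'] = None

    def size(self):                  # number of leaves (Definition 2.1)
        if self.gate == 'LEAF':
            return 1
        return self.left.size() + self.right.size()

    def depth(self):
        if self.gate == 'LEAF':
            return 0
        return 1 + max(self.left.depth(), self.right.depth())

    def eval(self, x):               # x is a tuple/list of 0/1 values
        if self.gate == 'LEAF':
            v = x[self.var]
            return 1 - v if self.negated else v
        l, r = self.left.eval(x), self.right.eval(x)
        return (l & r) if self.gate == 'AND' else (l | r)

# x_0 XOR x_1 as a De Morgan formula of size 4:
# (x_0 AND ~x_1) OR (~x_0 AND x_1)
leaf = lambda i, neg=False: Formula('LEAF', var=i, negated=neg)
xor = Formula('OR',
              left=Formula('AND', left=leaf(0), right=leaf(1, True)),
              right=Formula('AND', left=leaf(0, True), right=leaf(1)))
print(xor.size(), xor.depth())                              # 4 2
print([xor.eval((a, b)) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 0]
```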
Definition 2.2. A formula φ computes a Boolean function f : {0, 1}^n → {0, 1} in the natural
way. The formula complexity of a Boolean function f : {0, 1}^n → {0, 1}, denoted L(f), is the size
of the smallest formula that computes f. The depth complexity of f, denoted D(f), is the smallest
depth of a formula that computes f. For convenience, we define the size and depth of the
constant functions to be zero.
A basic property of formula complexity is that it is subadditive:
Fact 2.3. For every two functions f_1, f_2 : {0, 1}^n → {0, 1} we have L(f_1 ∧ f_2) ≤ L(f_1) + L(f_2) and
L(f_1 ∨ f_2) ≤ L(f_1) + L(f_2).
The following theorem shows that every small formula can be βbalancedβ to obtain a shallow
formula.
Theorem 2.4 (Formula balancing, [7], following [54, 8]). For every s ∈ ℕ and α > 0, the following
holds: For every formula φ of size s, there exists an equivalent formula φ′ of depth at most O(2^{1/α} · log s)
and size at most s^{1+α} (where the constant inside the big-O notation does not depend on the choice of α).
Notation 2.5. With a slight abuse of notation, we will often identify a formula φ with the
function it computes. In particular, the notation L(φ) denotes the formula complexity of the function
computed by φ, and not the size of φ (which is denoted by size(φ)).
Notation 2.6. Given a Boolean variable z, we denote by z^0 and z^1 the literals ¬z and z, respectively.
In other words, z^b = z ⊕ ¬b.
Notation 2.7. Given a literal ℓ, we define var(ℓ) to be the underlying variable; that is, var(z) =
var(¬z) = z.
Notation 2.8. Let Π be a deterministic communication protocol that takes inputs from 𝒜 × ℬ,
and recall that the leaves of the protocol induce a partition of 𝒜 × ℬ into combinatorial rectangles.
For every leaf ℓ of Π, we denote by 𝒜_ℓ × ℬ_ℓ the combinatorial rectangle that is associated with ℓ.
We use the framework of Karchmer–Wigderson relations [32], which relates the complexity
of f to the complexity of a related communication problem KW_f.
Definition 2.9 ([32]). Let f : {0, 1}^n → {0, 1} be a Boolean function. The Karchmer–Wigderson
relation of f, denoted KW_f, is the following communication problem: The inputs of Alice and
Bob are strings a ∈ f^{−1}(1) and b ∈ f^{−1}(0), respectively, and their goal is to find a coordinate
i ∈ [n] such that a_i ≠ b_i. Note that such a coordinate must exist, since f^{−1}(1) ∩ f^{−1}(0) = ∅ and
hence a ≠ b.
Theorem 2.10 ([32], see also [48]). Let f : {0, 1}^n → {0, 1}. The communication complexity of KW_f
is equal to D(f), and the minimal number of leaves in a protocol that solves KW_f is L(f).
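As an illustration of Theorem 2.10 (ours, not from the paper), the folklore protocol for KW_Parity can be sketched in a few lines: since parity(a) ≠ parity(b), binary search on prefix parities finds a differing coordinate with O(log n) bits of communication, matching D(Parity) = O(log n). A Python sketch (names and the bit-accounting convention of 2 bits per round are ours):

```python
def kw_parity(a, b):
    """Find i with a[i] != b[i], given parity(a) != parity(b), by binary
    search: if the halves' parities differ on some interval, a differing
    coordinate lies inside it.  Returns (index, bits_communicated)."""
    assert sum(a) % 2 != sum(b) % 2
    lo, hi, comm = 0, len(a), 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # Alice sends the parity of a[lo:mid]; Bob replies whether it
        # differs from the parity of b[lo:mid]: 2 bits per round.
        comm += 2
        if sum(a[lo:mid]) % 2 != sum(b[lo:mid]) % 2:
            hi = mid      # parities differ on the first half: recurse there
        else:
            lo = mid      # then they must differ on the second half
    return lo, comm

a = [1, 0, 1, 1, 0, 0, 0, 1]   # even parity
b = [1, 0, 1, 0, 0, 0, 0, 1]   # odd parity
i, comm = kw_parity(a, b)
print(i, comm)  # 3 6: a[3] != b[3], found with 2 * log2(8) = 6 bits
```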
We use the following two standard inequalities.
Fact 2.11 (the AM–GM inequality). If x, y ≥ 0 then √(x · y) ≤ (x + y)/2.
Fact 2.12 (special case of the Cauchy–Schwarz inequality). For every t non-negative real numbers
x_1, …, x_t, we have √x_1 + … + √x_t ≤ √t · √(x_1 + … + x_t).
3 Main results
In this section we define the notions of fixing and hiding projections and state their associated
shrinkage theorems (see Sections 3.1 and 3.2, respectively). We then sketch the proof of our
cubic formula lower bounds for AC^0 assuming those theorems (see Section 3.3). We start by
defining projections and the relevant notation.
Definition 3.1. Let x_1, …, x_n and y_1, …, y_m be Boolean variables. A projection π from x_1, …, x_n
to y_1, …, y_m is a function from the set {x_1, …, x_n} to the set {0, 1, y_1, ¬y_1, …, y_m, ¬y_m}. Given
such a projection π and a Boolean function f : {0, 1}^n → {0, 1} over the variables x_1, …, x_n, we
denote by f|_π : {0, 1}^m → {0, 1} the function obtained from f by substituting π(x_i) for x_i for
each i ∈ [n] in the natural way. Unless stated explicitly otherwise, all projections in this section
are from x_1, …, x_n to y_1, …, y_m, and all functions from {0, 1}^n to {0, 1} are over the variables
x_1, …, x_n. A random projection is a distribution over projections.
Notation 3.2. Let π be a projection. For every j ∈ [m] and bit b ∈ {0, 1}, we denote by π_{y_j→b} the
projection that is obtained from π by substituting b for y_j.
Notation 3.3. With a slight abuse of notation, if a projection π maps all the variables x_1, …, x_n
to constants in {0, 1}, we will sometimes treat it as a binary string in {0, 1}^n.
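A small Python sketch (the encoding is ours) of Definition 3.1: a projection maps each x_i to a constant or to a y-literal, and f|_π is obtained by substitution. As an example, projecting Parity on 4 variables down to 2 variables:

```python
import itertools

# A projection pi maps each x_i (i in range(n)) to one of:
#   ('const', 0 or 1)   or   ('lit', j, neg)  meaning y_j (neg=False)
#   or its negation (neg=True).
def restrict(f, n, m, pi):
    """Return f|_pi : {0,1}^m -> {0,1} (Definition 3.1)."""
    def g(y):
        x = []
        for i in range(n):
            entry = pi[i]
            if entry[0] == 'const':
                x.append(entry[1])
            else:
                _, j, neg = entry
                x.append(1 - y[j] if neg else y[j])
        return f(x)
    return g

# Example: f = Parity on 4 variables; project x_0 -> y_0, x_1 -> y_1,
# x_2 -> 1, x_3 -> 0.  Then f|_pi(y) = y_0 XOR y_1 XOR 1.
parity = lambda x: sum(x) % 2
pi = {0: ('lit', 0, False), 1: ('lit', 1, False),
      2: ('const', 1), 3: ('const', 0)}
g = restrict(parity, 4, 2, pi)
print([g(y) for y in itertools.product([0, 1], repeat=2)])  # [1, 0, 0, 1]
```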
3.1 Fixing projections
Intuitively, a p-fixing projection is a random projection ρ in which, for every variable x_i, the
probability that ρ maps x_i to a literal y_j^b is much smaller than the probability that ρ fixes x_i to
a constant, regardless of the values that ρ assigns to the other variables. This property is
essentially the minimal property that is required in order to carry out the argument of Håstad
[23]. Formally, we define p-fixing projections as follows.
Definition 3.4. Let 0 ≤ p_0, p_1 ≤ 1. We say that a random projection ρ is a (p_0, p_1)-fixing projection
if for every projection π, every bit b ∈ {0, 1}, and every variable x_i, we have

Pr[ρ(x_i) ∉ {0, 1} and ρ_{var(ρ(x_i))→b} = π] ≤ p_b · Pr[ρ = π].    (3.1)

For shorthand, we say that ρ is a p-fixing projection, for p = √(p_0 · p_1).
If needed, one can consider without loss of generality only variables x_i such that π(x_i) ∈ {0, 1},
as otherwise Equation (3.1) holds trivially, with the left-hand side equaling zero.
Example 3.5. In order to get intuition for the definition of fixing projections, let us examine how
this definition applies to random restrictions. In our terms, a restriction is a projection from
x_1, …, x_n to x_1, …, x_n that maps every variable x_i either to itself or to {0, 1}. Suppose that ρ is
any distribution over restrictions, and that π is some fixed restriction. In this case, the condition
of being p-fixing can be rewritten as follows:

Pr[ρ(x_i) = x_i and ρ_{x_i→b} = π] ≤ p_b · Pr[ρ = π].
Denote by π′, ρ′ the restrictions obtained from π, ρ by truncating x_i (i. e., π′ = π|_{{x_1,…,x_n}∖{x_i}}).
Using this notation, we can rewrite the foregoing equation as

Pr[ρ(x_i) = x_i and ρ′ = π′ and ρ_{x_i→b}(x_i) = π(x_i)] ≤ p_b · Pr[ρ(x_i) = π(x_i) and ρ′ = π′].

Now, observe that it is always the case that ρ_{x_i→b}(x_i) = b, and therefore the probability on the
left-hand side is non-zero only if π(x_i) = b. Hence, we can restrict ourselves to the latter case,
and the foregoing equation can be rewritten again as

Pr[ρ(x_i) = x_i and ρ′ = π′] ≤ p_b · Pr[ρ(x_i) = b and ρ′ = π′].

Finally, if we divide both sides by Pr[ρ′ = π′], we obtain the following intuitive condition:

Pr[ρ(x_i) = x_i | ρ′ = π′] ≤ p_b · Pr[ρ(x_i) = b | ρ′ = π′].

This condition informally says the following: ρ is a fixing projection if the probability of leaving
x_i unfixed is at most p_b times the probability of fixing it to b, and this holds regardless of what
the restriction assigns to the other variables.
In particular, it is now easy to see that the classic random restrictions are fixing projections.
Recall that a p-random restriction fixes each variable independently with probability 1 − p to
a random bit. Due to the independence of the different variables, the foregoing condition
simplifies to

Pr[ρ(x_i) = x_i] ≤ p_b · Pr[ρ(x_i) = b],

and it is easy to see that this condition is satisfied for p_0 = p_1 = 2p/(1 − p).
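The computation above can also be checked exhaustively for small n. The following Python sketch (ours) enumerates all restrictions on n = 3 variables and verifies condition (3.1) of Definition 3.4 with p_b = 2p/(1 − p), using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

p = Fraction(1, 3)                             # survival probability
p_b = 2 * p / (1 - p)                          # claimed fixing parameter
q = {'*': p, 0: (1 - p) / 2, 1: (1 - p) / 2}   # per-coordinate law
                                               # ('*' means x_i stays alive)
def prob(res):
    # Pr[rho = res]: coordinates are independent.
    out = Fraction(1)
    for c in res:
        out *= q[c]
    return out

n = 3
ok = True
for pi in product(['*', 0, 1], repeat=n):      # every fixed restriction pi
    for i, b in product(range(n), [0, 1]):
        # LHS of (3.1): rho leaves x_i alive and, after substituting b
        # for x_i, equals pi.  This forces pi(x_i) = b.
        lhs = Fraction(0)
        if pi[i] == b:
            lhs = p * prob(pi[:i] + pi[i + 1:])
        ok &= lhs <= p_b * prob(pi)
print(ok)  # True: the p-random restriction is (p_b, p_b)-fixing
```

With these parameters the inequality in fact holds with equality whenever π(x_i) = b, so 2p/(1 − p) is the best possible value.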
We prove the following shrinkage theorem for p-fixing projections, which is analogous to the
shrinkage theorem of [23] for random restrictions in the case of balanced formulas.
Theorem 3.6 (Shrinkage under fixing projections). Let φ be a formula of size s and depth d, and let
ρ be a p-fixing projection. Then

E[L(φ|_ρ)] = O(p^2 · d^2 · s + p · √s).
We prove Theorem 3.6 in Section 4. It is instructive to compare our shrinkage theorem to the
shrinkage theorem by Håstad [23] for unbalanced formulas:
Theorem 3.7 ([23]). Let φ be a formula of size s, and let ρ be a p-random restriction. Then

E[L(φ|_ρ)] = O(p^2 · (1 + log^{3/2} min{1/p, s}) · s + p · √s).
Our Theorem 3.6 has somewhat worse parameters compared to Theorem 3.7: specifically,
the factor of d^2 does not appear in Theorem 3.7. The reason is that the proof of [23] uses a fairly
complicated case analysis in order to avoid losing that factor, and we chose to skip this analysis
in order to obtain a simpler proof. We did not check if the factor of d^2 in our result can be
avoided by using a similar case analysis. By applying formula balancing (Theorem 2.4) to our
shrinkage theorem, we can obtain the following result, which is independent of the depth of the
formula.
Corollary 3.8. Let f : {0, 1}^n → {0, 1} be a function with formula complexity s, and let ρ be a p-fixing
projection. Then

E[L(f|_ρ)] = O(p^2 · s^{1 + O(1/√(log s))} + p · s^{1/2 + O(1/√(log s))}).

Proof. By assumption, there exists a formula φ of size s that computes f. We balance the
formula φ by applying Theorem 2.4 with α = 1/√(log s), and obtain a new formula φ′ that
computes f and has size s^{1 + 1/√(log s)} and depth O(2^{√(log s)} · log s) = s^{O(1/√(log s))}. The required result
now follows by applying Theorem 3.6 to φ′.
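The identity behind the depth bound in the proof can be spelled out explicitly; in LaTeX (all logarithms base 2, as fixed in Section 2):

```latex
\text{With } \alpha = \tfrac{1}{\sqrt{\log s}}:\qquad
2^{1/\alpha}\cdot\log s \;=\; 2^{\sqrt{\log s}}\cdot\log s
\;=\; s^{1/\sqrt{\log s}}\cdot\log s \;=\; s^{O(1/\sqrt{\log s})},
\]
\[
\text{since }\; s^{1/\sqrt{\log s}} \;=\; 2^{(\log s)/\sqrt{\log s}} \;=\; 2^{\sqrt{\log s}}.
```

Plugging the size s^{1 + 1/√(log s)} and this depth into Theorem 3.6 yields the two exponents 1 + O(1/√(log s)) and 1/2 + O(1/√(log s)) in the corollary.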
3.2 Hiding projections
Intuitively, a hiding projection is a random projection ρ in which, when given ρ_{y_j→b}, it is hard
to tell in which locations the variable y_j appears in ρ. Formally, we define p-hiding projections
as follows.
Definition 3.9. Let 0 ≤ p_0, p_1 ≤ 1. We say that a random projection ρ is a (p_0, p_1)-hiding projection
if for every projection π, every bit b ∈ {0, 1}, and all variables x_i, y_j, we have

Pr[ρ(x_i) ∈ {y_j, ¬y_j} | ρ_{y_j→b} = π] ≤ p_b,

whenever the event conditioned on has positive probability. For shorthand, we say that ρ is a
p-hiding projection, for p = √(p_0 · p_1).
To illustrate the definition, consider the following natural random restriction: given n variables
x_1, …, x_n, the restriction chooses a set of k variables uniformly at random, and fixes all the
other variables to random bits. This restriction is not captured by the notion of p-random
restrictions or by fixing projections, but as we demonstrate next, it can be implemented by
hiding projections. We start with the simple case of k = 1, and then consider the general case.
Example 3.10. In order to implement the case of k = 1, consider the random projection ρ
from x_1, …, x_n to y that is defined as follows: the projection ρ chooses an index I ∈ [n] and
a bit B ∈ {0, 1} uniformly at random, sets ρ(x_I) = y^B, and sets ρ(x_{i′}) to a random bit for all
i′ ∈ [n] ∖ {I}. It is clear that ρ is essentially equivalent to the random restriction described
above for k = 1. We claim that ρ is a 1/n-hiding projection. To see this, observe that for every
bit b ∈ {0, 1}, the projection ρ_{y→b} is a uniformly distributed string in {0, 1}^n, and moreover, this
is true conditioned on any possible value of I. In particular, the random variable I is independent
of ρ_{y→b}. Therefore, for every projection π ∈ {0, 1}^n and index i ∈ [n] we have

Pr[ρ(x_i) ∈ {y, ¬y} | ρ_{y→b} = π] = Pr[I = i | ρ_{y→b} = π] = Pr[I = i] = 1/n,

so ρ satisfies the definition of a (p_0, p_1)-hiding projection with p_0 = p_1 = 1/n. Intuitively, given
ρ_{y→b}, one cannot guess the original location of y in ρ better than a random guess.
Example 3.11. We turn to consider the case of a general k ∈ ℕ. Let 𝐩 be the random projection from x_1, . . . , x_n to y_1, . . . , y_k that is defined as follows: the projection 𝐩 chooses k distinct indices 𝐢_1, . . . , 𝐢_k ∈ [n] and k bits 𝐛_1, . . . , 𝐛_k ∈ {0, 1} uniformly at random, sets 𝐩(x_{𝐢_t}) = y_t^{𝐛_t} for every t ∈ [k], and sets all the other variables x_i to random bits. It is clear that 𝐩 is essentially equivalent to the random projection described above. We show that 𝐩 is a 1/(n−k+1)-hiding projection. To this end, we should show that for every i ∈ [n], j ∈ [k], and b ∈ {0, 1} we have

Pr[𝐩(x_i) ∈ {y_j, ¬y_j} | 𝐩_{y_j←b} = ρ] ≤ 1/(n−k+1).

For simplicity of notation, we focus on the case where j = k. Observe that 𝐩_{y_k←b} reveals the values of 𝐢_1, . . . , 𝐢_{k−1}, and the projection of 𝐩_{y_k←b} on the remaining n−k+1 indices is uniform over {0, 1}^{n−k+1}. Moreover, the latter assertion remains true conditioned on any possible value of 𝐢_k (which must be different from the known values of 𝐢_1, . . . , 𝐢_{k−1}). Therefore, given 𝐢_1, . . . , 𝐢_{k−1}, the random variable 𝐢_k (ranging over all indices other than 𝐢_1, . . . , 𝐢_{k−1}) is independent of the projection of 𝐩_{y_k←b} on the n−k+1 indices other than 𝐢_1, . . . , 𝐢_{k−1}. It follows that for every projection ρ in the support of 𝐩_{y_k←b} and every index i ∈ [n], if ρ(x_{i_1}) = y_1^{b_1}, . . . , ρ(x_{i_{k−1}}) = y_{k−1}^{b_{k−1}} then:

Pr[𝐩(x_i) ∈ {y_k, ¬y_k} | 𝐩_{y_k←b} = ρ] = Pr[𝐢_k = i | 𝐩_{y_k←b} = ρ]
= Pr[𝐢_k = i | 𝐢_1 = i_1, . . . , 𝐢_{k−1} = i_{k−1}] ≤ 1/(n−k+1),

as required.
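The hiding bound of Example 3.11 can be verified mechanically for small parameters. The following sketch (ours, not part of the paper; the tuple encoding of projections is an assumption of the demo) enumerates the entire probability space for n = 5, k = 2 and checks that, conditioned on any value of 𝐩_{y_k←b}, the hidden index 𝐢_k is uniform over the n − k + 1 candidate indices:

```python
from itertools import permutations, product
from collections import defaultdict, Counter

n, k, b = 5, 2, 1
counts = defaultdict(Counter)  # value of p_{y_k<-b} -> histogram of the hidden index i_k
for idx in permutations(range(n), k):               # the k distinct indices i_1,...,i_k
    for signs in product((0, 1), repeat=k):         # the k random bits b_1,...,b_k
        free = [i for i in range(n) if i not in idx]
        for bits in product((0, 1), repeat=n - k):  # the remaining variables
            proj = [None] * n
            for t, i in enumerate(idx):
                proj[i] = ('y', t, signs[t])        # literal y_t (sign 1) or ~y_t (sign 0)
            for i, v in zip(free, bits):
                proj[i] = v
            # substitute y_k <- b; the literals y_1,...,y_{k-1} survive untouched
            restr = tuple((b if v[2] else 1 - b)
                          if isinstance(v, tuple) and v[1] == k - 1 else v
                          for v in proj)
            counts[restr][idx[k - 1]] += 1

# conditioned on any value of p_{y_k<-b}, the hidden index is exactly uniform
# over the n - k + 1 indices not revealed by the surviving literals
for hist in counts.values():
    total = sum(hist.values())
    assert len(hist) == n - k + 1
    assert all(c / total == 1 / (n - k + 1) for c in hist.values())
```

Since the demo enumerates rather than samples, the uniformity it checks is exact, not approximate.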
In Section 5, we prove the following shrinkage theorem for hiding projections.

Theorem 3.12 (Shrinkage under hiding projections). Let φ be a formula of size s and depth d, and let 𝐩 be a p-hiding projection from x_1, . . . , x_n to y_1, . . . , y_k. Then

E[L(φ|𝐩)] = O( k⁴ · p² · d² · s + k² · p · √s ).
Applying formula balancing, we can obtain an analog of Corollary 3.8, with an identical proof.

Corollary 3.13. Let f : {0, 1}^n → {0, 1} be a function with formula complexity L, and let 𝐩 be a p-hiding projection from x_1, . . . , x_n to y_1, . . . , y_k. Then

E[L(f|𝐩)] = O( k⁴ · p^{2−O(1/log L)} · L^{1+O(1/log L)} + k² · p^{1−O(1/log L)} · L^{1/2+O(1/log L)} ).
Remark 3.14. Note that Theorem 3.12 loses a factor of k⁴ compared to Theorem 3.6. While this loss might be large in general, it will be of no importance in our applications. The factor k⁴ can be improved to k² in our actual applications, as discussed in Section 5.1.

The following example shows that the loss of a factor of at least k² is necessary for Theorem 3.12. Let f = Parity_n be the Parity function over x_1, . . . , x_n, and note that the formula complexity of f is L = Θ(n²). Let 𝐩 be the random projection from Example 3.11, and recall that it is a p-hiding projection for p = 1/(n−k+1). Then, f|𝐩 is the Parity function over k bits, and therefore its formula complexity is Θ(k²). It follows that when k ≤ n/2,

E[L(f|𝐩)] = Θ(k²) = Θ(k² · p² · L).
3.3 Sketch: Cubic formula lower bound for AC⁰

We now sketch how our shrinkage theorems can be used to prove the cubic formula lower bound for AC⁰. We start by sketching how our shrinkage theorems can be used to reprove a quadratic formula lower bound for the surjectivity function Surj (originally due to [6]). Then, we sketch a lower bound for compositions of the form f ∘ Surj, where f is an arbitrary function. Finally, we combine the latter bound with an idea of Andreev [4] to obtain a cubic lower bound for an explicit function in AC⁰. We note that the first step is the key idea of the proof, whereas the two latter steps are relatively standard. A formal proof following this sketch can be found in Section 8.
3.3.1 Lower bound for the surjectivity function

The surjectivity function Surj_m : {0, 1}^n → {0, 1} [6] takes as input a list of s numbers in [m] for some m ∈ ℕ, encoded in binary, and outputs 1 if and only if all the numbers in [m] appear in the list. For our purposes, we write m_0 = m − 2 and set s = (3/2)·m_0 + 2. We sketch a proof that L(Surj_m) ∈ Ω(m²). Since the input length of Surj_m, when measured in bits, is n = O(m log m), this implies that L(Surj_m) ∈ Ω(n²/log² n).

To this end, we construct a random projection 𝐩 for Surj_m that maps the input variables x_1, . . . , x_n to a single variable y as follows: We choose two distinct numbers α, β ∈ [m] at random, and then partition the remaining m − 2 numbers at random into two sets A and B of size (1/2)(m − 2) = (1/2)·m_0 each. Then, we create a list of s = (3/2)·m_0 + 2 numbers in [m] as follows:

1. Initially, all the places in the list are vacant.

2. We put each of the (1/2)·m_0 + 1 numbers in A ∪ {β} at a single random vacant place in the list.

3. We put each of the (1/2)·m_0 numbers in B in two random vacant places in the list (for a total of m_0 places).

4. Note that so far we filled (3/2)·m_0 + 1 = s − 1 vacant places in the list. The remaining vacant place in the list is left empty.

We set 𝐩 such that it fixes all the input variables x_1, . . . , x_n to constants according to the list we chose, except for the variables corresponding to the empty place in the list. Then, the ⌈log m⌉ binary input variables that correspond to the empty place are mapped to {0, 1, y, ¬y} such that if we substitute y = 1 we get the number α and if we substitute y = 0 we get the number β.
Observe that if y = 1, then the numbers in A ∪ {α, β} appear in the list exactly once and the numbers in B appear exactly twice. On the other hand, if y = 0, then the numbers in A appear in the list exactly once, the numbers in B ∪ {β} appear in the list exactly twice, and α does not appear at all.

Observe that Surj_m|𝐩 = y. Indeed, if y = 1 then the foregoing list of numbers contains all the numbers in [m] and thus Surj_m takes the value 1, and if y = 0 then the list of numbers does not contain α and thus Surj_m takes the value 0. We claim that 𝐩 is an O(1/m)-hiding projection. Intuitively, if we substitute a value b ∈ {0, 1} in y, then given the resulting projection 𝐩_{y←b} it is difficult to tell which input variables were mapped to y in the original projection 𝐩. The reason is that in order to tell that, one has to guess the empty place in the original list. To this end, one has to guess α among the ≈ (1/2)·m_0 numbers that appear exactly once in the list (if b = 1) or to guess β among the ≈ (1/2)·m_0 numbers that appear exactly twice in the list (if b = 0).
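The list construction and the identity Surj_m|𝐩 = y can be sanity-checked by simulation. In the sketch below (ours; the helper names are made up for the demo), we build the random list for m = 10 and confirm that substituting y = 1 yields a list covering all of [m] while y = 0 misses α:

```python
import random

def surj(lst, m):
    # Surj_m = 1 iff every number in [m] = {1,...,m} appears in the list
    return set(range(1, m + 1)) <= set(lst)

def sample_lists(m, rng):
    # the random list of the sketch: alpha/beta, a split A/B of the rest,
    # A u {beta} placed once each, B twice each, one place left empty
    m0 = m - 2                          # assumed even so A and B split evenly
    alpha, beta = rng.sample(range(1, m + 1), 2)
    rest = [x for x in range(1, m + 1) if x not in (alpha, beta)]
    rng.shuffle(rest)
    A, B = rest[:m0 // 2], rest[m0 // 2:]
    entries = A + [beta] + B + B
    s = 3 * m0 // 2 + 2                 # list length; exactly one slot stays empty
    slots = list(range(s))
    rng.shuffle(slots)
    lst = [None] * s
    for slot, val in zip(slots, entries):
        lst[slot] = val
    # substituting y = 1 writes alpha into the empty place, y = 0 writes beta
    lst1 = [alpha if v is None else v for v in lst]
    lst0 = [beta if v is None else v for v in lst]
    return lst0, lst1

rng = random.Random(0)
for _ in range(100):
    lst0, lst1 = sample_lists(10, rng)
    assert surj(lst1, 10) and not surj(lst0, 10)   # i.e., Surj_m | p = y
```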
A bit more formally, observe that 𝐩 maps an input variable x_ℓ to {y, ¬y} only if x_ℓ corresponds to the empty place in the list of 𝐩, and that 𝐩_{y←0} and 𝐩_{y←1} represent lists of s numbers in [m]. Hence, to find an upper bound on the probability that 𝐩(x_ℓ) ∈ {y, ¬y} conditioned on the choice of 𝐩_{y←b} (for b ∈ {0, 1}), we need an upper bound on the probability that a place in the list represented by 𝐩_{y←b} was the empty place in the list of 𝐩. We consider two cases:

• If b = 1, then the empty place in the list of 𝐩 contains α in the list of 𝐩_{y←1}. From the perspective of someone who only sees 𝐩_{y←1}, the number α could be any number among the (1/2)·m_0 + 2 numbers that appear exactly once in the list of 𝐩_{y←1}. Therefore, conditioned on 𝐩_{y←1}, the empty place in the list of 𝐩 is uniformly distributed among (1/2)·m_0 + 2 places in the list of 𝐩_{y←1}. Hence, any place in the list of 𝐩_{y←1} has probability of at most 1/((1/2)·m_0 + 2) = O(1/m) to be the empty place in 𝐩.

• On the other hand, if b = 0, then the empty place in the list of 𝐩 contains β in the list of 𝐩_{y←0}. From the perspective of someone who only sees 𝐩_{y←0}, the number β could be any number among the (1/2)·m_0 + 1 numbers that appear exactly twice in the list of 𝐩_{y←0}. Therefore, conditioned on 𝐩_{y←0}, the empty place in the list of 𝐩 is uniformly distributed among 2·((1/2)·m_0 + 1) places in the list of 𝐩_{y←0}. Hence, any place in the list has probability of at most 1/(2·((1/2)·m_0 + 1)) = O(1/m) to be the empty place in 𝐩.
We showed that 𝐩 is an O(1/m)-hiding projection, and therefore by our shrinkage theorem for hiding projections (specifically, Corollary 3.13) we get that

E[L(Surj_m|𝐩)] ≲ (1/m²) · L(Surj_m).

(We ignored the powers of O(1/log L(Surj_m)) from Corollary 3.13 in order to simplify the exposition.) On the other hand, as explained above, we have Surj_m|𝐩 = y, and therefore L(Surj_m|𝐩) = 1. It follows that L(Surj_m) ≳ m², as required.

We note that the above proof is a special case of a more general phenomenon. Specifically, in Section 7.1, we show that such a hiding projection can be constructed for every function that has a large (unweighted) adversary bound. As [6] showed, the surjectivity function has such a large adversary bound, and therefore we can construct a hiding projection for it.
3.3.2 Lower bound for f ∘ Surj_m

Let f : {0, 1}^k → {0, 1} be any function, and let Surj_m be defined as in the previous section. The block-composition of f and Surj_m, denoted f ∘ Surj_m, is the function that takes k inputs x_1, . . . , x_k ∈ {0, 1}^n for Surj_m and outputs

(f ∘ Surj_m)(x_1, . . . , x_k) = f(Surj_m(x_1), . . . , Surj_m(x_k)).

We sketch a proof that L(f ∘ Surj_m) ∈ Ω(L(f) · m²/log² m). Denote by x_{i,1}, . . . , x_{i,n} the Boolean variables of the input x_i for each i ∈ [k]. Let 𝐩 be the O(1/m)-hiding projection from the previous section, and let 𝐩_k be the random projection from x_{1,1}, . . . , x_{k,n} to y_1, . . . , y_k that is obtained by joining k independent copies of 𝐩. In Section 6 we show that the join operation maintains the hiding property, and therefore 𝐩_k is an O(1/m)-hiding projection. Now, it is not hard to see that (f ∘ Surj_m)|𝐩_k = f and therefore L((f ∘ Surj_m)|𝐩_k) = L(f). On the other hand, by our shrinkage theorem for hiding projections (specifically, Corollary 3.13) we get that

E[L((f ∘ Surj_m)|𝐩_k)] ≲ (1/m²) · L(f ∘ Surj_m).

(Again, we ignored the powers of O(1/log L(f ∘ Surj_m)) from Corollary 3.13 in order to simplify the exposition.) It follows that

L(f) ≲ (1/m²) · L(f ∘ Surj_m),

and therefore

L(f ∘ Surj_m) = Ω̃(L(f) · m²),

as required.
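The way a joined projection collapses a block-composition back to the outer function can be illustrated on a toy composition. In this sketch (ours), the inner function is a stand-in OR rather than Surj, and restricting each block to (y_i, 0) plays the role of 𝐩_k:

```python
from itertools import product

def compose(f, g, k, n):
    # block-composition: (f o g)(x_1,...,x_k) = f(g(x_1),...,g(x_k)),
    # where the k blocks of n bits are concatenated into one input
    def h(bits):
        assert len(bits) == k * n
        blocks = [bits[i * n:(i + 1) * n] for i in range(k)]
        return f(tuple(g(blk) for blk in blocks))
    return h

# toy instance: outer f = parity of 3 bits, inner g = OR of 2 bits (a stand-in for Surj)
f = lambda zs: zs[0] ^ zs[1] ^ zs[2]
g = lambda blk: blk[0] | blk[1]
h = compose(f, g, k=3, n=2)

# restricting each block to (y_i, 0) turns the inner function into the identity
# on y_i, so the composition collapses back to f -- the role played by p_k above
for y in product((0, 1), repeat=3):
    bits = (y[0], 0, y[1], 0, y[2], 0)
    assert h(bits) == f(y)
```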
3.3.3 Cubic lower bound for AC⁰

Let n = 2^a · (a + 1) and let F_n : {0, 1}^n → {0, 1} be the function that takes as input the truth table of a function f : {0, 1}^a → {0, 1} and a inputs x_1, . . . , x_a for Surj_{2^a}, and outputs

F_n(f, x_1, . . . , x_a) = (f ∘ Surj_{2^a})(x_1, . . . , x_a).

As mentioned above, the function F_n is a variant of the Andreev function in which the parity function is replaced with Surj. It is not hard to show that F_n is in AC⁰ (see Section 8.2 for details). We sketch a proof that L(F_n) ∈ Ω(n^{3−o(1)}).

To see that L(F_n) ∈ Ω(n^{3−o(1)}), recall that by a standard counting argument, there exists a function f : {0, 1}^a → {0, 1} such that L(f) = Θ(2^a/log a). We now hardwire f as an input to F_n, which turns the function F_n into the function f ∘ Surj_{2^a}. By the lower bound we have for f ∘ Surj_{2^a}, it follows that

L(F_n) ≥ L(f ∘ Surj_{2^a}) ∈ Ω̃(L(f) · 2^{2a}) = Ω̃((2^a/log a) · 2^{2a}) = Ω(n^{3−o(1)}),

as required.
Remark 3.15. Note that in the proof of this result, we did not use the shrinkage theorem for fixing projections; indeed, we only used the shrinkage theorem for hiding projections. However, the proof of the shrinkage theorem for hiding projections itself relies on the shrinkage theorem for fixing projections.
4 Proof of shrinkage theorem for fixing projections
In this section, we prove our the shrinkage theorem for fixing projections. Our proof is based on
the ideas of [23], but the presentation is different. We start by recalling the definition of fixing
projections and the theoremβs statement.
Definition 3.4. Let 0 ≤ p_0, p_1 ≤ 1. We say that a random projection 𝐩 is a (p_0, p_1)-fixing projection if for every projection ρ, every bit b ∈ {0, 1}, and every variable x_i, we have

Pr[𝐩(x_i) ∉ {0, 1} and 𝐩_{var(𝐩(x_i))←b} = ρ] ≤ p_b · Pr[𝐩 = ρ].

For shorthand, we say that 𝐩 is a p-fixing projection, for p = √(p_0 · p_1).
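For intuition, consider the classical random restriction, in which each variable survives as a fresh target variable with probability q and is otherwise fixed to a uniform random bit. A short calculation suggests it satisfies Definition 3.4 with p_0 = p_1 = 2q/(1 − q); the following exhaustive check (our sketch, not from the paper, with our own tuple encoding) verifies this for n = 2 and q = 1/5:

```python
from fractions import Fraction as F
from itertools import product

n, q = 2, F(1, 5)
# one variable maps to 0 or 1 (probability (1-q)/2 each) or survives as its own
# fresh target variable y_i (probability q)
per_var = [(0, (1 - q) / 2), (1, (1 - q) / 2), ('y', q)]

dist = {}  # projection, encoded as a tuple over {0, 1, 'y0', 'y1'} -> probability
for combo in product(per_var, repeat=n):
    proj = tuple('y%d' % i if v == 'y' else v for i, (v, _) in enumerate(combo))
    pr = 1
    for _, w in combo:
        pr *= w
    dist[proj] = dist.get(proj, 0) + pr

def substitute(proj, i, b):
    # proj_{y_i <- b}: replace the literal y_i by the bit b
    return tuple(b if v == 'y%d' % i else v for v in proj)

# Definition 3.4 with p_0 = p_1 = 2q/(1-q): for every rho, bit b and variable x_i,
#   Pr[ p(x_i) is a literal and p_{var(p(x_i)) <- b} = rho ] <= p_b * Pr[p = rho]
p_b = 2 * q / (1 - q)
for rho, pr_rho in dist.items():
    for i in range(n):
        for b in (0, 1):
            lhs = sum(pr for pi, pr in dist.items()
                      if pi[i] == 'y%d' % i and substitute(pi, i, b) == rho)
            assert lhs <= p_b * pr_rho
```

Exact rational arithmetic (`fractions.Fraction`) avoids any floating-point slack in the inequality.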
Theorem 3.6 (Shrinkage under fixing projections). Let φ be a formula of size s and depth d, and let 𝐩 be a p-fixing projection. Then

E[L(φ|𝐩)] = O( p² · d² · s + p · √s ).

Fix a formula φ of size s and depth d, and let 𝐩 be a p-fixing projection. We would like to get an upper bound on the expectation of L(φ|𝐩). As in [23], we start by giving an upper bound on the probability that the projection 𝐩 shrinks a formula to size 1. Specifically, we prove the following lemma in Section 4.1.
Lemma 4.1. Let f : {0, 1}^n → {0, 1} be a Boolean function, and let 𝐩 be a p-fixing projection. Then,

Pr[L(f|𝐩) = 1] ≤ p · √(L(f)).

Next, we show that to find an upper bound on the expectation of L(φ|𝐩), it suffices to give an upper bound on the probability that the projection 𝐩 shrinks two formulas to size 1 simultaneously. In order to state this claim formally, we introduce some notation.
Notation 4.2. Let v be a gate of φ. We denote the depth of v in φ by depth_φ(v), and omit φ if it is clear from context (the root has depth 0). If v is an internal gate, we denote the subformulas that are rooted in its left and right children by left(v) and right(v), respectively. Here, by "internal gate", we refer to a gate which is an internal node of the tree, i. e., either an AND or an OR gate.
We prove the following lemma, which says that in order to upper-bound E[L(φ|𝐩)] it suffices to upper-bound, for every internal gate v, the probability that left(v) and right(v) shrink to size 1 under 𝐩.

Lemma 4.3. For every projection ρ it holds that

L(φ|ρ) ≤ Σ_{internal gate v of φ} (depth(v) + 2) · 1{L(left(v)|ρ) = 1 and L(right(v)|ρ) = 1} + 1{L(φ|ρ) = 1}.
We would like to use Lemmas 4.1 and 4.3 to prove the shrinkage theorem. As a warm-up, let us make the simplifying assumption that for every two functions f_1, f_2 : {0, 1}^n → {0, 1}, the events L(f_1|𝐩) = 1 and L(f_2|𝐩) = 1 are independent. If this were true, we could have given an upper bound on E[L(φ|𝐩)] as follows:

E[L(φ|𝐩)] ≤ Σ_{int. gate v of φ} (depth(v) + 2) · E[1{L(left(v)|𝐩) = 1 and L(right(v)|𝐩) = 1}] + E[1{L(φ|𝐩) = 1}]   (Lemma 4.3)

≤ (d + 2) · Σ_{int. gate v of φ} Pr[L(left(v)|𝐩) = 1 and L(right(v)|𝐩) = 1] + E[1{L(φ|𝐩) = 1}]   (φ is of depth d)

= (d + 2) · Σ_{int. gate v of φ} Pr[L(left(v)|𝐩) = 1] · Pr[L(right(v)|𝐩) = 1] + E[1{L(φ|𝐩) = 1}]   (simplifying assumption)

≤ (d + 2) · Σ_{int. gate v of φ} p² · √(L(left(v)) · L(right(v))) + p · √s   (Lemma 4.1)

≤ p² · (d + 2) · Σ_{int. gate v of φ} (L(left(v)) + L(right(v))) + p · √s   (AM–GM inequality)

≤ p² · (d + 2) · Σ_{int. gate v of φ} (size(left(v)) + size(right(v))) + p · √s

= p² · (d + 2) · Σ_{int. gate v of φ} size(v) + p · √s.

The last sum counts every leaf ℓ of φ once for each internal ancestor of ℓ, so the last expression is equal to

p² · (d + 2) · Σ_{leaf ℓ of φ} depth(ℓ) + p · √s
≤ p² · (d + 2) · Σ_{leaf ℓ of φ} d + p · √s
= p² · (d + 2) · d · s + p · √s
= O( p² · d² · s + p · √s ),
which is the bound we wanted. However, the above calculation only works under our simplifying assumption, which is false: the events L(f_1|𝐩) = 1 and L(f_2|𝐩) = 1 will often be dependent. In particular, in order for the foregoing calculation to work, we need the following inequality to hold:

Pr[L(f_2|𝐩) = 1 | L(f_1|𝐩) = 1] ≤ p · √(L(f_2)).

This inequality holds under our simplifying assumption by Lemma 4.1, but may not hold in general. Nevertheless, we prove the following similar statement in Section 4.2.

Lemma 4.4. Let 𝐩 be a p-fixing projection. Let f_1, f_2 : {0, 1}^n → {0, 1}, let a, b ∈ {0, 1}, and let y_j be a variable. Then,

Pr[L(f_2|𝐩_{y_j←b}) = 1 | f_1|𝐩 = y_j^a] ≤ p · √(L(f_2)).
Intuitively, Lemma 4.4 breaks the dependency between the events L(f_1|𝐩) = 1 and L(f_2|𝐩) = 1 by fixing in f_2 the single literal to which f_1 has shrunk. We would now like to use Lemma 4.4 to prove the theorem. To this end, we prove an appropriate variant of Lemma 4.3, which allows using the projection ρ_{y_j←b} rather than ρ in the second function. This variant is motivated by the following "one-variable simplification rules" of [23], which are easy to verify.

Fact 4.5 (one-variable simplification rules). Let h : {0, 1}^k → {0, 1} be a function over the variables y_1, . . . , y_k, and let a ∈ {0, 1}. We denote by h_{y_j←a} the function obtained from h by setting y_j to the bit a. Then:

• The function y_j^a ∨ h is equal to the function y_j^a ∨ h_{y_j←¬a}.

• The function y_j^a ∧ h is equal to the function y_j^a ∧ h_{y_j←a}.
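Fact 4.5 is easy to confirm by brute force over all functions on a few variables. The following check (ours) verifies both rules for every function h on k = 3 variables:

```python
from itertools import product

def check_rules(k, j):
    # exhaustively verify the one-variable simplification rules on k variables:
    #   y_j^a OR  h  ==  y_j^a OR  h_{y_j <- not a}
    #   y_j^a AND h  ==  y_j^a AND h_{y_j <- a}
    points = list(product((0, 1), repeat=k))
    for table in product((0, 1), repeat=2 ** k):    # every function h
        h = dict(zip(points, table))
        for a in (0, 1):
            for y in points:
                lit = y[j] if a else 1 - y[j]           # the literal y_j^a
                y_not_a = y[:j] + (1 - a,) + y[j + 1:]  # input with y_j := not a
                y_a = y[:j] + (a,) + y[j + 1:]          # input with y_j := a
                assert lit | h[y] == lit | h[y_not_a]
                assert lit & h[y] == lit & h[y_a]

check_rules(k=3, j=0)
```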
In order to use the simplification rules, we define, for every internal gate v of φ and projection ρ, an event ℰ_{v,ρ} as follows: if v is an OR gate, then ℰ_{v,ρ} is the event that there exists some literal y_j^a (for a ∈ {0, 1}) such that left(v)|ρ = y_j^a and L(right(v)|ρ_{y_j←¬a}) = 1. If v is an AND gate, then ℰ_{v,ρ} is defined similarly, except that we replace ρ_{y_j←¬a} with ρ_{y_j←a}. We have the following lemma, which is proved in Section 4.3.

Lemma 4.6. For every projection ρ we have

L(φ|ρ) ≤ Σ_{internal gate v of φ} (depth(v) + 2) · 1{ℰ_{v,ρ}} + 1{L(φ|ρ) = 1}.

We can now use the following corollary of Lemma 4.4 to replace our simplifying assumption.

Corollary 4.7. For every internal gate v of φ we have

Pr[ℰ_{v,𝐩}] ≤ p² · √(L(left(v)) · L(right(v))).
Proof. Let v be an internal gate of φ. We prove the corollary for the case where v is an OR gate; the proof for the case that v is an AND gate is similar. We have

Pr[ℰ_{v,𝐩}] = Pr[∃ literal y_j^a : left(v)|𝐩 = y_j^a and L(right(v)|𝐩_{y_j←¬a}) = 1]
= Σ_{literal y_j^a} Pr[left(v)|𝐩 = y_j^a and L(right(v)|𝐩_{y_j←¬a}) = 1]
= Σ_{literal y_j^a} Pr[L(right(v)|𝐩_{y_j←¬a}) = 1 | left(v)|𝐩 = y_j^a] · Pr[left(v)|𝐩 = y_j^a]
≤ p · √(L(right(v))) · Σ_{literal y_j^a} Pr[left(v)|𝐩 = y_j^a]   (Lemma 4.4)
= p · √(L(right(v))) · Pr[L(left(v)|𝐩) = 1]
≤ p² · √(L(left(v)) · L(right(v))),   (Lemma 4.1)

as required.
The shrinkage theorem now follows using the same calculation as above, replacing Lemma 4.3 with Lemma 4.6 and the simplifying assumption with Corollary 4.7:

E[L(φ|𝐩)] ≤ (d + 2) · Σ_{internal gate v of φ} Pr[ℰ_{v,𝐩}] + p · √s   (Lemma 4.6)

≤ (d + 2) · Σ_{internal gate v of φ} p² · √(L(left(v)) · L(right(v))) + p · √s   (Corollary 4.7)

≤ p² · (d + 2) · Σ_{internal gate v of φ} (L(left(v)) + L(right(v))) + p · √s   (AM–GM ineq.)

= p² · (d + 2) · Σ_{internal gate v of φ} size(v) + p · √s

≤ O( p² · d² · s + p · √s ),

where the last inequality uses the same leaf-counting argument as in the warm-up calculation.
In the remainder of this section, we prove Lemmas 4.1, 4.4, and 4.6.

Remark 4.8. In this paper, we do not prove Lemma 4.3, since we do not actually need it to prove our main results in Section 8. However, this lemma can be established using the proof of Lemma 4.6, with some minor changes.
4.1 Proof of Lemma 4.1

We begin by proving Lemma 4.1, restated next.

Lemma 4.1. Let f : {0, 1}^n → {0, 1} be a Boolean function, and let 𝐩 be a p-fixing projection. Then,

Pr[L(f|𝐩) = 1] ≤ p · √(L(f)).
Let f : {0, 1}^n → {0, 1}, and let ℰ be the set of projections ρ such that L(f|ρ) = 1. We prove that the probability that 𝐩 ∈ ℰ is at most p · √(L(f)). Our proof follows closely the proof of [23, Lemma 4.1].

Let Π be a protocol that solves KW_f and has L(f) leaves (such a protocol exists by Theorem 2.10). Let 𝒜 and ℬ be the sets of projections ρ for which f|ρ is the constant 1 and the constant 0, respectively. We extend the protocol Π to take inputs from 𝒜 × ℬ as follows: when Alice and Bob are given as inputs the projections ρ_A ∈ 𝒜 and ρ_B ∈ ℬ, respectively, they construct strings a, b ∈ {0, 1}^n from ρ_A, ρ_B by substituting 0 for all the variables y_1, . . . , y_k, and invoke Π on the inputs a and b. Observe that a and b are indeed legal inputs for Π (since f(a) = 1 and f(b) = 0). Moreover, recall that the protocol Π induces a partition of 𝒜 × ℬ into combinatorial rectangles, and that we denote the rectangle of the leaf ℓ by 𝒜_ℓ × ℬ_ℓ (see Notation 2.8).
Our proof strategy is the following: We associate with every projection ρ ∈ ℰ a leaf of Π, denoted leaf(ρ). We consider the two disjoint events ℰ⁺, ℰ⁻ that correspond to the event that f|ρ is a single positive literal or a single negative literal, respectively, and show that for every leaf ℓ we have

Pr[𝐩 ∈ ℰ⁺ and leaf(𝐩) = ℓ] ≤ p · √(Pr[𝐩 ∈ 𝒜_ℓ] · Pr[𝐩 ∈ ℬ_ℓ])   (4.1)

Pr[𝐩 ∈ ℰ⁻ and leaf(𝐩) = ℓ] ≤ p · √(Pr[𝐩 ∈ 𝒜_ℓ] · Pr[𝐩 ∈ ℬ_ℓ]).   (4.2)

Together, the two inequalities imply that

Pr[𝐩 ∈ ℰ and leaf(𝐩) = ℓ] ≤ 2p · √(Pr[𝐩 ∈ 𝒜_ℓ] · Pr[𝐩 ∈ ℬ_ℓ]).

The desired bound on Pr[𝐩 ∈ ℰ] will follow by summing the latter bound over all the leaves ℓ of Π.
We start by explaining how to associate a leaf with every projection ρ ∈ ℰ⁺. Let ρ ∈ ℰ⁺. Then, it must be the case that f|ρ = y_j for some j ∈ [k]. We define the projections ρ_1 = ρ_{y_j←1} and ρ_0 = ρ_{y_j←0}, and observe that ρ_1 ∈ 𝒜 and ρ_0 ∈ ℬ. We now define leaf(ρ) to be the leaf to which Π arrives when invoked on the inputs ρ_1 and ρ_0. Observe that the output of Π at leaf(ρ) must be a variable x_i that satisfies ρ(x_i) ∈ {y_j, ¬y_j}, and thus ρ_{var(ρ(x_i))←1} = ρ_1.

Next, fix a leaf ℓ. We prove that Pr[𝐩 ∈ ℰ⁺ and leaf(𝐩) = ℓ] ≤ p_1 · Pr[𝐩 ∈ 𝒜_ℓ]. Let x_i be the output of the protocol Π at ℓ. Then,

Pr[𝐩 ∈ ℰ⁺ and leaf(𝐩) = ℓ] ≤ Pr[𝐩_1 ∈ 𝒜_ℓ and 𝐩(x_i) ∉ {0, 1} and 𝐩_{var(𝐩(x_i))←1} = 𝐩_1]
≤ Pr[𝐩(x_i) ∉ {0, 1} and 𝐩_{var(𝐩(x_i))←1} ∈ 𝒜_ℓ]
= Σ_{ρ ∈ 𝒜_ℓ} Pr[𝐩(x_i) ∉ {0, 1} and 𝐩_{var(𝐩(x_i))←1} = ρ]
≤ Σ_{ρ ∈ 𝒜_ℓ} p_1 · Pr[𝐩 = ρ]   (since 𝐩 is (p_0, p_1)-fixing)
= p_1 · Pr[𝐩 ∈ 𝒜_ℓ].
Similarly, it can be proved that Pr[𝐩 ∈ ℰ⁺ and leaf(𝐩) = ℓ] ≤ p_0 · Pr[𝐩 ∈ ℬ_ℓ]. Together, the two bounds imply that

Pr[𝐩 ∈ ℰ⁺ and leaf(𝐩) = ℓ] ≤ √(p_1 · Pr[𝐩 ∈ 𝒜_ℓ] · p_0 · Pr[𝐩 ∈ ℬ_ℓ]) = p · √(Pr[𝐩 ∈ 𝒜_ℓ] · Pr[𝐩 ∈ ℬ_ℓ])

for every leaf ℓ of Π. We define leaf(ρ) for projections ρ ∈ ℰ⁻ in an analogous way, and then a similar argument shows that

Pr[𝐩 ∈ ℰ⁻ and leaf(𝐩) = ℓ] ≤ p · √(Pr[𝐩 ∈ 𝒜_ℓ] · Pr[𝐩 ∈ ℬ_ℓ]).

It follows that

Pr[𝐩 ∈ ℰ and leaf(𝐩) = ℓ] ≤ 2p · √(Pr[𝐩 ∈ 𝒜_ℓ] · Pr[𝐩 ∈ ℬ_ℓ]).
Finally, let 𝔏 denote the set of leaves of Π. It holds that

Pr[𝐩 ∈ ℰ] = Σ_{ℓ ∈ 𝔏} Pr[𝐩 ∈ ℰ and leaf(𝐩) = ℓ]
≤ 2p · Σ_{ℓ ∈ 𝔏} √(Pr[𝐩 ∈ 𝒜_ℓ] · Pr[𝐩 ∈ ℬ_ℓ])
≤ 2p · √|𝔏| · √( Σ_{ℓ ∈ 𝔏} Pr[𝐩 ∈ 𝒜_ℓ] · Pr[𝐩 ∈ ℬ_ℓ] )   (Cauchy–Schwarz; see Fact 2.12)
= 2p · √(L(f)) · √( Σ_{ℓ ∈ 𝔏} Pr[𝐩 ∈ 𝒜_ℓ] · Pr[𝐩 ∈ ℬ_ℓ] ).
We conclude the proof by showing that Σ_{ℓ ∈ 𝔏} Pr[𝐩 ∈ 𝒜_ℓ] · Pr[𝐩 ∈ ℬ_ℓ] ≤ 1/4. To this end, let 𝐩^A, 𝐩^B be two independent random variables that are distributed identically to 𝐩. Then, we have

Σ_{ℓ ∈ 𝔏} Pr[𝐩 ∈ 𝒜_ℓ] · Pr[𝐩 ∈ ℬ_ℓ] = Σ_{ℓ ∈ 𝔏} Pr[𝐩^A ∈ 𝒜_ℓ] · Pr[𝐩^B ∈ ℬ_ℓ]
= Σ_{ℓ ∈ 𝔏} Pr[(𝐩^A, 𝐩^B) ∈ 𝒜_ℓ × ℬ_ℓ]   (𝐩^A, 𝐩^B are independent)
= Pr[(𝐩^A, 𝐩^B) ∈ 𝒜 × ℬ]   (the rectangles 𝒜_ℓ × ℬ_ℓ are a partition of 𝒜 × ℬ)
= Pr[𝐩^A ∈ 𝒜] · Pr[𝐩^B ∈ ℬ]   (𝐩^A, 𝐩^B are independent)
= Pr[𝐩 ∈ 𝒜] · Pr[𝐩 ∈ ℬ]
≤ Pr[𝐩 ∈ 𝒜] · (1 − Pr[𝐩 ∈ 𝒜]) ≤ 1/4.   (𝒜, ℬ are disjoint)

Plugging this bound into the previous inequality yields Pr[𝐩 ∈ ℰ] ≤ 2p · √(L(f)) · √(1/4) = p · √(L(f)), which completes the proof.
4.2 Proof of Lemma 4.4

We turn to proving Lemma 4.4, restated next.

Lemma 4.4. Let 𝐩 be a p-fixing projection. Let f_1, f_2 : {0, 1}^n → {0, 1}, let a, b ∈ {0, 1}, and let y_j be a variable. Then,

Pr[L(f_2|𝐩_{y_j←b}) = 1 | f_1|𝐩 = y_j^a] ≤ p · √(L(f_2)).   (4.3)

Let 𝐩 be a (p_0, p_1)-fixing random projection, and let p = √(p_0 · p_1). Let f_1, f_2 : {0, 1}^n → {0, 1}, let a, b ∈ {0, 1}, and let y_j be a variable. We prove Equation (4.3). For simplicity, we focus on the case that f_1|𝐩 = y_j; the case that f_1|𝐩 = ¬y_j can be dealt with similarly. The crux of the proof is to show that the random projection 𝐩_{y_j←b} is essentially a (p_0, p_1)-fixing projection even when conditioned on the event f_1|𝐩 = y_j, and therefore Equation (4.3) is implied immediately by Lemma 4.1.
In order to carry out this argument, we first establish some notation. Let 𝐔⁺ = 𝐩^{−1}(y_j) and 𝐔⁻ = 𝐩^{−1}(¬y_j), and denote by 𝐩_0 the projection obtained from 𝐩 by restricting its domain to {x_1, . . . , x_n} \ (𝐔⁺ ∪ 𝐔⁻). We denote by f_{2,𝐔⁺,𝐔⁻} the function over {x_1, . . . , x_n} \ (𝐔⁺ ∪ 𝐔⁻) that is obtained from f_2 by hard-wiring b and ¬b to the variables in 𝐔⁺ and 𝐔⁻, respectively. Observe that f_2|𝐩_{y_j←b} = f_{2,𝐔⁺,𝐔⁻}|𝐩_0, so it suffices to prove that for every two disjoint sets I⁺, I⁻ ⊆ {x_1, . . . , x_n} we have

Pr[L(f_{2,I⁺,I⁻}|𝐩_0) = 1 | f_1|𝐩 = y_j, 𝐔⁺ = I⁺, 𝐔⁻ = I⁻] ≤ p · √(L(f_2)).   (4.4)

Let I⁺, I⁻ ⊆ {x_1, . . . , x_n} be disjoint sets, and let ℋ be the event that 𝐔⁺ = I⁺ and 𝐔⁻ = I⁻. For convenience, let X = {x_1, . . . , x_n} \ (I⁺ ∪ I⁻) and Y = {y_1, . . . , y_k} \ {y_j}, so 𝐩_0 is a random projection from X to Y when conditioned on ℋ. To prove Equation (4.4), it suffices to prove that 𝐩_0 is a (p_0, p_1)-fixing projection when conditioned on the events ℋ and f_1|𝐩 = y_j, and then the inequality will follow from Lemma 4.1. We first prove that 𝐩_0 is a (p_0, p_1)-fixing projection when conditioning only on the event ℋ (and not on f_1|𝐩 = y_j).

Proposition 4.9. Conditioned on the event ℋ, the projection 𝐩_0 is a (p_0, p_1)-fixing projection.
Proof. We prove that 𝐩_0 satisfies the definition of a fixing projection. Let ρ_0 be a projection from X to Y, let x_i ∈ X, and let b ∈ {0, 1}. We have

Pr[𝐩_0(x_i) ∉ {0, 1} and (𝐩_0)_{var(𝐩_0(x_i))←b} = ρ_0 | ℋ]
= Pr[𝐩(x_i) ∉ {0, 1} and (𝐩_{var(𝐩(x_i))←b})|_X = ρ_0 and ℋ] / Pr[ℋ]
= Σ_{ρ : ρ|_X = ρ_0, ρ^{−1}(y_j) = I⁺, ρ^{−1}(¬y_j) = I⁻} Pr[𝐩(x_i) ∉ {0, 1} and 𝐩_{var(𝐩(x_i))←b} = ρ] / Pr[ℋ]
≤ Σ_{ρ : ρ|_X = ρ_0, ρ^{−1}(y_j) = I⁺, ρ^{−1}(¬y_j) = I⁻} p_b · Pr[𝐩 = ρ] / Pr[ℋ]   (since 𝐩 is (p_0, p_1)-fixing)
= p_b · Pr[𝐩|_X = ρ_0 and ℋ] / Pr[ℋ]
= p_b · Pr[𝐩_0 = ρ_0 | ℋ],

as required.
We now prove that 𝐩_0 remains a (p_0, p_1)-fixing projection when conditioning on f_1|𝐩 = y_j in addition to ℋ. The crucial observation is that the event f_1|𝐩 = y_j is essentially a filter, defined next.

Definition 4.10. A set of projections ℰ from x_1, . . . , x_n to y_1, . . . , y_k is a filter if it is closed under assignment to variables, i. e., if for every ρ ∈ ℰ, every variable y_j, and every bit b ∈ {0, 1}, we have ρ_{y_j←b} ∈ ℰ.
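A tiny example may help. With projections encoded as tuples (an encoding we choose for the demo), the set of projections that already fix x_1 to a constant is closed under assignments and hence a filter, while the set of projections mapping x_1 to the literal y is not:

```python
from itertools import product

VALS = (0, 1, 'y', '~y')                    # possible images of a variable
projections = set(product(VALS, repeat=2))  # projections on (x_1, x_2), one target y

def assign(rho, b):
    # rho_{y <- b}: substitute the bit b for the literal y (and ~y accordingly)
    return tuple(b if v == 'y' else 1 - b if v == '~y' else v for v in rho)

def is_filter(event):
    return all(assign(rho, b) in event for rho in event for b in (0, 1))

# "x_1 already fixed to a constant" survives any assignment: a filter
assert is_filter({rho for rho in projections if rho[0] in (0, 1)})
# "x_1 mapped to the literal y" is destroyed by assigning y: not a filter
assert not is_filter({rho for rho in projections if rho[0] == 'y'})
```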
It turns out that the property of being a (p_0, p_1)-fixing projection is preserved when conditioning on filters. Formally:

Proposition 4.11. Let ℰ be a filter and let 𝐩* be a (p_0, p_1)-fixing projection. Then, 𝐩* conditioned on the event 𝐩* ∈ ℰ is a (p_0, p_1)-fixing projection.
Proof. Let ρ* be a projection, let x_i be a variable, and let b ∈ {0, 1}. We would like to prove that

Pr[𝐩*(x_i) ∉ {0, 1} and (𝐩*)_{var(𝐩*(x_i))←b} = ρ* | 𝐩* ∈ ℰ] ≤ p_b · Pr[𝐩* = ρ* | 𝐩* ∈ ℰ].

If ρ* ∉ ℰ, then both sides of the inequality are equal to zero: this is obvious for the right-hand side, and holds for the left-hand side since if there were a projection ρ′ ∈ ℰ and a variable y_j such that ρ′_{y_j←b} = ρ*, then it would be the case that ρ* ∈ ℰ by the definition of a filter. Thus, we may assume that ρ* ∈ ℰ. Now, it holds that

Pr[𝐩*(x_i) ∉ {0, 1} and (𝐩*)_{var(𝐩*(x_i))←b} = ρ* | 𝐩* ∈ ℰ]
≤ Pr[𝐩*(x_i) ∉ {0, 1} and (𝐩*)_{var(𝐩*(x_i))←b} = ρ*] / Pr[𝐩* ∈ ℰ]
≤ p_b · Pr[𝐩* = ρ*] / Pr[𝐩* ∈ ℰ]   (𝐩* is (p_0, p_1)-fixing)
= p_b · Pr[𝐩* = ρ* | 𝐩* ∈ ℰ],   (ρ* ∈ ℰ)

as required.
Consider the event f_1|𝐩 = y_j. Viewed as a set of projections from x_1, . . . , x_n to y_1, . . . , y_k, this event is not a filter, since it is not closed under assignments to y_j. However, this event is closed under assignments to all variables except y_j: when f_1|ρ = y_j, the equality continues to hold even if the variables in Y are fixed to constants. Moreover, observe that conditioned on ℋ, the event f_1|𝐩 = y_j depends only on the values that 𝐩 assigns to X. Thus, we can view the event f_1|𝐩 = y_j as a set of projections from X to Y, and taking this view, this event is a filter. Since 𝐩_0 is a (p_0, p_1)-fixing projection from X to {y_1, . . . , y_k} \ {y_j} when conditioned on ℋ, we conclude that it is a (p_0, p_1)-fixing projection when conditioned on both ℋ and f_1|𝐩 = y_j. It follows by Lemma 4.1 that

Pr[L(f_{2,I⁺,I⁻}|𝐩_0) = 1 | f_1|𝐩 = y_j, 𝐔⁺ = I⁺, 𝐔⁻ = I⁻] ≤ p · √(L(f_{2,I⁺,I⁻})) ≤ p · √(L(f_2)),

as required.
4.3 Proof of Lemma 4.6

Finally, we prove Lemma 4.6, restated next.

Lemma 4.6. For every projection ρ we have

L(φ|ρ) ≤ Σ_{internal gate v of φ} (depth(v) + 2) · 1{ℰ_{v,ρ}} + 1{L(φ|ρ) = 1}.   (4.5)

Let ρ be a projection. We prove that ρ satisfies Equation (4.5). Recall that ℰ_{v,ρ} is the event that there exists some literal y_j^a such that left(v)|ρ = y_j^a and

• L(right(v)|ρ_{y_j←¬a}) = 1 if v is an OR gate, or

• L(right(v)|ρ_{y_j←a}) = 1 if v is an AND gate.
We prove Equation (4.5) by induction. If φ consists of a single leaf, then the upper bound clearly holds. Otherwise, the root of φ is an internal gate. Without loss of generality, assume that the root is an OR gate. We denote the subformulas rooted at the left and right children of the root by φ_ℓ and φ_r, respectively. We consider several cases:

• If L(φ|ρ) = 1, then the upper bound clearly holds.

• Suppose that L(φ_ℓ|ρ) ≥ 2. By the subadditivity of formula complexity (Fact 2.3), we have

L(φ|ρ) ≤ L(φ_ℓ|ρ) + L(φ_r|ρ).
By the induction hypothesis, we have

L(φ_ℓ|ρ) ≤ Σ_{internal gate v of φ_ℓ} (depth_{φ_ℓ}(v) + 2) · 1{ℰ_{v,ρ}} + 1{L(φ_ℓ|ρ) = 1}
= Σ_{internal gate v of φ_ℓ} (depth_{φ_ℓ}(v) + 2) · 1{ℰ_{v,ρ}}   (L(φ_ℓ|ρ) ≥ 2)
= Σ_{internal gate v of φ_ℓ} (depth_φ(v) + 1) · 1{ℰ_{v,ρ}},

where the last equality holds since depth_{φ_ℓ}(v) = depth_φ(v) − 1 for every gate v of φ_ℓ. Since L(φ_ℓ|ρ) ≥ 2, at least one of the terms in the last sum must be non-zero, so it holds that

Σ_{internal gate v of φ_ℓ} (depth_φ(v) + 1) · 1{ℰ_{v,ρ}} ≤ Σ_{internal gate v of φ_ℓ} (depth_φ(v) + 2) · 1{ℰ_{v,ρ}} − 1.

Next, by the induction hypothesis we have

L(φ_r|ρ) ≤ Σ_{internal gate v of φ_r} (depth_{φ_r}(v) + 2) · 1{ℰ_{v,ρ}} + 1{L(φ_r|ρ) = 1}
≤ Σ_{internal gate v of φ_r} (depth_φ(v) + 2) · 1{ℰ_{v,ρ}} + 1.

By combining the two bounds, we get that

L(φ|ρ) ≤ L(φ_ℓ|ρ) + L(φ_r|ρ)
≤ Σ_{internal gate v of φ_ℓ} (depth_φ(v) + 2) · 1{ℰ_{v,ρ}} − 1 + Σ_{internal gate v of φ_r} (depth_φ(v) + 2) · 1{ℰ_{v,ρ}} + 1
≤ Σ_{internal gate v of φ} (depth(v) + 2) · 1{ℰ_{v,ρ}}
≤ Σ_{internal gate v of φ} (depth(v) + 2) · 1{ℰ_{v,ρ}} + 1{L(φ|ρ) = 1},

as required.

• If L(φ_r|ρ) ≥ 2, then we use the same argument as in the previous case, exchanging φ_ℓ and φ_r.
• Suppose that L(φ|ρ) ≥ 2, L(φ_ℓ|ρ) ≤ 1 and L(φ_r|ρ) ≤ 1. Then, it must be the case that L(φ|ρ) = 2 and also that L(φ_ℓ|ρ) = 1 and L(φ_r|ρ) = 1 (as otherwise L(φ|ρ) = 1). In particular, φ_ℓ|ρ is equal to some literal y_j^a. It follows that φ|ρ = y_j^a ∨ φ_r|ρ, and by the one-variable simplification rules (Fact 4.5), this function is equal to

y_j^a ∨ (φ_r|ρ)_{y_j←¬a} = y_j^a ∨ φ_r|ρ_{y_j←¬a}.

Thus, it must be the case that L(φ_r|ρ_{y_j←¬a}) = 1 (since L(φ|ρ) = 2). It follows that if we let v be the root of φ, then the event ℰ_{v,ρ} occurs and so

(depth(v) + 2) · 1{ℰ_{v,ρ}} = 2 = L(φ|ρ),

so the desired upper bound holds.

We proved that the upper bound holds in each of the possible cases, so the required result follows.
5 Proof of shrinkage theorem for hiding projections

In this section we prove our shrinkage theorem for hiding projections. We start by recalling the definition of hiding projections and their shrinkage theorem.

Definition 3.9. Let 0 ≤ p_0, p_1 ≤ 1. We say that a random projection 𝐩 is a (p_0, p_1)-hiding projection if for every projection ρ, every bit b ∈ {0, 1}, and all variables x_i, y_j, we have

Pr[𝐩(x_i) ∈ {y_j, ¬y_j} | 𝐩_{y_j←b} = ρ] ≤ p_b,

whenever the event conditioned on has positive probability. For shorthand, we say that 𝐩 is a p-hiding projection, for p = √(p_0 · p_1).

Theorem 3.12 (Shrinkage under hiding projections). Let φ be a formula of size s and depth d, and let 𝐩 be a p-hiding projection from x_1, . . . , x_n to y_1, . . . , y_k. Then

E[L(φ|𝐩)] = O( k⁴ · p² · d² · s + k² · p · √s ).
The crux of the proof of Theorem 3.12 is that hiding projections can be converted into fixing projections. This is captured by the following result.

Lemma 5.1. Let 𝝅 be a p-hiding projection from x₁, …, xₙ to y₁, …, y_m. Then, there exists a (4m² · p)-fixing projection 𝝅′ from x₁, …, xₙ to y₁, …, y_m and an event H of probability at least 1/2 such that 𝝅′ = 𝝅 conditioned on H. Furthermore, the event H is independent of 𝝅.
We prove Lemma 5.1 in Section 5.1. By combining Lemma 5.1 with our shrinkage theorem for fixing projections, we obtain the following shrinkage theorem for hiding projections.

Theorem 3.12. Let φ be a formula of size s and depth d over the variables y₁, …, y_m, and let 𝝅 be a p-hiding projection. Then

E[L(φ|𝝅)] = O(m⁴ · p² · s + m² · p · √s · d).
Proof. Let 𝝅′ and H be the (4m² · p)-fixing projection and event that are obtained from 𝝅 by Lemma 5.1. Since Pr[H] ≥ 1/2, we have

E[L(φ|𝝅′)] ≥ Pr[H] · E[L(φ|𝝅′) | H] ≥ (1/2) · E[L(φ|𝝅) | H] = (1/2) · E[L(φ|𝝅)],

where the second inequality holds since conditioned on H it holds that 𝝅′ = 𝝅, and the last equality holds since the event H is independent of 𝝅. On the other hand, by applying Theorem 3.6 to 𝝅′, we obtain that

E[L(φ|𝝅′)] = O(m⁴ · p² · s + m² · p · √s · d).

The theorem follows by combining the two bounds.
5.1 Proof of Lemma 5.1
We use the following straightforward generalization of the property of hiding projections.
Claim 5.2. Let 𝝅 be a (p₀, p₁)-hiding projection, and let 𝚷 be a random set of projections that is independent of 𝝅. Then, for every bit b ∈ {0, 1} and all variables x_i, y_j, we have

Pr[𝝅(x_i) ∈ {y_j, ȳ_j} | 𝝅_{y_j←b} ∈ 𝚷] ≤ p_b.
Proof. Let 𝝅 and 𝚷 be as in the claim, and let b ∈ {0, 1}. We have

Pr[𝝅(x_i) ∈ {y_j, ȳ_j} | 𝝅_{y_j←b} ∈ 𝚷]
= Σ_π Pr[𝝅(x_i) ∈ {y_j, ȳ_j} | 𝝅_{y_j←b} = π and π ∈ 𝚷] · Pr[𝝅_{y_j←b} = π | 𝝅_{y_j←b} ∈ 𝚷]
= Σ_π Pr[𝝅(x_i) ∈ {y_j, ȳ_j} | 𝝅_{y_j←b} = π] · Pr[𝝅_{y_j←b} = π | 𝝅_{y_j←b} ∈ 𝚷]   (𝝅 and 𝚷 are independent)
≤ Σ_π p_b · Pr[𝝅_{y_j←b} = π | 𝝅_{y_j←b} ∈ 𝚷]   (𝝅 is (p₀, p₁)-hiding)
= p_b.
We turn to proving Lemma 5.1, restated next.

Lemma 5.1. Let 𝝅 be a p-hiding projection from x₁, …, xₙ to y₁, …, y_m. Then, there exists a (4m² · p)-fixing projection 𝝅′ from x₁, …, xₙ to y₁, …, y_m and an event H of probability at least 1/2 such that 𝝅′ = 𝝅 conditioned on H. Furthermore, the event H is independent of 𝝅.
Suppose that 𝝅 is a (p₀, p₁)-hiding projection from x₁, …, xₙ to y₁, …, y_m. Let ρ be a (1 − 1/(2m))-random restriction over y₁, …, y_m, i. e., ρ is the random projection from y₁, …, y_m to y₁, …, y_m that assigns each y_j independently as follows:

ρ(y_j) = y_j with probability 1 − 1/(2m),
ρ(y_j) = 0 with probability 1/(4m),
ρ(y_j) = 1 with probability 1/(4m).
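As a quick sanity check on these parameters (a sketch of ours, not part of the proof), the three probabilities sum to 1, and the probability that ρ keeps every one of y₁, …, y_m alive is (1 − 1/(2m))^m ≥ 1/2 for all m ≥ 1:

```python
def rho_probs(m):
    # keep y_j, set it to 0, set it to 1 -- as in the definition of rho
    return (1 - 1 / (2 * m), 1 / (4 * m), 1 / (4 * m))

for m in range(1, 2001):
    keep, zero, one = rho_probs(m)
    assert abs(keep + zero + one - 1) < 1e-12
    # Pr[rho keeps every y_j alive] = keep**m >= 1/2
    assert keep ** m >= 0.5
```

This is exactly the event H used below, whose probability the proof bounds by 1 − m/(2m) = 1/2.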
For convenience, we define ρ(0) = 0, ρ(1) = 1, and ρ(ȳ_j) = ¬ρ(y_j) for every j ∈ [m]. We now choose the random projection 𝝅′ to be the composition ρ ∘ 𝝅, and define the event H to be the event that ρ(y_j) = y_j for every j ∈ [m]. Observe that the event 𝝅′ = 𝝅 occurs whenever H occurs, and moreover

Pr[H] = (1 − 1/(2m))^m ≥ 1 − m/(2m) = 1/2,

as required. Moreover, the event H is independent of 𝝅, since ρ is independent of 𝝅. We show that 𝝅′ is a (4 · m² · p₀, 4 · m² · p₁)-fixing projection. To this end, we should show that for every projection π′, every bit b ∈ {0, 1}, and every variable x_i,

Pr[𝝅′(x_i) ∈ {0, 1} and 𝝅′_{var(𝝅′(x_i))←b} = π′] ≤ 4 · m² · p_b · Pr[𝝅′ = π′].   (5.1)
This is implied by the following inequality, which we will prove for all j ∈ [m]:

Pr[𝝅′(x_i) ∈ {y_j, ȳ_j} and 𝝅′_{y_j←b} = π′] ≤ 4 · m · p_b · Pr[𝝅′ = π′].   (5.2)
Let j ∈ [m] and let ρ_{−j} be the random projection obtained by restricting ρ to the domain {y₁, …, y_m} \ {y_j}. First, observe that if ρ(y_j) = b then 𝝅′ = ρ ∘ 𝝅 = ρ_{−j} ∘ 𝝅_{y_j←b}, and therefore

Pr[𝝅′ = π′] ≥ Pr[ρ(y_j) = b and ρ_{−j} ∘ 𝝅_{y_j←b} = π′] = (1/(4m)) · Pr[ρ_{−j} ∘ 𝝅_{y_j←b} = π′],

where the equality holds since ρ(y_j) is independent of ρ_{−j} and 𝝅. In the rest of this section we will prove the following inequality, which together with the last inequality will imply Equation (5.2):

Pr[𝝅′(x_i) ∈ {y_j, ȳ_j} and 𝝅′_{y_j←b} = π′] ≤ p_b · Pr[ρ_{−j} ∘ 𝝅_{y_j←b} = π′].   (5.3)
To this end, observe that the event 𝝅′(x_i) ∈ {y_j, ȳ_j} happens if and only if 𝝅(x_i) ∈ {y_j, ȳ_j} and ρ(y_j) = y_j, and therefore

Pr[𝝅′(x_i) ∈ {y_j, ȳ_j} and 𝝅′_{y_j←b} = π′]
= Pr[𝝅(x_i) ∈ {y_j, ȳ_j} and ρ(y_j) = y_j and 𝝅′_{y_j←b} = π′].

Next, observe that if ρ(y_j) = y_j then 𝝅′_{y_j←b} = ρ_{−j} ∘ 𝝅_{y_j←b}. It follows that the latter expression is equal to

Pr[𝝅(x_i) ∈ {y_j, ȳ_j} and ρ(y_j) = y_j and ρ_{−j} ∘ 𝝅_{y_j←b} = π′]
≤ Pr[𝝅(x_i) ∈ {y_j, ȳ_j} and ρ_{−j} ∘ 𝝅_{y_j←b} = π′]
= Pr[𝝅(x_i) ∈ {y_j, ȳ_j} | ρ_{−j} ∘ 𝝅_{y_j←b} = π′] · Pr[ρ_{−j} ∘ 𝝅_{y_j←b} = π′]
≤ p_b · Pr[ρ_{−j} ∘ 𝝅_{y_j←b} = π′],

where the last inequality follows by applying Claim 5.2 with 𝚷 being the set of projections π that satisfy ρ_{−j} ∘ π = π′. This concludes the proof of Equation (5.3).
Remark 5.3. We can improve the bound m² in the statement of Lemma 5.1 to m · k, where k is the maximal number of the variables y₁, …, y_m that could appear in any single position of 𝝅. The reason is that in this case, the transition from Equation (5.2) to Equation (5.1) incurs a factor of k rather than m. This is useful, for example, since the random projections 𝝅 of Section 8.1 have the feature that for each i ∈ [n] there is a unique j ∈ [m] such that 𝝅(x_i) ∈ {0, 1, y_j, ȳ_j}, and so for these projections k = 1.
6 Joining projections
In this section, we define a join operation on fixing and hiding projections, and show that
it preserves the corresponding properties. This operation provides a convenient tool for
constructing random projections, and will be used in our applications.
Definition 6.1. Let α be a projection from x₁, …, x_{n_α} to y₁, …, y_{m_α}, and let β be a projection from w₁, …, w_{n_β} to z₁, …, z_{m_β}. The join α ⊎ β is the projection from x₁, …, x_{n_α}, w₁, …, w_{n_β} to y₁, …, y_{m_α}, z₁, …, z_{m_β} obtained by joining α and β together in the obvious way.
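The join is nothing more than running the two projections side by side on disjoint variable sets; a minimal sketch (ours, with dictionaries standing in for projections):

```python
def join(alpha, beta):
    """Join of two projections with disjoint domains (and disjoint ranges),
    represented as dicts from input-variable names to output values."""
    assert not set(alpha) & set(beta)   # the domains must be disjoint
    combined = dict(alpha)
    combined.update(beta)
    return combined

alpha = {'x1': 'y1', 'x2': 0}   # maps x-variables to y-literals/constants
beta = {'w1': 1, 'w2': 'z1'}    # maps w-variables to z-literals/constants
gamma = join(alpha, beta)
assert gamma == {'x1': 'y1', 'x2': 0, 'w1': 1, 'w2': 'z1'}
```

The lemmas below show that this trivial operation preserves both the fixing and the hiding properties, precisely because the two halves act on disjoint ranges.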
Lemma 6.2. Let 𝜶 and 𝜷 be independent random projections. If 𝜶 and 𝜷 are (p₀, p₁)-fixing projections, then so is 𝜶 ⊎ 𝜷.

Proof. Let 𝜶 and 𝜷 be (p₀, p₁)-fixing projections, and let 𝜸 = 𝜶 ⊎ 𝜷. We prove that 𝜸 is a (p₀, p₁)-fixing projection. Let α be a projection from x₁, …, x_{n_α} to y₁, …, y_{m_α}, let β be a projection from w₁, …, w_{n_β} to z₁, …, z_{m_β}, and let γ = α ⊎ β. We should show that for every b ∈ {0, 1} and every input variable u (either x_i or w_i) we have

Pr[𝜸(u) ∈ {0, 1} and 𝜸_{var(𝜸(u))←b} = γ] ≤ p_b · Pr[𝜸 = γ].

Let b ∈ {0, 1} be a bit, and let u be an input variable. Assume that u = x_i for some i ∈ [n_α] (if u = w_i for some i ∈ [n_β], the proof is similar). The above equation is therefore equivalent to

Pr[𝜶(x_i) ∈ {0, 1} and 𝜶_{var(𝜶(x_i))←b} = α and 𝜷_{var(𝜶(x_i))←b} = β] ≤ p_b · Pr[𝜶 = α and 𝜷 = β].

Since the ranges of 𝜶 and 𝜷 are disjoint, the variable var(𝜶(x_i)) does not appear in the range of 𝜷, and therefore 𝜷_{var(𝜶(x_i))←b} = 𝜷. The independence of 𝜶 and 𝜷 shows that the above inequality is equivalent to

Pr[𝜶(x_i) ∈ {0, 1} and 𝜶_{var(𝜶(x_i))←b} = α] · Pr[𝜷 = β] ≤ p_b · Pr[𝜶 = α] · Pr[𝜷 = β],

which follows from 𝜶 being (p₀, p₁)-fixing.
Lemma 6.3. Let 𝜶 and 𝜷 be independent random projections. If 𝜶 and 𝜷 are (p₀, p₁)-hiding projections, then so is 𝜶 ⊎ 𝜷.

Proof. Let 𝜶 and 𝜷 be (p₀, p₁)-hiding projections, and let 𝜸 = 𝜶 ⊎ 𝜷. We prove that 𝜸 is a (p₀, p₁)-hiding projection. Let α be a projection from x₁, …, x_{n_α} to y₁, …, y_{m_α}, let β be a projection from
w₁, …, w_{n_β} to z₁, …, z_{m_β}, and let γ = α ⊎ β. We should show that for every b ∈ {0, 1}, every input variable u (either x_i or w_i), and every output variable v (either y_j or z_j) we have

Pr[𝜸(u) ∈ {v, v̄} | 𝜸_{v←b} = γ] ≤ p_b.

Let b ∈ {0, 1} be a bit, let u be an input variable, and let v be an output variable. Assume that u = x_i for some i ∈ [n_α] (if u = w_i for some i ∈ [n_β], the proof is similar). In this case, we may assume that v = y_j for some j ∈ [m_α], since otherwise the probability on the left-hand side is 0. Thus, the above equation is equivalent to

Pr[𝜶(x_i) ∈ {y_j, ȳ_j} | 𝜶_{y_j←b} = α and 𝜷_{y_j←b} = β] ≤ p_b.

Since 𝜶 and 𝜷 are independent, whenever the conditioned event has positive probability, we have

Pr[𝜶(x_i) ∈ {y_j, ȳ_j} | 𝜶_{y_j←b} = α and 𝜷_{y_j←b} = β] = Pr[𝜶(x_i) ∈ {y_j, ȳ_j} | 𝜶_{y_j←b} = α] ≤ p_b,

where the inequality holds since 𝜶 is (p₀, p₁)-hiding.
7 Hiding projections from complexity measures
In this section we define a generalization of the unweighted quantum adversary bound due to Ambainis [2], and show how it can be used for constructing hiding projections. We also derive a relationship between this measure and the Khrapchenko bound [33], and conclude that the latter bound can be used for constructing hiding projections as well.
7.1 The soft-adversary method
Ambainis [2] defined the following complexity measure for Boolean functions, called the
unweighted quantum adversary bound, and proved that it is a lower bound on quantum query
complexity (for a definition of quantum query complexity, see [2] or [9]).
Definition 7.1. Let f : {0, 1}ⁿ → {0, 1} be a non-constant function. Let R ⊆ f⁻¹(1) × f⁻¹(0), and let A, B be the projections of R to the first and second coordinates, respectively. Let R(a, B) = {b ∈ B : (a, b) ∈ R}, let R_i(a, B) = {b ∈ B : (a, b) ∈ R, a_i ≠ b_i}, and define R(A, b), R_i(A, b) analogously. The unweighted quantum adversary bound of f is

Adv_u(f) = max_{R ⊆ f⁻¹(1)×f⁻¹(0)} √( (min_{a∈A} |R(a, B)| · min_{b∈B} |R(A, b)|) / (max_{a∈A, i∈[n]} |R_i(a, B)| · max_{b∈B, i∈[n]} |R_i(A, b)|) ).
Theorem 7.2 ([2]). The quantum query complexity of a non-constant function f : {0, 1}ⁿ → {0, 1} is Ω(Adv_u(f)).
We define the following generalization of the unweighted quantum adversary bound.
T HEORY OF C OMPUTING, Volume 19 (7), 2023, pp. 1β51 29
Y UVAL F ILMUS , O R M EIR , AND AVISHAY TAL
Definition 7.3. Let f : {0, 1}ⁿ → {0, 1} be a Boolean function. We define the soft-adversary bound of f, which we denote Adv_s(f), to be the maximum of the quantity

√( min_{a∈supp(𝒂), i∈[n]} (1 / Pr[𝒂_i ≠ 𝒃_i | 𝒂 = a]) · min_{b∈supp(𝒃), i∈[n]} (1 / Pr[𝒂_i ≠ 𝒃_i | 𝒃 = b]) )

over all distributions (𝒂, 𝒃) supported on f⁻¹(1) × f⁻¹(0).
We chose to call this notion the "soft-adversary bound" since we view it as a variant of the unweighted adversary bound in which, instead of choosing for each pair (a, b) whether it belongs to R or not (a "hard" decision), we assign each pair (a, b) a probability of being in the relation R (a "soft" decision). We now show that this bound indeed generalizes the unweighted quantum adversary method.
Proposition 7.4. Let f : {0, 1}ⁿ → {0, 1} be a non-constant function. Then Adv_s(f) ≥ Adv_u(f).

Proof. Let R ⊆ f⁻¹(1) × f⁻¹(0) be a relation that attains Adv_u(f). Let (𝒂, 𝒃) be a uniformly distributed element of R. Using Pr[𝒂_i ≠ 𝒃_i | 𝒂 = a] = |R_i(a, B)| / |R(a, B)| and Pr[𝒂_i ≠ 𝒃_i | 𝒃 = b] = |R_i(A, b)| / |R(A, b)|, it is easy to check that the quantity in the definition of Adv_s(f) is at least Adv_u(f).
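For a concrete feel for Definition 7.3, the following sketch (our own illustration, not from the paper) evaluates the quantity inside the maximum for one particular distribution: (𝒂, 𝒃) uniform over the pairs at Hamming distance exactly 1, for the parity function. For Parityₙ this single choice already certifies Adv_s(Parityₙ) ≥ n:

```python
from itertools import product
from math import sqrt

def soft_adversary_value(n, pairs):
    """Value of the quantity in Definition 7.3 for the uniform
    distribution over `pairs`, a list of (a, b) with f(a)=1, f(b)=0."""
    def min_inv(index):   # index=0 conditions on a, index=1 conditions on b
        best = float('inf')
        for w in set(p[index] for p in pairs):
            matching = [p for p in pairs if p[index] == w]
            for i in range(n):
                pr = sum(1 for a, b in matching if a[i] != b[i]) / len(matching)
                if pr > 0:
                    best = min(best, 1 / pr)
        return best
    return sqrt(min_inv(0) * min_inv(1))

n = 3
parity = lambda x: sum(x) % 2
ones = [x for x in product((0, 1), repeat=n) if parity(x) == 1]
zeros = [x for x in product((0, 1), repeat=n) if parity(x) == 0]
# all pairs at Hamming distance exactly 1
pairs = [(a, b) for a in ones for b in zeros
         if sum(u != v for u, v in zip(a, b)) == 1]
assert abs(soft_adversary_value(n, pairs) - n) < 1e-9
```

Given a, the other endpoint is uniform over the n neighbors of a, so each coordinate flips with probability exactly 1/n; both minima equal n and the square root gives n, matching the upper bound of Remark 7.8.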
Following [53, 38], it can be shown that the soft-adversary bound is a lower bound on formula complexity. We have the following result, which is proved in Section A.

Proposition 7.5. For any non-constant function f : {0, 1}ⁿ → {0, 1} we have L(f) ≥ Adv_s²(f).
Our motivation for introducing the soft-adversary bound is that it can be used for constructing hiding projections. We have the following result.

Lemma 7.6. Let f : {0, 1}ⁿ → {0, 1} be a non-constant Boolean function. There is a p-hiding projection 𝝅 to a single variable y such that p = 1/Adv_s(f) and such that f|𝝅 is a non-constant function with probability 1.
Proof. Let (𝒂, 𝒃) be a distribution supported on f⁻¹(1) × f⁻¹(0) which attains Adv_s(f). We construct a p-hiding projection 𝝅 from x₁, …, xₙ to a single variable y for p = 1/Adv_s(f), such that 𝝅_{y←1} = 𝒂 and 𝝅_{y←0} = 𝒃. Note that this implies in particular that f|𝝅 is a non-constant function, since 𝒂 ∈ f⁻¹(1) and 𝒃 ∈ f⁻¹(0). Specifically, for every input variable x_i of f, we define 𝝅 as follows:

• If 𝒂_i = 𝒃_i then 𝝅(x_i) = 𝒂_i.
• If 𝒂_i = 1 and 𝒃_i = 0 then 𝝅(x_i) = y.
• If 𝒂_i = 0 and 𝒃_i = 1 then 𝝅(x_i) = ȳ.
It is not hard to see that indeed 𝝅_{y←1} = 𝒂 and 𝝅_{y←0} = 𝒃. We now show that 𝝅 is (1/Adv_s(f))-hiding. To this end, we show that 𝝅 is (p₀, p₁)-hiding for

p₀ = max_{b∈supp(𝒃), i∈[n]} Pr[𝒂_i ≠ 𝒃_i | 𝒃 = b],   p₁ = max_{a∈supp(𝒂), i∈[n]} Pr[𝒂_i ≠ 𝒃_i | 𝒂 = a].

Note that indeed √(p₀ · p₁) = 1/Adv_s(f). In order to prove that 𝝅 is (p₀, p₁)-hiding, we need to prove that

Pr[𝝅(x_i) ∈ {y, ȳ} | 𝝅_{y←b} = π] ≤ p_b

for every i ∈ [n] and every b ∈ {0, 1}. To this end, observe that the event 𝝅(x_i) ∈ {y, ȳ} occurs if and only if 𝒂_i ≠ 𝒃_i, and that 𝝅_{y←1}, 𝝅_{y←0} are equal to the strings 𝒂, 𝒃 respectively. It follows that

Pr[𝝅(x_i) ∈ {y, ȳ} | 𝝅_{y←1} = a] = Pr[𝒂_i ≠ 𝒃_i | 𝒂 = a] ≤ p₁,
Pr[𝝅(x_i) ∈ {y, ȳ} | 𝝅_{y←0} = b] = Pr[𝒂_i ≠ 𝒃_i | 𝒃 = b] ≤ p₀,

as required.
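The case analysis above is easy to implement; here is a small sketch (ours) that builds the projection for one input pair (a, b) and checks that the substitutions y ← 1 and y ← 0 recover a and b:

```python
def hiding_projection(a, b):
    """Projection of Lemma 7.6 for a single input pair (a, b):
    each x_i is mapped to a constant, to y, or to the negated literal ~y."""
    proj = []
    for ai, bi in zip(a, b):
        if ai == bi:
            proj.append(ai)      # pi(x_i) = a_i = b_i
        elif (ai, bi) == (1, 0):
            proj.append('y')     # pi(x_i) = y
        else:
            proj.append('~y')    # pi(x_i) = negation of y
    return proj

def assign(proj, bit):
    """Apply the substitution y <- bit."""
    return tuple(bit if e == 'y' else (1 - bit if e == '~y' else e)
                 for e in proj)

a, b = (1, 0, 1, 1, 0), (1, 1, 0, 1, 0)
pi = hiding_projection(a, b)
assert assign(pi, 1) == a and assign(pi, 0) == b
```

In the lemma, (a, b) is sampled from the distribution attaining Adv_s(f), and the hiding parameters are exactly the maximal conditional disagreement probabilities displayed above.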
Remark 7.7. Several generalizations of Ambainis' original quantum adversary bound have appeared in the literature. Špalek and Szegedy [53] showed that all of these methods are equivalent, and so they are known collectively as the strong quantum adversary bound. The formulation of the strong quantum adversary bound in Ambainis [3] makes it clear that it subsumes our soft version. Laplante, Lee and Szegedy [38] showed that the strong quantum adversary bound is a lower bound on formula complexity, and they came up with an even stronger measure, maxPI², that is still a lower bound on formula complexity, but no longer a lower bound on quantum query complexity.

Høyer, Lee and Špalek [28] generalized the strong quantum adversary bound, coming up with a new measure known as the general adversary bound, which is a lower bound both on quantum query complexity and (after squaring) on formula complexity. Reichardt [49, 50, 39, 51] showed that the general adversary bound in fact coincides with quantum query complexity (up to constant factors). These results are described in a recent survey by Li and Shirley [40].
Remark 7.8. We note that Adv_s(f) ≤ n (always). In order to prove this, we show that each factor in Definition 7.3 is at most n. In particular, observe that it always holds that 𝒂 ≠ 𝒃, and hence for every possible value a of 𝒂, we have

Σ_{i=1}^{n} Pr[𝒂_i ≠ 𝒃_i | 𝒂 = a] ≥ Pr[∃ i ∈ [n] s.t. 𝒂_i ≠ 𝒃_i | 𝒂 = a] = 1.

By averaging, there must be a coordinate i ∈ [n] such that Pr[𝒂_i ≠ 𝒃_i | 𝒂 = a] ≥ 1/n, and therefore

min_{a∈supp(𝒂), i∈[n]} (1 / Pr[𝒂_i ≠ 𝒃_i | 𝒂 = a]) ≤ n.

The upper bound for the other factor in Definition 7.3 can be proved similarly.
7.2 Khrapchenko bound
Khrapchenko [33] defined the following complexity measure for Boolean functions, and proved
that it is a lower bound on formula complexity.
Definition 7.9. Let f : {0, 1}ⁿ → {0, 1} be a non-constant function. For every two sets A ⊆ f⁻¹(1) and B ⊆ f⁻¹(0), let E(A, B) be the set of pairs (a, b) ∈ A × B such that a and b differ on exactly one coordinate. The Khrapchenko bound of f is

Khr(f) = max_{A⊆f⁻¹(1), B⊆f⁻¹(0)} |E(A, B)|² / (|A| · |B|).
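As an illustration (ours, not from the paper), the following sketch evaluates the Khrapchenko ratio for the single choice A = f⁻¹(1), B = f⁻¹(0); for Parityₙ this choice already gives the well-known value n², and the maximization over all A, B can only be larger:

```python
from itertools import product

def khrapchenko_ratio(A, B):
    """|E(A, B)|^2 / (|A| * |B|), where E(A, B) counts the pairs
    at Hamming distance exactly 1."""
    edges = sum(1 for a in A for b in B
                if sum(u != v for u, v in zip(a, b)) == 1)
    return edges ** 2 / (len(A) * len(B))

n = 3
parity = lambda x: sum(x) % 2
A = [x for x in product((0, 1), repeat=n) if parity(x) == 1]
B = [x for x in product((0, 1), repeat=n) if parity(x) == 0]
assert khrapchenko_ratio(A, B) == n ** 2   # 12**2 / (4 * 4) = 9
```

Every neighbor of an odd-parity string has even parity, so each of the 2^{n−1} strings in A contributes n edges, which is what drives the ratio to n².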
Theorem 7.10 ([33]). For every non-constant function f : {0, 1}ⁿ → {0, 1} we have L(f) ≥ Khr(f).
Laplante, Lee, and Szegedy [38] observed that the strong adversary method generalizes the Khrapchenko bound. We show that this holds even for the unweighted adversary bound, up to a constant factor. Specifically, we have the following result, which is proved in Section C.

Proposition 7.11. For any non-constant function f : {0, 1}ⁿ → {0, 1} we have Adv_u(f) ≥ √(Khr(f))/4.
By combining Propositions 7.4 and 7.11 with Lemma 7.6, it follows that the Khrapchenko bound too can be used for constructing hiding projections.

Corollary 7.12. Let f : {0, 1}ⁿ → {0, 1} be a non-constant Boolean function. There is a p-hiding projection 𝝅 to a single variable y such that p = 4/√(Khr(f)) and such that f|𝝅 is a non-constant function with probability 1.
In Section B, we describe a new complexity measure, the min-entropy Khrapchenko bound,
which generalizes the Khrapchenko bound in the same way the soft-adversary bound generalizes
the unweighted adversary bound. As we show there, the min-entropy Khrapchenko bound
provides a simple way to prove lower bounds on the soft-adversary bound.
8 Applications
In this section, we apply our shrinkage theorems to obtain new results regarding the KRW
conjecture and the formula complexity of AC⁰. First, in Section 8.1, we prove the KRW conjecture for inner functions for which the soft-adversary bound is tight. We use this version of the KRW conjecture in Section 8.2 to prove cubic formula lower bounds for a function in AC⁰. Finally, we rederive some closely related known results in Section 8.3.
8.1 Application to the KRW conjecture
Given two Boolean functions f : {0, 1}^k → {0, 1} and g : {0, 1}ⁿ → {0, 1}, their (block-)composition is the function f ∘ g : {0, 1}^{k·n} → {0, 1} defined by

(f ∘ g)(x₁, …, x_k) = f(g(x₁), …, g(x_k)),
where x₁, …, x_k ∈ {0, 1}ⁿ. It is easy to see that L(f ∘ g) ≤ L(f) · L(g). The KRW conjecture [31] asserts that this is roughly optimal, namely, that L(f ∘ g) ≳ L(f) · L(g). In this section, we prove the following related result.
Theorem 8.1. Let f : {0, 1}^k → {0, 1} and g : {0, 1}ⁿ → {0, 1} be non-constant Boolean functions. Then,

L(f ∘ g)^{1 + O(√(1/log L(f∘g)))} ≥ Ω(1/k⁴) · L(f) · Adv_s²(g).
Proof. Let 𝝅 be the p-hiding projection constructed for g in Lemma 7.6 with p = 1/Adv_s(g), and recall that g|𝝅 is non-constant with probability 1. Let 𝝅₁, …, 𝝅_k be independent copies of 𝝅, and let 𝝅^k denote their k-fold join. Observe that 𝝅^k is p-hiding according to Lemma 6.3. Applying Corollary 3.13 and using the estimate x + √x = O(x + 1), we see that s = L(f ∘ g) satisfies

E[L((f ∘ g)|𝝅^k)] = O( k⁴ · (1/Adv_s²(g)) · s^{1 + O(√(1/log s))} ) + O(1).
On the other hand, it is not hard to see that

((f ∘ g)|𝝅^k)(x₁, …, x_k) = f( g|𝝅₁(x₁), …, g|𝝅_k(x_k) ),

and since g|𝝅₁, …, g|𝝅_k are non-constant, the function f reduces to (f ∘ g)|𝝅^k. In particular,

L(f) ≤ E[L((f ∘ g)|𝝅^k)] = O( k⁴ · (1/Adv_s²(g)) · s^{1 + O(√(1/log s))} ) + O(1).

This means that there exists a constant C > 0 such that for all non-constant f and g

L(f) ≤ k⁴ · (C/Adv_s²(g)) · L(f ∘ g)^{1 + C·√(1/log L(f∘g))} + C.
We consider two cases depending on whether L(f) ≥ 2C or not. If L(f) ≥ 2C, then the above inequality implies

L(f)/2 ≤ k⁴ · (C/Adv_s²(g)) · L(f ∘ g)^{1 + C·√(1/log L(f∘g))},

and we obtain the theorem by rearranging.

If L(f) < 2C, then we have L(f ∘ g) ≥ L(g), since g or ¬g is a sub-function of f ∘ g (as follows from the fact that f is non-constant). This means that in this case

L(f ∘ g) ≥ L(g) > L(f) · L(g) / (2C) ≥ L(f) · Adv_s²(g) / (2C),

which is stronger than the required conclusion.
A direct consequence of the theorem is the following corollary, which is a special case of the KRW conjecture for inner functions g for which the soft-adversary bound is almost tight.
Corollary 8.2. Let f : {0, 1}^k → {0, 1} and g : {0, 1}ⁿ → {0, 1} be non-constant Boolean functions such that Adv_s²(g) ≥ L(g)^{1 − O(√(1/log L(g)))}. Then,

L(f ∘ g) ≥ Ω(1/k⁴) · L(f)^{1 − O(√(1/log L(f)))} · L(g)^{1 − O(√(1/log L(g)))}.
Proof. By substituting the assumption on g in the bound of Theorem 8.1, we obtain that

L(f ∘ g)^{1 + O(√(1/log L(f∘g)))} ≥ Ω(1/k⁴) · L(f) · L(g)^{1 − O(√(1/log L(g)))}.

Moreover, since L(f ∘ g) ≤ L(f) · L(g), we have

L(f ∘ g)^{1 + O(√(1/log L(f∘g)))} ≤ L(f ∘ g) · (L(f) · L(g))^{O(√(1/log L(f∘g)))}
≤ L(f ∘ g) · L(f)^{O(√(1/log L(f∘g)))} · L(g)^{O(√(1/log L(f∘g)))}
≤ L(f ∘ g) · L(f)^{O(√(1/log L(f)))} · L(g)^{O(√(1/log L(g)))}.

The corollary follows by combining the two bounds.
Using the fact that Khr(g) = O(Adv_s²(g)) (Proposition 7.11), we obtain the following immediate corollary.

Corollary 8.3. Let f : {0, 1}^k → {0, 1} and g : {0, 1}ⁿ → {0, 1} be non-constant Boolean functions. Then,

L(f ∘ g)^{1 + O(√(1/log L(f∘g)))} ≥ Ω(1/k⁴) · L(f) · Khr(g).

Moreover, if Khr(g) ≥ L(g)^{1 − O(√(1/log L(g)))}, then

L(f ∘ g) ≥ Ω(1/k⁴) · L(f)^{1 − O(√(1/log L(f)))} · L(g)^{1 − O(√(1/log L(g)))}.
8.2 Formula lower bounds for AC⁰

In this section, we derive our second main application: cubic formula lower bounds for AC⁰. Formally, we have the following result.
Theorem 1.1. There exists a family of Boolean functions hₙ : {0, 1}ⁿ → {0, 1} for n ∈ ℕ such that

1. hₙ can be computed by uniform depth-4 unbounded fan-in formulas of size O(n³).
2. The formula size of hₙ is at least n^{3−o(1)}.
The function hₙ is constructed similarly to the Andreev function [4], with the parity function replaced by the surjectivity function [6], defined next.

Definition 8.4. Let Σ be a finite alphabet, and let n ∈ ℕ be such that n ≥ |Σ|. The surjectivity function Surj_{Σ,n} : Σⁿ → {0, 1} interprets its input as a function from [n] to Σ, and outputs whether the function is surjective. In other words, Surj(σ₁, …, σₙ) = 1 if and only if every symbol in Σ appears in (σ₁, …, σₙ).
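In code, the definition reads as follows (a sketch of ours); the second form mirrors the AND-of-ORs expression that reappears in Equation (8.1) below:

```python
def surj(sigma, alphabet):
    """Surjectivity: does every symbol of `alphabet` appear in `sigma`?"""
    return int(set(alphabet) <= set(sigma))

def surj_formula(sigma, alphabet):
    """The same function as an AND over symbols of an OR over positions."""
    return int(all(any(s == gamma for s in sigma) for gamma in alphabet))

alphabet = (1, 2, 3)
assert surj((1, 3, 2, 2), alphabet) == 1   # all of 1, 2, 3 appear
assert surj((1, 3, 3, 1), alphabet) == 0   # the symbol 2 is missing
assert all(surj(s, alphabet) == surj_formula(s, alphabet)
           for s in [(1, 2, 3), (2, 2, 2), (3, 1, 2, 1)])
```

The AND-of-ORs form is what makes Surj computable by a small constant-depth formula, which is the property the construction below exploits.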
In order to prove Theorem 1.1, we will use Theorem 8.1 with the inner function g being Surj_m. To this end, we use the following result of [6].

Lemma 8.5 ([6]). For every natural number N such that N ≥ 2, we have Adv_u(Surj_{[N], 2N−2}) = Ω(N).
Note that we can view the inputs of Surj_{Σ,n} as binary strings of length m = n · log|Σ| by fixing some arbitrary binary encoding of the symbols in Σ. In what follows, we extend the definition of Surj to every sufficiently large input length m as follows: Given m ∈ ℕ, we choose N to be the largest number such that (2N − 2) · log N ≤ m, and define Surj_m to be the function that interprets its input as a string in [N]^{2N−2} and computes Surj_{[N], 2N−2} on it. Lemma 8.5 now implies that for every sufficiently large m ∈ ℕ we have Adv_u(Surj_m) = Ω(m/log m). In particular, this implies the same bound for the soft-adversary bound, i. e., Adv_s(Surj_m) = Ω(m/log m). For completeness, we provide an alternative proof of this lower bound on Adv_s(Surj_m) in Section B.

We turn to prove Theorem 1.1 using our special case of the KRW conjecture (Corollary 8.2).
Proof of Theorem 1.1. We would like to construct, for every sufficiently large n ∈ ℕ, a function hₙ : {0, 1}ⁿ → {0, 1} that is computable by a formula with unbounded fan-in of depth 4 and size O(n³/log³ n), such that L(hₙ) = Ω(n^{3−o(1)}). We start by constructing the function hₙ for input lengths n of a special form, and then extend the construction to all sufficiently large input lengths.

First, assume that the input length n is of the form 2^k · (k + 1), where k ∈ ℕ is sufficiently large such that Surj_{2^k} is well-defined. The function hₙ : {0, 1}ⁿ → {0, 1} takes two inputs: the truth table of a function f : {0, 1}^k → {0, 1}, and k strings x₁, …, x_k ∈ {0, 1}^{2^k}. On such inputs, the function hₙ outputs

hₙ(f, x₁, …, x_k) = (f ∘ Surj_{2^k})(x₁, …, x_k) = f(Surj_{2^k}(x₁), …, Surj_{2^k}(x_k)).
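Concretely, for tiny parameters the construction looks as follows (our own sketch; the truth table of f is stored as a dictionary, and Surj is taken over the alphabet [N]):

```python
from itertools import product

def surj(sigma, alphabet):
    return int(set(alphabet) <= set(sigma))

def andreev_surj(f_table, blocks, alphabet):
    """h(f, x_1, ..., x_k) = f(Surj(x_1), ..., Surj(x_k))."""
    return f_table[tuple(surj(x, alphabet) for x in blocks)]

# toy parameters: k = 2, inner alphabet [2], blocks of length 2N - 2 = 2
k, alphabet = 2, (1, 2)
xor_table = {y: (y[0] ^ y[1]) for y in product((0, 1), repeat=k)}
# Surj(1, 2) = 1 (both symbols appear), Surj(1, 1) = 0
assert andreev_surj(xor_table, ((1, 2), (1, 1)), alphabet) == 1
assert andreev_surj(xor_table, ((2, 1), (1, 2)), alphabet) == 0
```

In the lower-bound argument below, the truth-table part f is hardwired to a maximally hard function, so hₙ collapses to the composition f ∘ Surj_{2^k} that Corollary 8.2 handles.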
It is easy to see that the function hₙ has input length n. We show that hₙ can be computed by a formula with unbounded fan-in of depth 4 and size O(n³/log³ n). We start by constructing a formula for Surj_{2^k}. Recall that Surj_{2^k} takes as input (the binary encoding of) a string (σ₁, …, σ_M) ∈ [N]^M, where N, M = Θ(2^k/k). For simplicity, let us assume that every number in [N] is encoded by exactly one binary string, and that if the binary input to Surj_{2^k} contains a binary string in {0, 1}^{⌈log N⌉}
that does not encode any number in [N], then we do not care what the formula outputs. Now, observe that

Surj_{2^k}(σ₁, …, σ_M) = ⋀_{γ∈[N]} ⋁_{t=1}^{M} (σ_t = γ).   (8.1)

It is not hard to see that the expression on the right-hand side can be implemented by a formula with unbounded fan-in of depth 3 and size O(N · M · log N) = O(2^{2k}/k). Next, observe that
hₙ(f, x₁, …, x_k) = ⋁_{y∈{0,1}^k} [ (f(y) = 1) ∧ ⋀_{t=1}^{k} (Surj_{2^k}(x_t) = y_t) ].

Using the foregoing formula for surjectivity, it is not hard to see that the expression on the right-hand side can be implemented by a formula with unbounded fan-in of depth 4 and size

O( 2^k · k · 2^{2k}/k ) = O(2^{3k}) = O(n³/log³ n),
as required.
Finally, we prove that L(hₙ) = Ω(n^{3−o(1)}). To this end, let us hardwire the input f to be some function from {0, 1}^k to {0, 1} such that L(f) = Ω(2^k/log k) (such a function exists by a well-known counting argument, see [30, Theorem 1.23]). After hardwiring the input f, the function hₙ becomes exactly the function f ∘ Surj_{2^k}. Now, by observing that Adv_s²(Surj_{2^k}) = Ω̃(L(Surj_{2^k})) and applying Corollary 8.2, it follows that

L(hₙ) ≥ Ω(1/k⁴) · L(f)^{1 − O(√(1/log L(f)))} · L(Surj_{2^k})^{1 − O(√(1/log L(Surj_{2^k})))}
≥ Ω(1/k⁴) · (2^k/log k)^{1 − o(1)} · (2^{2k}/k²)^{1 − o(1)}
≥ 2^{3k − o(k)} = n^{3−o(1)},
as required.
It remains to deal with input lengths n that are not of the form 2^k · (k + 1). For such input lengths n, we choose k to be the largest natural number such that 2^k · (k + 1) ≤ n, and proceed as before. It can be verified that for this choice of k we have 2^k · (k + 1) = Θ(n), and therefore all the foregoing asymptotic bounds continue to hold.
Remark 8.6. We note that our methods cannot prove formula lower bounds that are better than cubic. As explained in Remark 7.8, it holds that Adv_s²(f) ≤ n². Thus, one cannot expect to obtain a lower bound that is better than cubic by combining these measures with Andreev's argument.
8.3 Reproving known formula lower bounds
The proof of Theorem 1.1 combines a lower bound on Adv_s(Surj_m) with an upper bound on L(Surj_m). More generally, we can prove the following result along similar lines.
Theorem 8.7. Let g : {0, 1}ⁿ → {0, 1} be an arbitrary function, and let k be an integer satisfying k = log n + O(1). Let F : {0, 1}^{2^k + kn} → {0, 1} be a function on N = Θ(n log n) variables, whose input consists of a function f : {0, 1}^k → {0, 1} and k strings x₁, …, x_k ∈ {0, 1}ⁿ, given by

F(f, x₁, …, x_k) = (f ∘ g)(x₁, …, x_k).

We call F the g-based Andreev function.

1. If either Adv_s²(g) or Khr(g) is at least Ω(n^{2−o(1)}), then L(F) = Ω(n^{3−o(1)}).
2. If L(g) = O(n^{2+o(1)}), then L(F) = O(n^{3+o(1)}).
The proof of Theorem 8.7 is very similar to the proof of Theorem 1.1, and so we leave it to the reader (the only difference is that we may use Corollary 8.3 instead of Corollary 8.2). Using Theorem 8.7, we can derive two known special cases: the original Andreev function (in which g is Parity), and the variant considered in [17]. In particular, using the known facts that Khr(Parityₙ) ≥ n² and Khr(Majorityₙ) = Ω(n²) [33], we obtain the following results.
Corollary 8.8. The formula complexity of the Parityₙ-based Andreev function is Θ(n^{3±o(1)}).

Corollary 8.9. For odd n ≥ 3, let Majorityₙ : {0, 1}ⁿ → {0, 1} be the majority function. The formula complexity of the Majorityₙ-based Andreev function is Ω(n^{3−o(1)}).

The best known upper bound on the formula complexity of Majorityₙ is only O(n^{3.91}) [52], and so we do not expect Corollary 8.9 to be tight.
A Proof of Proposition 7.5
In this appendix, we prove Proposition 7.5, restated next.
Proposition 7.5. For any non-constant function f : {0, 1}ⁿ → {0, 1} we have L(f) ≥ Adv_s²(f).
Preliminaries
While the proof of Proposition 7.5 is fairly simple, it uses some basic concepts from information
theory which we review next.
Definition A.1. Let X, Y, Z be discrete random variables.

• The entropy of X is H(X) = E_{x←X}[log(1/Pr[X = x])].
• The conditional entropy of X given Y is H(X | Y) = E_{y←Y}[H(X | Y = y)].
• The mutual information between X and Y is I(X; Y) = H(X) − H(X | Y).
• The conditional mutual information between X and Y given Z is

I(X; Y | Z) = H(X | Z) − H(X | Y, Z).
We use the following basic facts from information theory (see [11] for proofs).

Fact A.2. Let W, X, Y, Z be discrete random variables with finite supports, and let S denote the support of X.

• It holds that 0 ≤ H(X) ≤ log|S|.
• It holds that 0 ≤ H(X | Y) ≤ H(X), where H(X | Y) = 0 if and only if Y determines X (i. e., X is a function of Y).
• If X determines Y conditioned on Z (i. e., Y is a function of X and Z), then H(X | Z) ≥ H(Y | Z).
• It holds that 0 ≤ I(X; Y) ≤ H(X). Similarly, we have 0 ≤ I(X; Y | Z) ≤ H(X | Z), where I(X; Y | Z) = 0 if and only if X and Y are independent conditioned on any value of Z.
• It holds that I(X; Y) = I(Y; X). Similarly, I(X; Y | Z) = I(Y; X | Z).
• The chain rule: It holds that

I(X; Y, Z | W) = I(X; Y | W) + I(X; Z | Y, W).
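These quantities are easy to compute numerically for a small joint distribution; the sketch below (ours) implements the definitions and checks the chain rule I(X; Y, Z) = I(X; Y) + I(X; Z | Y) — the unconditioned case, with W constant — on a sample distribution:

```python
from math import log2

def H(pmf):
    """Shannon entropy of a distribution given as {outcome: probability}."""
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def marginal(joint, coords):
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[c] for c in coords)
        out[key] = out.get(key, 0.0) + p
    return out

# a joint distribution over triples (x, y, z)
joint = {(0, 0, 0): 0.3, (0, 1, 1): 0.2, (1, 0, 1): 0.1,
         (1, 1, 0): 0.25, (1, 1, 1): 0.15}

Hx, Hy = H(marginal(joint, (0,))), H(marginal(joint, (1,)))
Hxy = H(marginal(joint, (0, 1)))
Hyz = H(marginal(joint, (1, 2)))
Hxyz = H(joint)

I_x_y = Hx + Hy - Hxy                    # I(X;Y)
I_x_z_given_y = Hxy + Hyz - Hy - Hxyz    # I(X;Z|Y)
I_x_yz = Hx + Hyz - Hxyz                 # I(X;Y,Z)
assert abs(I_x_yz - (I_x_y + I_x_z_given_y)) < 1e-9
```

Expanding each mutual information in terms of entropies shows the chain rule is an algebraic identity, which is why the numerical check holds for any joint distribution, not just this one.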
Finally, we use the following result from the theory of interactive information complexity.

Claim A.3 ([5, Fact 4.15]). Let Π be a deterministic protocol. Suppose we invoke Π on random inputs X and Y for Alice and Bob, respectively, and let ℓ denote the random leaf that Π reaches on those inputs. Then,

I(X, Y; ℓ) ≥ I(X; ℓ | Y) + I(Y; ℓ | X).
Proof sketch. Let T denote the transcript of the protocol that is associated with ℓ. We prove that I(X, Y; T) ≥ I(X; T | Y) + I(Y; T | X), as this is equivalent to the claim. Suppose Alice speaks first, and denote the (random) bit she sends by T₁. We show that I(X, Y; T₁) ≥ I(X; T₁ | Y) + I(Y; T₁ | X). Using the chain rule, we can write

I(X, Y; T₁) = I(Y; T₁) + I(X; T₁ | Y) ≥ I(X; T₁ | Y) = I(X; T₁ | Y) + I(Y; T₁ | X),

where the last equality follows since I(Y; T₁ | X) = 0, as Alice's message T₁ is independent of Y given her input X. Proceeding by induction on the coordinates of T using the chain rule finishes the proof.
The proof of Proposition 7.5

Our proof of Proposition 7.5 generalizes similar arguments in [32, 19]. Let Π be a protocol that solves KW_f. For every leaf ℓ of Π, we denote by i_ℓ the output of the protocol at leaf ℓ, so we have a_{i_ℓ} ≠ b_{i_ℓ} for any pair of inputs (a, b) that reach the leaf ℓ. We denote by L(Π) the set of leaves of Π, and prove that |L(Π)| ≥ Adv_s²(f). Let (𝒂, 𝒃) be a distribution for f that attains Adv_s(f), and let

p₀ = max_{b∈supp(𝒃), i∈[n]} Pr[𝒂_i ≠ 𝒃_i | 𝒃 = b],   p₁ = max_{a∈supp(𝒂), i∈[n]} Pr[𝒂_i ≠ 𝒃_i | 𝒂 = a].

Let ℓ be the leaf that Π reaches on input (𝒂, 𝒃). By Fact A.2, we have

I(ℓ; 𝒂, 𝒃) ≤ H(ℓ) ≤ log|L(Π)|.

On the other hand, Claim A.3 implies that

I(ℓ; 𝒂, 𝒃) ≥ I(ℓ; 𝒂 | 𝒃) + I(ℓ; 𝒃 | 𝒂).

Next, observe that 𝒂 and 𝒃 together determine ℓ, and that the event ℓ = ℓ' implies that 𝒂_{i_{ℓ'}} ≠ 𝒃_{i_{ℓ'}}. It follows that

I(ℓ; 𝒂 | 𝒃) = H(ℓ | 𝒃) − H(ℓ | 𝒂, 𝒃)   (by definition)
= H(ℓ | 𝒃)   (𝒂 and 𝒃 determine ℓ)
= E_{ℓ'←ℓ, b←𝒃}[ log(1 / Pr[ℓ = ℓ' | 𝒃 = b]) ]   (Definition A.1)
≥ E_{ℓ'←ℓ, b←𝒃}[ log(1 / Pr[𝒂_{i_{ℓ'}} ≠ 𝒃_{i_{ℓ'}} | 𝒃 = b]) ]
≥ E_{ℓ'←ℓ, b←𝒃}[ log(1/p₀) ]   (definition of p₀)
= log(1/p₀).

Similarly, it can be shown that I(ℓ; 𝒃 | 𝒂) ≥ log(1/p₁). By combining the foregoing equations, it follows that

log|L(Π)| ≥ I(ℓ; 𝒂 | 𝒃) + I(ℓ; 𝒃 | 𝒂) ≥ log(1/p₀) + log(1/p₁),

and thus

|L(Π)| ≥ (1/p₀) · (1/p₁) = Adv_s²(f),

as required.
Remark A.4. We note that Proposition 7.5 can also be proved by observing that the soft-adversary bound is a special case of the weighted adversary bound [3], which was shown to be a lower bound on formula complexity by [38]. Alternatively, one could prove Proposition 7.5 (up to a constant factor) by combining Lemma 7.6 with Lemma 5.1 and Lemma 4.1.
B The min-entropy Khrapchenko bound
In this appendix, we describe a generalization of the Khrapchenko bound that provides a simple
lower bound technique for the soft-adversary bound. We then show how this generalization
can be used to give an alternative proof for the lower bound on the soft-adversary bound of the
surjectivity function. We start by recalling the original Khrapchenko bound.
Definition 7.9. Let f : {0,1}^n → {0,1} be a non-constant function. For every two sets A ⊆ f^{-1}(1) and B ⊆ f^{-1}(0), let E(A, B) be the set of pairs (a, b) ∈ A × B such that a and b differ on exactly one coordinate. The Khrapchenko bound of f is

    Khr(f) = max_{A ⊆ f^{-1}(1), B ⊆ f^{-1}(0)}  |E(A, B)|^2 / (|A| · |B|).
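As a concrete illustration (ours, not part of the paper), the maximum in Definition 7.9 can be computed by brute force for very small n. For the parity function on two bits, the full cut is optimal and yields Khr(parity) = n^2 = 4:

```python
from itertools import combinations, product

def khr(f, n):
    """Brute-force Khrapchenko bound of f : {0,1}^n -> {0,1} (tiny n only)."""
    ones = [x for x in product([0, 1], repeat=n) if f(x) == 1]
    zeros = [x for x in product([0, 1], repeat=n) if f(x) == 0]

    def edges(A, B):
        # pairs (a, b) in A x B that differ on exactly one coordinate
        return sum(1 for a in A for b in B
                   if sum(ai != bi for ai, bi in zip(a, b)) == 1)

    best = 0.0
    for r in range(1, len(ones) + 1):
        for A in combinations(ones, r):
            for s in range(1, len(zeros) + 1):
                for B in combinations(zeros, s):
                    e = edges(A, B)
                    best = max(best, e * e / (len(A) * len(B)))
    return best

parity = lambda x: sum(x) % 2
assert khr(parity, 2) == 4.0                  # Khr(parity on n bits) = n^2
assert khr(lambda x: x[0] & x[1], 2) == 2.0   # two-bit AND
```

The enumeration over all cuts is exponential in |f^{-1}(1)| + |f^{-1}(0)|, so this is only a sanity check, not a usable algorithm.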
The Khrapchenko measure can be viewed as follows: Consider the subgraph of the Hamming cube that consists only of the cut between A and B. Then, the measure |E(A, B)|^2 / (|A| · |B|) is the product of the average degree of a vertex in A (which is |E(A, B)|/|A|) and the average degree of a vertex in B (which is |E(A, B)|/|B|). Note that the average degree of A can also be described as the average, over all strings a ∈ A, of the number of coordinates j ∈ [n] such that if we flip the j-th bit of a we get a string in B. We generalize the Khrapchenko measure as follows:
• Whereas the Khrapchenko bound maximizes over all cuts (A, B) of the Hamming cube with A ⊆ f^{-1}(1) and B ⊆ f^{-1}(0), we maximize over all distributions over edges (x, y) of the Hamming cube, where x ∈ f^{-1}(1) and y ∈ f^{-1}(0).
• Whereas the Khrapchenko bound considers the average number of coordinates j as described above, we consider the min-entropy of the coordinate j on which x, y differ.
• Whereas the Khrapchenko bound considers functions whose inputs are binary strings, we consider inputs that are strings over arbitrary finite alphabets.
Our generalization uses the following notion: The conditional min-entropy of a random variable X conditioned on a random variable Y is H_∞(X | Y) = min_{x,y} log(1/Pr[X = x | Y = y]). We can now define our generalization formally:

Definition B.1. Let Σ be a finite alphabet, and let f : Σ^n → {0,1} be a non-constant Boolean function. We say that a distribution (x, y) on f^{-1}(1) × f^{-1}(0) is a Khrapchenko distribution for f if x and y always differ on a unique coordinate j ∈ [n]. We define the min-entropy Khrapchenko bound of f, which we denote Khr_{H_∞}(f), to be the maximum of the quantity

    2^{H_∞(j|x) + H_∞(j|y)}

over all Khrapchenko distributions (x, y) for f.
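For intuition (an illustration we add here): when (x, y) is uniform over the edges of a cut (A, B) of the binary cube, the coordinate j is uniform over the edges incident to the conditioned endpoint, so 2^{H_∞(j|x)} is the minimum degree on the A side, and similarly for B. The sketch below evaluates the quantity of Definition B.1 for this distribution on the full cut of two-bit parity.

```python
from itertools import product

# The cut of two-bit parity: A = parity^{-1}(1), B = parity^{-1}(0).
A = [x for x in product([0, 1], repeat=2) if sum(x) % 2 == 1]
B = [x for x in product([0, 1], repeat=2) if sum(x) % 2 == 0]
edges = [(a, b) for a in A for b in B
         if sum(ai != bi for ai, bi in zip(a, b)) == 1]

# For (x, y) uniform over the edges, Pr[j = j0 | x = a] = 1/deg(a), so
# 2^{H_inf(j|x)} is the minimum degree on the A side (similarly for B).
deg_A = {a: sum(1 for (a2, _) in edges if a2 == a) for a in A}
deg_B = {b: sum(1 for (_, b2) in edges if b2 == b) for b in B}
bound = min(deg_A.values()) * min(deg_B.values())  # 2^{H_inf(j|x)+H_inf(j|y)}
assert bound == 4  # the cut is regular, so this matches Khr(parity) = 4
```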
As noted above, the min-entropy Khrapchenko bound can be used to lower bound the soft-adversary bound. In order to define the adversary bound of a function f : Σ^n → {0,1} over a non-Boolean alphabet Σ, we view f as if its input is a binary string in {0,1}^{n·⌈log|Σ|⌉} that encodes a string in Σ^n via some fixed encoding (the choice of the encoding does not matter for what follows).
Lemma B.2. For every non-constant function f : Σ^n → {0,1} we have Adv_s(f) ≥ √(Khr_{H_∞}(f)).

Proof. Let (x, y) be a Khrapchenko distribution that attains Khr_{H_∞}(f). Let j ∈ [n] be the unique index at which x, y differ, and let x_{i,k}, y_{i,k} be the k-th bits of the binary encodings of x_i, y_i ∈ Σ. For any a ∈ supp(x) and (i, k) ∈ [n] × [⌈log|Σ|⌉], we have

    Pr[x_{i,k} ≠ y_{i,k} | x = a] ≤ Pr[x_i ≠ y_i | x = a] = Pr[j = i | x = a] ≤ 2^{−H_∞(j|x)}.

It follows that the first factor in the definition of Adv_s is at least 2^{H_∞(j|x)}. Similarly, the second factor is at least 2^{H_∞(j|y)}, and so the entire expression is at least √(Khr_{H_∞}(f)).
Similarly to the original Khrapchenko bound, the min-entropy Khrapchenko bound, too, gives a lower bound on the formula complexity L(f), which can be proved by combining Proposition 7.5 and Lemma B.2. Again, in order to define the formula complexity of a function f : Σ^n → {0,1} over a non-Boolean alphabet, we view f as if its input is a binary string in {0,1}^{n·⌈log|Σ|⌉}.
Corollary B.3. Let Σ be a finite alphabet, and let f : Σ^n → {0,1} be a non-constant Boolean function. Then L(f) ≥ Khr_{H_∞}(f).
By combining Lemmas 7.6 and B.2, it also follows that the min-entropy Khrapchenko bound can be used for constructing hiding projections.

Corollary B.4. Let Σ be a finite alphabet, and let f : Σ^n → {0,1} be a non-constant Boolean function. There is a p-hiding projection ρ to a single variable y such that p = 1/√(Khr_{H_∞}(f)) and such that f|_ρ is a non-constant function with probability 1.
B.1 Lower bound on the surjectivity function
In order to demonstrate the usefulness of the min-entropy Khrapchenko bound, we use it to prove a lower bound for the surjectivity function from Section 8.2. This implies a similar bound on the soft-adversary bound of the surjectivity function, and provides an alternative way of proving our main theorem.

Definition 8.4. Let Σ be a finite alphabet, and let n ∈ ℕ be such that n ≥ |Σ|. The surjectivity function Surj_{Σ,n} : Σ^n → {0,1} interprets its input as a function from [n] to Σ, and outputs whether the function is surjective. In other words, Surj(x_1, ..., x_n) = 1 if and only if every symbol in Σ appears in (x_1, ..., x_n).
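A direct implementation of Definition 8.4 (our sketch, with names of our own choosing):

```python
def surj(x, sigma):
    """Surj_{Sigma,n}(x) = 1 iff every symbol of the alphabet appears in x."""
    return 1 if set(sigma) <= set(x) else 0

sigma = [1, 2, 3]
assert surj((1, 2, 3, 3), sigma) == 1   # every symbol appears
assert surj((1, 2, 2, 2), sigma) == 0   # the symbol 3 is missing
```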
Lemma B.5. For every m ∈ ℕ, we have Khr_{H_∞}(Surj_{[2m+1],3m+1}) = Ω(m^2).
Proof. Let m ∈ ℕ, let Σ = [2m+1], and let Surj = Surj_{Σ,3m+1}. We define a Khrapchenko distribution (x, y) for Surj as follows. The input y ∈ Surj^{-1}(0) is a uniformly distributed string in Σ^{3m+1} in which m+1 of the symbols in Σ appear exactly twice, m−1 of the symbols in Σ appear exactly once, and the remaining symbol in Σ does not appear at all. The string x is sampled by choosing uniformly at random a coordinate j such that y_j is one of the symbols that appear twice in y, and replacing y_j with the unique symbol in Σ that does not appear in y.

We turn to bound H_∞(j | y) and H_∞(j | x). First, observe that conditioned on any choice of y, the coordinate j is uniformly distributed among 2m+2 coordinates, and therefore

    H_∞(j | y) = log(2m+2).

In order to bound H_∞(j | x), observe that x is distributed like a uniformly distributed string in Σ^{3m+1} in which m+1 of the symbols in Σ appear exactly once, and the remaining m symbols in Σ appear exactly twice. Conditioned on any choice of x, the coordinate j is uniformly distributed over the m+1 coordinates of symbols that appear exactly once. It follows that

    H_∞(j | x) ≥ log(m+1).

We conclude that

    Khr_{H_∞}(Surj) ≥ 2^{H_∞(j|x) + H_∞(j|y)} ≥ (m+1) · (2m+2) = Ω(m^2).
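The counting in the proof can be checked programmatically. In the sketch below (ours; the string y is one arbitrary representative of the distribution) we fix m = 2, so Σ = [5] and the input length is 3m + 1 = 7, and verify that y is non-surjective, that exactly 2m + 2 coordinates hold a twice-occurring symbol, and that replacing any one of them by the missing symbol yields a surjective string.

```python
m = 2
sigma = set(range(1, 2 * m + 2))      # the alphabet [2m + 1] = {1, ..., 5}
y = (1, 1, 2, 2, 3, 3, 4)             # 3 symbols twice, 1 symbol once, 5 missing
assert len(y) == 3 * m + 1
assert sigma - set(y) == {5}          # y is non-surjective: Surj(y) = 0
missing = 5

doubled = [j for j in range(len(y)) if y.count(y[j]) == 2]
assert len(doubled) == 2 * m + 2      # j is uniform over 2m + 2 coordinates

for j in doubled:
    x = y[:j] + (missing,) + y[j + 1:]   # replace y_j by the missing symbol
    assert set(x) == sigma               # x is surjective: Surj(x) = 1
```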
Lemma B.5 implies, via Corollary B.3, a quadratic lower bound on the formula complexity of Surj. The same result also follows from the lower bound on the adversary bound of Surj due to [6] (Lemma 8.5).
B.1.1 Comparison to the Khrapchenko bound
We now further compare the min-entropy Khrapchenko bound to the original Khrapchenko bound. Observe that when Σ = {0,1}, the min-entropy H_∞(j | x) is exactly the min-entropy of a random neighbor of x. In particular, when (x, y) is the uniform distribution over a set of edges, the min-entropy H_∞(j | x) is the logarithm of the minimal degree of a vertex x. Moreover, if the latter set of edges also induces a regular graph, then the measure 2^{H_∞(j|x)+H_∞(j|y)} coincides exactly with the original measure |E(A, B)|^2/(|A| · |B|) of Khrapchenko. More generally, when Σ = {0,1}, the bound Khr_{H_∞}(f) is within a constant factor of the original Khrapchenko bound, as we show in Section C.
Proposition B.6. For any non-constant function f : {0,1}^n → {0,1} it holds that

    Khr(f)/4 ≤ Khr_{H_∞}(f) ≤ Khr(f).
Unfortunately, when Σ is a larger alphabet, the connection between Khr_{H_∞}(f) and the measure |E(A, B)|^2/(|A||B|) is not so clean. Specifically, the min-entropy H_∞(j | x) has no clear connection to the degree of x, since the vertex x may have multiple neighbors that correspond to the same coordinate j.
B.1.2 Previous work on the Khrapchenko bound
Several versions of the Khrapchenko bound appeared in the literature: Zwick [58] generalized
the Khrapchenko bound such that different input coordinates can be given different weights,
and Koutsoupias [36] gave a spectral generalization of the bound. The paper of HΓ₯stad [23]
observed that his analogue of Lemma 4.1 can be viewed as a generalization of the Khrapchenko
bound. Ganor, Komargodski, and Raz [18] considered a variant of the Khrapchenko bound
in which the edges of the Boolean hypercube are replaced with random walks on the noisy
hypercube. Of particular relevance is a paper of Laplante, Lee, and Szegedy [38] that defined a
complexity measure that is very similar to our min-entropy Khrapchenko bound, except that
the entropy is replaced with Kolmogorov complexity.
B.1.3 An entropy Khrapchenko bound
It is possible to generalize the complexity measure Khr_{H_∞} by replacing the min-entropy in Definition B.1 with Shannon entropy. Such a measure would still give a lower bound on formula complexity; specifically, the proof of Corollary B.3 would go through without a change. However, we do not know how to use such a measure for constructing hiding projections as in Corollary B.4. We note that it is easy to prove that such a measure is an upper bound on Khr_{H_∞}.
C Proof of Propositions B.6 and 7.11

In this appendix we prove Propositions B.6 and 7.11, restated next.

Proposition B.6. For any f : {0,1}^n → {0,1} we have Khr(f)/4 ≤ Khr_{H_∞}(f) ≤ Khr(f).

Proposition 7.11. For any f : {0,1}^n → {0,1} we have √(Khr(f)/4) ≤ Adv_u(f).

We relate Khr(f) to Khr_{H_∞}(f) using an auxiliary measure Khr_min(f):

    Khr_min(f) = max_{A ⊆ f^{-1}(1), B ⊆ f^{-1}(0)} ( min_{a ∈ A} |E(a, B)| · min_{b ∈ B} |E(A, b)| ),

where E(a, B) = E({a}, B) and similarly E(A, b) = E(A, {b}).

We first show that Khr_min(f) and Khr(f) are equal up to constants.

Claim C.1. For any f : Σ^n → {0,1} we have Khr(f)/4 ≤ Khr_min(f) ≤ Khr(f).

Proof. The inequality Khr_min(f) ≤ Khr(f) is simple, since min_{a ∈ A} |E(a, B)| ≤ |E(A, B)|/|A| and similarly min_{b ∈ B} |E(A, b)| ≤ |E(A, B)|/|B|.

The other direction is more subtle. For ease of notation, for any sets A ⊆ f^{-1}(1) and B ⊆ f^{-1}(0) we write Khr(A, B) = |E(A, B)|^2/(|A||B|). Thus,

    Khr(f) = max_{A ⊆ f^{-1}(1), B ⊆ f^{-1}(0)} Khr(A, B).    (C.1)
Assume that A and B are sets that maximize Khr(A, B) in Equation (C.1). We show that

    ∀a ∈ A : |E(a, B)| ≥ |E(A, B)|/(2|A|),    (C.2)
    ∀b ∈ B : |E(A, b)| ≥ |E(A, B)|/(2|B|).    (C.3)

In words, the min-degree is at least half the average degree. Before showing why Equations (C.2) and (C.3) hold, we show that they imply the statement of the claim. Indeed,

    Khr_min(f) ≥ min_{a ∈ A} |E(a, B)| · min_{b ∈ B} |E(A, b)| ≥ (|E(A, B)|/(2|A|)) · (|E(A, B)|/(2|B|)) = Khr(A, B)/4 = Khr(f)/4.

It remains to show that Equations (C.2) and (C.3) hold. We focus on Equation (C.2) due to symmetry. Assume by contradiction that there exists an a ∈ A for which

    |E(a, B)| < |E(A, B)|/(2|A|).

It must be the case that |A| > 1, as otherwise A contains only one element and |E(a, B)| = |E(A, B)|/|A| for this element. Consider now the set A' = A \ {a}, which is non-empty by the above discussion. We claim that Khr(A', B) > Khr(A, B), contradicting the choice of A, B. Indeed,

    Khr(A', B) = |E(A', B)|^2 / (|A'||B|)
               = (|E(A, B)| − |E(a, B)|)^2 / ((|A| − 1)|B|)
               > (|E(A, B)| − |E(A, B)|/(2|A|))^2 / ((|A| − 1)|B|)    (by the assumption on a)
               = |E(A, B)|^2 · (1 − 1/(2|A|))^2 / (|A||B| · (1 − 1/|A|))
               = Khr(A, B) · (1 − 1/(2|A|))^2 / (1 − 1/|A|)
               > Khr(A, B),

where the last inequality holds since (1 − 1/(2|A|))^2 > 1 − 1/|A|.
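Claim C.1 can be sanity-checked exhaustively for n = 2 (our illustration; note that the cut maximizing Khr_min need not be the one maximizing Khr, and the claim compares the two maxima). The script below enumerates all non-constant functions on two bits and all cuts:

```python
from itertools import combinations, product

def cut_measures(f, n):
    """Return (Khr(f), Khr_min(f)) by brute force over all cuts (tiny n only)."""
    ones = [x for x in product([0, 1], repeat=n) if f(x) == 1]
    zeros = [x for x in product([0, 1], repeat=n) if f(x) == 0]

    def e(v, S):
        # number of neighbors of v (Hamming distance 1) inside the set S
        return sum(1 for u in S if sum(vi != ui for vi, ui in zip(v, u)) == 1)

    khr, khr_min = 0.0, 0.0
    for r in range(1, len(ones) + 1):
        for A in combinations(ones, r):
            for s in range(1, len(zeros) + 1):
                for B in combinations(zeros, s):
                    E = sum(e(a, B) for a in A)
                    khr = max(khr, E * E / (len(A) * len(B)))
                    khr_min = max(khr_min, min(e(a, B) for a in A)
                                            * min(e(b, A) for b in B))
    return khr, khr_min

for bits in product([0, 1], repeat=4):           # all f : {0,1}^2 -> {0,1}
    if len(set(bits)) == 1:
        continue                                 # skip the constant functions
    f = lambda x, bits=bits: bits[2 * x[0] + x[1]]
    khr, khr_min = cut_measures(f, 2)
    assert khr / 4 <= khr_min <= khr             # Claim C.1
```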
It turns out that over the binary alphabet, Khr_min(f) and Khr_{H_∞}(f) coincide.

Claim C.2. For any f : {0,1}^n → {0,1} we have Khr_{H_∞}(f) = Khr_min(f).

Proof. We first show that Khr_{H_∞}(f) ≥ Khr_min(f). Let A, B be sets that maximize the expression min_{a ∈ A} |E(a, B)| · min_{b ∈ B} |E(A, b)|. We take (x, y) to be a uniformly distributed pair in E(A, B). It is not hard to see that

    2^{H_∞(j|x)} = 2^{H_∞(y|x)} = min_{a ∈ A} |E(a, B)|,

and similarly 2^{H_∞(j|y)} = min_{b ∈ B} |E(A, b)|. We thus get Khr_{H_∞}(f) ≥ Khr_min(f).

Next, we show the other direction, Khr_min(f) ≥ Khr_{H_∞}(f). Let (x, y) be a random pair distributed according to a Khrapchenko distribution for f that attains the maximum of 2^{H_∞(j|x)+H_∞(j|y)} over all such distributions. Let A := supp(x) and B := supp(y) be the supports of x and y, respectively. By the definition of H_∞, we have 2^{H_∞(j|y)} = 1/max_{j_0, b} Pr[j = j_0 | y = b]. Rearranging, we get that for any b in the support of y and any coordinate j_0 it holds that

    Pr[j = j_0 | y = b] ≤ 1/2^{H_∞(j|y)}.

In particular, since these probabilities sum to 1, it must be the case that there are at least 2^{H_∞(j|y)} coordinates j_0 of positive probability given y = b, each of which yields a distinct neighbor of b in A; that is, |E(A, b)| ≥ 2^{H_∞(j|y)} for all b ∈ B. Similarly, |E(a, B)| ≥ 2^{H_∞(j|x)} for all a ∈ A. The two sets A and B show that Khr_min(f) ≥ 2^{H_∞(j|x)} · 2^{H_∞(j|y)} = Khr_{H_∞}(f).
Proposition B.6 follows by combining these two claims. Proposition 7.11 follows from Claim C.1 together with the following simple observation.

Claim C.3. For any f : {0,1}^n → {0,1} we have Adv_u(f) ≥ √(Khr_min(f)).

Proof. Let A, B be sets that attain Khr_min(f). Define R to be the set of pairs (a, b) ∈ A × B that differ at a single coordinate. Thus |R(a, B)| = |E(a, B)| and |R(A, b)| = |E(A, b)|. Moreover, |R_i(a, B)| ≤ 1, since there is a unique b that differs from a only on the i-th coordinate. Similarly, |R_i(A, b)| ≤ 1. The claim now follows immediately by comparing the definitions of Khr_min(f) and Adv_u(f).
Acknowledgements
A. T. would like to thank Igor Carboni Oliveira for bringing the question of proving formula size lower bounds for AC^0 to his attention. We are also grateful to Robin Kothari for posing this open question on "Theoretical Computer Science Stack Exchange" [35], and to Kaveh Ghasemloo and Stasys Jukna for their feedback on this question. We would like to thank Anna Gál for very helpful discussions, and Gregory Rosenthal for helpful comments on the manuscript. Finally, we are grateful to the anonymous referees for comments that improved the presentation of this work.
This work was partly carried out while the authors were visiting the Simons Institute for
the Theory of Computing in association with the DIMACS/Simons Collaboration on Lower
Bounds in Computational Complexity, which is conducted with support from the National
Science Foundation.
References
[1] Miklós Ajtai: Σ¹₁-formulae on finite structures. Ann. Pure Appl. Logic, 24(1):1–48, 1983. [doi:10.1016/0168-0072(83)90038-6] 4
[2] Andris Ambainis: Quantum lower bounds by quantum arguments. J. Comput. System Sci., 64(4):750–767, 2002. Special issue on STOC 2000 (Portland, OR). [doi:10.1006/jcss.2002.1826] 4, 29

[3] Andris Ambainis: Polynomial degree vs. quantum query complexity. J. Comput. System Sci., 72(2):220–238, 2006. [doi:10.1016/j.jcss.2005.06.006] 31, 39

[4] Alexander E. Andreev: On a method for obtaining more than quadratic effective lower bounds for the complexity of π-schemes. Moscow University Mathematics Bulletin, 42(1):24–29, 1987. Vestnik Moskov. Univ. Ser. 1: Mat. Mekh. 1987(1) 70–73 (Russian original on Math-net.ru). MR 0883632. 2, 12, 35

[5] Boaz Barak, Mark Braverman, Xi Chen, and Anup Rao: How to compress interactive communication. SIAM J. Comput., 42(3):1327–1363, 2013. [doi:10.1137/100811969] 38

[6] Paul Beame and Widad Machmouchi: The quantum query complexity of AC0. Quantum Inf. Comput., 12(7–8):670–676, 2012. [doi:10.26421/QIC12.7-8-11] 4, 12, 13, 35, 42

[7] Maria Luisa Bonet and Samuel R. Buss: Size-depth tradeoffs for Boolean formulae. Inform. Process. Lett., 49(3):151–155, 1994. [doi:10.1016/0020-0190(94)90093-0] 7

[8] Richard P. Brent: The parallel evaluation of general arithmetic expressions. J. ACM, 21(2):201–206, 1974. [doi:10.1145/321812.321815] 7

[9] Harry Buhrman and Ronald de Wolf: Complexity measures and decision tree complexity: a survey. Theoret. Comput. Sci., 288(1):21–43, 2002. [doi:10.1016/S0304-3975(01)00144-X] 29

[10] Andrew M. Childs, Shelby Kimmel, and Robin Kothari: The quantum query complexity of read-many formulas. In Proc. 20th Eur. Symp. Algorithms (ESA'12), pp. 337–348. Springer, 2012. [doi:10.1007/978-3-642-33090-2_30] 4

[11] Thomas M. Cover and Joy A. Thomas: Elements of Information Theory. Wiley-Interscience, 1991. [doi:10.1002/047174882X] 38

[12] Susanna F. de Rezende, Or Meir, Jakob Nordström, Toniann Pitassi, and Robert Robere: KRW composition theorems via lifting. In Proc. 61st FOCS, pp. 43–49. IEEE Comp. Soc., 2020. [doi:10.1109/FOCS46700.2020.00013, ECCC:TR20-099] 4

[13] Irit Dinur and Or Meir: Toward the KRW composition conjecture: Cubic formula lower bounds via communication complexity. Comput. Complexity, 27(3):375–462, 2018. [doi:10.1007/s00037-017-0159-x] 4

[14] Jeff Edmonds, Russell Impagliazzo, Steven Rudich, and Jiří Sgall: Communication complexity towards lower bounds on circuit depth. Comput. Complexity, 10(3):210–246, 2001. [doi:10.1007/s00037-001-8195-x] 4
[15] Yuval Filmus, Or Meir, and Avishay Tal: Shrinkage under random projections, and cubic formula lower bounds for AC0 (extended abstract). In Proc. 12th Innovations in Theoret. Comp. Sci. Conf. (ITCS'21), pp. 89:1–7. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2021. [doi:10.4230/LIPIcs.ITCS.2021.89]

[16] Merrick L. Furst, James B. Saxe, and Michael Sipser: Parity, circuits, and the polynomial-time hierarchy. Math. Sys. Theory, 17(1):13–27, 1984. [doi:10.1007/BF01744431] 4

[17] Anna Gál, Avishay Tal, and Adrian Trejo Nuñez: Cubic formula size lower bounds based on compositions with majority. In Proc. 10th Innovations in Theoret. Comp. Sci. Conf. (ITCS'19), pp. 35:1–35:13. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2019. [doi:10.4230/LIPIcs.ITCS.2019.35] 5, 6, 37

[18] Anat Ganor, Ilan Komargodski, and Ran Raz: The spectrum of small De Morgan formulas. Electron. Colloq. Comput. Complexity, TR12-174, 2012. [ECCC] 43

[19] Dmitry Gavinsky, Or Meir, Omri Weinstein, and Avi Wigderson: Toward better formula lower bounds: The composition of a function and a universal relation. SIAM J. Comput., 46(1):114–131, 2017. [doi:10.1137/15M1018319] 3, 4, 39

[20] Mikael Goldmann and Johan Håstad: A simple lower bound for monotone clique using a communication game. Inform. Process. Lett., 41(4):221–226, 1992. [doi:10.1016/0020-0190(92)90184-W] 4

[21] Mika Göös and Toniann Pitassi: Communication lower bounds via critical block sensitivity. SIAM J. Comput., 47(5):1778–1806, 2018. [doi:10.1137/16M1082007] 4

[22] Johan Håstad: Almost optimal lower bounds for small depth circuits. In S. Micali, editor, Randomness and Computation, volume 5 of Adv. Computing Research, pp. 143–170. JAI Press, 1989. DSpace@MIT. Preliminary version in STOC'86. 4

[23] Johan Håstad: The shrinkage exponent of De Morgan formulas is 2. SIAM J. Comput., 27(1):48–64, 1998. Preliminary version in FOCS'93. [doi:10.1137/S0097539794261556] 2, 3, 5, 8, 9, 15, 17, 19, 43

[24] Johan Håstad, Stasys Jukna, and Pavel Pudlák: Top-down lower bounds for depth-three circuits. Comput. Complexity, 5(2):99–112, 1995. [doi:10.1007/BF01268140] 3

[25] Johan Håstad, Benjamin Rossman, Rocco A. Servedio, and Li-Yang Tan: An average-case depth hierarchy theorem for Boolean circuits. J. ACM, 64(5):35:1–27, 2017. Preliminary version in FOCS'15. [doi:10.1145/3095799] 6

[26] Johan Håstad and Avi Wigderson: Composition of the universal relation. In Jin-Yi Cai, editor, Advances In Computational Complexity Theory, volume 13 of DIMACS Ser. in Discr. Math., pp. 119–134. Amer. Math. Soc., 1993. [doi:10.1090/dimacs/013/07] 4
[27] Johan Torkel Håstad: Computational limitations of small-depth circuits. Ph. D. thesis, MIT, 1986. Available on author's website, published as a book by MIT Press, 1987. 6

[28] Peter Høyer, Troy Lee, and Robert Špalek: Negative weights make adversaries stronger. In Proc. 39th STOC, pp. 526–535. ACM Press, 2007. [doi:10.1145/1250790.1250867] 31

[29] Russell Impagliazzo and Noam Nisan: The effect of random restrictions on formula size. Random Struct. Algor., 4(2):121–134, 1993. [doi:10.1002/rsa.3240040202] 2

[30] Stasys Jukna: Boolean Function Complexity – Advances and Frontiers. Volume 27 of Algorithms and Combinatorics. Springer, 2012. [doi:10.1007/978-3-642-24508-4] 4, 36

[31] Mauricio Karchmer, Ran Raz, and Avi Wigderson: Super-logarithmic depth lower bounds via the direct sum in communication complexity. Comput. Complexity, 5(3–4):191–204, 1995. [doi:10.1007/BF01206317] 3, 4, 33

[32] Mauricio Karchmer and Avi Wigderson: Monotone circuits for connectivity require super-logarithmic depth. SIAM J. Discr. Math., 3(2):255–265, 1990. [doi:10.1137/0403021] 3, 4, 7, 39

[33] Valerii Mikhailovich Khrapchenko: A method of obtaining lower bounds for the complexity of π-schemes. Math. Notes Acad. Sci. USSR (English translation), 10:474–479, 1971. Matem. Zametki 10(1) 83–92, 1971 (Russian original on Math-net.ru). [doi:10.1007/BF01747074] 2, 29, 32, 37

[34] Sajin Koroth and Or Meir: Improved composition theorems for functions and relations. In Proc. 22nd Internat. Conf. on Randomization and Computation (RANDOM'18), pp. 48:1–18. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2018. [doi:10.4230/LIPIcs.APPROX-RANDOM.2018.48] 4

[35] Robin Kothari: Formula size lower bounds for AC0 functions, 2011. Theoretical Computer Science Stack Exchange. 45

[36] Elias Koutsoupias: Improvements on Khrapchenko's theorem. Theoret. Comput. Sci., 116(2):399–403, 1993. [doi:10.1016/0304-3975(93)90330-V] 43

[37] Eyal Kushilevitz and Noam Nisan: Communication Complexity. Cambridge Univ. Press, 1997. [doi:10.1017/9781108671644] 6

[38] Sophie Laplante, Troy Lee, and Mario Szegedy: The quantum adversary method and classical formula size lower bounds. Comput. Complexity, 15(2):163–196, 2006. [doi:10.1007/s00037-006-0212-7] 30, 31, 32, 39, 43

[39] Troy Lee, Rajat Mittal, Ben W. Reichardt, Robert Špalek, and Mario Szegedy: Quantum query complexity of state conversion. In Proc. 52nd FOCS, pp. 344–353. IEEE Comp. Soc., 2011. [doi:10.1109/FOCS.2011.75] 31
[40] Lily Li and Morgan Shirley: The general adversary bound: A survey, 2021. [arXiv:2104.06380] 31

[41] Or Meir: Toward better depth lower bounds: Two results on the multiplexor relation. Comput. Complexity, 29(1):4, 2020. [doi:10.1007/s00037-020-00194-8] 4

[42] Ivan Mihajlin and Alexander Smal: Toward better depth lower bounds: The XOR-KRW conjecture. In Proc. 36th Comput. Complexity Conf. (CCC'21), pp. 38:1–24. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2021. [doi:10.4230/LIPIcs.CCC.2021.38] 4

[43] Éduard Ivanovič Nečiporuk: On a Boolean function. Soviet Math. Doklady, 7(4):999–1000, 1966. Doklady Akad. Nauk SSSR 169(4), 1966, 765–766 (Russian original on Math-net.ru). 4

[44] Mike Paterson and Uri Zwick: Shrinkage of De Morgan formulae under restriction. Random Struct. Algor., 4(2):135–150, 1993. [doi:10.1002/rsa.3240040203] 2

[45] Toniann Pitassi and Robert Robere: Strongly exponential lower bounds for monotone computation. In Proc. 49th STOC, pp. 1246–1255. ACM Press, 2017. [doi:10.1145/3055399.3055478] 4

[46] Ran Raz and Pierre McKenzie: Separation of the monotone NC hierarchy. Combinatorica, 19(3):403–435, 1999. [doi:10.1007/s004930050062] 4

[47] Ran Raz and Avi Wigderson: Monotone circuits for matching require linear depth. J. ACM, 39(3):736–744, 1992. [doi:10.1145/146637.146684] 4

[48] Alexander A. Razborov: Applications of matrix methods to the theory of lower bounds in computational complexity. Combinatorica, 10(1):81–93, 1990. [doi:10.1007/BF02122698] 7

[49] Ben W. Reichardt: Span programs and quantum query complexity: the general adversary bound is nearly tight for every Boolean function. In Proc. 50th FOCS, pp. 544–551. IEEE Comp. Soc., 2009. [doi:10.1109/FOCS.2009.55] 31

[50] Ben W. Reichardt: Reflections for quantum query algorithms. In Proc. 22nd Ann. ACM–SIAM Symp. on Discrete Algorithms (SODA'11), pp. 560–569. SIAM, 2011. [doi:10.1137/1.9781611973082.44] 31

[51] Ben W. Reichardt: Span programs are equivalent to quantum query algorithms. SIAM J. Comput., 43(3):1206–1219, 2014. [doi:10.1137/100792640] 31

[52] Igor Sergeevich Sergeev: Complexity and depth of formulas for symmetric Boolean functions. Moscow University Mathematics Bulletin, 71(3):127–130, 2016. Vestnik Moskov. Univ. Ser. 1: Mat. Mekh. 2016(3) 53–57 (Russian original on Math-net.ru). [doi:10.3103/S0027132216030098] 37

[53] Robert Špalek and Mario Szegedy: All quantum adversary methods are equivalent. Theory of Computing, 2(1):1–18, 2006. [doi:10.4086/toc.2006.v002a001] 30, 31
[54] Philip M. Spira: On time-hardware complexity tradeoffs for Boolean functions. In Proc. 4th Hawaii International Symposium on System Sciences, pp. 525–527. Western Periodicals, 1971. Google books. 7

[55] Bella Abramovna Subbotovskaya: Realizations of linear functions by formulas using ∨, &, ¬. Soviet Math. Doklady, 2:110–112, 1961. Doklady Akad. Nauk SSSR 136(3), 553–555, 1961 (Russian original on Math-net.ru). 2

[56] Avishay Tal: Shrinkage of De Morgan formulae by spectral techniques. In Proc. 55th FOCS, pp. 551–560. IEEE Comp. Soc., 2014. [doi:10.1109/FOCS.2014.65] 2, 3

[57] Andrew Chi-Chih Yao: Separating the polynomial-time hierarchy by oracles (preliminary version). In Proc. 26th FOCS, pp. 1–10. IEEE Comp. Soc., 1985. [doi:10.1109/SFCS.1985.49] 4

[58] Uri Zwick: An extension of Khrapchenko's theorem. Inform. Process. Lett., 37(4):215–217, 1991. [doi:10.1016/0020-0190(91)90191-J] 43
AUTHORS
Yuval Filmus
Associate Professor
Technion – Israel Institute of Technology
The Henry and Marilyn Taub Faculty of Computer Science
Faculty of Mathematics
Haifa 3200003, Israel
yuvalfi cs technion ac il
https://yuvalfilmus.cs.technion.ac.il/
Or Meir
Associate Professor
Department of Computer Science
University of Haifa
Haifa 3303220, Israel
ormeir cs haifa ac il
https://cs.haifa.ac.il/~ormeir/
Avishay Tal
Assistant Professor
Department of Electrical Engineering and Computer Sciences
University of California, Berkeley
Berkeley, CA 94720, United States
atal berkeley edu
https://www.avishaytal.org/
ABOUT THE AUTHORS
Yuval Filmus received his Ph. D. from the University of Toronto under the supervision
of Toni Pitassi. Subsequently, he held postdoctoral positions at the Simons
Institute for the Theory of Computing and the Institute for Advanced Study. He
is interested in Boolean function analysis, complexity theory, and combinatorics.
Or Meir received his Ph. D. from the Weizmann Institute of Science under the
supervision of Oded Goldreich. Subsequently, he held postdoctoral positions
at Stanford University, the Institute for Advanced Study, and the Weizmann
Institute of Science. He is interested in complexity theory, and in particular in
circuit complexity, communication complexity, coding theory, probabilistic proof
systems, and the theory of pseudorandomness.
Avishay Tal received his M. Sc. from the Technion under the supervision of
Amir Shpilka, and his Ph. D. from the Weizmann Institute of Science under
the supervision of Ran Raz. Subsequently, he held postdoctoral positions
at the Institute for Advanced Study, the Simons Institute for the Theory of
Computing, and Stanford University. He is interested in Boolean function analysis,
complexity theory and combinatorics, and in particular in circuit complexity,
query complexity, computational learning theory, quantum complexity, and the
theory of pseudorandomness.