                               THEORY OF COMPUTING, Volume 7 (2011), pp. 45–48
                                          www.theoryofcomputing.org

                                                            NOTE



                            Tight Bounds on the
                        Average Sensitivity of k-CNF
                                                    Kazuyuki Amano∗
                                 Received: November 25, 2010; published: March 15, 2011.



         Abstract: The average sensitivity of a Boolean function is the expectation, given a
         uniformly random input, of the number of input bits which when flipped change the output
         of the function. Answering a question by O’Donnell, we show that every Boolean function
         represented by a k-CNF (or a k-DNF) has average sensitivity at most k. This bound is tight
         since the parity function on k variables has average sensitivity k.

ACM Classification: F.1.3
AMS Classification: 68R05, 68Q15
Key words and phrases: Boolean functions, sensitivity, influence, conjunctive normal form

1      Introduction and Results
For x ∈ {0,1}^n and i ∈ {1, . . . , n}, let x^i denote x with the i-th bit flipped. Let f : {0,1}^n → {0,1} be a
Boolean function on n variables. The sensitivity of f at x, denoted by s(f, x), is the number of bits i for
which f(x) ≠ f(x^i). The average sensitivity (also known as total influence) of f, denoted by S(f), is the
expected sensitivity at a random input:

                                      S(f) = (1/2^n) ∑_{x∈{0,1}^n} s(f, x) .

The average sensitivity is one of the most studied concepts in the analysis of Boolean functions (see,
e. g., [3, 4, 5, 7]).
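
    As a concrete illustration of the definition (not part of the paper), the following short Python sketch
computes S(f) by brute force and confirms that the parity function on k variables has average sensitivity k;
the helper names are arbitrary and chosen only for this example.

    from itertools import product

    # Brute-force average sensitivity: S(f) = 2^{-n} * sum_x s(f, x),
    # where s(f, x) counts the bits whose flip changes f(x).
    def avg_sensitivity(f, n):
        total = 0
        for x in product((0, 1), repeat=n):
            for i in range(n):
                xi = x[:i] + (1 - x[i],) + x[i + 1:]   # x with the i-th bit flipped
                total += f(x) != f(xi)
        return total / 2 ** n

    parity = lambda x: sum(x) % 2
    print(avg_sensitivity(parity, 4))   # prints 4.0: parity on k variables has S(f) = k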
     ∗ Supported in part by KAKENHI (No. 21500005) from JSPS, Japan.



    © 2011 Kazuyuki Amano
    Licensed under a Creative Commons Attribution License                              DOI: 10.4086/toc.2011.v007a004

    A literal is a Boolean variable or its negation. Let k be a nonnegative integer. A k-clause is a
disjunction of at most k literals, and a k-term is a conjunction of at most k literals. A k-CNF function is a
conjunction of k-clauses, and a k-DNF function is a disjunction of k-terms.
     Boppana [1] proved that S(f) ≤ 2k for every k-DNF (as well as k-CNF) function f. Recently,
Traxler [9] improved this upper bound to S(f) ≤ 1.062k. This is nearly optimal, since the parity function
on k variables, which is obviously a k-CNF function, has average sensitivity k.
    In this note, we close the gap by showing:

Theorem 1.1. If f is a k-DNF or k-CNF function, then S(f) ≤ k.

     This solves an open problem posed by O’Donnell in 2007 (see [6]).


2      Proof of Theorem 1.1
Our proof is a small modification of the proof of the 1.062k upper bound by Traxler [9], which is based
on a clever use of the Paturi-Pudlák-Zane algorithm (PPZ algorithm, for short) for k-SAT [8].
    Let f be a k-CNF function. (The k-DNF case is dual.) Note that Traxler's bound is in fact

                                      S(f) ≤ 2z log_2(1/z) · k ,

where z is the probability that f outputs 1. This upper bound is larger than k when 0.25 < z < 0.5.
    We introduce the distribution D_f over {0,1}^n ∪ {⊥}, which is essentially identical to the distribution
used in Traxler's proof.
    Consider the algorithm eppz(f) that takes f as input and tries to choose a satisfying assignment for f.
The algorithm first chooses uniformly at random some permutation π on the index set {1, . . . , n} of the
variables. Then, for j = 1, . . . , n, it does the following: it sets the variable x_{π(j)} to 1 if the single-variable
clause (x_{π(j)}) is in f and to 0 if the single-variable clause (¬x_{π(j)}) is in f. We say that in these two
cases “x_{π(j)} is forced.” Otherwise x_{π(j)} is set to 0 or 1 uniformly at random. Each time, the formula is
syntactically simplified, i.e., all clauses that became satisfied are deleted and all literals that became
falsified are removed from the remaining clauses. At the end, the algorithm outputs x. If the algorithm
ever produces two contradictory unit clauses, then it just “gives up” and outputs “⊥.”
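
    For concreteness, here is a minimal Python sketch of eppz as described above (an illustration only, not
the author's implementation). The clause encoding is an assumption made for the sketch: a CNF is a list of
clauses, each clause a list of nonzero integers with +v standing for the variable x_v and −v for its negation,
and the output “⊥” is represented by None.

    import random

    def eppz(clauses, n):
        formula = [list(c) for c in clauses]
        x = [None] * n
        pi = list(range(1, n + 1))
        random.shuffle(pi)                        # uniformly random permutation of the variables
        for v in pi:
            units = {c[0] for c in formula if len(c) == 1 and abs(c[0]) == v}
            if units == {v, -v}:                  # two contradictory unit clauses
                return None                       # give up: output "⊥"
            if v in units:
                x[v - 1] = 1                      # forced to 1 by the unit clause (x_v)
            elif -v in units:
                x[v - 1] = 0                      # forced to 0 by the unit clause (¬x_v)
            else:
                x[v - 1] = random.randint(0, 1)   # free bit, set uniformly at random
            # syntactic simplification: delete satisfied clauses, drop falsified literals
            lit = v if x[v - 1] == 1 else -v
            formula = [[l for l in c if l != -lit] for c in formula if lit not in c]
        return tuple(x)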
    Define D_f as

                                      D_f(x) = Pr[eppz(f) outputs x] ,

where the probability is over all the random choices made in eppz. In what follows, we are only interested
in the value of D_f(x) for x ∈ f^{-1}(1). Note that a similar algorithm was introduced in [2] for obtaining a
lower bound on the success probability of the PPZ algorithm.
     For x ∈ f^{-1}(1), let t(f, π, x, i) denote the indicator variable for whether x_i is forced or not, given that
π was chosen and x output.¹ Note that given that π is chosen and x is output, all of the other random
choices of eppz(f) are fixed; i.e., there is only one outcome that leads to a given π and x.

    ¹ If we borrow Traxler's notation [9], t(f, π, x, i) is defined as t(f, π, x, i) = 1 − (ℓ_0(f, π, x, i) + ℓ_1(f, π, x, i)).




    The key observation that relates the distribution D_f to the sensitivity of f is (essentially from [8])
that, for every x ∈ f^{-1}(1), if f(x) ≠ f(x^i), i.e., f is sensitive at x for the i-th bit, then

                                      E_π[t(f, π, x, i)] ≥ 1/k .                                     (1)
    We include the proof of Eq. (1) for completeness. If 1 = f(x) ≠ f(x^i) = 0, then there must be a clause
C such that the only literal in C set to 1 by x is the literal of the i-th variable. The variable x_i is forced by
eppz if i appears last in π among all variable indices occurring in C. This happens with probability at
least 1/k since C has at most k literals. This establishes Eq. (1).
    In order to show S(f) ≤ k, it is enough to show that D_f(x) ≥ s(f, x)/(2^{n-1} k) for every x ∈ f^{-1}(1).
This is because we may combine ∑_{x∈f^{-1}(1)} D_f(x) ≤ 1 (since D_f is a distribution) with the elementary fact

                                      S(f) = (1/2^{n-1}) ∑_{x∈f^{-1}(1)} s(f, x)

(see, e.g., [1, Lemma 1(b)]).
    The proof is finished by observing

        D_f(x) = E_π[ ∏_{i=1}^{n} (1/2)^{1−t(f,π,x,i)} ]
               = (1/2^n) E_π[ 2^{∑_{i=1}^{n} t(f,π,x,i)} ]
               ≥ (1/2^n) E_π[ 2 ∑_{i=1}^{n} t(f,π,x,i) ]          (since 2^a ≥ 2a for all integers a ≥ 0)
               = (2/2^n) ∑_{i=1}^{n} E_π[ t(f,π,x,i) ]            (by linearity of expectation)
               ≥ s(f, x)/(2^{n-1} k)                              (by Eq. (1)).


This completes the proof of the theorem.
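
    As an illustrative sanity check of the inequality D_f(x) ≥ s(f, x)/(2^{n−1} k) (again, not part of the
proof), one can estimate D_f empirically for a small formula. The snippet below assumes the eppz function
and clause encoding from the sketch earlier in this section are in scope, and uses the parity function on
three variables written as a 3-CNF, for which the bound is attained with equality.

    import itertools

    # 3-variable parity as a 3-CNF (one clause ruling out each even-weight assignment).
    clauses = [[1, 2, 3], [1, -2, -3], [-1, 2, -3], [-1, -2, 3]]
    n, k = 3, 3

    def f(x):   # evaluate the CNF at x in {0,1}^n
        return all(any((x[abs(l) - 1] == 1) == (l > 0) for l in c) for c in clauses)

    def s(x):   # sensitivity s(f, x)
        return sum(f(x) != f(tuple(b ^ (j == i) for j, b in enumerate(x))) for i in range(n))

    trials, counts = 200000, {}
    for _ in range(trials):
        out = eppz(clauses, n)                   # eppz from the sketch above
        counts[out] = counts.get(out, 0) + 1

    for x in itertools.product((0, 1), repeat=n):
        if f(x):
            est = counts.get(x, 0) / trials      # empirical estimate of D_f(x)
            bound = s(x) / (2 ** (n - 1) * k)    # claimed lower bound, here 1/4
            print(x, round(est, 3), round(bound, 3))   # for parity the two nearly coincide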


Acknowledgments
The author would like to thank the anonymous referee for helpful comments that improved the
presentation of the paper.


References
[1] Ravi B. Boppana: The average sensitivity of bounded-depth circuits. Inform. Process. Lett.,
    63(5):257–261, 1997. [doi:10.1016/S0020-0190(97)00131-2]

[2] Chris Calabro, Russell Impagliazzo, Valentine Kabanets, and Ramamohan Paturi: The complexity
    of unique k-SAT: An isolation lemma for k-CNFs. J. Comput. System Sci., 74(3):386–393, 2008.
    (Conference version in CCC '03, pp. 135–141, 2003.) [doi:10.1016/j.jcss.2007.06.015]

[3] Ronald de Wolf: A Brief Introduction to Fourier Analysis on the Boolean Cube. Number 1 in
    Graduate Surveys. Theory of Computing Library, 2008. [doi:10.4086/toc.gs.2008.001]

[4] Ehud Friedgut: Boolean functions with low average sensitivity depend on few coordinates.
    Combinatorica, 18(1):27–35, 1998. [doi:10.1007/PL00009809]

[5] Nathan Linial, Yishay Mansour, and Noam Nisan: Constant depth circuits, Fourier transform
    and learnability. J. ACM, 40(3):607–620, 1993. [doi:10.1145/174130.174138]

[6] Ryan O'Donnell: Lecture notes of the course "Analysis of Boolean Functions": Lecture 29:
    Open problems. http://www.cs.cmu.edu/~odonnell/boolean-analysis/lecture29.pdf, 2007.

[7] Ryan O'Donnell: Some topics in analysis of Boolean functions. In Proc. 40th STOC, pp. 569–578.
    ACM Press, 2008. [doi:10.1145/1374376.1374458]

[8] Ramamohan Paturi, Pavel Pudlák, and Francis Zane: Satisfiability coding lemma. Chicago
    Journal of Theoretical Computer Science, 1999(115):1–18, 1999. (Conference version in FOCS '97,
    pp. 566–574, 1997.) [doi:10.4086/cjtcs.1999.011]

[9] Patrick Traxler: Variable influences in conjunctive normal forms. In Proc. 12th Internat. Conf.
    on Theory and Appl. of Satisfiability (SAT '09), volume 5584 of LNCS, pp. 101–113. Springer, 2009.
    [doi:10.1007/978-3-642-02777-2_12]


AUTHOR

      Kazuyuki Amano
      associate professor
      Gunma University, Japan
      amano gunma-u ac jp
      http://www.cs.gunma-u.ac.jp/~amano/index-e.html


ABOUT THE AUTHOR

      Kazuyuki Amano received his Ph. D. in 1996 from Tohoku University under the supervision
         of Akira Maruoka. Since 2006, he has been an associate professor at Gunma University.
         His research interests include computational complexity and combinatorics. He enjoys
         writing computer programs to solve puzzles and problems of discrete mathematics.

