Authors: Pasin Manurangsi, Preetum Nakkiran, Luca Trevisan
License: CC-BY-3.0
THEORY OF COMPUTING, Volume 18 (3), 2022, pp. 1–29
www.theoryofcomputing.org
SPECIAL ISSUE: APPROX-RANDOM 2016

Near-Optimal NP-Hardness of Approximating Max k-CSP_R

Pasin Manurangsi*   Preetum Nakkiran†   Luca Trevisan‡

Received April 23, 2017; Revised November 7, 2021; Published February 14, 2022
Abstract. We prove almost optimal hardness for Max k-CSP_R. In Max k-CSP_R, we are given a set of constraints, each of which depends on at most k variables. Each variable can take any value from 1, 2, . . . , R. The goal is to find an assignment to variables that maximizes the number of satisfied constraints.

We show that, for any k ≥ 2 and R ≥ 16, it is NP-hard to approximate Max k-CSP_R to within a factor of k^{O(k)} (log R)^{k/2} / R^{k−1}. In the regime where 3 ≤ k = o(log R / log log R), this ratio improves upon Chan's O(k/R^{k−2}) factor NP-hardness of approximation of Max k-CSP_R (J. ACM 2016). Moreover, when k = 2, our result matches the best known hardness result of Khot, Kindler, Mossel and O'Donnell (SIAM J. Comp. 2007). We remark here that NP-hardness of an approximation factor of 2^{O(k)} log(kR) / R^{k−1} is implicit in the (independent) work of Khot and Saket (ICALP 2015), which is better than our ratio for all k ≥ 3.

In addition to the above hardness result, by extending an algorithm for Max 2-CSP_R by Kindler, Kolla and Trevisan (SODA 2016), we provide an Ω(log R / R^{k−1})-approximation
A conference version of this paper appeared in the Proceedings of the 19th International Workshop on Approximation
Algorithms for Combinatorial Optimization Problems (APPROX’16) [22]. This material is based upon work supported by the
National Science Foundation under Grant No. CCF 1540685.
* Work done while the author was at UC Berkeley.
† Work done while the author was at UC Berkeley.
‡ Work done while the author was at UC Berkeley.
ACM Classification: F.2.2
AMS Classification: 68Q17, 68W25
Key words and phrases: constraint satisfaction, hardness of approximation, approximation algorithm
© 2022 Pasin Manurangsi, Preetum Nakkiran, and Luca Trevisan
Licensed under a Creative Commons Attribution License (CC-BY)     DOI: 10.4086/toc.2022.v018a003
algorithm for Max k-CSP_R. Thanks to Khot and Saket's result, this algorithm is tight up to a factor of O(k^2) when k ≤ R^{O(1)}. In comparison, when 3 ≤ k is a constant, the previously best known algorithm achieves an O(k/R^{k−1})-approximation for the problem, which is a factor of O(k log R) from the inapproximability ratio, in contrast to our gap of O(k^2).
1 Introduction
The Maximum Constraint Satisfaction Problem (Max CSP) is an optimization problem where the inputs are a set of variables, an alphabet, and a set of constraints. Each variable can be assigned any symbol from the alphabet and each constraint depends only on the assignment to a subset of variables. The goal is to find an assignment to the variables that maximizes the number of satisfied constraints.

Many natural optimization problems, such as Max Cut, Max k-Cut and Max k-SAT, can be formulated as Max CSP. In addition, Max CSP has been shown to help approximate other seemingly unrelated problems such as Densest k-Subgraph [5]. Due to this, Max CSP has long been studied by the approximation algorithms community [28, 14, 6, 21, 20, 12]. Furthermore, its relation to PCPs ensures that Max CSP is also well-studied on the hardness-of-approximation side [26, 10, 27, 16, 1, 13, 11, 4].
The main focus of this paper is on Max k-CSP_R, a family of Max CSPs where the alphabet is of size R and each constraint depends on only k variables. On the hardness-of-approximation side, most early work focused on Boolean Max k-CSP. Samorodnitsky and Trevisan first showed that Max k-CSP_2 is NP-hard to approximate to within a factor of 2^{O(√k)}/2^k [26]. Engebretsen and Holmerin later improved the implicit constant factor in the exponent O(√k) but still obtained NP-hardness of a factor of 2^{O(√k)}/2^k [11]. To break this barrier, Samorodnitsky and Trevisan proved a Unique-Games-hardness (UG-hardness) [15]¹ result; they achieved UG-hardness of an approximation ratio of O(k/2^k), which is tight up to a constant for the Boolean case [27]. Chan later showed that NP-hardness of approximation at this ratio can be achieved, settling the approximability of Max k-CSP_2 [4].
Unlike in the Boolean case, the approximability of Max k-CSP_R when R > 2 is still not resolved. For this case, Engebretsen showed NP-hardness of approximation ratio R^{O(√k)}/R^k [10]. UG-hardness of approximation within a factor of O(kR/R^{k−1}) was proven by Austrin and Mossel [1] and, independently, by Guruswami and Raghavendra [13]. For the case k = 2, results by Khot et al. [16] implicitly demonstrate UG-hardness of approximation within O(log R/R). This was made explicit in [20]. In the light of the recent resolution of the 2-to-1 Conjecture [8, 17], this inapproximability result is now an NP-hardness result. Moreover, Austrin and Mossel proved UG-hardness of an approximation ratio of O(k/R^{k−1}) for infinitely many values of k [1], but only under the condition k ≥ R. Recently, Chan was able to upgrade these UG-hardness results to NP-hardness [4]. More specifically, Chan showed NP-hardness of approximation within a factor of O(kR/R^{k−1}) for every k, R and approximation within a factor of O(k/R^{k−1}) for every k ≥ R. Due to an approximation algorithm with matching approximation ratio by Makarychev and Makarychev [21], Chan's result established tight hardness of approximation for k ≥ R.
¹ For any r ∈ (0, 1), we say that Max k-CSP_R is UG-hard to approximate to within an r factor if, for some 0 < γ < ζ < 1 and 0 < θ ≤ 1, there is a polynomial-time reduction from the problem of deciding whether a given unique game has value at least ζ or at most γ to the problem of deciding whether a given Max k-CSP_R instance has value at least θ or at most θ · r. (For the definition of Unique Games, please refer to Section 2.2.) We remark that such a UG-hardness result implies that, if the Unique Games Conjecture holds, then Max k-CSP_R is NP-hard to approximate to within an r factor.
On the other hand, when k < R, Chan's result gives O(kR/R^{k−1}) hardness of approximation whereas the best known approximation algorithm achieves only an Ω(k/R^{k−1}) approximation ratio [21, 12]. In an attempt to bridge this gap, we prove the following result.

Theorem 1.1 (Main Theorem). It is NP-hard to approximate Max k-CSP_R to within a k^{O(k)} (log R)^{k/2} / R^{k−1} factor, for any k ≥ 2 and R ≥ 16.

Remark 1.2. We remark that the k^{O(k)} (log R)^{k/2} / R^{k−1} factor in Theorem 1.1 can be greater than one when R ≤ k^c for some constant c. In this regime, the hardness result is vacuous.
Range of k, R        | NP-hardness                        | UG-hardness                 | Approximation       | References
k = 2                | O(log R / R)                       | –                           | Ω(log R / R)        | [16, 20]
3 ≤ k < Θ(log R)     | 2^{O(k)} log(kR) / R^{k−1}         | O(k^2 log(kR) / R^{k−1})    | Ω(k / R^{k−1})      | [19, 21, 12]
Θ(log R) ≤ k < R     | O(k / R^{k−2})                     | O(k^2 log(kR) / R^{k−1})    | Ω(k / R^{k−1})      | [4, 19, 21, 12]
R ≤ k                | O(k / R^{k−1})                     | –                           | Ω(k / R^{k−1})      | [4, 21]
Any k ≥ 2, R ≥ 16    | k^{O(k)} (log R)^{k/2} / R^{k−1}   | –                           | Ω(log R / R^{k−1})  | this paper

Figure 1: Comparison between our results and previous results. For each range of k, R, we list the previous best hardness-of-approximation factors and the previous best approximation ratio. As mentioned earlier, Khot and Saket's result [19] subsumes our result for every value of k and R, whereas our approximation algorithm improves on the previously known algorithm when 3 ≤ k = o(log R).
When k = o(log R / log log R), our result improves upon Chan's result in terms of the ratio. As noted in the Abstract, Khot and Saket [19] independently showed UG-hardness of approximation within a factor² of O(k^2 log(kR) / R^{k−1}) for the problem for any k and R. Furthermore, they can achieve³ NP-hardness of a factor of 2^{O(k)} log(kR) / R^{k−1}. Both of these ratios are better than the ratio we achieve in Theorem 1.1.
As mentioned earlier, there has also been a long line of work on approximation algorithms for Max k-CSP_R. In the Boolean case, Trevisan first showed a polynomial-time approximation algorithm with approximation ratio 2/2^k [28]. Hast later improved the ratio to Ω(k/(2^k log k)) [14]. Charikar, Makarychev and Makarychev then came up with an Ω(k/2^k)-approximation algorithm [6]. As stated when discussing hardness of approximation of Max k-CSP_2, this approximation ratio is tight up to a constant factor.

For the non-Boolean case, Charikar, Makarychev, and Makarychev's algorithm achieves a ratio of Ω(k log R / R^k) for Max k-CSP_R. Makarychev and Makarychev later improved the approximation
² It should be noted that, if the results as stated in [19] are invoked directly, then the hardness-of-approximation ratio one can get is O(k^3 log R / R^{k−1}), which is a factor of O(k) worse than what is stated here. This is because Khot and Saket prove the more general statement that any integrality gap for the standard LP relaxation of Max k-CSP_R can be translated into a UG-hardness of approximation result at the loss of a factor of O(k^3 log R) in the ratio. However, it is not hard to see that, when one is only interested in hardness of approximation (and not the LP), then a factor of k here can be removed. Please refer to Appendix B for more details regarding this.
³ In [19], only UG-hardness was stated, but NP-hardness (with a worse ratio) is now possible due to the resolution of the 2-to-1 Conjecture [17]. We briefly discuss this in Section 1.1 and Appendix B.
ratio to Ω(k/R^{k−1}) when k = Ω(log R) [21]. Additionally, the algorithm was extended by Goldshlager and Moshkovitz to achieve the same approximation ratio for any k, R [12]. On this front, we show the following result.

Theorem 1.3. There exists a polynomial-time Ω(log R / R^{k−1})-approximation algorithm for Max k-CSP_R.

In comparison to the previously known algorithms, our algorithm gives a better approximation ratio when 3 ≤ k = o(log R). We remark that our algorithm is just a simple extension of Kindler, Kolla and Trevisan's polynomial-time Ω(log R / R)-approximation algorithm for Max 2-CSP_R [20].
1.1 The role of the 2-to-1 Theorem
In the conference version of this paper [22], our hardness result (Theorem 1.1) was shown conditional on
the Unique Games Conjecture (UGC) or the d-to-1 Conjecture [15]. Recent work [9, 8, 2, 17] confirmed
the imperfect completeness version of the 2-to-1 conjecture, and this breakthrough upgrades our hardness
results to NP-hardness without any additional assumptions. We have incorporated this development into
the current version.
The situation is similar for the result of Khot and Saket [19]. In their case, the 2-to-1 Theorem implies
NP-hardness of a somewhat worse inapproximability ratio than stated in their paper, but still stronger
than our bound. We briefly discuss this in Appendix B.
We emphasize that Chan’s proofs of his NP-hardness results [4] do not depend on the d-to-1 Theorem.
1.2 Summary of techniques
Our reduction from Unique Games to Max k-CSP_R follows the reduction of [16] for Max 2-CSPs. We construct a k-query PCP using a Unique Games "outer verifier," and then design a k-query Long Code test as an "inner verifier." For simplicity, let us think of k as a constant. We essentially construct a k-query Dictator-vs.-Quasirandom test for functions f : [R]^n → [R], with completeness Ω(1/(log R)^{k/2}) and soundness O(1/R^{k−1}). This test satisfies the following completeness and soundness conditions.

• (Completeness) If f is a dictator function, i. e., f(x) = x_j for some coordinate j ∈ [n], then the test passes with a large probability.

• (Soundness) If f is a balanced function with small low-degree influences, then the test passes with only a small probability.

Our test is structurally similar to the 2-query "noise stability" tests of [16]: first we pick a random z ∈ [R]^n, then we pick k weakly correlated queries x^{(1)}, . . . , x^{(k)} by choosing each x^{(i)} ∈ [R]^n as a noisy copy of z, i. e., each coordinate (x^{(i)})_j is chosen as z_j with some probability ρ or uniformly at random otherwise. We accept iff f(x^{(1)}) = f(x^{(2)}) = · · · = f(x^{(k)}). The key technical step is our analysis of the soundness of this test. We need to show that if f is a balanced function with small low-degree influences, then the test passes with probability O(1/R^{k−1}). Intuitively, we would like to say that for high enough noise, the values f(x^{(i)}) are roughly independent and uniform, so the test passes with probability around 1/R^{k−1}. To formalize this intuition, we utilize the Invariance Principle and Hypercontractivity.
More precisely, if we let f_i : [R]^n → {0, 1} be the indicator function of the event f(x) = i, then the test accepts iff f_i(x^{(1)}) = · · · = f_i(x^{(k)}) = 1 for some i ∈ [R]. For each i ∈ [R], this probability can be written as the expectation of a product: E[f_i(x^{(1)}) f_i(x^{(2)}) · · · f_i(x^{(k)})]. Since the x^{(i)}'s are chosen as noisy copies of z, this expression is related to the k-th norm of a noisy version of f_i. Thus, our problem is reduced to bounding the k-norm of a noisy function f̃_i : [R]^n → [0, 1], which has bounded one-norm E[f̃_i] = 1/R since f is balanced. To arrive at this bound, we first apply the Invariance Principle, which essentially states that a low-degree low-influence function on [R]^n behaves on random input similarly to its "Boolean analogue" over the domain {±1}^{nR}. Here "Boolean analogue" refers to the function over {±1}^{nR} with matching Fourier coefficients.

Roughly speaking, now that we have transferred to the Boolean domain, we are left to bound the k-norm of a noisy function on {±1}^{nR} based on its one-norm. This follows from Hypercontractivity in the Boolean setting, which bounds a higher norm of any noisy function on the Boolean domain in terms of a lower norm.
The description above hides several technical complications. For example, when we pass from a function [R]^n → [0, 1] to its "Boolean analogue" {±1}^{nR} → R, the range of the resulting function is no longer bounded to [0, 1]. This, along with the necessary degree truncation, means we must be careful when bounding norms. Further, Hypercontractivity only allows us to pass from k-norms to (1 + ε)-norms for small ε, so we cannot use the known 1-norm directly. For details on how we handle these issues, see Section 3. This allows us to prove soundness of our dictator test without passing through results on Gaussian space, as was done to prove the "Majority is Stablest" conjecture [24] at the core of the [16] 2-query dictator test.

Combining the above test with a standard reduction from Unique Games [16] would immediately give a UG-hardness result. However, we can get NP-hardness because it suffices for us to use Unique Games with completeness 1/2 from [8, 17]. This concludes the overview of our proof of Theorem 1.1.
Lastly, for our approximation algorithm, we simply extend Kindler, Kolla and Trevisan's algorithm by first creating an instance of Max 2-CSP_R from Max k-CSP_R by projecting each constraint to all possible subsets of two variables. We then use their algorithm to approximate this instance. Finally, we set our assignment to be the same as that from the KKT algorithm with some probability; otherwise, we pick the assignment uniformly at random from [R]. As we shall show in Section 4, with the right probability, this technique can extend not only the KKT algorithm but any algorithm for Max k_0-CSP_R to an algorithm for Max k-CSP_R where k > k_0, with some loss in the advantage over the naive randomized algorithm.
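The two generic steps just described, projecting onto pairs and then mixing with a uniformly random assignment, can be sketched as follows. This is our own illustration: constraints are represented as sets of satisfying tuples, and the KKT subroutine is treated as a black box whose output assignment is simply passed to `mix_with_uniform`.

```python
import itertools
import random

def project_to_pairs(constraints, k):
    """Project each weighted k-ary constraint (w, vars, sat) onto all C(k,2)
    pairs of its variables, keeping the pairwise projections of the satisfying
    tuples. The result is a Max 2-CSP instance over the same variables."""
    pairs = []
    num_pairs = k * (k - 1) // 2
    for w, vs, sat in constraints:
        for i, j in itertools.combinations(range(k), 2):
            pair_sat = {(t[i], t[j]) for t in sat}
            pairs.append((w / num_pairs, (vs[i], vs[j]), pair_sat))
    return pairs

def mix_with_uniform(assignment, R, p, rng=random):
    """Keep each variable's value (e.g., from the 2-CSP algorithm) with
    probability p; otherwise resample it uniformly from [R]."""
    return {v: val if rng.random() < p else rng.randrange(R)
            for v, val in assignment.items()}
```

One 3-ary constraint yields three pair constraints whose weights sum to the original weight, and mixing with p = 1 leaves the base assignment untouched.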
1.3 Organization of the paper
In Section 2, we define notation and list background knowledge that will be used throughout the paper. Next, we prove our hardness-of-approximation result (Theorem 1.1) in Section 3. In Section 4, we show how to extend Kindler et al.'s algorithm to Max k-CSP_R and prove Theorem 1.3. Finally, in Section 5, we discuss interesting open questions and directions for future research.
2 Preliminaries
In this section, we introduce notation and list prior results that will be used to prove our results.
Throughout the paper, we use log to denote the logarithm to base 2.
2.1 Max k-CSP_R

We start by giving a formal definition of Max k-CSP_R, which is the main focus of our paper.
Definition 2.1 (Max k-CSP_R). An instance (X, C) of (weighted) Max k-CSP_R consists of

• A set X of variables.

• A set C = {C_1, . . . , C_m} of constraints. Each constraint C_i is a triple (W_i, S_i, P_i) of a positive weight W_i > 0 such that ∑_{i=1}^m W_i = 1, a subset of variables S_i ⊆ X of size k, and a constraint P_i : [R]^{S_i} → {0, 1} that maps each assignment to the variables in S_i to {0, 1}. Here we use [R]^{S_i} to denote the set of all functions from S_i to [R], i. e., [R]^{S_i} = {ψ : S_i → [R]}.

For each assignment of variables ϕ : X → [R], we define its value to be the total weight of the constraints satisfied by this assignment, i. e., ∑_{i=1}^m W_i P_i(ϕ|_{S_i}). The goal is to find an assignment ϕ : X → [R] with maximum value. We sometimes call the optimum the value of (X, C).
Note that, while some definitions of Max k-CSP_R refer to the unweighted version where W_1 = · · · = W_m = 1/m, Crescenzi, Silvestri and Trevisan showed that the approximability of these two cases is essentially the same [7].⁴ Hence, it is enough for us to consider just the weighted version.
Throughout the analysis, we assume that R ≥ 16 and k ≥ 2. This is without loss of generality, as
otherwise our claimed inapproximability (Theorem 1.1) is trivial.
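As a concrete check of Definition 2.1, the value of an assignment can be computed mechanically. The representation below (predicates as sets of satisfying value-tuples) and the toy instance are our own illustration.

```python
def csp_value(constraints, phi):
    """Value of an assignment phi : X -> [R] on a weighted Max k-CSP_R
    instance. Each constraint is (W_i, S_i, sat_i), where S_i is a tuple of
    variables and sat_i is the set of value-tuples on which P_i equals 1;
    the weights W_i are assumed to sum to 1."""
    return sum(w for w, vs, sat in constraints
               if tuple(phi[v] for v in vs) in sat)

# A toy instance with two constraints of weights 0.6 and 0.4:
instance = [
    (0.6, ('x', 'y'), {(0, 0), (1, 1)}),   # "x = y"
    (0.4, ('y', 'z'), {(0, 1)}),           # "y = 0 and z = 1"
]
```

For example, the assignment {x: 1, y: 1, z: 1} satisfies only the first constraint, so its value is 0.6.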
2.2 Unique Games
In this subsection, we give formal definitions for unique games, d-to-1 games and Khot’s conjectures
about them. First, we give a formal definition of unique games.
Definition 2.2 (Unique Game). A unique game (V, W, E, n, {π_e}_{e∈E}) consists of

• A bipartite graph G = (V, W, E) which is regular⁵ on the V side.

• An alphabet size n.

• For each edge e ∈ E, a permutation π_e : [n] → [n].
For an assignment ϕ : V ∪ W → [n], an edge e = (v, w) is satisfied if πe (ϕ(v)) = ϕ(w). The goal is to
find an assignment that satisfies as many edges as possible. We define the value of an instance to be the
fraction of edges satisfied in the optimum solution.
⁴ More specifically, they proved that, if the weighted version is NP-hard to approximate to within ratio r, then the unweighted version is also NP-hard to approximate to within r − o_n(1), where o_n(1) represents a function such that o_n(1) → 0 as n → ∞.
⁵ The regularity can be assumed without loss of generality; see, for instance, Lemma 3.4 in [18].
The Unique Games Conjecture (UGC), proposed in Khot’s seminal paper [15], states that, for any
sufficiently small η, γ > 0, it is NP-hard to distinguish a unique game where at least a 1 − η fraction of
constraints can be satisfied from a unique game where at most a γ fraction of constraints can be satisfied.
While the UGC itself remains a major open question, recent breakthrough work [9, 8, 2, 17] has shown
NP-hardness of Unique Games when the completeness is arbitrarily close to 1/2, as stated formally below.
Theorem 2.3 ([8, 17]). For any γ > 0 and any ζ ∈ (γ, 1/2), there exists a constant n such that it is
NP-hard to distinguish a unique game with alphabet size n of value at least ζ from one of value at most γ.
2.3 Fourier expansion
For any function g : [q]^n → R over a finite alphabet [q], we define the Fourier expansion of g as follows. Consider the space of all functions [q] → R, with the inner product ⟨u, v⟩ := E_{x∈[q]}[u(x)v(x)], where the expectation is over a uniform x ∈ [q]. Pick an orthonormal basis l_1, . . . , l_q for this space (each l_i : [q] → R), such that l_1 is the constant function 1. We can now write g in the tensor-product basis, as

    g(x_1, x_2, . . . , x_n) = ∑_{s∈[q]^n} ĝ(s) · ∏_{i=1}^n l_{s(i)}(x_i).

Since we pick l_1(x) = 1 for all x ∈ [q], we also have E_{x∈[q]}[l_i(x)] = ⟨l_i, l_1⟩ = 0 for every i ≠ 1. Throughout, we use ĝ to refer to the Fourier coefficients of a function g.

For functions g : [q]^n → R, the p-norm is defined as

    ||g||_p = E_{x∈[q]^n}[|g(x)|^p]^{1/p}.

In particular, the squared 2-norm is

    ||g||_2^2 = E_{x∈[q]^n}[g(x)^2] = ∑_{s∈[q]^n} ĝ(s)^2.
2.3.1 Noise operators
For x ∈ [q]^n, let y ←_ρ x denote that y is a ρ-correlated copy of x. That is, each coordinate y_i is independently chosen to be equal to x_i with probability ρ, or chosen uniformly at random otherwise.

Define the noise operator T_ρ acting on any function g : [q]^n → R as

    (T_ρ g)(x) = E_{y ←_ρ x}[g(y)].

Notice that the noise operator T_ρ acts on the Fourier coefficients in this basis as follows:

    T_ρ g(x) = ∑_{s∈[q]^n} ĝ(s) · ρ^{|s|} · ∏_{i=1}^n l_{s(i)}(x_i),

where |s| is defined as |{i | s(i) ≠ 1}|.
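On a small domain the probabilistic description of T_ρ can be checked by brute force. The sketch below (our illustration) computes T_ρ g exactly from the definition; the composition law T_a T_b = T_{ab}, which it can verify numerically, mirrors the multiplicative damping ρ^{|s|} of each Fourier coefficient.

```python
import itertools

def noise_operator(g, q, n, rho):
    """Exact T_rho via the probabilistic definition: (T_rho g)(x) = E[g(y)],
    where each y_i independently equals x_i with probability rho and is
    uniform in [q] otherwise."""
    def Tg(x):
        total = 0.0
        for y in itertools.product(range(q), repeat=n):
            p = 1.0
            for xi, yi in zip(x, y):
                p *= rho * (xi == yi) + (1 - rho) / q
            total += p * g(y)
        return total
    return Tg
```

On [3]^2 one can verify that T_1 g = g, that T_0 g is the constant E[g], and that applying T_{0.5} after T_{0.4} agrees with T_{0.2}.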
2.3.2 Degree truncation
For any function g : [q]^n → R, let g^{≤d} denote the (≤ d)-degree part of g, i. e.,

    g^{≤d}(x) = ∑_{s∈[q]^n, |s|≤d} ĝ(s) · ∏_{i=1}^n l_{s(i)}(x_i),

and similarly let g^{>d} : [q]^n → R denote the (> d)-degree part of g, i. e.,

    g^{>d}(x) = ∑_{s∈[q]^n, |s|>d} ĝ(s) · ∏_{i=1}^n l_{s(i)}(x_i).

Notice that degree-truncation commutes with the noise operator, so writing T_ρ g^{≤d} is unambiguous. Also, notice that truncating does not increase the 2-norm:

    ||g^{≤d}||_2^2 = ∑_{s∈[q]^n, |s|≤d} ĝ(s)^2 ≤ ∑_{s∈[q]^n} ĝ(s)^2 = ||g||_2^2.

We frequently use the fact that noisy functions have small high-degree contributions. That is, for any function g : [q]^n → [0, 1], we have

    ||T_ρ g^{>d}||_2^2 = ∑_{s∈[q]^n, |s|>d} ρ^{2|s|} ĝ(s)^2 ≤ ρ^{2d} ∑_{s∈[q]^n} ĝ(s)^2 = ρ^{2d} ||g||_2^2 ≤ ρ^{2d}.
2.3.3 Influences
For any function g : [q]^n → R, the influence of coordinate i ∈ [n] is defined as

    Inf_i[g] = E_{x∈[q]^n}[Var_{x_i∈[q]}[g(x) | {x_j}_{j≠i}]].

This can be expressed in terms of the Fourier coefficients of g as follows:

    Inf_i[g] = ∑_{s∈[q]^n : s(i)≠1} ĝ(s)^2.

Further, the degree-d influences are defined as

    Inf_i^{≤d}[g] = Inf_i[g^{≤d}] = ∑_{s∈[q]^n : |s|≤d, s(i)≠1} ĝ(s)^2.
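The conditional-variance definition of influence can be checked directly by enumeration on a small domain (an illustration we add): a dictator g(x) = x_1 has Inf_1[g] equal to the variance of a uniform symbol, and zero influence on every other coordinate.

```python
import itertools
from statistics import pvariance

def influence(g, q, n, i):
    """Inf_i[g]: expectation over the other coordinates of the variance of
    g when x_i is resampled uniformly from [q]."""
    total = 0.0
    for rest in itertools.product(range(q), repeat=n - 1):
        x = list(rest[:i]) + [0] + list(rest[i:])
        vals = []
        for xi in range(q):
            x[i] = xi
            vals.append(g(tuple(x)))
        total += pvariance(vals)  # population variance over x_i
    return total / q ** (n - 1)
```

For q = 4 and g(x) = x_1, the variance of a uniform value in {0, 1, 2, 3} is 1.25, so Inf_1[g] = 1.25 while Inf_2[g] = 0.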
2.3.4 Binary functions
The previous discussion of Fourier analysis can be applied to Boolean functions, by specializing to q = 2. However, in this case the Fourier expansion can be written in a more convenient form. Let G : {+1, −1}^n → R be a Boolean function over n bits. We can choose orthonormal basis functions l_1(y) = 1 and l_2(y) = y, so G can be written as

    G(y) = ∑_{S⊆[n]} Ĝ(S) ∏_{i∈S} y_i

for some coefficients Ĝ(S). Degree-truncation then results in

    G^{≤d}(y) = ∑_{S⊆[n] : |S|≤d} Ĝ(S) ∏_{i∈S} y_i,

and the noise operator acts as follows:

    T_ρ G(y) = ∑_{S⊆[n]} Ĝ(S) ρ^{|S|} ∏_{i∈S} y_i.
2.3.5 Boolean analogues
To analyze k-CSP_R, we will want to translate between functions over [R]^n and functions over {±1}^{nR}. The following notion of Boolean analogues will be useful.

For any function g : [R]^n → R with Fourier coefficients ĝ(s), define the Boolean analogue of g to be the function {g} : {±1}^{n×R} → R given by

    {g}(z) = ∑_{s∈[R]^n} ĝ(s) · ∏_{i∈[n], s(i)≠1} z_{i,s(i)}.

Note that

    ||g||_2^2 = ∑_{s∈[R]^n} ĝ(s)^2 = ||{g}||_2^2,

and that {g^{≤d}} = {g}^{≤d}. Moreover, the noise operator acts nicely on {g}:

    T_ρ {g} = {T_ρ g}.

For simplicity, we use T_ρ to refer to both Boolean and non-Boolean noise operators with correlation ρ.
2.4 Invariance Principle and Mollification Lemma
We start with the Invariance Principle in the form of Theorem 3.18 in [24]:

Theorem 2.4 (General Invariance Principle [24]). Let f : [R]^n → R be any function such that Var[f] ≤ 1, Inf_i[f] ≤ δ for all i ∈ [n], and deg(f) ≤ d. Let F : {±1}^{nR} → R be its Boolean analogue: F = {f}. Consider any "test function" ψ : R → R that is C³, with bounded third derivative |ψ'''| ≤ C everywhere. Then,

    | E_{y∈{±1}^{nR}}[ψ(F(y))] − E_{x∈[R]^n}[ψ(f(x))] | ≤ C · 10^d · R^{d/2} · √δ.
Note that the above version follows directly from Theorem 3.18 and Hypothesis 3 of [24], and the fact that uniform ±1 bits are (2, 3, 1/√2)-hypercontractive as described in [24].

As we shall see later, we will want to apply the Invariance Principle for some functions ψ that are not in C³. However, these functions will be Lipschitz-continuous with some constant c ∈ R (or "c-Lipschitz"), meaning that

    |ψ(x + ∆) − ψ(x)| ≤ c|∆| for all x, ∆ ∈ R.

Therefore, similarly to Lemma 3.21 in [24], we can "smooth" ψ to get a function ψ̃ that is C³ and has arbitrarily small pointwise difference to ψ.
Lemma 2.5 (Mollification Lemma [24]). Let ψ : R → R be any c-Lipschitz function. Then for any ζ > 0, there exists a function ψ̃ : R → R such that

• ψ̃ ∈ C³,
• ||ψ̃ − ψ||_∞ ≤ ζ, and
• ||ψ̃'''||_∞ ≤ C̃ c³/ζ².

Here C̃ is a universal constant, not depending on ζ, c.

For completeness, the full proof of the lemma can be found in Section A.1.
Now we state the following version of the Invariance Principle, which will be more convenient to invoke. It can be proved by simply combining Theorem 2.4 and Lemma 2.5. We give a full proof in Section A.2.
Lemma 2.6 (Invariance Principle). Let ψ : R → R be one of the following functions:

1. ψ_1(t) := |t|,
2. ψ_k(t) := t^k if t ∈ [0, 1]; 0 if t < 0; 1 if t ≥ 1.

Let f : [R]^n → [0, 1] be any function with all Inf_i^{≤d}[f] ≤ δ. Let F : {±1}^{nR} → R be its Boolean analogue: F = {f}. Let f^{≤d} : [R]^n → R denote f truncated to degree d, and similarly for F^{≤d} : {±1}^{nR} → R.

Then, for parameters d = 10k log R and δ = 1/R^{10+100k log R}, we have

    | E_{y∈{±1}^{nR}}[ψ(F^{≤d}(y))] − E_{x∈[R]^n}[ψ(f^{≤d}(x))] | ≤ O(1/R^k).
2.5 Hypercontractivity Theorem
Another crucial ingredient in our proof is the Hypercontractivity Theorem (Bonami [3]), which says that, on the {±1}^n domain, the operator T_ρ smooths any function so well that a higher norm of the smoothed function can be bounded by a lower norm of the original (unsmoothed) function. Here we use the version of the theorem as stated in [25, Chap. 9].

Theorem 2.7 (Hypercontractivity Theorem (Bonami)). Let 1 ≤ p ≤ q ≤ ∞. For any ρ ≤ √((p−1)/(q−1)) and for any function h : {±1}^n → R, the following inequality holds:

    ||T_ρ h||_q ≤ ||h||_p.

In particular, for our choice of parameter ρ = 1/√((k−1) log R), we have

    ||T_{2ρ} h||_k ≤ ||h||_{1+ε},    (2.1)

where ε = 4/log R.
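To see why equation (2.1) is a direct instance of Theorem 2.7, note that the choice ρ = 1/√((k−1) log R) makes the noise rate 2ρ exactly saturate the hypothesis with p = 1 + ε and q = k:

```latex
2\rho \;=\; \frac{2}{\sqrt{(k-1)\log R}}
\;=\; \sqrt{\frac{4/\log R}{k-1}}
\;=\; \sqrt{\frac{(1+\varepsilon)-1}{k-1}}
\;=\; \sqrt{\frac{p-1}{q-1}},
\qquad \text{where } \varepsilon = \frac{4}{\log R},\; p = 1+\varepsilon,\; q = k,
```

so applying Theorem 2.7 with noise rate 2ρ yields exactly (2.1).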
3 Inapproximability of Max k-CSP_R

In this section, we prove Theorem 1.1. The proof itself is presented in Section 3.3. Before that, in Section 3.2 we first prove an inequality that is the heart of our soundness analysis.

3.1 Parameters

We use the following parameters throughout, which we list here for convenience:

• Correlation⁶: ρ = 1/√((k−1) log R)
• Degree: d = 10k log R
• Low-degree influences: δ = 1/R^{10+100k log R}
3.2 Hypercontractivity for noisy low-influence functions
Here we show a version of hypercontractivity for noisy low-influence functions over large domains.
Although hypercontractivity does not hold in general for noisy functions over large domains, it turns
out to hold in our setting of high-noise and low-influences. The main technical idea is to use the
Invariance Principle to reduce functions over larger domains to Boolean functions, then apply Boolean
hypercontractivity.
⁶ Note that for k = 2, this correlation yields a stability of ≈ 1/R for the plurality function. That is, Pr_{z,x,y}[plur(x_1, . . . , x_n) = plur(y_1, . . . , y_n)] ≈ 1/R, where each z_i ∈ [R] is i.i.d. uniform, and x_i, y_i are ρ-correlated copies of z_i.
Lemma 3.1 (Main Lemma). Let g : [R]^n → [0, 1] be any function with E_{x∈[R]^n}[g(x)] = 1/R. Then, for our choice of parameters ρ, d, δ: if Inf_i^{≤d}[g] ≤ δ for all i, then

    E_{x∈[R]^n}[(T_ρ g(x))^k] ≤ 2^{O(k)}/R^k.

Before presenting the full proof, we outline the high-level steps below. Let f = T_ρ g, and define Boolean analogues G = {g} and F = {f}. Let ψ_k : R → R be defined as in Lemma 2.6. Then,

    E_{x∈[R]^n}[f(x)^k] ≈ E[ψ_k(f^{≤d}(x))]                                           (3.1)
                       ≈ E_{y∈{±1}^{nR}}[ψ_k(F^{≤d}(y))]    (Lemma 2.6: Invariance Principle)  (3.2)
                       ≤ ||F^{≤d}||_k^k                     (definition of ψ_k)
                       = ||T_ρ G^{≤d}||_k^k                 (definition of F)
                       = ||T_{2ρ} T_{1/2} G^{≤d}||_k^k
                       ≤ ||T_{1/2} G^{≤d}||_{1+ε}^k         (Hypercontractivity, for small ε)  (3.3)
                       ≈ 2^{O(k)} ||g||_1^k                 (Invariance, etc.)
                       = 2^{O(k)}/R^k.                      (since E[|g|] = 1/R)
Proof of Lemma 3.1. To establish equation (3.1), first notice that

    ψ_k(f(x)) = ψ_k(f^{≤d}(x) + f^{>d}(x)) ≤ ψ_k(f^{≤d}(x)) + k|f^{>d}(x)|,

where the last inequality holds because the function ψ_k is k-Lipschitz. Moreover, since g(x) ∈ [0, 1], we have f(x) ∈ [0, 1], so f(x)^k = ψ_k(f(x)). Thus,

    E[f(x)^k] = E[ψ_k(f(x))]
             ≤ E[ψ_k(f^{≤d}(x))] + k E[|f^{>d}(x)|]
             = E[ψ_k(f^{≤d}(x))] + k ||f^{>d}||_1
             ≤ E[ψ_k(f^{≤d}(x))] + k ||f^{>d}||_2.

We can bound the 2-norm of f^{>d}: since f is noisy, we have

    ||f^{>d}||_2^2 = ||T_ρ g^{>d}||_2^2 ≤ ρ^{2d} ≤ O(1/R^{2k}),

where the last inequality comes from our choice of ρ and d.
So equation (3.1) is established:

    E[f(x)^k] ≤ E[ψ_k(f^{≤d}(x))] + O(k/R^k).

Equation (3.2) follows directly from our version of the Invariance Principle (Lemma 2.6), applied to the function ψ_k:

    E_{x∈[R]^n}[ψ_k(f^{≤d}(x))] ≤ E_{y∈{±1}^{nR}}[ψ_k(F^{≤d}(y))] + O(1/R^k).

We can now rewrite E_{y∈{±1}^{nR}}[ψ_k(F^{≤d}(y))] as

    E_{y∈{±1}^{nR}}[ψ_k(F^{≤d}(y))] ≤ E_{y∈{±1}^{nR}}[|F^{≤d}(y)|^k]
                                    = ||F^{≤d}||_k^k
                                    = ||T_ρ G^{≤d}||_k^k
                                    = ||T_{2ρ} T_{1/2} G^{≤d}||_k^k.

Now, from the Hypercontractivity Theorem, equation (2.1), we have

    ||T_{2ρ} T_{1/2} G^{≤d}||_k ≤ ||T_{1/2} G^{≤d}||_{1+ε}

for ε = 4/log R. This establishes equation (3.3):

    ||T_{2ρ} T_{1/2} G^{≤d}||_k^k ≤ ||T_{1/2} G^{≤d}||_{1+ε}^k = E[|T_{1/2} G^{≤d}(y)|^{1+ε}]^{k/(1+ε)}.
To show the remaining steps, we will apply the Invariance Principle once more. Notice that, since ε ≤ 1, for all t ∈ R we have |t|^{1+ε} ≤ |t| + t². Hence, we can derive the following bound:

    E[|T_{1/2} G^{≤d}(y)|^{1+ε}] ≤ E[|T_{1/2} G^{≤d}(y)|] + E[(T_{1/2} G^{≤d}(y))²]
                                 = E[|T_{1/2} G^{≤d}(y)|] + E[(T_{1/2} g^{≤d}(x))²]            (matching Fourier expansion)
                                 ≤ E[|T_{1/2} g^{≤d}(x)|] + E[(T_{1/2} g^{≤d}(x))²] + O(1/R^k). (Lemma 2.6, Invariance Principle)  (3.4)

Here we applied our Invariance Principle (Lemma 2.6) for the function ψ_1 as defined in Lemma 2.6. We will bound each of the expectations on the right-hand side, using the fact that g is balanced and T_{1/2} g is noisy. First,

    E[|T_{1/2} g^{≤d}(x)|] = E[|T_{1/2} g(x) − T_{1/2} g^{>d}(x)|]
                           ≤ E[|T_{1/2} g(x)|] + E[|T_{1/2} g^{>d}(x)|]    (triangle inequality)
                           = ||g||_1 + ||T_{1/2} g^{>d}||_1
                           ≤ ||g||_1 + ||T_{1/2} g^{>d}||_2
                           ≤ 1/R + (1/2)^d
                           = O(1/R).                                       (by our choice of d)
Second,

    E[(T_{1/2} g^{≤d}(x))²] = ∑_{s∈[R]^n, |s|≤d} (1/2)^{2|s|} ĝ(s)²
                            ≤ ∑_{s∈[R]^n} (1/2)^{2|s|} ĝ(s)²
                            = E[(T_{1/2} g(x))²]
                            ≤ E[T_{1/2} g(x)]          (since g ∈ [0, 1])
                            = E[g(x)] = 1/R.

Finally, plugging these bounds into equation (3.4), we find:

    ||T_{1/2} G^{≤d}||_{1+ε}^k = E[|T_{1/2} G^{≤d}(y)|^{1+ε}]^{k/(1+ε)}
                               ≤ (O(1/R))^{k/(1+ε)}
                               = 2^{O(k)}/R^{k/(1+ε)}
                               ≤ 2^{O(k)}/R^{k(1−ε)}
                               = 2^{O(k)}/R^k.          (recall ε = 4/log R)

This completes the proof of the main lemma.
3.3 Reducing Unique Games to M AX k-CSPR
Here we reduce Unique Games to M AX k-CSPR . We will construct a PCP verifier that reads k symbols
of the proof (with an alphabet of size R) with the following properties:
• (Completeness) If the unique game has value at least ζ , then the verifier accepts an honest proof
with probability at least c = ζ k /((log R)k/2 kO(k) ).
• (Soundness) If the unique game has value at most γ = 2O(k) δ 2 /(8dRk ), then the verifier accepts
any (potentially cheating) proof with probability at most s = 2O(k) /Rk−1 .
Since each symbol in the proof can be viewed as a variable and each accepting constraint of the verifier can be viewed as a constraint of MAX k-CSP_R, from Theorem 2.3, this PCP implies NP-hardness of approximating MAX k-CSP_R to within a factor of s/c = k^{O(k)}(log R)^{k/2}/R^{k−1} and, hence, establishes our Theorem 1.1.
3.3.1 The PCP
Given a unique game (V, W, E, n, {π_e}_{e∈E}), the proof is the truth-table of a function h_w : [R]^n → [R] for each vertex w ∈ W. By folding, we can assume h_w is balanced, i.e., h_w takes on all elements of its range with equal probability: Pr_{x∈[R]^n}[h_w(x) = i] = 1/R for all i ∈ [R].⁷
⁷ More precisely, if the truth-table provided is of some function h̃_w : [R]^n → [R], we define the “folded” function h_w as
h_w(x_1, x_2, …, x_n) := h̃_w(0, x_2 − x_1, …, x_n − x_1) + x_1, where the ± is over mod R. Notice that the folded h_w is balanced, and also that folding does not affect dictator functions. Thus we define our PCP in terms of h_w, but simulate queries to h_w using the actual proof h̃_w.
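The folding trick of the footnote is easy to verify directly. The sketch below is illustrative only (we take the values of [R] to be {0, …, R−1} for convenience; `fold` and `h_tilde` are our names, not the paper's): it checks that folding any truth-table yields a balanced function and that dictators fold to themselves.

```python
from itertools import product
from collections import Counter

R, n = 3, 3

def fold(h_tilde):
    """Fold h~ : [R]^n -> [R] into h(x) := h~(0, x2-x1, ..., xn-x1) + x1 (mod R)."""
    def h(x):
        shifted = (0,) + tuple((xi - x[0]) % R for xi in x[1:])
        return (h_tilde(shifted) + x[0]) % R
    return h

# 1) Folding makes any function balanced: each value is hit exactly R^(n-1) times,
#    since for each fixed difference vector, h = const + x1 is uniform over x1.
h = fold(lambda x: (x[0] * x[1] + x[2]) % R)      # an arbitrary, unbalanced h~
counts = Counter(h(x) for x in product(range(R), repeat=n))
assert all(counts[i] == R ** (n - 1) for i in range(R))

# 2) Folding preserves dictators: if h~(x) = x_j, then the folded h is still x_j.
for j in range(n):
    hd = fold(lambda x, j=j: x[j])
    assert all(hd(x) == x[j] for x in product(range(R), repeat=n))
print("folded function is balanced; dictators are preserved")
```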
Notationally, for x ∈ [R]^n, let (x ∘ π) denote permuting the coordinates of x as (x ∘ π)_i = x_{π(i)}. Also, for an edge e = (v, w), we write π_e = π_{v,w}, and define π_{w,v} = π_{v,w}^{−1}.
The verifier picks a uniformly random vertex v ∈ V, and k independent uniformly random neighbors of v: w_1, w_2, …, w_k ∈ W. Then it picks z ∈ [R]^n uniformly at random, and lets x^{(1)}, x^{(2)}, …, x^{(k)} be independent ρ-correlated noisy copies of z (each coordinate x_i chosen as equal to z_i with probability ρ, or uniformly at random otherwise). The verifier accepts if and only if
h_{w_1}(x^{(1)} ∘ π_{w_1,v}) = h_{w_2}(x^{(2)} ∘ π_{w_2,v}) = ⋯ = h_{w_k}(x^{(k)} ∘ π_{w_k,v}).
To achieve the desired hardness result, we pick ρ = 1/√((k − 1) log R).
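The verifier's test can be sketched in code. Everything below is an illustrative toy (the graph, the cyclic permutations, and all variable names are our hypothetical choices, with [R] taken as {0, …, R−1}); it exercises the completeness mechanism: with honest long-code proofs for labels that satisfy the sampled edges, the verifier accepts whenever no query is perturbed at coordinate ϕ(v).

```python
import random
random.seed(0)

R, n, k, rho = 4, 5, 3, 0.5

def noisy_copy(z):
    """Each coordinate equals z_i w.p. rho, else uniform in [R]."""
    return tuple(zi if random.random() < rho else random.randrange(R) for zi in z)

def apply_perm(x, pi):          # (x o pi)_i = x_{pi(i)}
    return tuple(x[pi[i]] for i in range(n))

# One vertex v with k neighbors w_1..w_k, connected by cyclic permutations pi_{w_t, v}.
perms = [tuple((i + t) % n for i in range(n)) for t in range(1, k + 1)]
phi_v = 2                                      # label of v
phi_w = [pi.index(phi_v) for pi in perms]      # labels with pi_{w,v}(phi(w)) = phi(v)
h_w = [lambda x, j=j: x[j] for j in phi_w]     # honest long codes: h_w(x) = x_{phi(w)}

accepts, trials = 0, 2000
for _ in range(trials):
    z = tuple(random.randrange(R) for _ in range(n))
    xs = [noisy_copy(z) for _ in range(k)]
    vals = [h_w[t](apply_perm(xs[t], perms[t])) for t in range(k)]
    # If no query was perturbed at coordinate phi(v), all answers equal z_{phi(v)}:
    if all(x[phi_v] == z[phi_v] for x in xs):
        assert len(set(vals)) == 1
    accepts += len(set(vals)) == 1
print(f"empirical acceptance rate {accepts / trials:.3f} (at least rho^k = {rho**k:.3f} expected)")
```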
3.3.2 Completeness analysis
Let the degree of each vertex in V be Δ.
Suppose that the original unique game has an assignment of value at least ζ. Let us call this assignment ϕ. The honest proof defines h_w at each vertex w ∈ W as the long code encoding of this assignment, i.e., h_w(x) = x_{ϕ(w)}. We can write the verifier's acceptance condition as follows:
The verifier accepts ⇔ h_{w_1}(x^{(1)} ∘ π_{w_1,v}) = ⋯ = h_{w_k}(x^{(k)} ∘ π_{w_k,v})
⇔ (x^{(1)} ∘ π_{w_1,v})_{ϕ(w_1)} = ⋯ = (x^{(k)} ∘ π_{w_k,v})_{ϕ(w_k)}
⇔ (x^{(1)})_{π_{w_1,v}(ϕ(w_1))} = ⋯ = (x^{(k)})_{π_{w_k,v}(ϕ(w_k))}.
Observe that, if the edges (v, w_1), …, (v, w_k) are satisfied by ϕ, then π_{w_1,v}(ϕ(w_1)) = ⋯ = π_{w_k,v}(ϕ(w_k)) = ϕ(v). Hence, if the aforementioned edges are satisfied and x^{(1)}, …, x^{(k)} are not perturbed at coordinate ϕ(v), then (x^{(1)})_{π_{w_1,v}(ϕ(w_1))} = ⋯ = (x^{(k)})_{π_{w_k,v}(ϕ(w_k))}.
For each u ∈ V, let s_u be the number of satisfied edges touching u. Since w_1, …, w_k are chosen from the neighbors of v independently from each other, the probability that the edges (v, w_1), (v, w_2), …, (v, w_k) are satisfied can be bounded as follows:
Pr_{v,w_1,…,w_k}[(v, w_1), …, (v, w_k) are satisfied]
= ∑_{u∈V} Pr_{w_1,…,w_k}[(v, w_1), …, (v, w_k) are satisfied | v = u] · Pr[v = u]
= ∑_{u∈V} (s_u/Δ)^k Pr[v = u]
= E_{u∈V}[(s_u/Δ)^k]
≥ (E_{u∈V}[s_u/Δ])^k. (Jensen's inequality, as t ↦ t^k is convex)
Notice that E_{u∈V}[s_u/Δ] is exactly the value of ϕ, which is at least ζ. As a result,
Pr_{v,w_1,…,w_k}[(v, w_1), …, (v, w_k) are satisfied] ≥ ζ^k.
Furthermore, the probability that x^{(1)}, …, x^{(k)} are not perturbed at the coordinate ϕ(v) is ρ^k. As a result, the PCP accepts with probability at least ζ^k ρ^k. When ρ = 1/√((k − 1) log R) and ζ is a constant not depending on k and R, the completeness is 1/((log R)^{k/2} k^{O(k)}).
3.3.3 Soundness analysis
Suppose that the unique game has value at most γ = 2^{O(k)} δ²/(8dR^k). We will show that the soundness is 2^{O(k)}/R^{k−1}.
Suppose for the sake of contradiction that the verifier accepts with probability Pr[accept] > t = 2^{Ω(k)}/R^{k−1}, where Ω(·) hides a large enough constant.
Let h^i_w : [R]^n → {0, 1} be the indicator function for h_w(x) = i, and let x ←_ρ z denote that x is a ρ-correlated copy of z. We have
Pr[accept] = Pr[h_{w_1}(x^{(1)} ∘ π_{w_1,v}) = ⋯ = h_{w_k}(x^{(k)} ∘ π_{w_k,v})]
= ∑_{i∈[R]} Pr[i = h_{w_1}(x^{(1)} ∘ π_{w_1,v}) = ⋯ = h_{w_k}(x^{(k)} ∘ π_{w_k,v})]
= ∑_{i∈[R]} E[h^i_{w_1}(x^{(1)} ∘ π_{w_1,v}) ⋯ h^i_{w_k}(x^{(k)} ∘ π_{w_k,v})]
= ∑_{i∈[R]} E[E_{w_1}[h^i_{w_1}(x^{(1)} ∘ π_{w_1,v})] ⋯ E_{w_k}[h^i_{w_k}(x^{(k)} ∘ π_{w_k,v})]],
where the last equality follows since the w_i's are independent given v.
Now define g^i_v : [R]^n → [0, 1] as
g^i_v(x) = E_{w∼v}[h^i_w(x ∘ π_{w,v})],
where w ∼ v ranges over the neighbors w of v.
We can rewrite Pr[accept] as follows:
Pr[accept] = ∑_{i∈[R]} E[g^i_v(x^{(1)}) g^i_v(x^{(2)}) ⋯ g^i_v(x^{(k)})]
= ∑_{i∈[R]} E[(E_{x←_ρ z}[g^i_v(x)])^k] (Since the x^{(j)}'s are independent given z)
= ∑_{i∈[R]} E_{v,z}[(T_ρ g^i_v(z))^k]
= E_v[∑_{i∈[R]} E_z[(T_ρ g^i_v(z))^k]]. (3.5)
Next, notice that ∑_{i∈[R]} E_z[(T_ρ g^i_v(z))^k] is simply the probability that the verifier accepts given that it picks vertex v, and thus this sum is bounded above by 1.
Therefore, since Pr[accept] > t, by equation (3.5), at least a t/2 fraction of vertices v ∈ V have
∑_{i∈[R]} E_z[(T_ρ g^i_v(z))^k] ≥ t/2.
For these “good” vertices, there must exist some i ∈ [R] for which
E_z[(T_ρ g^i_v(z))^k] ≥ t/(2R).
Then for “good” v and i as above,
E_z[(T_ρ g^i_v(z))^k] > 2^{Ω(k)}/R^k.
By Lemma 3.1 (Main Lemma), this means g^i_v has some coordinate j for which
Inf^{≤d}_j[g^i_v] > δ
for our choice of d, δ as defined in Section 3.1. Pick this j as the label of vertex v ∈ V.
Now, to pick the label of a vertex w ∈ W, define the candidate labels as
Cand[w] = {j ∈ [n] : ∃ i ∈ [R] s.t. Inf^{≤d}_j[h^i_w] ≥ δ/2}.
Notice that
∑_{j∈[n]} Inf^{≤d}_j[h^i_w] = ∑_{s∈[R]^n : |s|≤d} |s| ĥ^i_w(s)² ≤ d ∑_{s : |s|>0} ĥ^i_w(s)² = d Var[h^i_w] ≤ d.
So, for each i ∈ [R], the projection h^i_w can have at most 2d/δ coordinates with influence ≥ δ/2. Therefore, the number of candidate labels is bounded:
|Cand[w]| ≤ 2dR/δ.
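The influence bound ∑_j Inf^{≤d}_j[h] ≤ d·Var[h] used above can be checked concretely. For simplicity, the sketch below works over the Boolean domain {−1, 1}^n with the standard Fourier expansion rather than over [R]^n (the identity ∑_j Inf^{≤d}_j = ∑_{|S|≤d} |S| f̂(S)² is the same); the function and parameters are our illustrative choices.

```python
from itertools import product, combinations

n, d = 4, 2

def f(x):                                  # an arbitrary 0/1-valued function
    return 1 if (x[0] * x[1] + x[2] - x[3] > 0) else 0

points = list(product([-1, 1], repeat=n))

def chi(S, x):                             # character chi_S(x) = prod_{i in S} x_i
    out = 1
    for i in S:
        out *= x[i]
    return out

# Fourier coefficients f^(S) = E[f(x) chi_S(x)] over the uniform distribution.
fhat = {S: sum(f(x) * chi(S, x) for x in points) / len(points)
        for r in range(n + 1) for S in combinations(range(n), r)}

# Low-degree influences Inf^{<=d}_i[f] = sum over S containing i with |S| <= d.
inf_le_d = [sum(fhat[S] ** 2 for S in fhat if i in S and len(S) <= d)
            for i in range(n)]
variance = sum(fhat[S] ** 2 for S in fhat if len(S) > 0)
assert sum(inf_le_d) <= d * variance + 1e-12
print(f"sum of low-degree influences = {sum(inf_le_d):.4f} <= d*Var = {d * variance:.4f}")
```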
Now we argue that picking a random label in Cand[w] for w ∈ W is in expectation a good decoding. We will show that if we assigned label j to a “good” v ∈ V, then π_{v,w}(j) ∈ Cand[w] for a constant fraction of neighbors w ∼ v. Note here that π_{v,w} = π_{w,v}^{−1}.
First, since g^i_v(x) = E_{w∼v}[h^i_w(x ∘ π_{w,v})], the Fourier transform of g^i_v is related to the Fourier transforms of the long code labels h^i_w as
ĝ^i_v(s) = E_{w∼v}[ĥ^i_w(s ∘ π_{w,v})].
Hence, if the influence Inf^{≤d}_j[g^i_v] of g^i_v is large, then the expected influence Inf^{≤d}_{π_{v,w}(j)}[h^i_w] over the neighbor labels w ∼ v is also large, as formalized below.
δ < Inf^{≤d}_j[g^i_v] = ∑_{s∈[R]^n : |s|≤d, s_j≠1} ĝ^i_v(s)²
= ∑_{s∈[R]^n : |s|≤d, s_j≠1} E_{w∼v}[ĥ^i_w(s ∘ π_{w,v})]²
≤ ∑_{s∈[R]^n : |s|≤d, s_j≠1} E_{w∼v}[ĥ^i_w(s ∘ π_{w,v})²] (Jensen's inequality)
= E_{w∼v}[∑_{s∈[R]^n : |s|≤d, s_j≠1} ĥ^i_w(s ∘ π_{w,v})²]
= E_{w∼v}[∑_{s∈[R]^n : |s|≤d, s_{π_{w,v}^{−1}(j)}≠1} ĥ^i_w(s)²]
(Since π_{v,w} = π_{w,v}^{−1}) = E_{w∼v}[∑_{s∈[R]^n : |s|≤d, s_{π_{v,w}(j)}≠1} ĥ^i_w(s)²]
= E_{w∼v}[Inf^{≤d}_{π_{v,w}(j)}[h^i_w]].
Therefore, at least a δ/2 fraction of neighbors w ∼ v must have Inf^{≤d}_{π_{v,w}(j)}[h^i_w] ≥ δ/2, and so π_{v,w}(j) ∈ Cand[w] for at least a δ/2 fraction of the neighbors of “good” vertices v.
Finally, recall that at least a t/2 fraction of vertices v ∈ V are “good”. These vertices have at least a δ/2 fraction of neighbors w ∈ W with high-influence labels, and the matching label in Cand[w] is picked with probability at least δ/(2dR). Moreover, as stated earlier, we can assume that the graph is regular on the V side. Hence, the expected fraction of edges satisfied by this decoding is at least
(t/2)(δ/2)(δ/(2dR)) = tδ²/(8dR) = 2^{Ω(k)} δ²/(8dR^k) > γ,
which contradicts our assumption that the unique game has value at most γ. Hence, we can conclude that the soundness is at most 2^{O(k)}/R^{k−1}, as desired.
4 Ω(log R/R^{k−1})-approximation algorithm for MAX k-CSP_R
Instead of just extending the KKT algorithm to work with MAX k-CSPs, we will show the more general statement that any algorithm that approximates MAX CSPs with small arity can be extended to approximate MAX CSPs with larger arities. In particular, we show how to extend any f(R)/R^{k′}-approximation algorithm for MAX k′-CSP_R to an (f(R)/2^{O(min{k′, k−k′})})/R^k-approximation algorithm for MAX k-CSP_R, where k > k′.
Since the naive algorithm that assigns every variable randomly has an approximation ratio of 1/R^k, we think of f(R) as the advantage of algorithm A over the randomized algorithm. From this perspective, our extension lemma preserves the advantage up to a factor of 1/2^{O(min{k′, k−k′})}.
The extension lemma and its proof are stated formally below.
Lemma 4.1. Suppose that there exists a polynomial-time approximation algorithm A for MAX k′-CSP_R that outputs an assignment with expected value at least f(R)/R^{k′} times the optimum. For any k > k′, we can construct a polynomial-time approximation algorithm B for MAX k-CSP_R that outputs an assignment with expected value at least (f(R)/2^{O(min{k′, k−k′})})/R^k times the optimum.
Proof. The main idea of the proof is simple. We turn an instance of MAX k-CSP_R into an instance of MAX k′-CSP_R by constructing (k choose k′) R^{k−k′} new constraints for each original constraint; each new constraint is a projection of the original constraint to a subset of variables of size k′. We then use A to solve the newly constructed instance. Finally, B simply assigns each variable with the assignment from A with a certain probability and assigns it randomly otherwise.
For convenience, let α be (k − k′)/k. We define B on input (X, C) as follows:
1. Create an instance (X, C′) of MAX k′-CSP_R with the same variables and, for each C = (W, S, P) ∈ C and for every subset S′ of S with |S′| = k′ and every τ ∈ [R]^{S−S′}, create a constraint C^{S′,τ} = (W′, S′, P′) in C′, where W′ = W/((k choose k′) R^{k−k′}) and P′ : [R]^{S′} → {0, 1} is defined by
P′(ψ) = P(ψ ∘ τ).
Here ψ ∘ τ is defined as follows:
ψ ∘ τ(x) = ψ(x) if x ∈ S′, and τ(x) otherwise.
2. Run A on input (X, C′). Denote the output of A by ϕ_A.
3. For each x ∈ X, with probability α, pick ϕ_B(x) randomly from [R]. Otherwise, let ϕ_B(x) be ϕ_A(x).
4. Output ϕ_B.
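Step 1 of algorithm B can be sketched concretely. The following is an illustrative sketch under our own representation choices (a constraint is a triple of weight, variable tuple, and predicate; [R] is {0, …, R−1}); it verifies two facts used in the proof: one original constraint yields exactly (k choose k′)·R^{k−k′} projected constraints, and any assignment retains at least a 1/R^{k−k′} fraction of its value.

```python
from itertools import combinations, product
from math import comb
from fractions import Fraction

R, k, kp = 3, 3, 2                  # kp plays the role of k'

S = (0, 1, 2)                       # variable indices of one k-ary constraint
P = lambda a: 1 if sum(a) % R == 0 else 0   # predicate on tuples in [R]^S
W = Fraction(1)

# Build the projected constraints C^{S', tau}, each weighted W / (C(k,k') R^(k-k')).
new_constraints = []
Wp = W / (comb(k, kp) * R ** (k - kp))
for Sp in combinations(S, kp):
    rest = tuple(v for v in S if v not in Sp)
    for tau in product(range(R), repeat=len(rest)):
        def Pp(a, Sp=Sp, rest=rest, tau=tau):
            full = dict(zip(Sp, a)); full.update(zip(rest, tau))
            return P(tuple(full[v] for v in S))
        new_constraints.append((Wp, Sp, Pp))

assert len(new_constraints) == comb(k, kp) * R ** (k - kp)

# Value preservation: for each S', the tau matching phi on S - S' reproduces
# P(phi|_S) with weight Wp, and there are C(k,k') such subsets S'.
phi = (1, 2, 0)                     # a satisfying assignment here: 1+2+0 = 0 mod 3
val_new = sum(w * pred(tuple(phi[v] for v in Sp))
              for (w, Sp, pred) in new_constraints)
assert val_new >= W * P(phi) * Fraction(1, R ** (k - kp))
print("constraint count and value-preservation checks passed")
```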
We now show that ϕ_B has expected value at least (f(R)/2^{O(min{k′, k−k′})})/R^k times the optimum.
First, observe that the optimum of (X, C′) is at least 1/R^{k−k′} times that of (X, C). To see that this is true, consider any assignment ϕ : X → [R] and any constraint C = (W, S, P). Its weighted contribution in (X, C) is W · P(ϕ|_S). On the other hand, (W/((k choose k′) R^{k−k′})) · P(ϕ|_S) appears (k choose k′) times in (X, C′), once for each subset S′ ⊆ S of size k′ (namely, for the constraint C^{S′,τ} with τ = ϕ|_{S−S′}). Hence, the value of ϕ with respect to (X, C′) is at least 1/R^{k−k′} times its value with respect to (X, C).
Recall that the algorithm A gives an assignment of expected value at least f(R)/R^{k′} times the optimum of (X, C′). Hence, the expected value of ϕ_A is at least f(R)/R^k times the optimum of (X, C).
Next, we will compute the expected value of ϕ_B (with respect to (X, C)). We start by computing the expected value of ϕ_B with respect to a fixed constraint C = (W, S, P) ∈ C, i.e., E_{ϕ_B}[W P(ϕ_B|_S)]. For each S′ ⊆ S of size k′, we define D_{S′} as the event where, in Step 3, ϕ_B(x) is assigned to be ϕ_A(x) for all x ∈ S′ and ϕ_B(x) is assigned randomly for all x ∈ S − S′.
Since the events D_{S′} are disjoint over all S′ ⊆ S of size k′, we have the following inequality:
E_{ϕ_B}[W P(ϕ_B|_S)] ≥ ∑_{S′⊆S, |S′|=k′} Pr[D_{S′}] E_{ϕ_B}[W P(ϕ_B|_S) | D_{S′}]
= α^{k−k′}(1 − α)^{k′} ∑_{S′⊆S, |S′|=k′} W E_{ϕ_B}[P(ϕ_B|_S) | D_{S′}],
where the equality follows from Pr[D_{S′}] = α^{k−k′}(1 − α)^{k′}.
Moreover, since every variable in S − S′ is randomly assigned when D_{S′} occurs, E[P(ϕ_B|_S) | D_{S′}] can be viewed as the average value of P((ϕ_A|_{S′}) ∘ τ) over all τ ∈ [R]^{S−S′}. Hence, we can derive the following:
E_{ϕ_B}[P(ϕ_B|_S) | D_{S′}] = (1/R^{k−k′}) E_{ϕ_A}[∑_{τ∈[R]^{S−S′}} P((ϕ_A|_{S′}) ∘ τ)].
As a result, we have
E_{ϕ_B}[W P(ϕ_B|_S)] ≥ (α^{k−k′}(1 − α)^{k′}/R^{k−k′}) W E_{ϕ_A}[∑_{S′⊆S, |S′|=k′} ∑_{τ∈[R]^{S−S′}} P((ϕ_A|_{S′}) ∘ τ)]. (4.1)
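The probability Pr[D_{S′}] = α^{k−k′}(1 − α)^{k′} used above, and the disjointness of the events D_{S′}, can be verified by exact enumeration over the k independent coins of Step 3. The sketch below is an illustrative check in exact rational arithmetic, with parameters of our choosing.

```python
from fractions import Fraction
from itertools import product, combinations

k, kp = 4, 2                        # kp plays the role of k'
alpha = Fraction(k - kp, k)

# Each of the k variables of S is independently "randomized" (True) w.p. alpha.
# D_{S'} is the event that exactly the variables outside S' are randomized.
total = Fraction(0)
for Sp in combinations(range(k), kp):
    pr = Fraction(0)
    for coins in product([True, False], repeat=k):
        p = Fraction(1)
        for c in coins:
            p *= alpha if c else (1 - alpha)
        if all(coins[i] == (i not in Sp) for i in range(k)):
            pr += p                 # exactly one coin pattern realizes D_{S'}
    assert pr == alpha ** (k - kp) * (1 - alpha) ** kp
    total += pr
# The events D_{S'} are disjoint, so their probabilities sum to at most 1.
assert total <= 1
print(f"each Pr[D_S'] = {alpha ** (k - kp) * (1 - alpha) ** kp}, sum over S' = {total}")
```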
By summing equation (4.1) over all constraints C ∈ C, we arrive at the following inequality:
E_{ϕ_B}[∑_{C=(W,S,P)∈C} W P(ϕ_B|_S)]
≥ (α^{k−k′}(1 − α)^{k′}/R^{k−k′}) E_{ϕ_A}[∑_{C=(W,S,P)∈C} ∑_{S′⊆S, |S′|=k′} ∑_{τ∈[R]^{S−S′}} W P((ϕ_A|_{S′}) ∘ τ)]
= (k choose k′) α^{k−k′}(1 − α)^{k′} E_{ϕ_A}[∑_{C=(W,S,P)∈C} ∑_{S′⊆S, |S′|=k′} ∑_{τ∈[R]^{S−S′}} (W/((k choose k′) R^{k−k′})) P((ϕ_A|_{S′}) ∘ τ)]
= (k choose k′) α^{k−k′}(1 − α)^{k′} E_{ϕ_A}[∑_{C′=(W′,S′,P′)∈C′} W′ P′(ϕ_A|_{S′})].
The first expression is the expected value of ϕ_B, whereas the last is (k choose k′) α^{k−k′}(1 − α)^{k′} times the expected value of ϕ_A. Since the expected value of ϕ_A is at least f(R)/R^k times the optimum of (X, C), the expected value of ϕ_B is at least ((k choose k′) α^{k−k′}(1 − α)^{k′})(f(R)/R^k) times the optimum of (X, C).
Finally, we substitute α = (k − k′)/k in to get
(k choose k′) α^{k−k′}(1 − α)^{k′} = (k choose k′) ((k − k′)/k)^{k−k′} (k′/k)^{k′}.
Let l = min{k′, k − k′}. We then have
(k choose k′) ((k − k′)/k)^{k−k′} (k′/k)^{k′} = (k choose l) ((k − l)/k)^{k−l} (l/k)^l
≥ (k/l)^l ((k − l)/k)^{k−l} (l/k)^l
= ((k − l)/k)^{k−l}
≥ ((k − l)/k)^k
= ((1 − l/k)^{0.5k/l})^{2l}
≥ 1/2^{2l},
where the last inequality follows from Bernoulli's inequality and from l ≤ 0.5k.
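The chain of inequalities above can be double-checked numerically over a range of k and k′. This is an illustrative verification in exact rational arithmetic, not part of the proof.

```python
from fractions import Fraction
from math import comb

def advantage_factor(k, kp):
    """C(k, k') * ((k-k')/k)^(k-k') * (k'/k)^(k'), as in the proof of Lemma 4.1."""
    a = Fraction(k - kp, k)
    return comb(k, kp) * a ** (k - kp) * (1 - a) ** kp

# Verify the lower bound 1/2^(2l) with l = min{k', k-k'} for a range of arities.
for k in range(2, 15):
    for kp in range(1, k):
        l = min(kp, k - kp)
        assert advantage_factor(k, kp) >= Fraction(1, 2 ** (2 * l))
print("lower bound 1/2^(2l) verified for all 2 <= k <= 14, 1 <= k' < k")
```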
Hence, ϕ_B has expected value at least (f(R)/2^{O(l)})/R^k times the optimum of (X, C), which completes the proof of this lemma.
Finally, Theorem 1.3 is an immediate consequence of applying Lemma 4.1 to the algorithm from [20] with k′ = 2 and f(R) = Ω(R log R).
5 Conclusion and open question
In this article we provide a hardness result and an approximation algorithm for MAX k-CSP_R. The former is subsumed by the independent work of Khot and Saket [19], whereas the latter remains the best known algorithm in the regime 3 ≤ k = o(log R). Even with our results, the current inapproximability results do not match the best approximation ratio known to be achievable in polynomial time when 3 ≤ k < R. Hence, it is intriguing to ask at what ratio approximating MAX k-CSP_R becomes NP-hard. Since Khot and Saket's hardness factor O(k² log(kR)/R^{k−1}) [19] does not match Chan's hardness factor O(k/R^{k−2}) [4] when k = R, it is plausible that there is a k between 3 and R − 1 at which a drastic change in the hardness factor, and in the proof technique, occurs.
A Proofs of preliminary results
For completeness, we prove some of the preliminary results whose formal proofs we could not find in the literature.
A.1 Mollification Lemma
Below is the proof of the Mollification Lemma. We remark that, while its main idea is explained in [25],
the full proof is not shown there. Hence, we provide the proof here for completeness.
Proof (of Lemma 2.5). Let p : ℝ → ℝ be a C⁴ function supported only on [−1, +1], such that p(y) forms a probability density. (For example, an appropriately normalized version of e^{−1/(1−y²)} for |y| < 1.) Define p_λ(y) to be p rescaled to have support [−λ, +λ] for some λ > 0:
p_λ(y) := (1/λ) p(y/λ).
Let Y_λ be a random variable with density p_λ(y), supported on [−λ, +λ]. We will set λ = ζ/c. Now, define
ψ̃(x) := E_{Y_λ}[ψ(x + Y_λ)].
This is pointwise close to ψ, since ψ is c-Lipschitz:
|ψ̃(x) − ψ(x)| = |E_{Y_λ}[ψ(x + Y_λ) − ψ(x)]| ≤ E_{Y_λ}[|ψ(x + Y_λ) − ψ(x)|] ≤ E_{Y_λ}[c|Y_λ|] ≤ cλ = ζ.
Further, ψ̃ is C³, because ψ̃ can be written as a convolution:
ψ̃(x) = (ψ ∗ p_λ)(x) ⟹ ψ̃′′′ = (ψ ∗ p_λ)′′′ = ψ ∗ p_λ′′′.
To see that ψ̃′′′ is bounded, for a fixed x ∈ ℝ, define the constant z := ψ(x). Then,
|ψ̃′′′(x)| = |(ψ ∗ p_λ′′′)(x)|
= |(ψ ∗ p_λ′′′ − z′ ∗ p_λ′′)(x)| (z is constant, so z′ = 0)
= |(ψ ∗ p_λ′′′ − z ∗ p_λ′′′)(x)|
= |((ψ − z) ∗ p_λ′′′)(x)|
= |∫_{−∞}^{+∞} p_λ′′′(y)(ψ(x − y) − z) dy|
= |∫_{−∞}^{+∞} p_λ′′′(y)(ψ(x − y) − ψ(x)) dy|
≤ ∫_{−λ}^{+λ} |p_λ′′′(y)| |ψ(x − y) − ψ(x)| dy
≤ ||p_λ′′′||_∞ ∫_{−λ}^{+λ} |cy| dy (c-Lipschitz)
= ||p_λ′′′||_∞ cλ².
Define the universal constant C̃ := ||p′′′||_∞. We have
p_λ′′′(y) = (1/λ⁴) p′′′(y/λ) ⟹ ||p_λ′′′||_∞ ≤ (1/λ⁴) C̃.
With our choice of λ = ζ/c, this yields |ψ̃′′′(x)| ≤ C̃c³/ζ², which completes the proof of Lemma 2.5.
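The pointwise-closeness step of the proof can be illustrated numerically: smoothing a c-Lipschitz function with a bump of width λ moves it by at most cλ. The sketch below uses a simple quadrature; the particular ψ, the grid size, and the test points are our own illustrative choices.

```python
import math

c, lam = 2.0, 0.1
psi = lambda x: c * abs(x)          # a c-Lipschitz test function

# Bump density p(y) proportional to exp(-1/(1 - y^2)) on (-1, 1), width rescaled to lam.
M = 2000
ys = [-1 + (i + 0.5) * (2 / M) for i in range(M)]       # midpoints, never hitting +-1
w = [math.exp(-1 / (1 - y * y)) for y in ys]
Z = sum(w)                          # normalize weights to sum to 1

def psi_tilde(x):                   # E[psi(x + Y_lam)], Y_lam supported on [-lam, lam]
    return sum(wi * psi(x + lam * y) for wi, y in zip(w, ys)) / Z

# Each term of the convex combination moves by at most c * lam * |y| <= c * lam.
for x in [-1.0, -0.3, 0.0, 0.05, 0.7, 3.0]:
    assert abs(psi_tilde(x) - psi(x)) <= c * lam + 1e-9
print("mollified function stays within c*lambda of the original at all test points")
```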
A.2 Proof of Lemma 2.6
Below we show the proof of Lemma 2.6.
Proof of Lemma 2.6. First, we “mollify” the function ψ to construct a C³ function ψ̃, by applying Lemma 2.5 with ζ = 1/R^k. Notice that both choices of ψ are k-Lipschitz. Therefore the Mollification Lemma guarantees that |ψ̃′′′(x)| ≤ C̃k³R^{2k} for some universal constant C̃.
Since ψ̃ is pointwise close to ψ, with deviation at most 1/R^k, we have
|E_{y∈{±1}^{nR}}[ψ(F^{≤d}(y))] − E_{x∈[R]^n}[ψ(f^{≤d}(x))]| ≤ |E_{y∈{±1}^{nR}}[ψ̃(F^{≤d}(y))] − E_{x∈[R]^n}[ψ̃(f^{≤d}(x))]| + O(1/R^k).
Applying the General Invariance Principle (Theorem 2.4) with the function ψ̃, we have
|E_{y∈{±1}^{nR}}[ψ̃(F^{≤d}(y))] − E_{x∈[R]^n}[ψ̃(f^{≤d}(x))]| ≤ C̃k³R^{2k} · 10^d R^{d/2} √δ.
By our choice of parameters d, δ, this is O(1/R^k).
B Khot–Saket dictatorship test
Khot and Saket [19] showed that any integrality gap instance for an LP relaxation of MAX k-CSP_R with completeness c and soundness s can be used to construct a k-query dictatorship test with completeness Ω(c/(k³ log R)) and soundness O(s). Using a standard reduction (see, e.g., Section 3.3.1), this in turn implies UG-hardness of an approximation ratio of O(sk³ log R/c) for MAX k-CSP_R. It is not hard to see that such an integrality gap with c = 1 and s = O(1/R^{k−1}) for MAX k-CSP_R exists. Plugging this into Khot and Saket's result directly yields UG-hardness of an approximation ratio of O(k³ log R/R^{k−1}) for MAX k-CSP_R, as well as a k-query dictatorship test with completeness Ω(1/(k³ log R)) and soundness O(1/R^{k−1}). We remark that their result also immediately implies NP-hardness of an approximation ratio of O(2^{O(k)} log R/R^{k−1}) for MAX k-CSP_R thanks to the new 2-to-1 Theorem; similarly to Section 3.3.2, the extra factor 2^{O(k)} comes from the fact that the completeness from Theorem 2.3 is arbitrarily close to 1/2 instead of 1.
Below, we sketch the proof of the aforementioned dictatorship test. In fact, analyzing this test directly (instead of through the LP integrality gap as in [19]) improves the completeness slightly, to Ω(1/(k² log(kR))). This also leads to a slightly improved UG-hardness-of-approximation factor O(k² log(kR)/R^{k−1}).
Notation. For any function f : [R]^n → [R] and any i ∈ [R], let f^i : [R]^n → {0, 1} denote the indicator function f^i(x) := 1[f(x) = i].
Theorem B.1. For any k, R ≥ 2, there exist constants d, δ (depending only on k, R) for a k-query nonadaptive Dictator-vs.-Quasirandom test with the following guarantees:
• (Completeness) If f is a dictator, then the test passes with probability at least
ρ = Ω(1/(k² log(kR))).
• (Soundness) If f has Inf^{≤d}_j[f^i] ≤ δ for all coordinates j ∈ [n] and all projections i ∈ [R], then the test passes with probability at most
O(1/R^{k−1}).
Before sketching the proof of the theorem, we note that it also implies NP-hardness of factor 2^{O(k)} log(kR)/R^{k−1} and UG-hardness of factor O(k² log(kR)/R^{k−1}) for MAX k-CSP_R:
Corollary B.2. It is NP-hard to approximate MAX k-CSP_R to within a 2^{O(k)} log(kR)/R^{k−1} factor, for any k, R ≥ 2.
Corollary B.3. It is UG-hard to approximate MAX k-CSP_R to within an O(k² log(kR)/R^{k−1}) factor, for any k, R ≥ 2.
Corollary B.2 and Corollary B.3 follow from applying a standard reduction (similar to Section 3.3.1) from Unique Games using the dictator test from Theorem B.1. Similarly to Section 3.3.2, the extra factor 2^{O(k)} in Corollary B.2 comes from the fact that the completeness from Theorem 2.3 is arbitrarily close to 1/2 instead of 1. This results in a completeness of ρ/2^{O(k)} for MAX k-CSP_R after the reduction.
Proof Sketch of Theorem B.1. We may assume that f is balanced (E[f^i] = 1/R for all i ∈ [R]).
The test is defined as follows. We will generate k queries x^{(1)}, …, x^{(k)} ∈ [R]^n. The first coordinates x^{(1)}_1, …, x^{(k)}_1 are picked by:
1. Pick z ∈ [R] uniformly at random.
2. With probability ρ, set all queries to be equal to z at this coordinate: x^{(1)}_1 = ⋯ = x^{(k)}_1 = z.
3. Otherwise, set all of x^{(1)}_1, …, x^{(k)}_1 ∈ [R] independently at random.
All coordinates j ∈ [n] are picked similarly (independently of the other coordinates).
The verifier accepts iff
f(x^{(1)}) = f(x^{(2)}) = ⋯ = f(x^{(k)}).
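For a dictator f(x) = x_1, the acceptance probability of this test has a simple closed form: at the sampled coordinate, with probability ρ all queries agree by construction, and otherwise k i.i.d. uniform values collide with probability 1/R^{k−1}. The following sketch checks this closed form against exact enumeration (parameters are our illustrative choices; rational arithmetic throughout).

```python
from fractions import Fraction
from itertools import product

R, k = 3, 2
rho = Fraction(1, 4)

# Exact Pr[x_1^(1) = ... = x_1^(k)] for one coordinate of the test:
p_equal = rho                       # correlated branch: all queries set to z
for vals in product(range(R), repeat=k):    # independent branch, each value w.p. 1/R
    if len(set(vals)) == 1:
        p_equal += (1 - rho) * Fraction(1, R) ** k

# Closed form used in the completeness analysis for the dictator f(x) = x_1:
assert p_equal == rho + (1 - rho) * Fraction(1, R ** (k - 1))
assert p_equal >= rho               # so the completeness is at least rho
print(f"dictator acceptance probability = {p_equal}")
```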
Remark B.4. The difference between this dictatorship test and the test implicit in Section 3.3.1 is that here, with probability ρ, we set all the queries equal to z, instead of setting each query equal to z with probability ρ independently. This means the queries are correlated even given z, so our previous technique of hypercontractivity does not work directly. But it is possible to use a more powerful version of invariance that can deal with this directly.
The completeness analysis is obvious.
Soundness Analysis.
Pr[accept] = Pr[f(x^{(1)}) = f(x^{(2)}) = ⋯ = f(x^{(k)})]
= ∑_{i∈[R]} Pr[f(x^{(1)}) = f(x^{(2)}) = ⋯ = f(x^{(k)}) = i]
= ∑_{i∈[R]} E[f^i(x^{(1)}) f^i(x^{(2)}) ⋯ f^i(x^{(k)})]. (B.1)
We now invoke Proposition 6.4 of [23] to bound this expectation in terms of a related quantity in Gaussian space. In particular, since the coordinates of our queries form a “ρ-correlated space,” it follows from Proposition 6.4 that there exist constants⁸ δ, d (independent of n) such that if
Inf^{≤d}_j(f^i) ≤ δ
for all coordinates j, then
E[∏_{ℓ=1}^{k} f^i(x^{(ℓ)})] ≤ Γ_ρ(1/R, …, 1/R) + 1/R^k,
where the function Γ_ρ(1/R, …, 1/R) is roughly the probability that k ρ-correlated Gaussians all simultaneously satisfy an event that has marginal probability 1/R for a single Gaussian (see the formal definition in Definition 1.12 of [23]).
Now we conclude by the quantitative bounds on Γ given by Lemma 2 in [19]. In Lemma 2, set ε = 1/k and μ_i = 1/R. Then for
ρ = 1/(Ck² log(kR))
for some sufficiently large universal constant C, Lemma 2 gives
Γ_ρ(1/R, …, 1/R) ≤ (1 + ε)^{k−1} ∏_{i=1}^{k} μ_i ≤ 4/R^k.
Thus, continuing from equation (B.1):
Pr[accept] = ∑_{i∈[R]} E[f^i(x^{(1)}) f^i(x^{(2)}) ⋯ f^i(x^{(k)})]
≤ ∑_{i∈[R]} 5/R^k
= O(1/R^{k−1}).
This completes the soundness analysis.
Acknowledgments.
We thank Rishi Saket for bringing [19] to our attention. We also thank the anonymous reviewers for providing useful comments and suggestions.
⁸ In the notation of Proposition 6.4, we set ε = 1/R^k, and have α = 1/R^k, δ = τ (given in Prop. 6.4), and d = k log(1/δ)/log(R).
References
[1] Per Austrin and Elchanan Mossel: Approximation resistant predicates from pairwise independence. Comput. Complexity, 18(2):249–271, 2009. [doi:10.1007/s00037-009-0272-6] 2
[2] Boaz Barak, Pravesh K. Kothari, and David Steurer: Small-set expansion in shortcode graph and the 2-to-2 conjecture. In Proc. 10th Innovations in Theoret. Comp. Sci. Conf. (ITCS'19), pp. 9:1–12. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2019. [doi:10.4230/LIPIcs.ITCS.2019.9] 4, 7
[3] Aline Bonami: Étude des coefficients de Fourier des fonctions de L^p(G). Ann. Inst. Fourier, 20(2):335–402, 1970. EuDML. 11
[4] Siu On Chan: Approximation resistance from pairwise-independent subgroups. J. ACM, 63(3):27:1–32, 2016. [doi:10.1145/2873054] 2, 3, 4, 21
[5] Moses Charikar, MohammadTaghi Hajiaghayi, and Howard Karloff: Improved approximation algorithms for Label Cover problems. Algorithmica, 61(1):190–206, 2011. [doi:10.1007/s00453-010-9464-3] 2
[6] Moses Charikar, Konstantin Makarychev, and Yury Makarychev: Near-optimal algorithms for maximum constraint satisfaction problems. ACM Trans. Algorithms, 5(3):32:1–14, 2009. [doi:10.1145/1541885.1541893] 2, 3
[7] Pierluigi Crescenzi, Riccardo Silvestri, and Luca Trevisan: On weighted vs unweighted versions of combinatorial optimization problems. Inform. Comput., 167(1):10–26, 2001. Preliminary version in ISTCS'96. [doi:10.1006/inco.2000.3011] 6
[8] Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra: Towards a proof of the 2-to-1 Games Conjecture? In Proc. 50th STOC, pp. 376–389. ACM Press, 2018. [doi:10.1145/3188745.3188804] 2, 4, 5, 7
[9] Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra: On non-optimally expanding sets in Grassmann graphs. Israel J. Math., 243:377–420, 2021. Preliminary version in STOC'18. [doi:10.1007/s11856-021-2164-7] 4, 7
[10] Lars Engebretsen: The nonapproximability of non-Boolean predicates. SIAM J. Discr. Math., 18(1):114–129, 2004. [doi:10.1137/S0895480100380458] 2
[11] Lars Engebretsen and Jonas Holmerin: More efficient queries in PCPs for NP and improved approximation hardness of maximum CSP. Random Struct. Algor., 33(4):497–514, 2008. [doi:10.1002/rsa.20226] 2
[12] Gil Goldshlager and Dana Moshkovitz: Approximating kCSP for large alphabets. Preprint, 2015. Available at author's website. 2, 3, 4
[13] Venkatesan Guruswami and Prasad Raghavendra: Constraint satisfaction over a non-Boolean domain: Approximation algorithms and Unique-Games hardness. In Proc. 11th Internat. Workshop on Approximation Algorithms for Combinat. Opt. Probl. (APPROX'08), pp. 77–90. Springer, 2008. [doi:10.1007/978-3-540-85363-3_7] 2
[14] Gustav Hast: Approximating Max kCSP – outperforming a random assignment with almost a linear factor. In Proc. 32nd Internat. Colloq. on Automata, Languages, and Programming (ICALP'05), pp. 956–968. Springer, 2005. [doi:10.1007/11523468_77] 2, 3
[15] Subhash Khot: On the power of unique 2-prover 1-round games. In Proc. 34th STOC, pp. 767–775. ACM Press, 2002. [doi:10.1145/509907.510017] 2, 4, 7
[16] Subhash Khot, Guy Kindler, Elchanan Mossel, and Ryan O'Donnell: Optimal inapproximability results for MAX-CUT and other 2-variable CSPs? SIAM J. Comput., 37(1):319–357, 2007. [doi:10.1137/S0097539705447372] 2, 3, 4, 5
[17] Subhash Khot, Dor Minzer, and Muli Safra: Pseudorandom sets in Grassmann graph have near-perfect expansion. In Proc. 59th FOCS, pp. 592–601. IEEE Comp. Soc., 2018. [doi:10.1109/FOCS.2018.00062] 2, 3, 4, 5, 7
[18] Subhash Khot and Oded Regev: Vertex cover might be hard to approximate to within 2 − ε. J. Comput. System Sci., 74(3):335–349, 2008. [doi:10.1016/j.jcss.2007.06.019] 6
[19] Subhash Khot and Rishi Saket: Approximating CSPs using LP relaxation. In Proc. 42nd Internat. Colloq. on Automata, Languages, and Programming (ICALP'15), pp. 822–833. Springer, 2015. [doi:10.1007/978-3-662-47672-7_67] 3, 4, 21, 23, 25
[20] Guy Kindler, Alexandra Kolla, and Luca Trevisan: Approximation of non-Boolean 2CSP. In Proc. 27th Ann. ACM–SIAM Symp. on Discrete Algorithms (SODA'16), pp. 1705–1714. SIAM, 2016. [doi:10.1137/1.9781611974331.ch117] 2, 3, 4, 21
[21] Konstantin Makarychev and Yury Makarychev: Approximation algorithm for non-Boolean Max-k-CSP. Theory of Computing, 10(13):341–358, 2014. [doi:10.4086/toc.2014.v010a013] 2, 3, 4
[22] Pasin Manurangsi, Preetum Nakkiran, and Luca Trevisan: Near-optimal UGC-hardness of approximating Max k-CSP_R. In Proc. 19th Internat. Workshop on Approximation Algorithms for Combinat. Opt. Probl. (APPROX'16), pp. 15:1–28. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2016. [doi:10.4230/LIPIcs.APPROX-RANDOM.2016.15] 1, 4
[23] Elchanan Mossel: Gaussian bounds for noise correlation of functions and tight analysis of long codes. In Proc. 49th FOCS. IEEE Comp. Soc., 2008. [doi:10.1109/FOCS.2008.44] 25
[24] Elchanan Mossel, Ryan O'Donnell, and Krzysztof Oleszkiewicz: Noise stability of functions with low influences: Invariance and optimality. Ann. Math., 171(1):295–341, 2010. [doi:10.4007/annals.2010.171.295] 5, 9, 10
[25] Ryan O'Donnell: Analysis of Boolean Functions. Cambridge Univ. Press, 2014. CUP. 11, 21
[26] Alex Samorodnitsky and Luca Trevisan: A PCP characterization of NP with optimal amortized query complexity. In Proc. 32nd STOC, pp. 191–199. ACM Press, 2000. [doi:10.1145/335305.335329] 2
[27] Alex Samorodnitsky and Luca Trevisan: Gowers uniformity, influence of variables, and PCPs. SIAM J. Comput., 39(1):323–360, 2009. Preliminary version in STOC'06. [doi:10.1137/070681612] 2
[28] Luca Trevisan: Parallel approximation algorithms by positive linear programming. Algorithmica, 21(1):72–88, 1998. [doi:10.1007/PL00009209] 2, 3
AUTHORS
Pasin Manurangsi
Senior Research Scientist
Google Research
Mountain View, CA, USA
pasin google com
https://pasin30055.github.io/
Preetum Nakkiran
Postdoctoral researcher
Halıcıoğlu Data Science Institute
University of California, San Diego
La Jolla, CA, USA
preetum ucsd edu
https://preetum.nakkiran.org/
Luca Trevisan
Professor of Computer Science
Department of Decision Sciences
Bocconi University
Milan, Italy
l.trevisan unibocconi it
https://lucatrevisan.github.io/
ABOUT THE AUTHORS
PASIN MANURANGSI received his Ph. D. from the University of California, Berkeley in
2019; his advisors were Prasad Raghavendra and Luca Trevisan. His thesis was on the
intersection between hardness of approximation and fine-grained complexity. Prior to
that, he received bachelor’s and master’s degrees from MIT, where he was advised by
Dana Moshkovitz. He is currently a research scientist at Google Research, primarily
working on theoretical and practical aspects of algorithmic privacy.
PREETUM NAKKIRAN is a postdoctoral researcher at the University of California, San
Diego. He is hosted by Mikhail Belkin, and part of the NSF/Simons Collaboration on the
Theoretical Foundations of Deep Learning. He currently works on scientific aspects of
machine learning, trying to understand when and why deep learning works. He obtained
his Ph. D. in 2021 from Harvard, advised by Madhu Sudan and Boaz Barak. He did his
undergraduate work in EECS at the University of California, Berkeley.
LUCA TREVISAN is a professor of computer science at Bocconi University. Luca received
his Ph. D. from the Sapienza University of Rome, advised by Pierluigi Crescenzi. After
graduating, Luca was a postdoc at MIT and at DIMACS, and he was on the faculty of
Columbia University, U.C. Berkeley, and Stanford, before returning to Berkeley in 2014
and, at long last, returning to Italy in 2019.
Luca’s research is in theoretical computer science, with a focus on computational com-
plexity, on the analysis of algorithms, on the foundations of cryptography, and on topics
at the intersection of theoretical computer science and pure mathematics.
Luca received the STOC’97 Danny Lewin (best student paper) award, the 2000 Oberwol-
fach Prize, a 2000 Sloan Fellowship and an NSF CAREER Award. He was an invited
speaker at the 2006 International Congress of Mathematicians. In 2019, he received an
ERC Advanced Grant.