Theory of Computing, Volume 20 (6), 2024, pp. 1–23
www.theoryofcomputing.org
On a Generalization of Iterated and
Randomized Rounding
Nikhil Bansal∗
Received April 24, 2019; Revised November 18, 2022; Published December 5, 2024
Abstract. We give a general method for rounding linear programs that combines
the commonly used iterated rounding and randomized rounding techniques. In
particular, we show that whenever iterated rounding can be applied to a problem with
some slack, there is a randomized procedure that returns an integral solution that
satisfies the guarantees of iterated rounding and also has concentration properties.
We use this to give new results for several classical problems such as rounding
column-sparse LPs, makespan minimization on unrelated machines and degree-
bounded spanning trees.
1 Introduction
A powerful approach in approximation algorithms is to formulate the problem at hand as a 0-1 integer program and consider some efficiently solvable relaxation for it. Then, given some fractional solution $x \in [0,1]^n$ to this relaxation, apply a suitable rounding procedure to $x$ to obtain an integral 0-1 solution.
An extended abstract of this paper appeared in the Proceedings of the 51st Annual ACM Symposium on Theory of Computing (STOC'19) [7].
∗ Supported by the ERC Consolidator Grant 617951 and the NWO VICI grant 639.023.812.
ACM Classification: F.2.2, G.1.6
AMS Classification: 68W25, 68W20
Key words and phrases: approximation, randomized rounding, scheduling, discrepancy,
semidefinite programming, random walks
© 2024 Nikhil Bansal
Licensed under a Creative Commons Attribution License (CC-BY). DOI: 10.4086/toc.2024.v020a006
Arguably, the two most basic and extensively studied techniques
for rounding such relaxations are randomized rounding and iterated rounding.
Randomized rounding. Here, the fractional values $x_i \in [0,1]$ are interpreted as probabilities, and used to round the variables independently to 0 or 1. A key property of this rounding is that each linear constraint is preserved in expectation, and its value is tightly concentrated around its mean, as given by Chernoff bounds or, more generally, Bernstein's inequality. Randomized rounding is well suited to problems where the constraints do not have much structure, or when they are soft and some error can be tolerated. Sometimes these errors can be fixed by applying problem-specific alteration steps. We refer to [36, 35] for various applications of randomized rounding.
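To make this concrete, here is a minimal sketch of randomized rounding in Python; the variable names and the toy instance are ours, not from the paper.

```python
import numpy as np

def randomized_rounding(x, rng=np.random.default_rng(0)):
    """Round each coordinate to 1 independently with probability x_i."""
    return (rng.random(len(x)) < x).astype(int)

# Toy example: a soft linear constraint <a, x> is preserved in expectation,
# and its error behaves as Bernstein/Chernoff bounds predict.
x = np.full(100, 0.3)
a = np.ones(100)
X = randomized_rounding(x)
print(a @ X - a @ x)  # typically O(sqrt(n)) in magnitude, zero on average
```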
Iterated rounding. This technique is quite different from randomized rounding and is useful
for problems that may have some hard combinatorial constraints that must be maintained, e. g.,
if the final solution must be a spanning tree or a matching. It is also useful for problems where
the constraints may have some other interesting structural property such as column-sparsity
that we may wish to exploit.
The rounding is based on linear algebra, and it proceeds in iterations $k = 1, 2, \ldots$, until all variables are eventually rounded to 0 or 1. More specifically, we start with $x^{(0)} = x$ initially, and let $x^{(k-1)} \in \mathbb{R}^n$ be the solution at the beginning of iteration $k$ and $n_k$ denote the number of fractional variables in $x^{(k-1)}$ (i. e., those strictly between 0 and 1). Then one cleverly chooses some collection of linear constraints on these $n_k$ fractional variables, say specified by the rows of a matrix $M^{(k)}$ with $\operatorname{rank}(M^{(k)}) \le n_k - 1$, and updates the solution as $x^{(k)} = x^{(k-1)} + y^{(k)}$ by some non-zero vector $y^{(k)}$ satisfying $M^{(k)} y^{(k)} = 0$, so that some fractional variable reaches 0 or 1. Such a $y^{(k)}$ exists as the null space of $M^{(k)}$ has dimension $n_k - \operatorname{rank}(M^{(k)}) \ge 1$. Notice that once a variable reaches 0 or 1 it stays fixed.
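The following sketch shows one such update step, assuming the chosen constraint matrix $M^{(k)}$ (restricted to the fractional variables) is given explicitly; it is our own illustration, not code from the paper.

```python
import numpy as np
from scipy.linalg import null_space

def iterated_rounding_step(x, M):
    """One iterated rounding update: move x along a null-space direction y
    of M (so M y = 0) until some coordinate of x reaches 0 or 1.
    Assumes all entries of x are strictly fractional and rank(M) < len(x)."""
    y = null_space(M)[:, 0]  # a non-zero vector with M y = 0
    with np.errstate(divide="ignore", invalid="ignore"):
        # per-coordinate caps on the step size keeping x + s*y in [0, 1]
        caps = np.where(y > 0, (1 - x) / y, np.where(y < 0, -x / y, np.inf))
    s = caps.min()  # the first coordinate to hit the boundary determines s
    return x + s * y
```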
Despite its simplicity, this method is extremely powerful and most fundamental results
in combinatorial optimization such as the integrality of matroid, matroid-intersection and
non-bipartite matching polytopes follow very cleanly using this approach. Similarly, several
breakthrough results for problems such as degree-bounded spanning trees, survivable network
design and rounding for column-sparse LPs were obtained by this method. An excellent
reference is [23].
1.1 Need for combining the approaches
In many problem settings, however, one needs a rounding that combines the features of both
randomized and iterated rounding [15, 16, 19, 3]. We give several examples in Section 1.3, but a
typical scenario is where the problem involves finding an object with specific combinatorial
constraints that cannot be violated, e. g., a spanning tree to connect nodes in a network, or
a one-sided matching (assignment) of jobs to machines; and additionally a list of other soft
side-constraints, e. g., a bound on the maximum degree of the spanning tree to prevent any
particular node from being overloaded, or perhaps edges are of several types and we wish to
purchase a certain minimum number of each type due to fairness considerations, or there may
be multiple budget constraints for various subsets of edges.
As the soft constraints are typically arbitrary and lack structure, essentially the best one can
hope for is to satisfy them fractionally and then apply randomized rounding. On the other
hand, randomized rounding can be quite bad at satisfying the hard combinatorial constraints,
and iterated rounding is the right approach to handle them. So given a problem with both hard
and soft constraints, either technique by itself does not suffice and one would like a rounding
that simultaneously does as well as iterated rounding on the hard constraints and as well as
randomized rounding on the soft constraints.
Dependent rounding. Motivated by such problems, there has been extensive work on devel-
oping dependent rounding techniques. Roughly speaking, these techniques round the fractional
solution in some random but correlated way to satisfy the hard constraints and also ensure some
concentration properties for the soft constraints. Some examples of such methods include swap
rounding [15, 16], randomized pipage rounding [2, 33, 19, 20], maximum-entropy sampling
[3, 32, 4], rounding via discrepancy [26, 29, 11] and Gaussian random walks [28].
A key idea here is that the weaker property of negative dependence (instead of independence)
also suffices to get concentration. There is a rich and deep theory of negative dependence
and various notions such as negative correlation, negative cylinder dependence, negative
association, strongly Rayleigh property and determinantal measures, that imply interesting
concentration properties [27, 13, 17]. This insight has been extremely useful and for many
general problems such as those involving assignment or matroid polytopes, one can exploit
the underlying combinatorial structure to design rounding approaches that ensure negative
dependence between all or some suitable collection of random variables.
Limitations. Even though these dependent rounding methods are powerful and ingenious,
they are also limited by the fact that requiring negative dependence substantially restricts the
kinds of rounding steps that can be designed, and the type of problems that they can be applied
to. Moreover, even when such a rounding is possible, it typically requires a lot of creativity and
careful understanding of the problem structure to come up with the rounding for the problem
at hand.
1.2 Our results
Our main result is a new and general dependent rounding approach that we call sub-isotropic
rounding. In particular, it combines iterated and randomized rounding in a completely generic
way and significantly extends the scope of previous dependent rounding techniques. Before
describing our result, we need some definitions.
Let $X_1, \ldots, X_n$ be independent 0-1 random variables with means $x_i = \mathbb{E}[X_i]$, and let $a_1, \ldots, a_n$ be arbitrary reals (possibly negative). It is well known [14] that the sum $S = \sum_i a_i X_i$ satisfies the following tail bound for any $t \ge 0$:
$$\Pr[S - \mathbb{E}[S] \ge t] \;\le\; \exp\left(-\frac{t^2}{2\sum_i a_i^2 (x_i - x_i^2) + 2Mt/3}\right) \qquad \text{(Bernstein's inequality)} \tag{1.1}$$
where $M = \max_i |a_i|$. The lower tail follows by applying the above to $-a_i$, and the standard Chernoff bounds correspond to (1.1) when $a_i \in [0,1]$ for $i \in [n]$.
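Numerically, the bound (1.1) is easy to evaluate; the following small helper (ours, not from the paper) computes the right-hand side.

```python
import numpy as np

def bernstein_tail(a, x, t):
    """Evaluate the upper bound from (1.1) on Pr[S - E[S] >= t]
    for S = sum_i a_i X_i with independent 0-1 X_i of means x_i."""
    a, x = np.asarray(a, float), np.asarray(x, float)
    var = np.sum(a**2 * (x - x**2))  # sum_i a_i^2 (x_i - x_i^2)
    M = np.max(np.abs(a))
    return np.exp(-t**2 / (2 * var + 2 * M * t / 3))

# e.g., 100 fair coins, deviation 20 above the mean of 50
print(bernstein_tail(np.ones(100), np.full(100, 0.5), 20.0))
```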
The following relaxation of Bernsteinโs inequality will be highly relevant for us.
Definition 1.1 ($\beta$-concentration). Let $\beta \ge 1$. For a vector-valued random variable $X = (X_1, \ldots, X_n)$, where the $X_i$ are possibly dependent 0-1 random variables, we say that $X$ is $\beta$-concentrated around its mean $x = (x_1, \ldots, x_n)$, where $x_i = \mathbb{E}[X_i]$, if for every $a \in \mathbb{R}^n$ the real-valued random variable $\langle a, X\rangle$ satisfies Bernstein's inequality up to a factor of $\beta$ in the exponent, i. e.,
$$\Pr[\langle a, X\rangle - \mathbb{E}[\langle a, X\rangle] \ge t] \;\le\; \exp\left(-\frac{t^2/\beta}{2\sum_i a_i^2 (x_i - x_i^2) + 2Mt/3}\right) \tag{1.2}$$
where $M = \max_i |a_i|$.
Main result. We show that whenever iterated rounding can be applied to a problem such that in each iteration $k$ there is some slack, in the sense that $\operatorname{rank}(M^{(k)}) \le (1-\delta)n_k$ for some fixed $\delta > 0$, then $O(1/\delta)$-concentration can be achieved for free. More precisely, we have the following result.
Theorem 1.2. For any fixed $\delta \in (0,1)$, let us formalize an iterated rounding algorithm as follows. Given a starting solution $x$, initialize $x^{(0)} = x$. In each step $k$, for $k \ge 1$, the algorithm selects a matrix $M^{(k)}$ with $\operatorname{rank}(M^{(k)}) \le (1-\delta)n_k$. Now it can pick any $y^{(k)}$ satisfying $M^{(k)} y^{(k)} = 0$ and set $x^{(k)} = x^{(k-1)} + y^{(k)}$, and iterate until $x^{(\mathrm{final})}$ is in $\{0,1\}^n$. Let $S \subseteq \{0,1\}^n$ be the set of outcomes that can be reached by the iterated rounding algorithm.

Then the sub-isotropic rounding algorithm outputs a random vector $X$ satisfying

1. $X \in S$ with probability 1, and

2. $\mathbb{E}[X] = x$, and $X$ is $\beta$-concentrated around $x$ with $\beta = 20/\delta$.
Remark 1.3. A simple example shows that the dependence $\beta = \Omega(1/\delta)$ in Theorem 1.2 cannot be improved. Let $\delta = 1/t$ for some integer $t$, and consider $n$ variables $x_1, \ldots, x_n$, partitioned into $n/t$ blocks $B_1, \ldots, B_{n/t}$ where block $B_j = \{x_{(j-1)t+1}, \ldots, x_{jt}\}$. For each $B_j$ there are $t-1$ constraints $x_{(j-1)t+1} = x_{(j-1)t+2} = \cdots = x_{jt}$, and hence there are $(t-1)(n/t) = n(1-\delta)$ constraints in total. Consider the starting solution with all $x_i = 1/2$. Now, no matter what random choices the algorithm makes, the variables within a block evolve identically and all reach the same value 0 or 1. So the linear function $S = x_1 + \cdots + x_n$ will only be $1/\delta$-concentrated.
The generality of Theorem 1.2 directly gives new results for several problems where iterated
rounding gives useful guarantees. All one needs to show is that the original iterated rounding
argument for the problem can be applied with some slack, which is often straightforward and
only worsens the approximation guarantee slightly. In particular, note that Theorem 1.2 makes
no assumption about the combinatorial structure of the problem and by working with the more
relaxed notion of $\beta$-concentration, we can completely avoid the need for negative dependence.
1.3 Motivating problems and applications
We now describe several applications of our result and also briefly discuss why they seem
beyond the reach of current dependent rounding methods.
1.3.1 Rounding for column-sparse LPs
Let ๐ฅ โ [0, 1]๐ be some fractional solution satisfying ๐ด๐ฅ = ๐, where ๐ด โ โ ๐ร๐ is an ๐ ร ๐
matrix. The celebrated BeckโFiala algorithm [12] (see also [21] for a related result) uses iterated
rounding to produce an integral solution ๐ satisfying k๐ด(๐ โ ๐ฅ)k โ < ๐ก, where ๐ก is the maximum
โ 1 norm of the columns of ๐ด. This is substantially better than randomized rounding for small ๐ก,
where the error for any row is typically its โ 2 norm which can be substantially larger than ๐ก.
Many problems, however, involve both some column-sparse constraints that come from the
underlying combinatorial problem, and some general arbitrary constraints which might not
have much structure. This motivates the following natural question.
Question 1.4. Let $P$ be a linear system with two sets of constraints given by matrices $A$ and $B$, where $A$ is column-sparse, while $B$ is arbitrary. Given some fractional solution $x$, can we round it to get error $O(t)$ for the rows of $A$, while doing no worse than randomized rounding for the constraints in $B$?
Remark 1.5. Note that simply applying iterated rounding on the rows of $A$ gives no control on the error for $B$. Similarly, just doing randomized rounding will not give $O(t)$ error for $A$. Also, as $A$ and $B$ are arbitrary, techniques based on negative dependence do not seem to apply.
We show that a direct modification of the Beck–Fiala argument gives slack $\delta$, for any $\delta \in [0,1)$, while worsening the error bound slightly to $t/(1-\delta)$. Setting, say, $\delta = 1/2$ and applying Theorem 1.2 gives $X \in \{0,1\}^n$ that (i) has error at most $2t$ for the rows of $A$, and (ii) satisfies $\mathbb{E}[X_i] = x_i$ and is $O(1)$-concentrated, thus giving similar guarantees as randomized rounding for the rows of $B$. In fact, the solution produced by the algorithm satisfies concentration for all linear constraints and not just for the rows of $B$.
We also consider an extension to the Komlós setting, where the error depends on the maximum $\ell_2$ norm of the columns of $A$. These results are described in Section 4.1.
1.3.2 Makespan minimization on unrelated machines
The classical problem of makespan minimization on unrelated machines is the following. Given $n$ jobs and $m$ machines, where each job $j \in [n]$ has arbitrary size $p_{ij}$ on machine $i \in [m]$, assign
the jobs to machines to minimize the maximum machine load. In a celebrated result, [24] gave a rounding method with additive error at most $p_{\max} := \max_{ij} p_{ij}$, i. e., it gives an assignment with makespan at most $\mathrm{Opt} + p_{\max}$, where $\mathrm{Opt}$ is the value of an optimum LP solution. In many practical problems, however, there are other soft resource constraints and side constraints that are added to the fractional formulation. So it is useful to find a rounding that satisfies these approximately but increases the makespan by only $O(p_{\max})$. This motivates the following natural problem.
Question 1.6. Given a fractional assignment $x$, find an integral assignment $X$ with additive error $O(p_{\max})$ that also satisfies $\mathbb{E}[X_{ij}] = x_{ij}$ and concentration for all linear functions of the $x_{ij}$, i. e., for all $\{c_{ij}\}_{ij}$, with high probability it holds that $\sum_{ij} c_{ij} X_{ij} \approx \sum_{ij} c_{ij} x_{ij}$.
Questions related to finding a good assignment with some concentration properties have
been studied before [19, 2, 16], and several methods such as randomized pipage rounding and
swap rounding have been developed for this. However, these methods crucially rely on the
underlying matching structure and round the variables alternately along cycles, which limits
them in various ways: either they give partial assignments, or only get concentration for edges
incident to a vertex.
We show that the iterated rounding proof of the result of [24] can be easily modified to work for any slack $\delta \in [0, 1/2)$ while giving additive error $p_{\max}/(1 - 2\delta)$. Theorem 1.2 (say, with $\delta = 1/4$) thus gives a solution that has additive error at most $2p_{\max}$ and satisfies $O(1)$-concentration. The result also extends naturally to the $d$-resource setting, where $p_{ij}$ is a $d$-dimensional vector. These results are described in Section 4.2.
1.3.3 Degree-bounded spanning trees and thin trees
In the minimum cost degree-bounded spanning tree problem, we are given an undirected graph $G = (V, E)$ with edge costs $c_e \ge 0$ for $e \in E$ and integer degree bounds $b_v$ for $v \in V$, and the goal is to find a minimum cost spanning tree satisfying the degree bounds. In a breakthrough result, Singh and Lau [31] gave an iterated rounding algorithm that, given any fractional spanning tree $x$, finds a spanning tree $T$ with cost at most $\langle c, x\rangle$ and a degree violation of plus one.
The celebrated thin-tree conjecture asks¹ if, given a fractional spanning tree $x$, there is a spanning tree $T$ satisfying $\Delta_T(S) \le \beta\, \Delta_x(S)$ for every $S \subseteq V$, where $\beta = O(1)$. Here $\Delta_T(S)$ is the number of edges of $T$ crossing $S$, and $\Delta_x(S)$ is the $x$-value crossing $S$. This conjecture has received a lot of attention recently, due to its connection to the asymmetric travelling salesman problem (ATSP) [3, 1]. Despite the recent breakthrough on ATSP [34], the thin-tree conjecture remains open.

¹ Equivalently, any $k$-edge-connected graph $G$ has a spanning tree satisfying $\Delta_T(S) = O(1/k)\, \Delta_G(S)$ for every $S \subseteq V$.

If one only considers single vertex sets $S = \{v\}$, the result of [31] implies that $\Delta_T(v) \le 2\Delta_x(v)$ for each vertex $v$ (as $\Delta_x(v) \ge 1$ in any fractional spanning tree $x$). On the other hand, for general sets $S$, the best known algorithmic results give $\beta = O(\log n/\log\log n)$ [3, 15, 32, 20]. These algorithms crucially rely on the negative dependence properties of spanning trees, which do not give anything better for single vertex cuts (e. g., even if $b_v = 2$ for all $v$, by a balls-and-bins argument a random tree will have maximum degree $\Omega(\log n/\log\log n)$).

This motivates the following natural question as a first step toward the thin-tree conjecture.
Question 1.7. Can we find a spanning tree that achieves $\beta = O(1)$ for single vertex cuts and $\beta = O(\log n/\log\log n)$ for general cuts?
We show that the iterated rounding algorithm of [31] can be easily modified to create slack $\delta \in (0, 1/2)$ while violating the degree bounds additively by less than $2/(1-2\delta)$. Applying Theorem 1.2 with $\delta = 1/6$, the degree violation is strictly below 3; this gives a distribution supported on trees with a degree violation of plus 2 that satisfies $O(1)$-concentration. By a standard cut counting argument [3], the concentration property implies $O(\log n/\log\log n)$-thinness for every cut. We describe these results in Section 4.3. In fact, we consider the more general setting of the minimum cost degree-bounded matroid-basis problem.
1.4 Overview of techniques
We now give a high level overview of our algorithm and analysis. The starting observation is
that randomized rounding can be viewed as an iterative algorithm by doing a discrete version
of the standard Brownian motion on the cube as follows. Given ๐ฅ (0) as the starting fractional
solution, consider a random walk in the [0, 1]๐ cube starting at ๐ฅ (0) , with tiny step size ยฑ๐พ
chosen independently for each coordinate, where upon reaching a face of the cube (i. e., some
๐ฅ ๐ reaches 0 or 1) the walk stays on that face. The process stops upon reaching some vertex
๐ = (๐1 , . . . , ๐๐ ) of the cube. By the martingale property of random walks, the probability
(0)
that ๐๐ = 1 is exactly ๐ฅ ๐ and as the walk in each coordinate is independent, ๐ has the same
distribution on {0, 1} ๐ as under randomized rounding.
A first attempt. Now consider iterated rounding, and recall that here the update $y^{(k)}$ at iteration $k$ must lie in the nullspace of $M^{(k)}$. So a natural first idea to combine this with randomized rounding is to do a random walk in the null space of $M^{(k)}$ until some variable reaches 0 or 1. The slack condition $\operatorname{rank}(M^{(k)}) \le (1-\delta)n_k$ implies that the nullspace has at least $\delta n_k$ dimensions, which could potentially give "enough randomness" to the random walk.

It turns out, however, that doing a standard random walk in the null space of $M^{(k)}$ does not work. The problem is that as the constraints defining $M^{(k)}$ can be completely arbitrary in our setting, the random walk can lead to very high correlations between certain subsets of coordinates, causing the $\beta$-concentration property to fail. For example, suppose $\delta = 1/2$ and $M^{(1)}$ consists of the constraints $x_i = x_{i+1}$ for $i = 1, \ldots, n/2 - 1$. Then the random walk will update $x_{n/2+1}, \ldots, x_n$ independently, but for $x_1, \ldots, x_{n/2}$ the updates must satisfy $\Delta x_1 = \cdots = \Delta x_{n/2}$, and hence will be completely correlated. So the linear function $x_1 + \cdots + x_{n/2}$ will not have any concentration (as all these variables rise or fall simultaneously by the same amount).
A different random walk. To get around this problem, we design a different random walk in the null space of $M^{(k)}$, which looks similar to an independent walk in every direction even
though the coordinates are correlated. More formally, consider a random vector $Y = (Y_1, \ldots, Y_n)$, where the $Y_i$ are mean-zero random variables. For a parameter $q \ge 1$, we say that $Y$ is $q$-weakly pairwise independent if for every $a = (a_1, \ldots, a_n) \in \mathbb{R}^n$,
$$\mathbb{E}\big[\langle a, Y\rangle^2\big] = \mathbb{E}\Big[\Big(\sum_i a_i Y_i\Big)^2\Big] \;\le\; q \sum_i a_i^2\, \mathbb{E}[Y_i^2].$$
If $Y_1, \ldots, Y_n$ are pairwise independent, note that the above holds as equality with $q = 1$, and hence this can be viewed as a relaxation of pairwise independence. We show that whenever $\operatorname{rank}(M^{(k)}) \le (1-\delta)n_k$, there exist $q$-weakly pairwise independent random updates $y^{(k)}$ that lie in the null space of $M^{(k)}$ (which has dimension at least $\delta n_k$) with $q \approx 1/\delta$. Moreover, these updates can be found by solving a semidefinite program (SDP).

Next, using a variant of Freedman's martingale analysis [18], we show that applying these $q$-weakly pairwise independent random updates (with small increments) until all the variables reach 0-1 gives an integral solution that satisfies $O(q)$-concentration.

These techniques are motivated by our recent works on algorithmic discrepancy [8, 9]. While discrepancy is closely related to rounding [25, 29], a key difference in discrepancy is that the error for rounding a linear system $Ax = b$ depends on the $\ell_2$ norms of the coefficients of the constraints and not on $b$. E. g., suppose $x \in [0,1]^n$ satisfies $x_1 + \cdots + x_n = \log n$; then the sum stays $O(\log n)$ upon randomized rounding with high probability, while using discrepancy methods directly gives $\Omega(\sqrt{n})$ error, which would be unacceptably large in this setting. So our results can be viewed as using techniques from discrepancy theory to obtain bounds that are sensitive to $x$. Recently, this direction was explored in [11], but their method gave much weaker results and applied to very limited settings.
2 Technical preliminaries
2.1 Tail bounds for supermartingales
We will need the following tail bound for supermartingales with a strong negative drift.
Theorem 2.1. Let $\alpha \in (0,1)$. Let $\{Z_i : i = 0, 1, \ldots\}$ be a sequence of random variables with $D_i := Z_i - Z_{i-1}$, such that $Z_0$ is constant and $D_i \le 1$ for all $i \ge 1$. If
$$\mathbb{E}_{i-1}[D_i] \;\le\; -\alpha\, \mathbb{E}_{i-1}[D_i^2]$$
for all $i \ge 1$, where $\mathbb{E}_{i-1}[\,\cdot\,]$ denotes $\mathbb{E}[\,\cdot \mid D_1, \ldots, D_{i-1}]$, then for every $n$ and $t \ge 0$,
$$\Pr[Z_n - Z_0 \ge t] \;\le\; \exp(-\alpha t).$$
Before proving Theorem 2.1, we first need a simple lemma.
Lemma 2.2. Let $D$ be a random variable satisfying $D \le 1$. Then for any $\lambda > 0$,
$$\mathbb{E}\big[e^{\lambda D}\big] \;\le\; \exp\big(\lambda\, \mathbb{E}[D] + (e^\lambda - \lambda - 1)\, \mathbb{E}[D^2]\big).$$
Proof. Let $f(\lambda, x) = (e^{\lambda x} - \lambda x - 1)/x^2$, where we set $f(\lambda, 0) = \lambda^2/2$. By standard integration, it is easily verified that $f(\lambda, x) = \int_0^\lambda \!\! \int_0^s e^{tx}\, dt\, ds$. As $e^{tx}$ is non-decreasing in $x$ for all $t \ge 0$, this implies that $f(\lambda, x)$ is non-decreasing in $x$. In particular, $f(\lambda, x) \le f(\lambda, 1)$ for any $x \le 1$, and hence
$$e^{\lambda x} = 1 + \lambda x + f(\lambda, x)\, x^2 \;\le\; 1 + \lambda x + f(\lambda, 1)\, x^2 = 1 + \lambda x + (e^\lambda - \lambda - 1)\, x^2.$$
Taking expectations and using that $1 + x \le e^x$ for all $x \in \mathbb{R}$ gives
$$\mathbb{E}\big[e^{\lambda D}\big] \;\le\; 1 + \lambda\, \mathbb{E}[D] + (e^\lambda - \lambda - 1)\, \mathbb{E}[D^2] \;\le\; \exp\big(\lambda\, \mathbb{E}[D] + (e^\lambda - \lambda - 1)\, \mathbb{E}[D^2]\big).$$
Proof of Theorem 2.1. By Markov's inequality,
$$\Pr[Z_n - Z_0 \ge t] = \Pr[\exp(\alpha(Z_n - Z_0)) \ge \exp(\alpha t)] \;\le\; \frac{\mathbb{E}[\exp(\alpha(Z_n - Z_0))]}{\exp(\alpha t)},$$
so it suffices to show that $\mathbb{E}[\exp(\alpha(Z_n - Z_0))] \le 1$. As $Z_0$ is deterministic, this is the same as $\mathbb{E}[\exp(\alpha Z_n)] \le \exp(\alpha Z_0)$. Now,
$$\begin{aligned}
\mathbb{E}_{i-1}\big[e^{\alpha Z_i}\big] &= e^{\alpha Z_{i-1}}\, \mathbb{E}_{i-1}\big[e^{\alpha(Z_i - Z_{i-1})}\big] = e^{\alpha Z_{i-1}}\, \mathbb{E}_{i-1}\big[e^{\alpha D_i}\big] \\
&\le e^{\alpha Z_{i-1}} \exp\big(\alpha\, \mathbb{E}_{i-1}[D_i] + (e^\alpha - \alpha - 1)\, \mathbb{E}_{i-1}[D_i^2]\big) && \text{(Lemma 2.2)} \\
&\le e^{\alpha Z_{i-1}} \exp\big((e^\alpha - \alpha^2 - \alpha - 1)\, \mathbb{E}_{i-1}[D_i^2]\big) \\
&\le e^{\alpha Z_{i-1}} && \text{(as $e^\alpha \le 1 + \alpha + \alpha^2$ for $0 \le \alpha \le 1$)}.
\end{aligned}$$
As this holds for all $i$, the result follows by the property of iterated expectations.
2.2 Semidefinite matrices
Let $\mathcal{S}^n$ denote the class of all symmetric $n \times n$ matrices with real entries. For two matrices $A, B \in \mathbb{R}^{n \times n}$, the trace inner product of $A$ and $B$ is defined as $\langle A, B\rangle = \operatorname{tr}(A^T B) = \sum_{i=1}^n \sum_{j=1}^n a_{ij} b_{ij}$. A matrix $V \in \mathcal{S}^n$ is positive semidefinite (PSD) if all its eigenvalues are non-negative, and we denote this by $V \succeq 0$. Equivalently, $V \succeq 0$ iff $a^T V a = \langle a a^T, V\rangle \ge 0$ for all $a \in \mathbb{R}^n$.

For $V \succeq 0$, let $V^{1/2} = \sum_i \lambda_i^{1/2} u_i u_i^T$, where $V = \sum_i \lambda_i u_i u_i^T$ is the spectral decomposition of $V$ with orthonormal eigenvectors $u_i$. Then $V^{1/2}$ is PSD and $V = U^T U$ for $U = V^{1/2}$. For $V, W \in \mathcal{S}^n$, we say that $V \preceq W$ if $W - V \succeq 0$.
2.3 Approximate independence and sub-isotropic random variables
Let $Y = (Y_1, \ldots, Y_n)$ be a random vector with $Y_1, \ldots, Y_n$ possibly dependent.

Definition 2.3 ($(\eta, q)$ sub-isotropic random vector). For $\eta \in (0,1]$ and $q \ge 1$, we say that $Y$ is $(\eta, q)$ sub-isotropic if it satisfies the following conditions.
1. $\mathbb{E}[Y_i] = 0$ and $\mathbb{E}[Y_i^2] \le 1$ for all $i \in [n]$, and $\sum_{i=1}^n \mathbb{E}[Y_i^2] \ge \eta n$.

2. For all $a = (a_1, \ldots, a_n) \in \mathbb{R}^n$ it holds that
$$\mathbb{E}\Big[\Big(\sum_{i=1}^n a_i Y_i\Big)^2\Big] \;\le\; q \sum_{i=1}^n a_i^2\, \mathbb{E}[Y_i^2]. \tag{2.1}$$

Note that if $Y_1, \ldots, Y_n$ are pairwise independent then (2.1) holds with equality for $q = 1$.

Let $V \in \mathcal{S}^n$ be the $n \times n$ covariance matrix of $Y_1, \ldots, Y_n$, i. e., $v_{ij} = \mathbb{E}[Y_i Y_j]$. Every covariance matrix is PSD, as $a^T V a = \mathbb{E}[(\sum_i a_i Y_i)^2] \ge 0$ for all $a \in \mathbb{R}^n$. Let $\operatorname{diag}(V)$ denote the diagonal $n \times n$ matrix with entries $v_{ii}$; then (2.1) can be written as $a^T (q \operatorname{diag}(V) - V) a \ge 0$ for every $a \in \mathbb{R}^n$, and hence equivalently expressed as
$$V \;\preceq\; q \operatorname{diag}(V).$$
Generic construction. We will use the following generic way to produce $(\eta, q)$ sub-isotropic vectors. Let $V$ be an $n \times n$ PSD matrix satisfying $v_{ii} \le 1$, $\operatorname{tr}(V) \ge \eta n$ and $V \preceq q \operatorname{diag}(V)$. Let $r \in \mathbb{R}^n$ be a random vector where each coordinate is independently and uniformly $\pm 1$. Then the random vector $Y = V^{1/2} r$ has covariance $\mathbb{E}[YY^T] = V^{1/2}\, \mathbb{E}[rr^T]\, (V^{1/2})^T = V$, and the properties of $V$ imply that $Y$ is $(\eta, q)$ sub-isotropic.
Remark 2.4. In other similar constructions, $r$ is often chosen as a random Gaussian, but we prefer to choose it as a random $\pm 1$ vector as it is bounded, and this makes some technical arguments easier later on.
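A short sketch of this generic construction (our own illustration, assuming a suitable PSD matrix $V$ is already given):

```python
import numpy as np

def sub_isotropic_sample(V, rng=np.random.default_rng()):
    """Sample Y = V^{1/2} r with r uniform in {-1,+1}^n, so that E[Y] = 0
    and E[Y Y^T] = V^{1/2} E[r r^T] V^{1/2} = V."""
    w, U = np.linalg.eigh(V)  # spectral decomposition of the PSD matrix V
    V_half = (U * np.sqrt(np.clip(w, 0, None))) @ U.T
    r = rng.choice([-1.0, 1.0], size=V.shape[0])
    return V_half @ r
```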
We will need the following result from [9], about finding sub-isotropic random vectors orthogonal to a subspace.

Theorem 2.5 ([9], Theorem 6). Let $G \subseteq \mathbb{R}^n$ be an arbitrary subspace with dimension $\dim(G) = \ell = \delta n$. Then for any $\eta > 0$ and $q > 1$ satisfying $1/q + \eta \le \delta$, there is an $n \times n$ PSD matrix $V$, computable in polynomial time, that satisfies the following properties:

(i) $\langle h h^T, V\rangle = 0$ for all $h \in G^\perp$, where $G^\perp$ is the subspace orthogonal to $G$.

(ii) $v_{ii} \le 1$ for all $i \in [n]$.

(iii) $\operatorname{tr}(V) \ge \eta n$.

(iv) $V \preceq q \operatorname{diag}(V)$.

The first condition gives that the range space of $V$ is contained in the subspace $G$, as $\langle h h^T, V\rangle = 0$ implies that $\|V^{1/2} h\|_2 = 0$ and hence $h^T V^{1/2} = 0$. So, for any vector $r \in \mathbb{R}^n$ the vector $Y = V^{1/2} r$ lies in $G$ (as for every $h \in G^\perp$, $h^T Y = h^T V^{1/2} r = \langle 0, r\rangle = 0$). So this gives a polynomial time algorithm to generate an $(\eta, q)$ sub-isotropic random vector $Y \in G$.
3 The algorithm and analysis
Recall that by iterated rounding we refer to any procedure that, given some starting fractional solution $x$, sets $x^{(0)} = x$ and applies a sequence of updates as follows. Given the vector $x^{(k-1)}$ at the beginning of iteration $k$, call a variable $i \in [n]$ frozen if $x_i^{(k-1)}$ is 0 or 1, and alive otherwise. Let $n_k$ denote the number of alive variables. Based on $x^{(k-1)}$, the algorithm picks a set of constraints of rank at most $n_k - 1$, given by the rows of some matrix $M^{(k)}$, and finds some non-zero vector $y^{(k)}$ satisfying $M^{(k)} y^{(k)} = 0$ and $y_i^{(k)} = 0$ for $i$ frozen. The solution is updated as $x^{(k)} = x^{(k-1)} + y^{(k)}$.

Let $\delta > 0$ be the slack parameter. We assume that the problem to be solved has an iterated rounding procedure where in each iteration $k$ one can choose some matrix $M^{(k)}$ with $\operatorname{rank}(M^{(k)}) \le (1-\delta)n_k$. We now describe the rounding algorithm.
3.1 The algorithm
Initialize $x^{(0)} = x$, where $x$ is the starting fractional solution given as input. For each iteration $k = 1, 2, \ldots$, repeat the following until all the variables reach 0 or 1.

Iteration $k$. Let $x^{(k-1)}$ be the solution at the beginning of iteration $k$, and let $A_k \subseteq [n]$ denote the set of alive coordinates $i$ with $x_i^{(k-1)} \in (0,1)$. Only the variables in $A_k$ will be updated henceforth. So for ease of notation we assume that $A_k = [n_k]$.

1. Apply Theorem 2.5, with $n = n_k$, $G$ the nullspace of $M^{(k)}$, $\eta = \delta/10$ and $q = 10/(9\delta)$, to find the covariance matrix $V$.

2. Let $\gamma = 1/(2n_k^{3/2})$. Let $r_k \in \mathbb{R}^{n_k}$ be a random vector with independent $\pm 1$ entries. Set
$$x^{(k)} = x^{(k-1)} + y^{(k)} \quad \text{with} \quad y^{(k)} = \gamma_k V^{1/2} r_k,$$
where $\gamma_k$ is the largest value such that (i) $\gamma_k \le \gamma$, and (ii) both $x^{(k-1)} + y^{(k)}$ and $x^{(k-1)} - y^{(k)}$ lie in $[0,1]^{n_k}$.
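The following sketch assembles the full loop. The helper `sub_isotropic_covariance`, standing in for the SDP of Theorem 2.5, and the problem-specific callback `constraint_matrix` returning $M^{(k)}$ are both hypothetical placeholders.

```python
import numpy as np

def sub_isotropic_round(x, constraint_matrix, sub_isotropic_covariance,
                        delta, rng=np.random.default_rng(), tol=1e-9):
    """Sketch of the algorithm of Section 3.1: in each iteration, sample a
    sub-isotropic update in the null space of M^{(k)} and take the largest
    safe step, until every coordinate is rounded to 0 or 1."""
    x = np.asarray(x, float).copy()
    eta, q = delta / 10, 10 / (9 * delta)
    while True:
        alive = np.where((x > tol) & (x < 1 - tol))[0]
        if len(alive) == 0:
            return np.round(x).astype(int)
        n_k = len(alive)
        M = constraint_matrix(x, alive)           # rank <= (1-delta) * n_k
        V = sub_isotropic_covariance(M, eta, q)   # n_k x n_k PSD matrix
        w, U = np.linalg.eigh(V)
        r = rng.choice([-1.0, 1.0], size=n_k)
        y = (U * np.sqrt(np.clip(w, 0, None))) @ (U.T @ r)   # V^{1/2} r
        gamma = 1 / (2 * n_k ** 1.5)
        # largest gamma_k <= gamma with both x +/- gamma_k * y in [0,1]
        gamma_k = min(gamma, (np.minimum(x[alive], 1 - x[alive]) /
                              np.maximum(np.abs(y), tol)).min())
        x[alive] += gamma_k * y
```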
3.2 Analysis
Let $X = (X_1, \ldots, X_n)$ denote the final 0-1 solution returned by the algorithm. The property that $\mathbb{E}[X_i] = x_i$ follows directly, as the update $y^{(k)}$ at each time step has mean zero in each coordinate. As the algorithm always moves in the nullspace of $M^{(k)}$, clearly it will also satisfy the iterated rounding guarantee with probability 1.

To analyze the running time, we note that whenever $\gamma_k < \gamma$, there is at least $1/2$ probability that some new variable will reach 0 or 1 after the update (as the new solution is either $x^{(k-1)} + y^{(k)}$ or $x^{(k-1)} - y^{(k)}$ with probability half each). So, in expectation there are at most $2n$ such steps. So let us focus on the iterations where $\gamma_k = \gamma$.
Let us define the energy of $x^{(k)}$ as $E^{(k)} := \sum_i (x_i^{(k)})^2$. During the update, conditioned on all the randomness up to time $k-1$, $E^{(k)}$ rises in expectation by at least $\gamma^2 \eta n_k$, as
$$\mathbb{E}_{k-1}\big[E^{(k)} - E^{(k-1)}\big] = \mathbb{E}_{k-1}\Big[\sum_i \big(2 x_i^{(k-1)} y_i^{(k)} + (y_i^{(k)})^2\big)\Big] = \sum_i \mathbb{E}_{k-1}\big[(y_i^{(k)})^2\big] = \gamma^2 \operatorname{tr}(V) \;\ge\; \gamma^2 \eta n_k,$$
where the second equality uses that $\mathbb{E}_{k-1}[y_i^{(k)}] = 0$ for each $i$. As the final energy can be at most $n$, standard arguments [6, 26] imply that, with constant probability, the algorithm terminates in $O(n \log n + \log n/\gamma^2)$ time steps.
It remains to show that the rounding satisfies the concentration property, which we do next.
3.2.1 Isotropic updates imply concentration
Let $X = (X_1, \ldots, X_n)$ denote the final 0-1 solution returned by the algorithm. Fix any $a = (a_1, a_2, \ldots, a_n) \in \mathbb{R}^n$. We will show that
$$\Pr[\langle a, X\rangle - \mathbb{E}[\langle a, X\rangle] \ge t] \;\le\; \exp\left(-\frac{t^2/\beta}{2\big(\sum_i a_i^2 (x_i - x_i^2) + Mt/3\big)}\right)$$
with $\beta = 18q$, which equals $20/\delta$ by our choice of the parameters, and $M = \max_i |a_i|$.
Proof. By scaling the $a_i$'s and $t$, we can assume that $M = 1$. Let us define the random variable
$$Z_k = \sum_i a_i x_i^{(k)} + \lambda \sum_i a_i^2\, x_i^{(k)}\big(1 - x_i^{(k)}\big),$$
where the parameter $\lambda \le 1$ will be optimized later.

It is useful to think of $Z_k$ as $\mu_k + \lambda v_k$, where $\mu_k = \langle a, x^{(k)}\rangle$ and $v_k = \sum_i a_i^2\, x_i^{(k)}(1 - x_i^{(k)})$ are the mean and the variance if we were to randomly round the coordinates of $x^{(k)}$ independently to 0-1. Initially, $Z_0 = \mu + \lambda v$ where $\mu = \sum_i a_i x_i$ and $v = \sum_i a_i^2 (x_i - x_i^2)$.

We will show that $Z_k$ satisfies the conditions of Theorem 2.1 for an appropriate $\alpha$, and use it to show the required tail bound. Roughly speaking, as the algorithm proceeds, $Z_k$ has a strong negative drift, as the energy term $v_k$ decreases in expectation and $\mu_k$ does not change in expectation. In the proof of Theorem 2.1, this negative drift offsets the positive terms that arise while bounding the exponential moment of $Z_k$.
We now give the details. We first compute $D_k := Z_k - Z_{k-1}$ and show that it is bounded. Using $x^{(k)} = x^{(k-1)} + y^{(k)}$, we have
$$\begin{aligned}
D_k = Z_k - Z_{k-1} &= \sum_i a_i \big(x_i^{(k)} - x_i^{(k-1)}\big) + \lambda \sum_i a_i^2 \Big(x_i^{(k)}\big(1 - x_i^{(k)}\big) - x_i^{(k-1)}\big(1 - x_i^{(k-1)}\big)\Big) \\
&= \sum_i a_i y_i^{(k)} + \lambda \sum_i a_i^2\, y_i^{(k)} \big(1 - 2x_i^{(k-1)} - y_i^{(k)}\big).
\end{aligned} \tag{3.1}$$
Claim 3.1. For all $k$, the update $y^{(k)}$ satisfies $\|y^{(k)}\|_2 \le \gamma n_k = \tfrac{1}{2} n_k^{-1/2}$.

Proof. Recall that $y^{(k)} = \gamma_k V^{1/2} r_k$ with $\gamma_k \le \gamma$. Let $V^{1/2}(j)$ denote the $j$-th column of $V^{1/2}$. As
$$\langle V^{1/2}(j), V^{1/2}(j)\rangle = v_{jj} \le 1,$$
the columns of $V^{1/2}$ have length at most 1. Let $r_k(j)$ denote the $j$-th entry of $r_k$. Applying the triangle inequality to the columns of $V^{1/2}$,
$$\|V^{1/2} r_k\|_2 \;\le\; \sum_j |r_k(j)|\, \|V^{1/2}(j)\|_2 \;\le\; \|r_k\|_1 \;\le\; n_k.$$
This gives that $\|y^{(k)}\|_2 \le \gamma n_k$.
Claim 3.2. For all $k$, $|D_k| \le 1$.

Proof. First we note that the second term in (3.1) is at most $\sum_i |a_i y_i^{(k)}|$. This follows as $|a_i^2| \le |a_i|$ (as $M = 1$), $\lambda \le 1$ by our assumption, and $1 - 2x_i^{(k-1)} - y_i^{(k)} \in [-1,1]$ (as $1 - x_i^{(k-1)} \in [0,1]$ and $x_i^{(k-1)} + y_i^{(k)} = x_i^{(k)} \in [0,1]$). As $\|a\|_\infty \le 1$ and using the bound on $\|y^{(k)}\|_2$ in Claim 3.1, we have
$$|D_k| \;\le\; \Big|\sum_i a_i y_i^{(k)}\Big| + \sum_i \big|a_i y_i^{(k)}\big| \;\le\; 2 \sum_i \big|y_i^{(k)}\big| = 2\|y^{(k)}\|_1 \;\le\; 2 n_k^{1/2}\, \|y^{(k)}\|_2 \;\le\; 2\gamma n_k^{3/2} = 1.$$
We now upper bound the negative drift of $Z_k$.

Claim 3.3. $\mathbb{E}_{k-1}[D_k] \le -(\lambda/8q)\, \mathbb{E}_{k-1}[D_k^2]$.

Proof. As $\mathbb{E}_{k-1}[y_i^{(k)}] = 0$ for all $i$, and as $x_i^{(k-1)}$ is deterministic conditioned on the randomness until $k-1$, taking expectations $\mathbb{E}_{k-1}[\cdot]$ in (3.1) gives
$$\mathbb{E}_{k-1}[D_k] = -\lambda \sum_i a_i^2\, \mathbb{E}_{k-1}\big[(y_i^{(k)})^2\big]. \tag{3.2}$$
We now upper bound $\mathbb{E}_{k-1}[D_k^2]$. Using $(a+b)^2 \le 2a^2 + 2b^2$ twice for the expression in (3.1),
$$\begin{aligned}
D_k^2 &\le 2\Big(\sum_i a_i y_i^{(k)}\Big)^2 + 2\lambda^2 \Big(\sum_i a_i^2\, y_i^{(k)}\big(1 - 2x_i^{(k-1)} - y_i^{(k)}\big)\Big)^2 \\
&\le 2\Big(\sum_i a_i y_i^{(k)}\Big)^2 + 4\lambda^2 \Bigg(\Big(\sum_i a_i^2\, y_i^{(k)}\big(1 - 2x_i^{(k-1)}\big)\Big)^2 + \Big(\sum_i a_i^2 (y_i^{(k)})^2\Big)^2\Bigg).
\end{aligned} \tag{3.3}$$
Taking expectations $\mathbb{E}_{k-1}[\cdot]$ in (3.3), we now upper bound the terms on the right. As $y^{(k)}$ is $(\eta, q)$ sub-isotropic, by (2.1) the first term satisfies
$$\mathbb{E}_{k-1}\Big[\Big(\sum_i a_i y_i^{(k)}\Big)^2\Big] \;\le\; q \sum_i a_i^2\, \mathbb{E}_{k-1}\big[(y_i^{(k)})^2\big].$$
Similarly, by the sub-isotropic property, the second term satisfies
$$\mathbb{E}_{k-1}\Big[\Big(\sum_i a_i^2\, y_i^{(k)}\big(1 - 2x_i^{(k-1)}\big)\Big)^2\Big] \;\le\; q \sum_i a_i^4 \big(1 - 2x_i^{(k-1)}\big)^2\, \mathbb{E}_{k-1}\big[(y_i^{(k)})^2\big] \;\le\; q \sum_i a_i^2\, \mathbb{E}_{k-1}\big[(y_i^{(k)})^2\big],$$
where the last step uses that $|a_i| \le 1$ and $|1 - 2x_i^{(k-1)}| \le 1$.

Finally, as $|a_i| \le 1$ and $\sum_i (y_i^{(k)})^2 \le \gamma n_k \le 1/2$ by Claim 3.1, the third term can be bounded as
$$\Big(\sum_i a_i^2 (y_i^{(k)})^2\Big)^2 \;\le\; (1/2) \sum_i a_i^2 (y_i^{(k)})^2. \tag{3.4}$$
Plugging these bounds into (3.3) and using that $\lambda \le 1$ gives that
$$\mathbb{E}_{k-1}\big[D_k^2\big] \;\le\; 8q \sum_i a_i^2\, \mathbb{E}_{k-1}\big[(y_i^{(k)})^2\big] = -(8q/\lambda)\, \mathbb{E}_{k-1}[D_k],$$
where the last equality uses (3.2).
By Claim 3.3, we can apply Theorem 2.1 with $\alpha = \lambda/8q$, provided that the conditions for Theorem 2.1 hold. Indeed, $\alpha \le 1$ holds as $\lambda \le 1$ and $q \ge 1$, and $|D_k| \le 1$ holds by Claim 3.2. Applying Theorem 2.1 now gives that $\Pr[Z_K - Z_0 \ge t'] \le \exp(-\lambda t'/8q)$, where $K$ denotes the final iteration. As $Z_K = \mu_K = \langle a, X\rangle$ (since $v_K = 0$) and $Z_0 = \mu + \lambda v$, this gives
$$\Pr[\mu_K - \mu - \lambda v \ge t'] \;\le\; \exp(-t'\lambda/8q). \tag{3.5}$$
Setting $\lambda = t'/(t' + 2v)$ (note that this satisfies our assumption $\lambda \le 1$), so that $\lambda v = t'v/(t' + 2v) \le t'/2$, (3.5) implies that
$$\Pr[\mu_K - \mu \ge 3t'/2] \;\le\; \exp(-t'\lambda/8q).$$
Setting $t = 3t'/2$ and plugging in the value of $\lambda$ gives the desired result that
$$\Pr[\mu_K - \mu \ge t] \;\le\; \exp\left(-\frac{t^2/(18q)}{2(v + t/3)}\right).$$
4 Applications
4.1 Rounding column-sparse LPs
Let $x \in [0,1]^n$ be a fractional solution to $Ax = b$, where $A \in \mathbb{R}^{m \times n}$ is an arbitrary $m \times n$ matrix. Let $t = \max_{j \in [n]} \sum_{i \in [m]} |a_{ij}|$ be the maximum $\ell_1$-norm of the columns of $A$. Beck and Fiala [12] gave a rounding method to find $X \in \{0,1\}^n$ so that the maximum rounding error for any row satisfies $\|AX - b\|_\infty = \|A(X - x)\|_\infty < t$.
Beck–Fiala rounding. We first recall the iterated rounding algorithm in [12]. The algorithm starts with $x^{(0)} = x$ and proceeds in iterations. Consider some iteration $k$, and let $A_k$ denote the matrix $A$ restricted to the alive coordinates. Call a row big if its $\ell_1$-norm in $A_k$ is strictly more than $t$. The key point is that, by an averaging argument, the number of big rows is strictly less than $n_k$: each column has $\ell_1$-norm at most $t$, and thus the total $\ell_1$-norm of all entries of $A_k$ is at most $t n_k$. The algorithm chooses $M^{(k)}$ to be the matrix consisting of the big rows of $A_k$ and applies the iterated rounding update.

Let us now analyze the error. Fix some row $i$. As long as this row $i$ is big, its rounding error is 0 during the update steps. Consider the first iteration when this row is no longer big. Then, no matter how the remaining alive variables are rounded in subsequent iterations, the error incurred will be (strictly) less than its $\ell_1$-norm, which is at most $t$.
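In code, this row-selection rule is a short helper; the sketch below (our own, with hypothetical names) also takes the slack parameter $\delta$ used in the next paragraph.

```python
import numpy as np

def big_rows(A, x, delta=0.0, tol=1e-9):
    """Select M^{(k)}: rows of A whose l1-norm over the alive (fractional)
    columns exceeds t/(1 - delta), where t is the maximum column l1-norm.
    delta = 0 gives the classical Beck-Fiala rule."""
    t = np.abs(A).sum(axis=0).max()
    alive = (x > tol) & (x < 1 - tol)
    A_k = A[:, alive]
    return A_k[np.abs(A_k).sum(axis=1) > t / (1 - delta)]
```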
Introducing slack. To apply Theorem 1.2, we can easily introduce $\delta$-slack for any $0 \le \delta < 1$, as follows. In iteration $k$, call a row big if its $\ell_1$-norm exceeds $t/(1-\delta)$; by the argument above, the number of big rows is strictly less than $n_k(1-\delta)$. Theorem 1.2 now directly gives the following result.
Theorem 4.1. Given a matrix $A$ with maximum $\ell_1$-norm of any column at most $t$ and any $x \in [0,1]^n$, for any $0 \le \delta < 1$ the algorithm returns $X \in \{0,1\}^n$ such that $\|A(X - x)\|_\infty \le t/(1-\delta)$, $\mathbb{E}[X_i] = x_i$, and $X$ satisfies $O(1/\delta)$-concentration.
This implies the following useful corollary.
Corollary 4.2. Let $P$ be a matrix, and $A$ be some subset of the rows of $P$ so that the columns of $P$ restricted to $A$ have $\ell_1$-norm at most $t$. Setting $\delta = 1/2$, the rounding error is at most $2t$ for the rows of $A$, while the other rows of $P$ have error similar to that under randomized rounding.
Komlós setting. For an $m \times n$ matrix $A$, let $t = \max_{j \in [n]} (\sum_{i \in [m]} a_{ij}^2)^{1/2}$ denote the maximum $\ell_2$-norm of the columns of $A$. The long-standing Komlós conjecture (together with a connection between hereditary discrepancy and rounding due to [25]) states that any $x \in [0,1]^n$ can be rounded to $X \in \{0,1\}^n$ so that $\|A(X - x)\|_\infty = O(t)$. Currently, the best known bound on this rounding error is $O(t\sqrt{\log n})$ [5, 8].
An argument similar to that for Theorem 4.1 gives the following result in this setting.
Theorem 4.3. If $A$ has maximum column $\ell_2$-norm $t$, then given any $x \in [0,1]^n$, the algorithm returns an $X \in \{0,1\}^n$ satisfying $\|A(X - x)\|_\infty = O(t\sqrt{\log n})$ and the $O(1)$-concentration property.
Proof. We will apply Theorem 1.2 with $\delta = 1/2$. During any iteration $k$, call row $i$ big if its squared $\ell_2$-norm in $A_k$ exceeds $2t^2$. As the sum of the squared entries of $A_k$ is at most $t^2 n_k$, the number of big rows is at most $n_k/2$, and we set $M^{(k)}$ to $A_k$ restricted to the big rows.

The $O(1)$-concentration follows directly from Theorem 1.2. To bound the error for the rows of $A$, we argue as follows. Fix a row $i$. Clearly, row $i$ incurs zero error while it is big. Let $k$ be the first iteration when row $i$ is not big, and condition on the randomness up to this point. Call an (alive) coordinate $j$ large if $|a_{ij}| \ge t/\sqrt{\log n}$, and let $L_i$ denote the set of large coordinates in row $i$. Let $\tilde{a}_i$ denote the row $a_i$ with the coordinates in $L_i$ removed. As $\sum_j a_{ij}^2 \le 2t^2$, we have $|L_i| \le 2\log n$, and so the rounding error due to the coordinates in $L_i$ can be at most
$$\sum_{j \in L_i} |a_{ij}| \;\le\; |L_i|^{1/2} \Big(\sum_{j \in L_i} |a_{ij}|^2\Big)^{1/2} = O(t\sqrt{\log n}).$$
Applying the $O(1)$-concentration property of the rounded solution $X$, the error due to the entries of $\tilde{a}_i$ satisfies
$$\Pr\Big[\sum_{j \notin L_i} a_{ij}\big(X_j - x_j^{(k)}\big) \ge c\, t\sqrt{\log n}\Big] \;\le\; \exp\left(-c'\, \frac{c^2 t^2 \log n}{\sum_{j \notin L_i} a_{ij}^2 + b\, c\, t\sqrt{\log n}}\right)$$
for some fixed constant $c'$, where $b = \max_{j \notin L_i} |a_{ij}|$. As $\sum_{j \notin L_i} a_{ij}^2 \le 2t^2$ and $b \le t/\sqrt{\log n}$, the right hand side is $\exp(-\Omega(c\, c' \log n))$. Choosing $c$ large enough so that this is $1/\mathrm{poly}(n)$, the result follows by a union bound over the rows.
4.2 Makespan minimization on unrelated machines
In the unrelated machine setting, there are ๐ jobs and ๐ machines, and each job ๐ โ [๐] has
size ๐ ๐๐ on a machine ๐ โ [๐]. The goal is to assign all the jobs to machines to minimize the
maximum machine load.
LP formulation. The standard LP relaxation has fractional assignment variables $x_{ij} \in [0,1]$ for $i \in [m]$ and $j \in [n]$. Consider the smallest target makespan $T$ for which the following LP is feasible.
$$\begin{aligned}
\sum_{j \in [n]} p_{ij} x_{ij} &\le T && \forall i \in [m] && \text{(load constraints)} \\
\sum_{i \in [m]} x_{ij} &= 1 && \forall j \in [n] && \text{(assignment constraints)} \\
x_{ij} &= 0 && \forall i, j \text{ such that } p_{ij} > T
\end{aligned}$$
The last constraint is valid for any integral solution, and so we can assume that $p_{\max} := \max_{ij} p_{ij}$ is at most $T$. In a celebrated result, [24] gave a rounding procedure that produces an integral solution with makespan at most $T + p_{\max}$. We now sketch the iterated rounding based proof of this result from [23].
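As an aside, for a concrete instance the feasibility of this LP for a given target $T$ can be checked with an off-the-shelf solver; a minimal sketch using scipy (the binary search over $T$ is omitted):

```python
import numpy as np
from scipy.optimize import linprog

def lp_feasible(p, T):
    """Check feasibility of the makespan LP for target T, where p is the
    m x n matrix of job sizes; x_ij is flattened as index i*n + j."""
    m, n = p.shape
    A_ub = np.zeros((m, m * n))              # load constraint per machine
    for i in range(m):
        A_ub[i, i * n:(i + 1) * n] = p[i]
    A_eq = np.zeros((n, m * n))              # assignment constraint per job
    for j in range(n):
        A_eq[j, j::n] = 1.0
    bounds = [(0.0, 0.0) if p[i, j] > T else (0.0, 1.0)
              for i in range(m) for j in range(n)]   # forbids p_ij > T
    res = linprog(np.zeros(m * n), A_ub=A_ub, b_ub=np.full(m, float(T)),
                  A_eq=A_eq, b_eq=np.ones(n), bounds=bounds, method="highs")
    return res.status == 0
```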
Iterated rounding proof. As always, we start with $x^{(0)} = x$ and fix the variables that get rounded to 0 or 1. Consider some iteration $k$. Let $n_k$ denote the number of fractional variables, and let $J_k$ denote the set of jobs that are still not integrally assigned to some machine. For a machine $i$, define the excess as
$$e_i := \sum_{j \in J_k :\, x_{ij}^{(k)} > 0} \big(1 - x_{ij}^{(k)}\big), \tag{4.1}$$
and note that $e_i$ is simply the maximum (fractional) number of extra jobs that can possibly be assigned to $i$ if all the non-zero variables are rounded to 1. An elegant counting argument in [23] shows that if $M^{(k)}$ consists of the load constraints for machines with $e_i > 1$ and the assignment constraints for jobs in $J_k$, then $\operatorname{rank}(M^{(k)}) < n_k$.
Introducing slack. We now extend the argument of [23] to introduce some slack so that we
can apply Theorem 1.2. This will give the following result.
Theorem 4.4. Given any $\delta \in [0, 1/2)$ and a fractional solution $x$ to the problem, there is a rounding where the integral solution $X$ increases the load on any machine by at most $p_{\max}/(1 - 2\delta)$, satisfies $\mathbb{E}[X_{ij}] = x_{ij}$ for all $i, j$, and has $O(1/\delta)$-concentration.
Proof. Consider some iteration $k$, and let $n_k$ denote the number of fractional variables $x_{ij} \in (0,1)$, and let $J_k$ denote the jobs that are still not integrally assigned. Let $j_k = |J_k|$. For a machine $i$, we define the excess $e_i$ as in (4.1). Let $M_k$ denote the set of machines with $e_i > 1/(1-2\delta)$.

$M^{(k)}$ will consist of the load constraints for machines in $M_k$ and the assignment constraints for jobs in $J_k$. More precisely, the update $y_{ij}^{(k)}$ will satisfy the following two conditions: (i) $\sum_j p_{ij}\, y_{ij}^{(k)} = 0$ for all $i \in M_k$, and (ii) $\sum_i y_{ij}^{(k)} = 0$ for all $j \in J_k$. We say that machine $i$ is protected in iteration $k$ if $i \in M_k$. For a protected machine, the fractional load does not change after an update. When a machine ceases to be protected for the first time, the definition of excess ensures that its extra load in subsequent iterations can be at most $p_{\max}/(1-2\delta)$.

It remains to show that $\operatorname{rank}(M^{(k)}) \le (1-\delta)n_k$. As each job in $J_k$ contributes at least two fractional variables to $n_k$, we first note that
$$2 j_k \;\le\; n_k. \tag{4.2}$$
Let $m_k = |M_k|$. Then we also have the following.
Claim 4.5. $m_k \le (1 - 2\delta)(n_k - j_k)$.

Proof. Clearly $m_k/(1-2\delta) \le \sum_{i \in M_k} e_i$, as each $i \in M_k$ has excess more than $1/(1-2\delta)$. Next,
$$\sum_{i \in M_k} e_i = \sum_{i \in M_k}\; \sum_{j \in J_k :\, x_{ij}^{(k)} > 0} \big(1 - x_{ij}^{(k)}\big) \;\le\; \sum_i\; \sum_{j \in J_k :\, x_{ij}^{(k)} > 0} \big(1 - x_{ij}^{(k)}\big) = n_k - j_k,$$
where the first equality uses the definition of $e_i$, and the second uses the definition of $n_k$ and that $\sum_i x_{ij}^{(k)} = 1$ for each job $j \in J_k$. Together this gives $m_k \le (1-2\delta)(n_k - j_k)$.
Multiplying (4.2) by $\delta$ and adding to the inequality in Claim 4.5 gives $j_k + m_k \le (1-\delta)n_k$, which implies the result as $\operatorname{rank}(M^{(k)}) \le j_k + m_k$.
Remark 4.6. Setting $\delta = 0$ recovers the additive $p_{\max}$ result of [24]. Theorem 4.4 also generalizes directly to $d$ resources, where job $j$ has load vector $p_{ij} = (p_{ij}(1), \ldots, p_{ij}(d))$ on machine $i$, and the goal is to find an assignment $A$ of jobs to machines to minimize $\max_{\ell, i} \sum_{j : A(j) = i} p_{ij}(\ell)$. A direct modification of the proof above gives an additive $d\, p_{\max}/(1-2\delta)$ error and the $O(1/\delta)$-concentration property.
4.3 Minimum cost degree-bounded matroid basis
Instead of just the degree-bounded spanning tree problem, we consider the more general
matroid setting as all the arguments apply directly without additional work.
Minimum cost degree-bounded matroid-basis problem (DegMat). The input is a matroid $\mathcal{M}$ defined on elements $V$ with costs $c : V \to \mathbb{R}_+$, and $m$ degree constraints specified by $(S_i, b_i)$ for $i \in [m]$, where $S_i \subseteq V$ and $b_i \in \mathbb{Z}_+$. The goal is to find a minimum-cost base $I$ in $\mathcal{M}$ satisfying the degree bounds, i. e., $|I \cap S_i| \le b_i$ for all $i \in [m]$. The matroid $\mathcal{M}$ is given implicitly, by an independence oracle (which, given a query $I$, returns whether $I$ is an independent set or not).
Iterated rounding algorithm. The natural LP formulation for the problem has variables $x_v \in [0,1]$ for each element $v \in V$, and the goal is to minimize the cost $\sum_v c_v x_v$ subject to the following constraints.
$$\begin{aligned}
\sum_{v \in S} x_v &\le r(S) && \forall S \subseteq V && \text{(rank constraints)} \\
\sum_{v \in V} x_v &= r(V) && && \text{(matroid base constraint)} \\
\sum_{v \in S_i} x_v &\le b_i && \forall i \in [m] && \text{(degree constraints)}
\end{aligned}$$
Here $r(\cdot)$ is the rank function of $\mathcal{M}$.
Given a feasible LP solution with cost $c^*$, [22, 10] gave an iterated rounding algorithm that finds a solution with cost at most $c^*$ and an additive degree violation of at most $\Delta - 1$. Here $\Delta = \max_v |\{i : v \in S_i\}|$ is the maximum number of sets that contain any element $v$. Note that $\Delta = 2$ for the degree-bounded spanning tree problem, as the elements here are edges and the sets $S_i$ consist of the edges incident to a vertex, so that each edge can lie in at most two such sets.
We briefly sketch the argument in [22, 10]. The algorithm starts with $x^{(0)} = x$ and applies iterated rounding as follows. Consider some iteration $k$. Let $A_k$ denote the set of fractional variables and let $n_k = |A_k|$. For a set $S_i$, define the excess as
$$e_i := \sum_{v \in A_k \cap S_i} \big(1 - x_v^{(k)}\big), \tag{4.3}$$
the maximum degree violation for $S_i$ even if all current fractional variables are rounded to 1. Let $D_k$ be the set of indices $i$ of degree constraints with excess $e_i \ge \Delta$. The algorithm chooses $M^{(k)}$ to consist of the degree constraints in $D_k$ (call these protected constraints) and some basis for the tight matroid rank constraints. An elegant counting argument then shows that $\operatorname{rank}(M^{(k)}) \le n_k - 1$. The correctness follows since if a degree constraint is no longer protected, then its excess is strictly below $\Delta$, and by the integrality of $b_i$ and of the final rounded solution, the degree violation can be at most $\Delta - 1$.
Introducing slack. We will extend the argument above in a straightforward way to introduce some slack, and then apply Theorem 1.2 to obtain the following result.

Theorem 4.7. For any $0 < \delta < 1/2$, there is an algorithm for the DegMat problem that produces a basis with additive degree violation strictly less than $\Delta/(1-2\delta)$ and satisfies $O(1/\delta)$-concentration.
Setting $\delta = 1/6$, so that $2/(1-2\delta) = 3$, and noting that the degree violation is strictly less than this bound, gives the following.

Corollary 4.8. For the minimum cost degree-bounded spanning tree problem, given a fractional solution $x$, there is an algorithm to find a spanning tree with degree violation of plus two satisfying $O(1)$-concentration.
We now describe the argument. Consider iteration $k$. Let $A_k$ be the set of fractional variables and $n_k = |A_k|$. We need to specify how to choose $M^{(k)}$ and show that $\operatorname{rank}(M^{(k)}) \le (1-\delta)n_k$. Let $D_k$ denote the set of indices $i$ of degree constraints with excess $e_i \ge \Delta/(1-2\delta)$. Let $\mathcal{F}$ denote the family of the matroid constraints that hold with equality, i. e., $\sum_{v \in S \cap A_k} x_v = r_k(S)$, where $r_k$ is the rank function of the matroid $\mathcal{M}_k$ obtained from $\mathcal{M}$ by deleting the elements with $x_v = 0$ and contracting those with $x_v = 1$. It is well known [30] that there is a chain family of tight sets $\mathcal{C} = \{C_1, \ldots, C_\ell\}$, with $C_1 \subset C_2 \subset \cdots \subset C_\ell$, such that the rank constraint of every $S \in \mathcal{F}$ lies in the linear span of the constraints for the sets in $\mathcal{C}$. Let $c_k = |\mathcal{C}|$ and $d_k = |D_k|$. We set $M^{(k)}$ to consist of the degree constraints in $D_k$ and the rank constraints for the sets in $\mathcal{C}$.
Claim 4.9. $\operatorname{rank}(M^{(k)}) \le (1-\delta)n_k$.

Proof. It suffices to show that $d_k + c_k \le (1-\delta)n_k$. As each $x_v$ is fractional and as the ranks $r_k(C)$ are integral, it follows that any two sets in the chain family differ by at least two elements, i. e., $|C_{j+1} \setminus C_j| \ge 2$. This implies that $c_k \le n_k/2$. We also note that $r_k(C_1) < r_k(C_2) < \cdots$, and in particular the rank $r_k(C_{c_k})$ of the largest set in $\mathcal{C}$ is at least $c_k$. This gives that $\sum_{v \in A_k} x_v \ge c_k$.

Next, as $e_i \ge \Delta/(1-2\delta)$ for each $i \in D_k$, we have that $\Delta d_k \le (1-2\delta) \sum_{i \in D_k} e_i$. Moreover, by the definition of $e_i$,
$$\sum_{i \in D_k} e_i = \sum_{i \in D_k}\; \sum_{v \in A_k \cap S_i} (1 - x_v) = \sum_{v \in A_k} g_v (1 - x_v),$$
where $g_v = |\{i : v \in S_i,\, i \in D_k\}|$ is the number of tight degree constraints in $D_k$ that contain element $v$. As $g_v \le \Delta$, the above is at most $\Delta \sum_{v \in A_k} (1 - x_v) \le \Delta n_k - \Delta c_k$, where we use that $\sum_{v \in A_k} x_v \ge c_k$ and $\sum_{v \in A_k} 1 = |A_k| = n_k$.
Together this gives that $d_k \le (1-2\delta)(n_k - c_k)$, and adding $2\delta$ times the inequality $c_k \le n_k/2$ to this gives that $d_k + c_k \le (1-\delta)n_k$, which proves the desired claim.

The degree violation property follows as before, since if a degree constraint is no longer protected, then its excess is strictly below $\Delta/(1-2\delta)$.
Finally, we remark that as the underlying LP has exponential size, some care is needed in
implementing the rounding algorithm, in particular in maintaining the chain family and in
computing the step size of the walk. These issues are discussed in [11].
Acknowledgements
We thank the referees for their extremely thorough reading of the manuscript, catching various
errors, and for several useful comments and suggestions that significantly improved the
presentation.
References
[1] Nima Anari and Shayan Oveis Gharan: Effective-resistance-reducing flows, spectrally thin trees, and asymmetric TSP. In Proc. 56th FOCS, pp. 20–39. IEEE Comp. Soc., 2015. [doi:10.1109/FOCS.2015.11]

[2] Sanjeev Arora, Alan M. Frieze, and Haim Kaplan: A new rounding procedure for the assignment problem with applications to dense graph arrangement problems. Math. Programming, 92(1):1–36, 2002. [doi:10.1007/s101070100271]

[3] Arash Asadpour, Michel X. Goemans, Aleksander Madry, Shayan Oveis Gharan, and Amin Saberi: An O(log n/log log n)-approximation algorithm for the asymmetric traveling salesman problem. Oper. Res., 65(4):1043–1061, 2017. Preliminary version in SODA'10. [doi:10.1287/opre.2017.1603]

[4] Arash Asadpour and Amin Saberi: An approximation algorithm for max-min fair allocation of indivisible goods. SIAM J. Comput., 39(7):2970–2989, 2010. [doi:10.1137/080723491]

[5] Wojciech Banaszczyk: Balancing vectors and Gaussian measures of n-dimensional convex bodies. Random Struct. Algor., 12(4):351–360, 1998. [doi:10.1002/(SICI)1098-2418(199807)12:4<351::AID-RSA3>3.0.CO;2-S]

[6] Nikhil Bansal: Constructive algorithms for discrepancy minimization. In Proc. 51st FOCS, pp. 3–10. IEEE Comp. Soc., 2010. [doi:10.1109/FOCS.2010.7]

[7] Nikhil Bansal: On a generalization of iterated and randomized rounding. In Proc. 51st STOC, pp. 1125–1135. ACM Press, 2019. [doi:10.1145/3313276.3316313]

[8] Nikhil Bansal, Daniel Dadush, and Shashwat Garg: An algorithm for Komlós conjecture matching Banaszczyk's bound. SIAM J. Comput., 48(2):534–553, 2019. [doi:10.1137/17M1126795]

[9] Nikhil Bansal and Shashwat Garg: Algorithmic discrepancy beyond partial coloring. In Proc. 49th STOC, pp. 914–926. ACM Press, 2017. [doi:10.1145/3055399.3055490]

[10] Nikhil Bansal, Rohit Khandekar, and Viswanath Nagarajan: Additive guarantees for degree-bounded directed network design. SIAM J. Comput., 39(4):1413–1431, 2010. [doi:10.1137/080734340]

[11] Nikhil Bansal and Viswanath Nagarajan: Approximation-friendly discrepancy rounding. In Proc. 18th Integer Prog. Combinat. Optim. (IPCO'16), volume 9682 of LNCS, pp. 375–386. Springer, 2016. [doi:10.1007/978-3-319-33461-5_31]

[12] József Beck and Tibor Fiala: "Integer-making" theorems. Discr. Appl. Math., 3(1):1–8, 1981. [doi:10.1016/0166-218X(81)90022-6]

[13] Julius Borcea, Petter Brändén, and Thomas M. Liggett: Negative dependence and the geometry of polynomials. J. AMS, 22(2):521–567, 2009. [doi:10.1090/S0894-0347-08-00618-8]

[14] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart: Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford Univ. Press, 2013. [doi:10.1093/acprof:oso/9780199535255.001.0001]

[15] Chandra Chekuri, Jan Vondrák, and Rico Zenklusen: Dependent randomized rounding via exchange properties of combinatorial structures. In Proc. 51st FOCS, pp. 575–584. IEEE Comp. Soc., 2010. [doi:10.1109/FOCS.2010.60]

[16] Chandra Chekuri, Jan Vondrák, and Rico Zenklusen: Multi-budgeted matchings and matroid intersection via dependent rounding. In Proc. 22nd Ann. ACM–SIAM Symp. on Discrete Algorithms (SODA'11), pp. 1080–1097. SIAM, 2011. [doi:10.1137/1.9781611973082.82]

[17] Devdatt P. Dubhashi and Alessandro Panconesi: Concentration of Measure for the Analysis of Randomized Algorithms. Cambridge Univ. Press, 2009. [doi:10.1017/CBO9780511581274]

[18] David A. Freedman: On tail probabilities for martingales. Ann. Probab., 3(1):100–118, 1975. [doi:10.1214/aop/1176996452]

[19] Rajiv Gandhi, Samir Khuller, Srinivasan Parthasarathy, and Aravind Srinivasan: Dependent rounding and its applications to approximation algorithms. J. ACM, 53(3):324–360, 2006. [doi:10.1145/1147954.1147956]

[20] Nicholas J. A. Harvey and Neil Olver: Pipage rounding, pessimistic estimators and matrix concentration. In Proc. 25th Ann. ACM–SIAM Symp. on Discrete Algorithms (SODA'14), pp. 926–945. SIAM, 2014. [doi:10.1137/1.9781611973402.69]

[21] Richard M. Karp, Frank Thomson Leighton, Ronald L. Rivest, Clark D. Thompson, Umesh V. Vazirani, and Vijay V. Vazirani: Global wire routing in two-dimensional arrays. Algorithmica, 2:113–129, 1987. [doi:10.1007/BF01840353]

[22] Tamás Király, Lap Chi Lau, and Mohit Singh: Degree bounded matroids and submodular flows. Combinatorica, 32(6):703–720, 2012. [doi:10.1007/s00493-012-2760-6]

[23] Lap Chi Lau, R. Ravi, and Mohit Singh: Iterative Methods in Combinatorial Optimization. Cambridge Univ. Press, 2011. [doi:10.1017/CBO9780511977152]

[24] Jan Karel Lenstra, David B. Shmoys, and Éva Tardos: Approximation algorithms for scheduling unrelated parallel machines. Math. Programming, 46:259–271, 1990. [doi:10.1007/BF01585745]

[25] László Lovász, Joel H. Spencer, and Katalin Vesztergombi: Discrepancy of set-systems and matrices. Europ. J. Combinat., 7(2):151–160, 1986. [doi:10.1016/S0195-6698(86)80041-5]

[26] Shachar Lovett and Raghu Meka: Constructive discrepancy minimization by walking on the edges. SIAM J. Comput., 44(5):1573–1582, 2015. [doi:10.1137/130929400]

[27] Robin Pemantle: Towards a theory of negative dependence. J. Math. Phys., 41(3):1371–1390, 2000. [doi:10.1063/1.533200]

[28] Yuval Peres, Mohit Singh, and Nisheeth K. Vishnoi: Random walks in polytopes and negative dependence. In Proc. 8th Innovations in Theoret. Comp. Sci. Conf. (ITCS'17), pp. 50:1–10. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2017. [doi:10.4230/LIPIcs.ITCS.2017.50]

[29] Thomas Rothvoß: The entropy rounding method in approximation algorithms. In Proc. 23rd Ann. ACM–SIAM Symp. on Discrete Algorithms (SODA'12), pp. 356–372. SIAM, 2012. [doi:10.1137/1.9781611973099.32]

[30] Alexander Schrijver: Combinatorial Optimization: Polyhedra and Efficiency. Volume B. Springer, 2003.

[31] Mohit Singh and Lap Chi Lau: Approximating minimum bounded degree spanning trees to within one of optimal. J. ACM, 62(1):1:1–19, 2015. [doi:10.1145/2629366]

[32] Mohit Singh and Nisheeth K. Vishnoi: Entropy, optimization and counting. In Proc. 46th STOC, pp. 50–59. ACM Press, 2014. [doi:10.1145/2591796.2591803]

[33] Aravind Srinivasan: Distributions on level-sets with applications to approximation algorithms. In Proc. 42nd FOCS, pp. 588–597. IEEE Comp. Soc., 2001. [doi:10.1109/SFCS.2001.959935]

[34] Ola Svensson, Jakub Tarnawski, and László A. Végh: A constant-factor approximation algorithm for the asymmetric traveling salesman problem. J. ACM, 67(6):37:1–53, 2020. [doi:10.1145/3424306]

[35] Vijay V. Vazirani: Approximation Algorithms. Springer, 2001. ACM DL.

[36] David P. Williamson and David B. Shmoys: The Design of Approximation Algorithms. Cambridge Univ. Press, 2011. CUP.
AUTHOR
Nikhil Bansal
Department of Computer Science
University of Michigan, Ann Arbor
bansal gmail com
https://bansal.engin.umich.edu/
ABOUT THE AUTHOR
Nikhil Bansal is a professor in the Department of Computer Science at University of
Michigan, Ann Arbor. He attended the Indian Institute of Technology, Mumbai
for his B. Tech. degree, and received his Ph. D. from Carnegie Mellon University,
Pittsburgh. He got fascinated by algorithms after taking an undergraduate class
by Ajit A. Diwan, and has worked on various algorithmic questions since then.
During his free time, he enjoys reading, hiking and doing Yoga.