Authors: Linh Tran, Van Vu
License: CC-BY-3.0
THEORY OF COMPUTING, Volume 19 (6), 2023, pp. 1–21
www.theoryofcomputing.org
SPECIAL ISSUE: APPROX-RANDOM 2020

Reaching a Consensus on Random Networks: The Power of Few

Linh Tran    Van Vu

Received March 1, 2021; Revised July 20, 2022; Published November 14, 2023
Abstract. A community of n individuals splits into two camps, Red and Blue. The
individuals are connected by a social network, which influences their colors. Every
day each person changes their color according to the majority of their neighbors.
Red (Blue) wins if everyone in the community becomes Red (Blue) at some point.

We study this process when the underlying network is the random Erdős–Rényi
graph G(n, p). With a balanced initial state (n/2 persons in each camp), it is clear
that each color wins with the same probability.

Our study reveals that for any constants p and ε, there is a constant c such that
if one camp has at least n/2 + c individuals at the initial state, then it wins with
probability at least 1 − ε. The surprising fact here is that c does not depend on n,
the population of the community. When p = 1/2 and ε = .1, one can set c = 5,
meaning one camp has n/2 + 5 members initially. In other words, it takes only 5
extra people to win an election with overwhelming odds. We also generalize the
result to p = p_n = o(1) in a separate paper.
A preliminary version of this paper appeared in the Proceedings of the 24th International Conference on
Randomization and Computation (RANDOM'20) [16].

ACM Classification: G.3, G.2.2
AMS Classification: 37E25, 05C57, 05C80, 82B43, 68R10, 60K35, 60K37
Key words and phrases: random graphs, Erdős–Rényi model, majority dynamics, consensus

© 2023 Linh Tran and Van Vu
Licensed under a Creative Commons Attribution License (CC-BY)  DOI: 10.4086/toc.2023.v019a006
1 Introduction
1.1 The opinion exchange dynamics
Building mathematical models to explain how collective opinions are formed is an important
and interesting task (see [13] for a survey on the topic, with examples from various fields:
economics, sociology, and statistical physics, to mention a few).
Obviously, our opinions are influenced by people around us, and this motivates the study
of the following natural and simple model: A community of n individuals splits into two
camps, Red and Blue, representing two competing opinions, which can be on any topic such
as brand competition, politics, ethical issues, etc. The individuals are connected by a social
network, which influences their opinion on a daily basis (by some specific rule). We say that
Red (respectively Blue) wins if everyone in the community becomes Red (respectively Blue) at
some point.
We study this process when the underlying network is random. In this paper, we focus on
the Erdős–Rényi random graph G(n, p), which is the most popular model of random graphs
[4, 10]. We use the majority rule, which is a natural choice. When a new day comes, a vertex
scans its neighbors' colors from the previous day and adopts the dominant one. If there is a tie,
it keeps its color.

Definition 1.1. The random graph G(n, p) on n ∈ ℕ vertices and density p ∈ (0, 1) is obtained
by putting an edge between any two vertices with probability p, independently.
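The update rule just described is easy to simulate. The following sketch (with helper names of our own choosing, not the authors' code) samples G(n, p), colors n/2 + c vertices Red, and runs the majority dynamics until consensus or a day cap:

```python
import random

def simulate(n=200, p=0.5, c=5, max_days=20, rng=None):
    """Majority dynamics on G(n, p) with n/2 + c Red and n/2 - c Blue
    at day 0. Returns 'R' or 'B' on consensus, None if no consensus."""
    rng = rng or random.Random(0)
    # Sample G(n, p): each pair is joined independently with probability p.
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    color = [1] * (n // 2 + c) + [-1] * (n // 2 - c)  # +1 Red, -1 Blue
    for _ in range(max_days):
        # Each vertex adopts the majority color among its neighbors;
        # a tie (diff == 0) keeps the current color.
        diff = [sum(color[u] for u in adj[v]) for v in range(n)]
        color = [color[v] if diff[v] == 0 else (1 if diff[v] > 0 else -1)
                 for v in range(n)]
        if all(x == 1 for x in color):
            return 'R'
        if all(x == -1 for x in color):
            return 'B'
    return None
```

Even at n = 200, an advantage of c = 5 makes Red win in almost every run, in the spirit of Theorem 1.2.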
1.2 Results
With a balanced initial state (n/2 persons in each camp), by symmetry, each color wins with the
same probability, which is less than 1/2, regardless of p. (Notice that there are graphs, such as
the empty and complete graphs, on which no one wins.) Our study reveals that for any given p
and ε, there is a constant c such that if one camp has at least n/2 + c individuals at the initial
state, then it wins with probability at least 1 − ε. The surprising fact here is that c does not
depend on n, the population of the community. When p = 1/2 and ε = .1, one can set c as small as 5.

Theorem 1.2 (The power of few). Consider the (majority) process on G(n, 1/2). Assume that the Red
camp has at least n/2 + 5 vertices at the initial state, where n ≥ 1000. Then Red wins after the fourth
day with probability at least 90%.

This result can be stated without the Erdős–Rényi G(n, 1/2) model; one can state an equivalent
theorem by choosing the network uniformly at random from the set of all graphs on n vertices.
This result reveals an interesting phenomenon, which we call "the power of few." The
collective outcome can be extremely sensitive, as a modification of the smallest scale in the initial
setting leads to the opposite outcome.

Our result applies in the following equivalent settings.
Model 1. We fix the two camps, of sizes n/2 + c and n/2 − c, and draw a random graph
G ∼ G(n, p) over their union.

Model 2. We draw a random graph G ∼ G(n, p) first, let Red be a random subset of n/2 + c
vertices (chosen uniformly from all subsets of that size), and Blue be the rest.

Model 3. We split the society into two camps of size n/2 each, then draw the random graph
G ∼ G(n, p) on their union, then recolor c randomly selected Blue vertices to Red.

Model 4. Split the society into two camps (Red and Blue) of size n/2 − c each and a "swing"
group (with no color yet) of 2c individuals. Draw the random graph on their union. Now let
the swing group join the Red camp.
With Model 3, we can imagine a balanced election process at the beginning. Then c = 5
people move to the other camp. Theorem 1.2 asserts that this tiny group already guarantees the
final win with overwhelming odds. Similarly, Model 4 implies that a swing group of size 10
decides the outcome.

Our result can also be used to model the phenomenon that outcomes in seemingly identical
situations become opposite. Consider two communities, each with exactly n individuals, sharing
the same social network. In the first community, the Red camp has size n/2 + c, and the Blue
camp has n/2 − c. In the second community, the Blue camp has n/2 + c and the Red camp has
n/2 − c. If n is large, there is no way to tell the difference between the two communities. Even if
we record everyone's initial opinion, clerical errors will surely swallow the tiny difference of 2c.
However, at the end, the collective opinions will be opposite, with high probability.
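For concreteness, here is one way to draw initial states under Models 2 and 3 (a sketch with helper names of our own; both return a random edge set together with the Red camp):

```python
import random

def model_2(n, p, c, rng):
    """Model 2: draw G(n, p) first, then pick a uniformly random
    subset of n//2 + c vertices to be Red."""
    edges = {(u, v) for u in range(n) for v in range(u + 1, n)
             if rng.random() < p}
    red = set(rng.sample(range(n), n // 2 + c))
    return edges, red

def model_3(n, p, c, rng):
    """Model 3: split into two camps of n//2 each, draw the graph,
    then recolor c randomly selected Blue vertices to Red."""
    edges = {(u, v) for u in range(n) for v in range(u + 1, n)
             if rng.random() < p}
    red = set(range(n // 2))            # first camp, initially Red
    blue = list(range(n // 2, n))       # second camp, initially Blue
    red.update(rng.sample(blue, c))     # the c defectors
    return edges, red
```

In both models the graph is uniform over graphs on n vertices (for p = 1/2) and the Red camp has n/2 + c members, which is the sense of the equivalence claimed above.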
Now we state the general result for an arbitrary constant density p.

Theorem 1.3 (Asymptotic bound). Let p be a constant in (0, 1) and c_n be a positive integer which may
depend on n. Assume that Red has n/2 + c_n vertices on day 0 and the random graph is G(n, p). Then Red wins
after four days with probability at least 1 − γ(p) max{n^{-1}, c_n^{-2}}, where γ(p) depends only on p.

Both results follow from Theorem 1.6, which, in a slightly technical form, describes how the
process evolves day by day. Our results can be extended to cover the case when there are more
than 2 opinions; details will appear in a later paper [17].
1.3 Related results
Our problem is related to a well-studied class of opinion exchange dynamics problems. In the field
of Computer Science, loosely related processes are studied in population protocols [2, 1], where
individuals/agents/nodes choose their next state based on that of their neighbors. The most
important difference is the network: connections in these models often change randomly with
time, while our study concerns a fixed network (randomly generated before the process begins).

The survey by Mossel and Tamuz [13] discussed several models for these problems, including
the DeGroot model [6], where an individual's next state is a weighted average of its neighbors'
current states, and the voter model [5], where individuals change states by emulating a random
neighbor each day. The majority dynamics model is in fact the same as ours, and is also more
popular than the other two, having been studied in [12, 8, 3]. The key difference, as compared to
our study, is in the setups. In these earlier papers, each individual chooses his/her initial color
uniformly at random. The central limit theorem thus guarantees that with high probability, the
initial difference between the two camps is of order Θ(√n). Therefore, these papers did not
touch upon the "power of few" phenomenon, which is our key message. On the other hand,
they considered sparse random graphs where the density p = p_n → 0 as n → +∞.
In [3], Benjamini, Chan, O'Donnell, Tamuz, and Tan considered random graphs with
p ≥ λn^{-1/2}, where λ is a sufficiently large constant, and showed that the dominating color wins
with probability at least .4 [3, Theorem 1.2], while conjecturing that this probability in fact tends
to 1 as n → ∞. This conjecture was proved by Fountoulakis, Kang, and Makai [8, Theorem 1.1].

Theorem 1.4 (Fountoulakis, Kang, Makai). For any 0 < ε ≤ 1 there is λ = λ(ε) such that the
following holds for p ≥ λn^{-1/2}: With probability at least 1 − ε, over the choice of the random graph
G(n, p) and the choice of the initial state, the dominating color wins after four days.
For related results on random regular graphs, see [12, 13].
1.4 Extension for sparse random graphs

Note that the results presented in this paper only apply to constant p, which, in the context of
G(n, p), produces dense graphs. For sparse graphs, i.e., when p = p_n depends on n and tends to 0
as n → +∞, the main ideas in this paper can be used, but with different algebraic techniques, to
obtain a similar result.

Theorem 1.5. For any 0 < ε ≤ 1 and δ > 0 there is K = K(ε, δ) such that for p ≥ (1 + δ)(log n)/n, if
Red starts with n/2 + K/√p members, then it wins with probability at least 1 − ε.

The technical changes needed to prove this theorem require rewriting entire proofs with
new computations, so we leave the proof to our future paper [17]. Additional information,
such as the length of the process and the explicit relation between the bound and ε and δ,
will also be discussed there. Notice that when p is a constant, this result covers the "Power of
Few" phenomenon as a special case, albeit with c much larger than 5. Thus, the techniques and
results in this paper still have merit, since they achieve a specific, surprisingly small constant.

Theorem 1.5 no longer holds for p < (log n)/n, as in this case there are, with high probability,
isolated vertices. Any of these vertices keeps its original color forever. In this case, the number
of Blue vertices converges with time, and we obtain a bound on the limit in [17].
One can use Theorem 1.5 to derive a "delayed" version (in which Red may need more than 4
days) of Theorem 1.4, by first proving that with high probability, one side gains an advantage
of size at least C√n after the first day, for some constant C. This "majority side" then wins
with high probability given p ≥ λn^{-1/2} (which satisfies the requirement p ≥ (1 + Ω(1))(log n)/n)
with λ sufficiently large, so that the day-one advantage C√n dominates the K/√p required by
Theorem 1.5. The detailed argument is in Appendix A.
1.5 Notation

Below is the list of notations used throughout the paper.

• 1_S: membership function of the set S ⊆ V, so that 1_S(v) := 1_{v∈S}.

• R_t, B_t: respectively the sets of Red and Blue vertices after day t. (At this point everyone
has updated their color t times.)

• I_t(u) := 1_{u∈R_t}: indicator of the event that u is Red after day t, taking values in {0, 1}.

• J_t(u) := 2I_t(u) − 1: indicator of the same event, but taking values in {−1, 1}.

• u ∼ v ≡ (u, v) ∈ E: event that u and v are adjacent.

• N(v) := {u : u ∼ v}: the neighborhood of v.

• a_{uv} := 1_{u∼v}: indicator of the adjacency between u and v.

• Φ(a, b) := ∫_a^b e^{−x²/2}/√(2π) dx and Φ(b) := Φ(−∞, b), Φ₀(b) := Φ(0, b). Note that
Φ(b) = Φ₀(b) + 1/2.

• Given a random variable X, we let supp(X) be the support of X, namely the smallest
subset S of ℝ such that P(X ∉ S) = 0. When X is discrete, supp(X) is simply the set of
possible values of X.
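Numerically, Φ₀ is conveniently computed from the error function, since Φ(b) = (1 + erf(b/√2))/2; a minimal sketch:

```python
from math import erf, sqrt

def Phi0(b):
    """Phi_0(b): integral of the standard normal density from 0 to b,
    i.e. Phi(b) - 1/2, computed via Phi(b) = (1 + erf(b/sqrt(2))) / 2."""
    return 0.5 * erf(b / sqrt(2))
```

For example, Φ₀(0) = 0 and Φ₀(b) → 1/2 as b → ∞, matching the definition above.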
1.6 Main Theorem

The main theorem concerns dense graphs, where p is at least a constant. When given appropriate
values for the parameters, it implies the "Power of Few" phenomenon in Theorem 1.2.

Theorem 1.6. Assume p is a real number in (0, 1) and n and c are positive integers such that

    F(p, c, n) := √n Φ₀( (2pc + min{p, 1−p}) / √(p(1−p)n) ) − .6 (1 − 2p + 2p²)/√(p(1−p)) − .8/√p > 0.   (1.1)

Consider the election process on G ∼ G(n, p) with B₀ ≤ n/2 − c. We have

• P( B₁ ≤ n/2 − .8√(n/p) ) ≥ 1 − ε₁, where

    ε₁ = ε₁(p, c, n) := ( .25 + .7 (1 − 2p + 2p²)² (1 + 4c²n^{-2}) ) / F²(p, c, n).

• P( B₂ ≤ .4n | B₁ ≤ n/2 − .8√(n/p) ) ≥ 1 − ε₂, where

    ε₂ = ε₂(n) := n^{-1} exp( −(.076n − 1.38√n) ).

• P( B₃ ≤ p(n − 1)/3 | B₂ ≤ .4n ) ≥ 1 − ε₃, where

    ε₃ = ε₃(p, n) := n^{-1} exp( −.02n(pn − 80) ).

• P( B₄ = 0 | B₃ ≤ p(n − 1)/3 ) ≥ 1 − ε₄, where

    ε₄ = ε₄(p, n) := n exp( −(2/9) p²(n − 1) ).

Consequently, R₄ = V(G) with probability at least 1 − (ε₁ + ε₂ + ε₃ + ε₄).

Note that the condition (1.1) is asymptotically the same as c = Ω(p^{-3/2}).
The proof of this theorem has two main parts, corresponding to the next two sections.

1. On day 1, the numbers of Red and Blue neighbors of each node are binomial with means
roughly pn/2 + pc and pn/2 − pc respectively. The central limit theorem then implies that
most of their mass is concentrated within an interval of length Θ(√n) around their
respective expectations. A subinterval of constant length in this interval has Θ(n^{-1/2})
mass. We thus expect that the probability that Reds outnumber Blues in the neighborhood
of a given node is 1/2 + Ω(n^{-1/2}). Thus, we expect that the number of Red nodes after the
first day is n/2 + Ω(n^{1/2}), meaning n/2 − Ω(n^{1/2}) Blue nodes.¹ We use this intuition to
find the right bound for B₁ in Theorem 1.6, and prove it in Section 2.

2. On subsequent days, the coloring depends on the graph, so the argument for day 1 is
no longer available. We instead show that the number of Blues decreases regardless of
the coloring after day 1, as long as Red has Ω(√n) extra nodes. Using Hoeffding's bound
(Theorem 3.1), we show that for a given coloring, the probability that one can find "many"
nodes with a majority of neighbors in Blue is small, small enough to beat the union bound
over all colorings. This bypasses the dependency issue after day 1. The details are in
Section 3.
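The day-one heuristic in item 1 can be checked exactly for small n: compute the probability that a node sees more Red than Blue neighbors when the camps have sizes n/2 ± c, and watch the excess over 1/2 decay like n^{-1/2}. The sketch below (our own helper names; it simplifies by ignoring the observing node's own color) does this by exact convolution of binomial distributions:

```python
from math import comb

def pmf(m, p):
    """Probability mass function of Bin(m, p) as a list."""
    return [comb(m, k) * p**k * (1 - p)**(m - k) for k in range(m + 1)]

def red_majority_excess(n, c, p=0.5):
    """Exact P( Bin(n/2+c, p) > Bin(n/2-c, p) ) - 1/2."""
    red, blue = pmf(n // 2 + c, p), pmf(n // 2 - c, p)
    cdf_less = [0.0]                      # cdf_less[k] = P(Blue < k)
    for b in blue:
        cdf_less.append(cdf_less[-1] + b)
    prob = sum(r * (cdf_less[k] if k < len(cdf_less) else 1.0)
               for k, r in enumerate(red))
    return prob - 0.5
```

Quadrupling n roughly halves the excess, in line with the 1/2 + Ω(n^{-1/2}) prediction.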
From Theorem 1.6, one can deduce Theorems 1.2 and 1.3 in a few steps.

Proof of Theorem 1.2. Assume Theorem 1.6. Observe that if Equation (1.1) holds for some value
of n, then it holds for all larger values of n. Letting n ≥ 1000, p = 1/2 and c = 5, we have (1.1)
satisfied. A routine calculation then shows that

    ε₁(p, c, n) ≤ .0902,    max{ ε₂(n), ε₃(p, n), ε₄(p, n) } < 10^{-10},

which implies that P(B₄ ≠ ∅) < .1, or equivalently, that Red wins on the fourth day with
probability at least .9 (conditioned on the event B₀ ≤ n/2 − c).
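The routine calculation can be reproduced directly. The sketch below transcribes F, ε₁ and ε₄ as displayed in Theorem 1.6 (the function names are ours, and the transcription of the garbled formulas is an assumption) and evaluates them at p = 1/2, c = 5, n = 1000:

```python
from math import erf, exp, sqrt

def Phi0(b):
    return 0.5 * erf(b / sqrt(2))

def F(p, c, n):
    # F(p, c, n) from condition (1.1) of Theorem 1.6.
    s = sqrt(p * (1 - p))
    return (sqrt(n) * Phi0((2 * p * c + min(p, 1 - p)) / (s * sqrt(n)))
            - 0.6 * (1 - 2 * p + 2 * p**2) / s
            - 0.8 / sqrt(p))

def eps1(p, c, n):
    # First failure probability in Theorem 1.6.
    num = 0.25 + 0.7 * (1 - 2 * p + 2 * p**2)**2 * (1 + 4 * c**2 / n**2)
    return num / F(p, c, n)**2

p, c, n = 0.5, 5, 1000
e4 = n * exp(-(2 / 9) * p**2 * (n - 1))   # fourth-day failure bound
```

With these values the condition (1.1) indeed holds, ε₁ stays below the stated .0902, and ε₄ is far below 10^{-10}.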
Proof of Theorem 1.3. In this proof, only n and c = c_n can vary. We can assume, without
loss of generality, that c_n ≤ n/2. Assuming Theorem 1.6, a routine calculation shows that
ε₂, ε₃, ε₄ = o(n^{-2}), and so ε₁ + ε₂ + ε₃ + ε₄ = ε₁ + o(n^{-2}). By Theorem 1.6, we then have

    P(R₄ = V(G)) ≥ 1 − (ε₁ + o(n^{-2}))
        = 1 − ( .25 + .7(1 − 2p + 2p²)²(1 + 4c_n²n^{-2}) )
              / ( √n Φ₀( (2pc_n + min{p, 1−p}) / √(p(1−p)n) ) − .6(1 − 2p + 2p²)/√(p(1−p)) − .8/√p )²
          − o(n^{-2}).

¹ While this paper was under review, Sah and Sawhney, in a recent paper [14], showed that B₁ obeys a Gaussian
limit law with mean n/2 − Θ(c√p) and variance Θ(n) when c → ∞, for p > (log n)^{-1/16}.
We make use of the fact that Φ₀ is increasing, while x ↦ Φ₀(x)/x is decreasing, which implies
Φ₀(x) ≥ min{1, x/y} Φ₀(y) for all x, y > 0. Then:

    √n Φ₀( (2pc_n + min{p, 1−p}) / √(p(1−p)n) ) ≥ √n min{ 1, c_n/√((1−p)n) } Φ₀(2√p)
        ≥ min{ √n, c_n/√(1−p) } Φ₀(2√p) ≥ min{ √n, c_n } Φ₀(2√p).

When n and c_n are both sufficiently large and p is a constant, we have

    min{ √n, c_n } Φ₀(2√p) ≥ 2 [ .6(1 − 2p + 2p²)/√(p(1−p)) + .8/√p ].

Therefore we can give a lower bound on the denominator of ε₁:

    √n Φ₀( (2pc_n + min{p, 1−p}) / √(p(1−p)n) ) − .6(1 − 2p + 2p²)/√(p(1−p)) − .8/√p
        ≥ min{ √n, c_n } Φ₀(2√p) / 2.

Now we use the trivial facts c_n ≤ n/2 and 1 − 2p + 2p² ≤ 1 to give an upper bound on the
numerator of ε₁:

    .25 + .7(1 − 2p + 2p²)²(1 + 4c_n²n^{-2}) ≤ .25 + .7 · 1 · (1 + 1) < 2.

Combining the above, we get

    ε₁ ≤ 2 / ( min{ c_n, √n } Φ₀(2√p) / 2 )² = 8 Φ₀(2√p)^{-2} / min{ √n, c_n }² = 8 Φ₀(2√p)^{-2} max{ 1/c_n², 1/n }.

The term o(n^{-2}) is also absorbed into this bound, since n^{-2} < n^{-1}. Therefore we have

    P(R₄ = V(G)) ≥ 1 − γ(p) max{ 1/c_n², 1/n },

which is the desired bound, where γ(p) can be chosen as 8 Φ₀(2√p)^{-2} + 1.
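The only analytic fact used above, Φ₀(x) ≥ min{1, x/y} Φ₀(y) for x, y > 0, follows from Φ₀ being increasing and Φ₀(x)/x being decreasing; a quick grid check (a sketch, with a small tolerance for floating-point error):

```python
from math import erf, sqrt

def Phi0(b):
    return 0.5 * erf(b / sqrt(2))

def check(grid):
    # Phi0 is increasing and Phi0(x)/x is decreasing on (0, inf),
    # hence Phi0(x) >= min(1, x/y) * Phi0(y) for all x, y > 0.
    return all(Phi0(x) >= min(1.0, x / y) * Phi0(y) - 1e-12
               for x in grid for y in grid)
```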
1.7 Open questions

Let q(p, c, n) be the probability that Red wins if its camp has size n/2 + c in the beginning.
Theorem 1.2 shows that q(.5, 5, n) ≥ .9 (given that n is sufficiently large). In other words,
five defectors guarantee Red's victory with overwhelming odds when p = 1/2. In fact, we
have q(.5, 4, n) ≥ .77 by plugging in the same values for ε₁ and ε₂ with c = 4 in the proof of
Theorem 1.2. We conjecture that one defector already brings a non-trivial advantage.

Conjecture 1.7 (The power of one). There is a constant δ(p) > 0 for each constant p > 0 such that
q(p, 1, n) ≥ 1/2 + δ(p) for all sufficiently large n.¹

In the following numerical experiment, we run T = 10000 independent trials. In each trial,
we fix a set of N = 10000 nodes with 5001 Red and 4999 Blue (meaning c = 1), generate a
graph from G(N, 1/2), and simulate the process on the resulting graph. We record the winners
and the number of days needed to win, with their frequencies, in Table 1. Among other things,
we see that Red wins within 3 days with frequency more than .9. The source code for the
simulation along with execution instructions can be found online at https://github.com/
thbl2012/majority-dynamics-simulation.
T      p    Red   Blue  Winner  Last day  Count  Frequency
10⁴   1/2  5001  4999  Blue    3         496    4.96 %
10⁴   1/2  5001  4999  Blue    4         77     0.77 %
10⁴   1/2  5001  4999  Blue    5         3      0.03 %
10⁴   1/2  5001  4999  Blue    7         1      0.01 %
10⁴   1/2  5001  4999  Red     2         25     0.25 %
10⁴   1/2  5001  4999  Red     3         9313   93.13 %
10⁴   1/2  5001  4999  Red     4         85     0.85 %

Table 1: Winners and winning days with their frequencies
Imagine that people defect from the Blue camp to the Red camp one by one. The value of the
c-th defector is defined as v(p, c, n) := q(p, c, n) − q(p, c − 1, n) (where we take q(p, 0, n) = 1/2).
It is intuitive to think that the value of each extra defector decreases. (Clearly defector number
n/2 adds no value.)

Conjecture 1.8 (Values of defectors). For any fixed p, c, and n large enough, v(p, c, n) ≥ v(p, c + 1, n).²

It is clear that Conjecture 1.8 implies Conjecture 1.7, with δ = .4/5 = .08, although the
simulation results above suggest that δ can be at least .43.

In the next two sections we give the proof of our main result, Theorem 1.6.

¹ Both Conjectures 1.7 and 1.8 are proven for p > (log n)^{-1/16} in [14]. See footnote 1 a few paragraphs below
Theorem 1.6.
² See footnote 1 after Conjecture 1.7.
2 Day One

First, let us recall a few terms defined in Theorem 1.6:

    F(p, c, n) := √n Φ₀( (2pc + min{p, 1−p}) / √(p(1−p)n) ) − .6(1 − 2p + 2p²)/√(p(1−p)) − .8/√p,

    ε₁(p, c, n) := ( .25 + .7(1 − 2p + 2p²)²(1 + 4c²n^{-2}) ) / F(p, c, n)².

Lemma 2.1. Given p ∈ (0, 1) and n, c ∈ ℕ such that F(p, c, n) > 0, the election process on G ∼ G(n, p)
with B₀ ≤ n/2 − c satisfies

    P( B₁ > n/2 − .8√(n/p) ) ≤ ε₁(p, c, n).

The core of the proof relies on some preliminary results regarding the difference of two
binomial random variables, which we discuss next.
2.1 Background on the difference of binomial random variables

The difference of two binomial random variables with the same probability p can be written as a
sum of independent random variables, each of which is either a Bin(1, p) variable or the negative
of one. A natural way to bound this sum is via a Berry–Esseen normal approximation.

Theorem 2.2 (Berry–Esseen). Let m be any positive integer. If X₁, X₂, X₃, ..., X_m are random variables
with zero means, variances σ₁², σ₂², ..., σ_m² > 0, and absolute third moments E|X_i|³ = ρ_i < ∞, we
have:

    sup_{x∈ℝ} | P( Σ_{i=1}^m X_i ≤ x ) − Φ(x/σ) | ≤ C₀ · ( Σ_{i=1}^m ρ_i ) / σ³,

where σ = ( Σ_{i=1}^m σ_i² )^{1/2} and C₀ is a constant.

The original proof by Esseen [7] yielded C₀ = 7.59, and this constant has been improved a
number of times. The latest work by Shevtsova [15] achieved C₀ = .56, which will be used for
the rest of the paper. A direct application of this theorem gives the following lemma.
Lemma 2.3. For p ∈ (0, 1), σ = √(p(1−p)), and n₁, n₂ ∈ ℕ such that n₁ > n₂, let Y₁ ∼ Bin(n₁, p),
Y₂ ∼ Bin(n₂, p) be independent random variables. Then for any m ∈ ℝ,

    P( Y₁ > Y₂ + m ) ≥ 1/2 + Φ₀( (p(n₁ − n₂) − m) / √(p(1−p)(n₁ + n₂)) ) − .56(1 − 2p + 2p²) / √(p(1−p)(n₁ + n₂)).
Proof. Let Y = Y₁ − Y₂. By definition, we have Y = X₁ + X₂ + X₃ + ··· + X_{n₁+n₂}, where all the X_i
are independent and either X_i or −X_i is Bin(1, p). Then E[Y] = Σ E[X_i] = p(n₁ − n₂). For all i,

    Var[X_i] = p(1−p)  and  E|X_i − E[X_i]|³ = p(1−p)³ + (1−p)p³ = p(1−p)(1 − 2p + 2p²).

Applying Theorem 2.2, we have

    P( Y₁ ≤ Y₂ + m ) = P( Y − E[Y] ≤ m − p(n₁ − n₂) )
        ≤ Φ( (m − p(n₁ − n₂)) / √(Var[Y]) ) + .56 ( Σ E|X_i − E[X_i]|³ ) / Var[Y]^{3/2}
        = Φ( (m − p(n₁ − n₂)) / √(p(1−p)(n₁ + n₂)) ) + .56 p(1−p)(1 − 2p + 2p²)(n₁ + n₂) / ( p(1−p)(n₁ + n₂) )^{3/2}
        = 1/2 − Φ₀( (p(n₁ − n₂) − m) / √(p(1−p)(n₁ + n₂)) ) + .56(1 − 2p + 2p²) / √(p(1−p)(n₁ + n₂)),

and the claim follows by taking the complement event.
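Lemma 2.3 can be sanity-checked against an exact computation for moderate sizes (a sketch with our own helper names; the parameters in the usage below are arbitrary):

```python
from math import comb, erf, sqrt

def pmf(m, p):
    return [comb(m, k) * p**k * (1 - p)**(m - k) for k in range(m + 1)]

def exact_prob(n1, n2, p, m):
    """Exact P(Y1 > Y2 + m) for independent Y1~Bin(n1,p), Y2~Bin(n2,p),
    for integer m >= 0, by conditioning on Y2."""
    p1, p2 = pmf(n1, p), pmf(n2, p)
    return sum(q2 * sum(p1[j + m + 1:]) for j, q2 in enumerate(p2))

def lemma_bound(n1, n2, p, m):
    """Lower bound from Lemma 2.3."""
    s = sqrt(p * (1 - p) * (n1 + n2))
    phi0 = lambda a: 0.5 * erf(a / sqrt(2))
    return 0.5 + phi0((p * (n1 - n2) - m) / s) - 0.56 * (1 - 2*p + 2*p**2) / s
```

For example, with n₁ = 60, n₂ = 50, p = .3, m = 1 the exact probability exceeds the bound, as the lemma promises.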
Lemma 2.4. Let p ∈ (0, 1) be a constant, and Y₁ ∼ Bin(n₁, p) and Y₂ ∼ Bin(n₂, p) be independent
random variables. Then for any integer m,

    P( Y₁ = Y₂ + m ) ≤ 1.12(1 − 2p + 2p²) / √(p(1−p)(n₁ + n₂)).

Proof. Let n = n₁ + n₂ and M = E[Y₁] − E[Y₂] = p(n₁ − n₂). Fix ε ∈ (0, 1); by the same
computations as in Lemma 2.3, we have

    P( Y₁ − Y₂ ≤ m − ε ) ≥ Φ( (m − ε − M) / √(p(1−p)n) ) − .56(1 − 2p + 2p²) / √(p(1−p)n),

    P( Y₁ − Y₂ < m + ε ) ≤ Φ( (m + ε − M) / √(p(1−p)n) ) + .56(1 − 2p + 2p²) / √(p(1−p)n).

It follows that

    P( Y₁ = Y₂ + m ) ≤ P( m − ε < Y₁ − Y₂ < m + ε )
        ≤ Φ( (m + ε − M) / √(p(1−p)n) ) − Φ( (m − ε − M) / √(p(1−p)n) ) + 1.12(1 − 2p + 2p²) / √(p(1−p)n).

Letting ε → 0, we obtain the desired claim.
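Similarly, the point-probability bound of Lemma 2.4 can be compared with an exact computation (a sketch; parameters in the usage are arbitrary):

```python
from math import comb, sqrt

def pmf(m, p):
    return [comb(m, k) * p**k * (1 - p)**(m - k) for k in range(m + 1)]

def exact_point(n1, n2, p, m):
    """Exact P(Y1 = Y2 + m) for independent binomials, any integer m."""
    p1, p2 = pmf(n1, p), pmf(n2, p)
    return sum(q2 * p1[j + m] for j, q2 in enumerate(p2)
               if 0 <= j + m < len(p1))

def lemma24_bound(n1, n2, p):
    """Uniform-in-m upper bound from Lemma 2.4."""
    return 1.12 * (1 - 2*p + 2*p**2) / sqrt(p * (1 - p) * (n1 + n2))
```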
Proof of Lemma 2.1

Recall that R₁ = n − B₁. Our goal is to give an upper bound on the probability that
R₁ < n/2 + .8√(n/p). Recall that R₁ = Σ_{v∈V} I₁(v), where I₁(v) is 1 if v is Red
after Day 1 and 0 otherwise. Since the indicators are not independent, a natural choice
for bounding their sum is Chebyshev's inequality. We proceed in two steps:

1. Give a lower bound on E[R₁] by lower-bounding each term E[I₁(v)].

2. Give an upper bound on Var[R₁] by upper-bounding each Var[I₁(v)] and Cov[I₁(v), I₁(v′)].

Let σ := √(p(1−p)), which appears in most equations in this proof. Note that 1 − 2p + 2p² = 1 − 2σ².

Claim 2.5. E[R₁] ≥ n/2 + F₁(p, c, n)√n, where

    F₁(p, c, n) = √n Φ₀( (2pc + min{p, 1−p}) / (σ√n) ) − .6(1 − 2σ²)/σ = F(p, c, n) + .8/√p.

Proof. For v ∈ V and S ⊆ V, let N_S(v) be the number of neighbors v has in S. By the majority
and tie-breaking rules, we have for each v ∈ V,

    v ∈ R₁  ⟺  N_{R₀}(v) > N_{B₀}(v) − I₀(v).   (2.1)

Note that N_{R₀}(v) ∼ Bin(R₀ − I₀(v), p) and N_{B₀}(v) ∼ Bin(B₀ + I₀(v) − 1, p). By Lemma 2.3, we
have:

    E[I₁(v)] = P(v ∈ R₁) = P( N_{R₀}(v) > N_{B₀}(v) − I₀(v) )
        ≥ 1/2 + Φ₀( (p(R₀ − B₀ + 1 − 2I₀(v)) + I₀(v)) / (σ√(n − 1)) ) − .56(1 − 2σ²) / (σ√(n − 1))
        ≥ 1/2 + Φ₀( (2pc + μ_v) / (σ√(n − 1)) ) − .56(1 − 2σ²) / (σ√(n − 1))
        ≥ 1/2 + Φ₀( (2pc + min{p, 1−p}) / (σ√n) ) − .6(1 − 2σ²) / (σ√n),

where μ_v := p(1 − I₀(v)) + (1 − p)I₀(v) ≥ min{p, 1−p}, hence the last inequality. Summing over
all v ∈ V, we get

    E[R₁] ≥ n/2 + [ √n Φ₀( (2pc + min{p, 1−p}) / (σ√n) ) − .6(1 − 2σ²)/σ ] √n.

The proof is complete.
Claim 2.6. Var[R₁] ≤ .25n + .7(1 − 2σ²)²( n + 4c²n^{-1} ).

Proof. We first have:

    Var[R₁] = Σ_{v∈V} Var[I₁(v)] + 2 Σ_{v₁<v₂} Cov[I₁(v₁), I₁(v₂)].

The variances Var[I₁(v)] are easy, due to I₁(v) being a Bernoulli variable:

    Var[I₁(v)] = E[I₁(v)]( 1 − E[I₁(v)] ) = .25 − ( E[I₁(v)] − .5 )² ≤ .25.   (2.2)

Bounding the covariance Cov[I₁(v₁), I₁(v₂)] for two distinct vertices v₁, v₂ requires a bit more
care, as the indicators are dependent. By definition,

    Cov[I₁(v₁), I₁(v₂)] = P(v₁, v₂ ∈ R₁) − P(v₁ ∈ R₁) P(v₂ ∈ R₁).

Consider the event {v₁, v₂ ∈ R₁}; P(v₁, v₂ ∈ R₁) can be written as

    P(v₁, v₂ ∈ R₁ | v₁ ∼ v₂) P(v₁ ∼ v₂) + P(v₁, v₂ ∈ R₁ | v₁ ≁ v₂) P(v₁ ≁ v₂).

Notice that after we reveal the adjacency between v₁ and v₂, the remaining vertices in the
neighborhoods of v₁ and v₂ are independent. Letting q_i = P(v_i ∈ R₁ | v₁ ∼ v₂), r_i = P(v_i ∈ R₁ |
v₁ ≁ v₂), we have

    P(v₁, v₂ ∈ R₁) = p q₁ q₂ + (1 − p) r₁ r₂.

Splitting up the other two events similarly gives P(v_i ∈ R₁) = p q_i + (1 − p) r_i for i = 1, 2. Putting
the above together, we can write Cov[I₁(v₁), I₁(v₂)] as

    p q₁ q₂ + (1 − p) r₁ r₂ − ( p q₁ + (1 − p) r₁ )( p q₂ + (1 − p) r₂ )
        = p(1 − p)(q₁ − r₁)(q₂ − r₂) = σ²(q₁ − r₁)(q₂ − r₂).   (2.3)

To bound q_i − r_i, we express it in terms of binomial random variables that do not depend on the
adjacency between v₁ and v₂. We start with i = 1; the analysis for q₂ − r₂ is analogous. Assume
v₁ ∼ v₂ for now; then

    q₁ = P( N_{R₀}(v₁) ≥ N_{B₀}(v₁) − I₀(v₁) + 1 ),

by Eq. (2.1). Define two binomial random variables:

    Y = N_{R₀}(v₁) − I₀(v₂) = |N(v₁) ∩ R₀ \ {v₁, v₂}| ∼ Bin( R₀ − I₀(v₁) − I₀(v₂), p ),
    Z = N_{B₀}(v₁) + I₀(v₂) − 1 = |N(v₁) ∩ B₀ \ {v₁, v₂}| ∼ Bin( B₀ + I₀(v₁) + I₀(v₂) − 2, p ).

We have

    q₁ = P( Y + I₀(v₂) + I₀(v₁) ≥ Z + 2 − I₀(v₂) ) = P( Y − Z ≥ 1 − I₀(v₁) − J₀(v₂) ).

Similarly, when conditioning on v₁ ≁ v₂, we get r₁ = P( Y − Z ≥ 1 − I₀(v₁) ).

Let us perform a case analysis on the initial color of v₂:

    v₂ ∈ R₀ ⟹ q₁ − r₁ = P(Y − Z ≥ −I₀(v₁)) − P(Y − Z ≥ 1 − I₀(v₁)) = P(Y − Z = −I₀(v₁)),
    v₂ ∈ B₀ ⟹ q₁ − r₁ = P(Y − Z ≥ 2 − I₀(v₁)) − P(Y − Z ≥ 1 − I₀(v₁)) = −P(Y − Z = 1 − I₀(v₁)).

Both cases give us

    q₁ − r₁ = J₀(v₂) P( Y − Z = 1 − I₀(v₁) − I₀(v₂) ),

where Y ∼ Bin( R₀ − I₀(v₁) − I₀(v₂), p ) and Z ∼ Bin( B₀ + I₀(v₁) + I₀(v₂) − 2, p ).

We apply the same analysis to q₂ and r₂ and use Eq. (2.3) to get

    Cov[I₁(v₁), I₁(v₂)] = σ² J₀(v₁) J₀(v₂) P( Y − Z = 1 − I₀(v₁) − I₀(v₂) )².

If the initial colors of v₁ and v₂ are different, Cov[I₁(v₁), I₁(v₂)] ≤ 0, so we can exclude such pairs
from the upper bound. When v₁ and v₂ are of the same initial color, by Lemma 2.4, we have

    Cov[I₁(v₁), I₁(v₂)] ≤ σ² ( 1.12(1 − 2σ²) / (σ√(n − 2)) )² ≤ 1.4 (1 − 2σ²)² / n.

Hence

    Σ_{v₁<v₂} Cov[I₁(v₁), I₁(v₂)] ≤ [ C(R₀, 2) + C(B₀, 2) ] · 1.4(1 − 2σ²)²/n
        = ( 1.4(1 − 2σ²)²/n )( n²/4 + c² − n/2 ) ≤ .35(1 − 2σ²)²( n + 4c²/n ).

Together with Equation (2.2), this yields

    Var[R₁] ≤ .25n + .7(1 − 2σ²)²( n + 4c²n^{-1} ).   (2.4)

The proof is complete.

From Claims 2.5 and 2.6, Chebyshev's inequality gives

    P( B₁ > n/2 − .8√(n/p) ) = P( R₁ < n/2 + .8√(n/p) )
        ≤ Var[R₁] / ( E[R₁] − n/2 − .8√(n/p) )²
        ≤ ( .25n + .7(1 − 2σ²)²(n + 4c²n^{-1}) ) / ( F₁(p, c, n)√n − .8√(n/p) )².

Dividing both the numerator and the denominator by n and substituting σ = √(p(1−p)), we get back
ε₁(p, c, n) and finish the proof of Lemma 2.1. Part one of Theorem 1.6 is complete.
3 Day Two and after

Next, we analyze the situation after the first day. As mentioned in Section 1.6, the previous
argument is unusable due to the loss of independence from day 2 onwards. Instead, we use
"shrinking arguments" to argue that the Blue camp is likely to shrink repeatedly until it is empty,
regardless of the choice of its members, due to the structure of G. The core of our shrinking
argument is Hoeffding's inequality, a classical result that gives exponentially small tail
probabilities for sums of independent bounded random variables.

Theorem 3.1 (Hoeffding's inequality). Let {X_i}_{i=1}^n be independent random variables and {a_i}_{i=1}^n,
{b_i}_{i=1}^n be real numbers such that for all i = 1, ..., n, supp(X_i) ⊆ [a_i, b_i]. Then for X = X₁ + X₂ + ··· + X_n,
we have

    max{ P( X ≥ E[X] + t ), P( X ≤ E[X] − t ) } ≤ exp( −2t² / Σ_{i=1}^n (b_i − a_i)² ).

The proof of Hoeffding's inequality is available in several graduate-level probability textbooks,
e.g., [18]. The original proof given by Hoeffding appeared in [9].
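As a concrete instance of Theorem 3.1, take n independent Bin(1, p) variables, so each supp(X_i) ⊆ [0, 1] and the tail bound is exp(−2t²/n); a sketch comparing it with the exact binomial tail:

```python
from math import comb, exp

def binom_upper_tail(n, p, k):
    """Exact P( Bin(n, p) >= k )."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def hoeffding_bound(n, t):
    # For a sum X of n independent variables supported in [0, 1]:
    # P(X >= E[X] + t) <= exp(-2 t^2 / n), by Theorem 3.1.
    return exp(-2 * t * t / n)
```

For n = 100, p = 1/2 and t = 10, the exact tail P(Bin(100, 1/2) ≥ 60) is comfortably below the Hoeffding bound e^{-2} ≈ .135.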
Before discussing our shrinking arguments, we quickly record a useful lemma to simplify
binomial coefficients that may appear in union bounds.

Lemma 3.2. For any integers m ≥ 1 and 0 ≤ k ≤ m,  C(m, k) < 2^m / √m.

Proof. This is well known. Observing that C(m, k) ≤ C(m, ⌊m/2⌋), the case of even m is stated in [11, 2.6.2
Proposition]. The case of odd m = 2a − 1 follows immediately:

    C(2a − 1, a − 1) = (1/2) C(2a, a) < (1/2) · 2^{2a}/√(2a) = 2^{2a−1}/√(2a) < 2^{2a−1}/√(2a − 1).
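Lemma 3.2 is easy to verify by brute force over small m (a one-function sketch):

```python
from math import comb, sqrt

def lemma_3_2_holds(max_m):
    # C(m, k) < 2^m / sqrt(m) for all 1 <= m <= max_m and 0 <= k <= m.
    return all(comb(m, k) < 2**m / sqrt(m)
               for m in range(1, max_m + 1) for k in range(m + 1))
```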
A simple yet useful shrinking argument is that, in the G(n, p) model, with high probability
all vertices of G have many neighbors, so a small enough Blue camp will not be able to attain
a majority in any vertex's neighborhood, and thus inevitably vanishes the next day.

Lemma 3.3 (Part 4 of Theorem 1.6). For p ∈ (0, 1) and n ∈ ℕ, n > 1, with probability at least

    1 − n exp( −(2/9) p²(n − 1) ) = 1 − ε₄(p, n),

G is such that all vertices have more than (2/3)p(n − 1) neighbors, thus any choice of the Blue camp of at
most (1/3)p(n − 1) members vanishes the next day.

Proof. In a G(n, p) graph, |N(v)| is a sum of (n − 1) Bin(1, p) random variables, so Theorem 3.1
implies that for any u ∈ V,

    P( |N(u)| ≤ (2/3)p(n − 1) ) ≤ P( |N(u)| − E[|N(u)|] ≤ −(1/3)p(n − 1) ) ≤ exp( −(2/9)p²(n − 1) ).
By a union bound, the probability that all vertices have more than (2/3)p(n − 1) neighbors is at least
1 − n exp( −(2/9)p²(n − 1) ) = 1 − ε₄(p, n). Given this, a Blue camp of size at most (1/3)p(n − 1) surely
vanishes the next day, since it cannot form a majority in the neighborhood of any vertex. The result then
follows.

This simple lemma completes the fourth part of Theorem 1.6. The arguments for Parts 2 and
3 are slightly more involved, but both come directly from the following lemma.
Lemma 3.4. Let π β (0, 1), π, π0 β β , π0 < π2 . Then for all π β β , π β€ π, with probability at least
4π 2π 2 (π β 2π0 β 1)2 π
1β exp β ,
π π+πβ2
πΊ is such that any choice of the Blue camp of at most π0 members shrinks to below π in the next day.
Proof of Lemma 3.4. Consider a subset $S$ of $V$ with $k$ elements. We first bound the probability that $S$ entirely turns Blue the next day. Let $(R, B)$ be the initial coloring with $|B| = b_0 < n/2$. Similarly to the usual $J_t$, let $J(v) = 1$ if $v \in R$ and $-1$ otherwise. For each $v \in S$, let $\mathrm{dif}(v) := |N(v) \cap R| - |N(v) \cap B| = \sum_{u \in V} J(u)x_{uv}$, and let $\mathrm{dif}(S) := \sum_{v \in S} \mathrm{dif}(v)$. We break down $\mathrm{dif}(S)$ as follows:
$$\mathrm{dif}(S) = \sum_{v \in V}\Big[\mathbf{1}_S(v)\sum_{u \in V} J(u)x_{uv}\Big] = \sum_{u \ne v}\big[J(u)\mathbf{1}_S(v) + J(v)\mathbf{1}_S(u)\big]x_{uv}. \qquad (3.1)$$
This is now a sum of independent variables, so we can apply Theorem 3.1:
$$\mathbf{P}(S \subseteq B_1 \mid B_0) \le \mathbf{P}(\mathrm{dif}(S) \le 0) \le \exp\left[-\frac{2\,\mathbf{E}[\mathrm{dif}(S)]^2}{\sum_{u \ne v}\left(\sup \mathrm{supp}(c_{uv}x_{uv}) - \inf \mathrm{supp}(c_{uv}x_{uv})\right)^2}\right], \qquad (3.2)$$
where $c_{uv} := J(u)\mathbf{1}_S(v) + J(v)\mathbf{1}_S(u)$. The following table sums up the respective values of $c_{uv}$ for $(u, v)$ among the sets $R \cap S$, $R \setminus S$, $B \cap S$ and $B \setminus S$.
$$\begin{array}{c|cccc}
u \backslash v & R \cap S & B \cap S & R \setminus S & B \setminus S \\
\hline
R \cap S & 2 & 0 & 1 & -1 \\
B \cap S & 0 & -2 & 1 & -1 \\
R \setminus S & 1 & 1 & 0 & 0 \\
B \setminus S & -1 & -1 & 0 & 0
\end{array} \qquad (3.3)$$
Substituting back into Equation (3.1), we get
$$\mathrm{dif}(S) = \sum_{\{u,v\} \subseteq R \cap S} 2x_{uv} - \sum_{\{u,v\} \subseteq B \cap S} 2x_{uv} + \sum_{u \in S}\sum_{v \in R \setminus S} x_{uv} - \sum_{u \in S}\sum_{v \in B \setminus S} x_{uv}.$$
For convenience in the following computations, let $s_R := |S \cap R|$ and $s_B := |S \cap B|$.
Note that $|B \setminus S| = b_0 - s_B$ and $|R \setminus S| = n - b_0 - s_R$. Let $x$ be a $\mathrm{Bin}(1, p)$ random variable. Observe that $\mathrm{dif}(S)$ is a sum of $\binom{s_R}{2} = \frac{s_R(s_R - 1)}{2}$ copies of $2x$, $\binom{s_B}{2} = \frac{s_B(s_B - 1)}{2}$ copies of $(-2x)$, $k(n - b_0 - s_R)$ copies of $x$, and $k(b_0 - s_B)$ copies of $(-x)$, all independent. Therefore
$$\begin{aligned}
\mathbf{E}[\mathrm{dif}(S)] &= p\left(2 \cdot \frac{s_R(s_R - 1)}{2} - 2 \cdot \frac{s_B(s_B - 1)}{2} + k(n - b_0 - s_R) - k(b_0 - s_B)\right)\\
&= p\left[s_R^2 - s_B^2 - s_R + s_B + k(n - 2b_0 - s_R + s_B)\right]\\
&= p\left[(s_R - s_B)(s_R + s_B) - k(s_R - s_B) + k(n - 2b_0) - (s_R - s_B)\right]\\
&= p\left[k(s_R - s_B) - k(s_R - s_B) + k(n - 2b_0) - (k - s_B) + s_B\right]\\
&= p\left[k(n - 2b_0) - k + 2s_B\right] \ge p\left[k(n - 2b_0) - k\right] = pk(n - 2b_0 - 1).
\end{aligned}$$
Moreover, $\mathrm{supp}(x_{uv}) = \{0, 1\}$, so $\mathrm{supp}(c_{uv}x_{uv}) = \{0, c_{uv}\}$, thus
$$\sup \mathrm{supp}(c_{uv}x_{uv}) - \inf \mathrm{supp}(c_{uv}x_{uv}) = |c_{uv}| \quad \text{for all } u \ne v.$$
Therefore the denominator of the exponent in Equation (3.2) is
$$\begin{aligned}
\sum_{u \ne v}\left(\sup \mathrm{supp}(c_{uv}x_{uv}) - \inf \mathrm{supp}(c_{uv}x_{uv})\right)^2 &= \sum_{u \ne v} c_{uv}^2\\
&= \sum_{\{u,v\} \subseteq R \cap S} 2^2 + \sum_{\{u,v\} \subseteq B \cap S} (-2)^2 + \sum_{u \in S}\sum_{v \in R \setminus S} 1^2 + \sum_{u \in S}\sum_{v \in B \setminus S} (-1)^2\\
&= 4 \cdot \frac{s_R(s_R - 1)}{2} + 4 \cdot \frac{s_B(s_B - 1)}{2} + k(n - b_0 - s_R) + k(b_0 - s_B)\\
&= 2s_R^2 + 2s_B^2 - 2s_R - 2s_B + k(n - s_R - s_B)\\
&= 2(s_R + s_B)^2 - 4s_Rs_B - 2k + k(n - k)\\
&\le 2(s_R + s_B)^2 - 2k + k(n - k) = 2k^2 - 2k + k(n - k) = k(n + k - 2).
\end{aligned}$$
Substituting this bound into Equation (3.2), we get
$$\mathbf{P}(S \subseteq B_1 \mid B_0 = B) \le \exp\left[-\frac{2\left(pk(n - 2b_0 - 1)\right)^2}{k(n + k - 2)}\right] = \exp\left[-\frac{2p^2(n - 2b_0 - 1)^2 k}{n + k - 2}\right].$$
Applying a double union bound over the choices of $S$ and $B$, and noting that by Lemma 3.2 the number of choices $\binom{n}{b_0}\binom{n}{k}$ is at most $4^n/n$, we have
$$\mathbf{P}\left(\exists B_0 \in \binom{V}{b_0},\ \exists S \in \binom{V}{k},\ S \subseteq B_1\right) \le \frac{4^n}{n}\exp\left(-\frac{2p^2(n - 2b_0 - 1)^2 k}{n + k - 2}\right).$$
Taking the complement event, we get the desired result.
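As a quick illustration of what Lemma 3.4 predicts, the sketch below (a simulation I added, not the paper's code; the parameters and the keep-your-color tie convention are my own choices, whereas the lemma itself needs no tie rule) samples $G(n, p)$, colors $b_0$ vertices Blue, runs one day of majority dynamics, and reports the new Blue camp size.

```python
import random

def blue_after_one_day(n, p, b0, rng):
    """Sample G(n, p), color vertices 0..b0-1 Blue, run one day of
    majority dynamics, and return the size of the new Blue camp."""
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    blue = set(range(b0))
    new_blue = set()
    for v in range(n):
        nb = len(adj[v] & blue)        # Blue neighbors of v
        nr = len(adj[v]) - nb          # Red neighbors of v
        # tie convention (an assumption here): keep the current color
        if nb > nr or (nb == nr and v in blue):
            new_blue.add(v)
    return len(new_blue)

rng = random.Random(1)
n, p, b0 = 300, 0.5, 120               # b0 = .4n, well below n/2
b1 = blue_after_one_day(n, p, b0, rng)
print(f"Blue camp: {b0} -> {b1} after one day")
```

With a Blue camp of $.4n$, a typical vertex sees about $.6np$ Red versus $.4np$ Blue neighbors, so the camp usually collapses to nearly nothing in a single day, in line with the exponentially small failure probability of the lemma.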
Lemma 3.4 is sufficient to prove Parts 2 and 3 of Theorem 1.6; the following lemmas are its direct corollaries.
Lemma 3.5 (Part 2 of Theorem 1.6). In the election process on $G \sim G(n, p)$, with probability at least
$$1 - n^{-1}\exp(-.025n) = 1 - P_2(n), \qquad (3.4)$$
$G$ is such that any choice of the Blue camp of at most $n/2 - (.8/p)\sqrt{n} = n_1^B$ members shrinks to size at most $.4n = n_2^B$ the next day.
Proof of Lemma 3.5. Let $b_2 := n/2 - (.8/p)\sqrt{n}$ and $k := \lceil .4n \rceil$. By Lemma 3.4, $G$ is such that every Blue set of at most $b_2$ vertices shrinks to size at most $k - 1$ with probability at least
$$1 - \frac{4^n}{n}\exp\left(-\frac{2p^2(n - 2b_2 - 1)^2 k}{n + k - 2}\right) = 1 - \frac{1}{n}\exp\left(-\left[\frac{2p^2(n - 2b_2 - 1)^2 k}{n + k - 2} - 2n\log 2\right]\right).$$
We have
$$\frac{k}{n + k - 2} \ge \frac{k}{n + k} \ge \frac{2}{7}, \quad \text{and} \quad n - 2b_2 - 1 \ge \frac{1.6\sqrt{n}}{p} - 1.$$
Therefore we can bound the exponent on the right-hand side above as follows:
$$\frac{2p^2(n - 2b_2 - 1)^2 k}{n + k - 2} - 2n\log 2 \ge \frac{4}{7}p^2\left(\frac{1.6\sqrt{n}}{p} - 1\right)^2 - 2n\log 2$$
$$= \left(\frac{10.24}{7} - 2\log 2\right)n - \frac{12.8}{7}p\sqrt{n} + \frac{4}{7}p^2 \ge .076n - 1.83\sqrt{n}.$$
The proof is complete.
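The numeric constants closing this proof can be double-checked mechanically; the snippet below is a verification aid added here, not part of the paper.

```python
import math

# Coefficient of n in the final bound: 10.24/7 - 2 log 2 >= .076.
coef_n = 10.24 / 7 - 2 * math.log(2)
assert coef_n >= 0.076

# Coefficient of sqrt(n): (12.8/7) p <= 1.83 whenever p <= 1.
assert 12.8 / 7 <= 1.83

# k/(n + k - 2) >= 2/7 for k = ceil(.4 n), checked over a range of n.
assert all(
    math.ceil(0.4 * n) / (n + math.ceil(0.4 * n) - 2) >= 2 / 7
    for n in range(2, 2001)
)
print("Lemma 3.5 constants verified")
```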
Lemma 3.6 (Part 3 of Theorem 1.6). In the election process on $G \sim G(n, p)$, with probability at least
$$1 - n^{-1}\exp\left(-.02n(p^3 n - 80)\right) = 1 - P_3(n, p),$$
$G$ is such that any choice of the Blue camp with at most $.4n$ members shrinks to at most $\frac{1}{3}p(n-1)$ members the next day.
Proof of Lemma 3.6. Let $b_2 := \lfloor .4n \rfloor$ and $k := \lceil \frac{1}{3}p(n-1) \rceil$. By Lemma 3.4, $G$ satisfies that every Blue set of at most $b_2$ vertices shrinks to size at most $k - 1$ with probability at least
$$1 - \frac{4^n}{n}\exp\left(-\frac{2p^2(n - 2b_2 - 1)^2 k}{n + k - 2}\right) = 1 - \frac{1}{n}\exp\left(-\left[\frac{2p^2(n - 2b_2 - 1)^2 k}{n + k - 2} - 2n\log 2\right]\right). \qquad (3.5)$$
Since $b_2 \le .4n$, we have $(n - 2b_2 - 1)^2 \ge (.2n - 1)^2$. Furthermore, $k \ge \frac{1}{3}p(n-1)$, so
$$\frac{k}{n + k - 2} \ge \frac{p(n-1)/3}{n + p(n-1)/3 - 1} = \frac{p}{3 + p} \ge \frac{p}{4}.$$
Therefore we can bound the exponent in the RHS of Equation (3.5) as follows:
$$\begin{aligned}
\frac{2p^2(n - 2b_2 - 1)^2 k}{n + k - 2} - 2n\log 2 &\ge \frac{2p^2(.2n - 1)^2 k}{n + k - 2} - 2n\log 2 \ge \frac{p^3(.2n - 1)^2}{2} - 2n\log 2\\
&= p^3\left(.02n^2 - .2n + \frac{1}{2}\right) - 2n\log 2 \ge .02p^3 n^2 - (.2p^3 + 2\log 2)n\\
&= .02n(p^3 n - 10p^3 - 100\log 2) \ge .02n(p^3 n - 80),
\end{aligned}$$
where the last inequality is due to $10p^3 + 100\log 2 \le 10 + 100\log 2 < 80$. The result follows.
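Likewise, Lemma 3.6's closing constants admit a mechanical check (again a verification snippet of mine, not the paper's):

```python
import math

# 10 p^3 + 100 log 2 is increasing in p, so its maximum over (0, 1] is at p = 1.
assert 10 + 100 * math.log(2) < 80     # approximately 79.31

# p/(3 + p) >= p/4 on (0, 1], since 3 + p <= 4 there.
assert all(p / (3 + p) >= p / 4 for p in (i / 100 for i in range(1, 101)))
print("Lemma 3.6 constants verified")
```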
These lemmas, together with Lemma 3.3 and Lemma 2.1, form the complete "chain of shrinking" that brings the number of Blue vertices to 0 in four days, hence wrapping up the proof of Theorem 1.6.
4 Conclusion
The majority dynamics scheme on a network is a process where each of the $n$ individuals is assigned an initial color, which changes daily to match the majority among their neighbors. Our main result, Theorem 1.6, for the case when the network is a $G(n, p)$ random graph, yields an explicit lower bound, in terms of $n$, $p$ and $c$, on the probability that the side with the initial majority wins. It has two important implications. The first is a surprising phenomenon, which we call the Power of Few phenomenon (Theorem 1.2): when $p = 1/2$ and $\varepsilon = .1$, $c$ can be set to just 5, meaning five extra people is all it takes to win a large election with overwhelming odds. The
second is an asymptotic dependency between $c$, $p$ and $\varepsilon$ (Theorem 1.3), which shows that for any fixed $\varepsilon$, there is a constant $K(\varepsilon)$ such that choosing $c$ and $p$ both large enough that $c > K(\varepsilon)\max\{p^{-1}, p^{-2}n^{-1/2}\}$ will ensure that the winning probability is at least $1 - \varepsilon$.
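The Power of Few is easy to watch in simulation. The sketch below (illustrative code written for this discussion, not the paper's experiments; the graph size, trial count, and keep-your-color tie convention are my own choices) runs majority dynamics on $G(200, 1/2)$ with Red holding an initial lead of $c = 5$:

```python
import random

def red_wins(n, c, p, rng, max_days=12):
    """Sample G(n, p), give Red n//2 + c random initial members, and run
    majority dynamics until the coloring stabilizes (or max_days pass).
    Returns True if Red ends up unanimous."""
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    red = set(rng.sample(range(n), n // 2 + c))
    for _ in range(max_days):
        new_red = set()
        for v in range(n):
            nr = len(adj[v] & red)          # Red neighbors of v
            nb = len(adj[v]) - nr           # Blue neighbors of v
            # tie convention (an assumption): keep the current color
            if nr > nb or (nr == nb and v in red):
                new_red.add(v)
        if new_red == red:
            break
        red = new_red
    return len(red) == n

rng = random.Random(2)
n, c, p, trials = 200, 5, 0.5, 20
wins = sum(red_wins(n, c, p, rng) for _ in range(trials))
print(f"Red won {wins} of {trials} runs starting from {n // 2 + c} members")
```

In runs like these the Red camp typically swallows the whole graph within a few days, consistent with the four-day shrinking chain established above.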
Although the results in this paper only apply to dense $G(n, p)$ graphs, we do cover sparse graphs in a separate in-progress paper [17], where we obtain the Power of Few phenomenon for $p = \Omega((\log n)/n)$ and discuss the end result (other than a win) for lower values of $p$. We nevertheless mention one of the main results proved in the upcoming paper (Theorem 1.5), and use it to prove the main theorem of the paper [8] by Fountoulakis, Kang and Makai in Appendix A.
A Proof of Fountoulakis et al.'s Theorem from Theorem 1.5
In this appendix we give a proof of the main theorem of Fountoulakis et al. [8] (Theorem 1.4) using the main theorem of our upcoming paper (Theorem 1.5).
Proof. Assume Theorem 1.5. Let $R_0$ and $B_0$ respectively be the initial Red and Blue camps. Fix a constant $0 < \varepsilon' \le \varepsilon/6$. $|R_0| \sim \mathrm{Bin}(n, 1/2)$, since it is a sum of $\mathrm{Bin}(1, 1/2)$ variables. An application of the Berry–Esseen theorem (Theorem 2.2, with the constant $.56$) implies that
$$\mathbf{P}\left(|R_0| - \frac{n}{2} \le \varepsilon'\sqrt{n}\right) \le \Phi(2\varepsilon') + \frac{.56}{\sqrt{n}}, \quad\text{and}\quad \mathbf{P}\left(|R_0| - \frac{n}{2} \le -\varepsilon'\sqrt{n}\right) \ge \Phi(-2\varepsilon') - \frac{.56}{\sqrt{n}}.$$
Thus
$$\mathbf{P}\left(\left||R_0| - \frac{n}{2}\right| \le \varepsilon'\sqrt{n}\right) \le \Phi(2\varepsilon') - \Phi(-2\varepsilon') + \frac{2 \cdot .56}{\sqrt{n}} = \Phi([-2\varepsilon', 2\varepsilon']) + \frac{2 \cdot .56}{\sqrt{n}} \le \frac{4\varepsilon'}{\sqrt{2\pi}} + \frac{2 \cdot .56}{\sqrt{n}} \le \frac{\varepsilon}{3} + \frac{2 \cdot .56}{\sqrt{n}} \le \frac{\varepsilon}{2},$$
for sufficiently large $n$.
On the other hand, if $\left||R_0| - \frac{n}{2}\right| > \varepsilon'\sqrt{n}$, then one of the sides has more than $n/2 + \varepsilon'\sqrt{n}$ initial members, which we call the majority side. Now we apply Theorem 1.5 with $\varepsilon$ replaced by $\varepsilon/2$. Notice that in the setting of Theorem 1.4, if we have $p = \lambda n^{-1/2}$ for $\lambda$ sufficiently large, then $\varepsilon'\sqrt{n} \ge K/p$, where $K$ is the constant in Theorem 1.5. Thus, by this theorem, the probability for the majority side to win is at least $1 - \varepsilon/2$, and we are done by the union bound.
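The Berry–Esseen step above can be sanity-checked numerically (a check added for this appendix, not part of the paper; it evaluates the standard normal CDF via `math.erf`):

```python
import math

def normal_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

eps = 0.1
eps1 = eps / 6                        # the constant eps' fixed in the proof
# Phi([-2 eps', 2 eps']), the normal mass of the near-tie window
mass = normal_cdf(2 * eps1) - normal_cdf(-2 * eps1)
# length of the window times the maximum density 1/sqrt(2 pi)
bound = 4 * eps1 / math.sqrt(2 * math.pi)
print(mass <= bound, bound <= eps / 3)
```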
References
[1] Dan Alistarh, James Aspnes, David Eisenstat, Rati Gelashvili, and Ronald L. Rivest: Time-space trade-offs in population protocols. In Proc. 28th Ann. ACM–SIAM Symp. on Discrete Algorithms (SODA'17), pp. 2560–2579. SIAM, 2017. [doi:10.1137/1.9781611974782.169, arXiv:1602.08032] 3
[2] Dan Alistarh, Rati Gelashvili, and Milan Vojnović: Fast and exact majority in population protocols. In Proc. ACM Symp. Principles Distrib. Computing (PODC'15), pp. 47–56. ACM Press, 2015. [doi:10.1145/2767386.2767429] 3
[3] Itai Benjamini, Siu-On Chan, Ryan O'Donnell, Omer Tamuz, and Li-Yang Tan: Convergence, unanimity and disagreement in majority dynamics on unimodular graphs and random graphs. Stoch. Proc. Appl., 126(9):2719–2733, 2016. [doi:10.1016/j.spa.2016.02.015] 3, 4
[4] Béla Bollobás: Models of random graphs. In Random Graphs, chapter 2, pp. 34–50. Cambridge Univ. Press, 2001. [doi:10.1017/CBO9780511814068.004] 2
[5] Peter Clifford and Aidan Sudbury: A model for spatial conflict. Biometrika, 60(3):581–588, 1973. [doi:10.1093/biomet/60.3.581] 3
[6] Morris H. DeGroot: Reaching a consensus. J. Amer. Statistical Assoc., 69(345):118–121, 1974. [doi:10.2307/2285509] 3
[7] Carl-Gustav Esseen: A moment inequality with an application to the central limit theorem. Scandinavian Actuarial J., 1956(2):160–170, 1956. [doi:10.1080/03461238.1956.10414946] 9
[8] Nikolaos Fountoulakis, Mihyun Kang, and Tamás Makai: Resolution of a conjecture on majority dynamics: Rapid stabilisation in dense random graphs. Random Struct. Algor., 57(4):1134–1156, 2020. [doi:10.1002/rsa.20970, arXiv:1910.05820] 3, 4, 18
[9] Wassily Hoeffding: Probability inequalities for sums of bounded random variables. J. Amer. Statistical Assoc., 58(301):13–30, 1963. [doi:10.1080/01621459.1963.10500830] 14
[10] Svante Janson, Tomasz Łuczak, and Andrzej Ruciński: Random Graphs. Wiley, 2011. Chapter 1 (Preliminaries), pages 1–23. [doi:10.1002/9781118032718.ch1] 2
[11] Jiří Matoušek and Jaroslav Nešetřil: Invitation to Discrete Mathematics. Oxford Univ. Press, 1998. LINK. 14
[12] Elchanan Mossel, Joe Neeman, and Omer Tamuz: Majority dynamics and aggregation of information in social networks. Autonomous Agents and Multi-Agent Systems, 28(3):408–429, 2014. [doi:10.1007/s10458-013-9230-4] 3, 4
[13] Elchanan Mossel and Omer Tamuz: Opinion exchange dynamics. Probability Surveys, 14:155–204, 2017. [doi:10.1214/14-PS230] 2, 3, 4
[14] Ashwin Sah and Mehtaab Sawhney: Majority dynamics: The power of one, 2021. [arXiv:2105.13301] 6, 8
[15] Irina Gennad'evna Shevtsova: An improvement of convergence rate estimates in the Lyapunov theorem. Doklady Math., 82(3):862–864, 2010. [doi:10.1134/S1064562410060062] 9
[16] Linh Tran and Van Vu: Reaching a consensus on random networks: The power of few. In Proc. 24th Internat. Conf. on Randomization and Computation (RANDOM'20), pp. 20:1–15. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2020. [doi:10.4230/LIPIcs.APPROX/RANDOM.2020.20] 1
[17] Linh Tran and Van Vu: The "power of few" phenomenon: The sparse case, 2023. [arXiv:2302.05605] 3, 4, 18
[18] Roman Vershynin: Concentration of sums of independent random variables. In High-Dimensional Probability: An Introduction with Applications in Data Science, chapter 2, pp. 11–37. Cambridge Univ. Press, 2018. [doi:10.1017/9781108231596.005] 14
AUTHORS
Linh Tran
Graduate Student
Department of Mathematics
Yale University
New Haven, CT, USA
l.tran yale edu
https://math.yale.edu/people/linh-tran
Van Vu
Percey F. Smith Professor of Mathematics and Professor of Statistics & Data Science
Yale University
New Haven, CT, USA
van.vu yale edu
https://campuspress.yale.edu/vanvu/
ABOUT THE AUTHORS
Linh Tran is currently a fifth year Ph. D. student in the Math department at Yale
University, working under the supervision of Professor Van Vu. He went to
special math classes for gifted children at the high school division of the Vietnam
National University in Ho Chi Minh City. Encouraged and nurtured by his high
school teachers, Tran found his passion for Math and was determined to follow
this path, especially after having had little success in other subjects. Under the
guidance of his advisor, Prof. Van Vu, he does research in random graph theory,
particularly majority dynamics and generalizations, and random matrix theory,
particularly its application in data analysis and machine learning methods. He is
also interested in combinatorics problems involving the probabilistic method,
applications of geometry in combinatorics, e. g., random fractals, and theoretical
computer science.
Van Vu is the P. F. Smith Professor of Mathematics and Professor of Data Science at
Yale University. Vu was born in Hanoi (Vietnam) in 1970. He went to special
math classes for gifted children at Chu Van An and Hanoi-Amsterdam high
schools. In 1987, he went to Hungary for his undergraduate studies, and obtained his M. Sc. in mathematics at Eötvös University, Budapest, in 1994. He received his Ph. D. at Yale University, in 1998, under the direction of László Lovász. His
research interest includes random structures (random matrices, random graphs,
random functions), probabilistic and additive combinatorics, and randomized
algorithms. Recently, he has also been interested in real-life applications of
data science (such as NLP and computer vision).