                           THEORY OF COMPUTING, Volume 19 (6), 2023, pp. 1–21
                                        www.theoryofcomputing.org

                          SPECIAL ISSUE: APPROX-RANDOM 2020



          Reaching a Consensus on Random Networks: The Power of Few

                             Linh Tran        Van Vu
                  Received March 1, 2021; Revised July 20, 2022; Published November 14, 2023




       Abstract. A community of 𝑛 individuals splits into two camps, Red and Blue. The
       individuals are connected by a social network, which influences their colors. Every
       day each person changes their color according to the majority of their neighbors.
       Red (Blue) wins if everyone in the community becomes Red (Blue) at some point.
           We study this process when the underlying network is the random ErdΕ‘s–RΓ©nyi
       graph 𝐺(𝑛, 𝑝). With a balanced initial state (𝑛/2 persons in each camp), it is clear
       that each color wins with the same probability.
           Our study reveals that for any constants 𝑝 and πœ€, there is a constant 𝑐 such that
       if one camp has at least 𝑛/2 + 𝑐 individuals at the initial state, then it wins with
       probability at least 1 βˆ’ πœ€. The surprising fact here is that 𝑐 does not depend on 𝑛,
       the population of the community. When 𝑝 = 1/2 and πœ€ = .1, one can set 𝑐 = 5,
       meaning one camp has 𝑛/2 + 5 members initially. In other words, it takes only 5
       extra people to win an election with overwhelming odds. We also generalize the
       result to 𝑝 = 𝑝 𝑛 = o(1) in a separate paper.

   A preliminary version of this paper appeared in the Proceedings of the 24th International Conference on
Randomization and Computation (RANDOM’20) [16].


ACM Classification: G.3, G.2.2
AMS Classification: 37E25, 05C57, 05C80, 82B43, 68R10, 60K35, 60K37
Key words and phrases: random graphs, ErdΕ‘s-RΓ©nyi model, majority dynamics, consensus


Β© 2023 Linh Tran and Van Vu
Licensed under a Creative Commons Attribution License (CC-BY)                 DOI: 10.4086/toc.2023.v019a006

1     Introduction
1.1   The opinion exchange dynamics
Building mathematical models to explain how collective opinions are formed is an important
and interesting task (see [13] for a survey on the topic, with examples from various fields:
economics, sociology, and statistical physics, to mention a few).
    Obviously, our opinions are influenced by people around us, and this motivates the study
of the following natural and simple model: A community of 𝑛 individuals splits into two
camps, Red and Blue, representing two competing opinions, which can be on any topic such
as brand competition, politics, ethical issues, etc. The individuals are connected by a social
network, which influences their opinion on a daily basis (by some specific rule). We say that
Red (respectively Blue) wins if everyone in the community becomes Red (respectively Blue) at
some point.
    We study this process when the underlying network is random. In this paper, we focus on
the ErdΕ‘s–RΓ©nyi random graph 𝐺(𝑛, 𝑝), which is the most popular model of random graphs
[4, 10]. We use the majority rule, which is a natural choice. When a new day comes, a vertex
scans its neighbors’ colors from the previous day and adopts the dominant one. If there is a tie,
it keeps its color.

Definition 1.1. The random graph 𝐺(𝑛, 𝑝) on 𝑛 ∈ β„• vertices and density 𝑝 ∈ (0, 1) is obtained
by putting an edge between any two vertices with probability 𝑝, independently.
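For concreteness, the process just described can be simulated directly. The following is a minimal sketch (function and variable names are ours, not from the paper's released code): generate 𝐺(𝑛, 𝑝), then apply the majority rule day by day, with ties keeping the current color.

```python
import random

def gnp(n, p, rng):
    """Sample adjacency sets of an Erdos-Renyi graph G(n, p)."""
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def majority_step(adj, red):
    """One day of the dynamics: each vertex adopts the dominant color
    among its neighbors' colors from the previous day; a tie keeps its color."""
    new_red = set()
    for v in range(len(adj)):
        r = sum(1 for u in adj[v] if u in red)
        b = len(adj[v]) - r
        if r > b or (r == b and v in red):
            new_red.add(v)
    return new_red

def simulate(n, p, c, days, seed=0):
    """Run the process for `days` days with Red starting at n/2 + c vertices."""
    rng = random.Random(seed)
    adj = gnp(n, p, rng)
    red = set(range(n // 2 + c))
    for _ in range(days):
        red = majority_step(adj, red)
    return red
```

Red wins in a run exactly when `simulate` returns the full vertex set at some day.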

1.2   Results
With a balanced initial state (𝑛/2 persons in each camp), by symmetry, each color wins with the
same probability π‘ž < 1/2, regardless of 𝑝. (Notice that there are graphs, such as the empty and
complete graphs, on which no one wins.) Our study reveals that for any given 𝑝 and πœ€, there
is a constant 𝑐 such that if one camp has at least 𝑛/2 + 𝑐 individuals at the initial state, then it
wins with probability at least 1 βˆ’ πœ€. The surprising fact here is that 𝑐 does not depend on 𝑛, the
population of the community. When 𝑝 = 1/2 and πœ€ = .1, one can set 𝑐 as small as 5.

Theorem 1.2 (The power of few). Consider the (majority) process on 𝐺(𝑛, 1/2). Assume that the Red
camp has at least 𝑛/2 + 5 vertices at the initial state, where 𝑛 β‰₯ 1000. Then Red wins after the fourth
day with probability at least 90%.

    This result can also be stated without reference to the ErdΕ‘s–RΓ©nyi 𝐺(𝑛, 1/2) model: an
equivalent theorem holds when the network is chosen uniformly at random from the set of all
graphs on 𝑛 vertices.
    This result reveals an interesting phenomenon, which we call β€œthe power of few.” The
collective outcome can be extremely sensitive, as a modification of the smallest scale in the initial
setting leads to the opposite outcome.
    Our result applies in the following equivalent settings.
    Model 1. We fix the two camps, of sizes 𝑛/2 + 𝑐 and 𝑛/2 βˆ’ 𝑐, and draw a random graph
𝐺 ∼ 𝐺(𝑛, 𝑝) over their union.


    Model 2. We draw a random graph 𝐺 ∼ 𝐺(𝑛, 𝑝) first, let Red be a random subset of 𝑛/2 + 𝑐
vertices (chosen uniformly from all subsets of that size), and Blue be the rest.
    Model 3. We split the society into two camps of size 𝑛/2 each, then draw the random graph
𝐺 ∼ 𝐺(𝑛, 𝑝) on their union, then recolor 𝑐 randomly selected Blue vertices to Red.
    Model 4. Split the society into two camps (Red and Blue) of size 𝑛/2 βˆ’ 𝑐 each and a β€œswing”
group (with no color yet) of 2𝑐 individuals. Draw the random graph on their union. Now let
the swing group join the Red camp.
    With Model 3, we can imagine a balanced election process at the beginning. Then 𝑐 = 5
people move to the other camp. Theorem 1.2 asserts that this tiny group already guarantees the
final win with overwhelming odds. Similarly, Model 4 implies that a swing group of size 10
decides the outcome.
    Our result can also be used to model the phenomenon that outcomes in seemingly identical
situations become opposite. Consider two communities, each with exactly 𝑛 individuals, sharing
the same social network. In the first community, the Red camp has size 𝑛/2 + 𝑐, and the Blue
camp has 𝑛/2 βˆ’ 𝑐. In the second community, the Blue camp has 𝑛/2 + 𝑐 and the Red camp has
𝑛/2 βˆ’ 𝑐. If 𝑛 is large, there is no way to tell the difference between the two communities. Even if
we record everyone’s initial opinion, clerical errors will surely swallow the tiny difference of 2𝑐.
However, at the end, the collective opinion will be opposite, with high probability.
    Now we state the general result for arbitrary constant density 𝑝.

Theorem 1.3 (Asymptotic bound). Let 𝑝 be a constant in (0, 1) and 𝑐_𝑛 be a positive integer which may
depend on 𝑛. Assume that Red has 𝑛/2 + 𝑐_𝑛 members on day 0 and the random graph is 𝐺(𝑛, 𝑝). Then Red wins
after four days with probability at least 1 βˆ’ 𝐾(𝑝) max{𝑛^βˆ’1, 𝑐_𝑛^βˆ’2}, where 𝐾(𝑝) depends only on 𝑝.

   Both results follow from Theorem 1.6, which, in a slightly technical form, describes how the
process evolves day by day. Our results can be extended to cover the case when there are more
than 2 opinions; details will appear in a later paper [17].

1.3   Related results
Our problem is related to a well-studied class of opinion exchange dynamics problems. In the field
of Computer Science, loosely-related processes are studied in population protocols [2, 1], where
individuals/agents/nodes choose their next state based on that of their neighbors. The most
significant difference lies in the network: connections in these models often change randomly over
time, while our study concerns a fixed network (randomly generated before the process begins).
    The survey by Mossel and Tamuz [13] discussed several models for these problems, including
the DeGroot model [6], where an individual’s next state is a weighted average of its neighbors’
current states, and the voter model [5], where individuals change states by emulating a random
neighbor each day. The majority dynamics model is in fact the same as ours, and is also more
popular than the other two, having been studied in [12, 8, 3]. The key difference, as compared to
our study, is in the setups. In these earlier papers, each individual chooses his/her initial color
uniformly at random. The central limit theorem thus guarantees that with high probability, the
initial difference between the two camps is of order Θ(βˆšπ‘›). Therefore, these papers did not


touch upon the β€œpower of few” phenomenon, which is our key message. On the other hand,
they considered sparse random graphs where the density 𝑝 = 𝑝_𝑛 β†’ 0 as 𝑛 β†’ +∞.
    In [3], Benjamini, Chan, O’Donnell, Tamuz, and Tan considered random graphs with
𝑝 β‰₯ πœ†π‘›^βˆ’1/2, where πœ† is a sufficiently large constant, and showed that the dominating color wins
with probability at least .4 [3, Theorem 1.2], while conjecturing that this probability in fact tends
to 1 as 𝑛 β†’ ∞. This conjecture was proved by Fountoulakis, Kang, and Makai [8, Theorem 1.1].
Theorem 1.4 (Fountoulakis, Kang, Makai). For any 0 < πœ€ ≀ 1 there is πœ† = πœ†(πœ€) such that the
following holds for 𝑝 β‰₯ πœ†π‘›^βˆ’1/2: With probability at least 1 βˆ’ πœ€, over the choice of the random graph
𝐺(𝑛, 𝑝) and the choice of the initial state, the dominating color wins after four days.
      For related results on random regular graphs, see [12, 13].

1.4     Extension for sparse random graphs
Note that the results presented in this paper only apply to a constant 𝑝, which, in the context of
𝐺(𝑛, 𝑝), produces dense graphs. For sparse graphs, i. e., when 𝑝 = 𝑝 𝑛 depends on 𝑛 and tends to 0
as 𝑛 β†’ +∞, the main ideas in this paper can be used, but with different algebraic techniques, to
obtain a similar result.
Theorem 1.5. For any 0 < πœ€ ≀ 1 and πœ† > 0 there is 𝑐 = 𝑐(πœ€, πœ†) such that for 𝑝 β‰₯ (1 + πœ†)(log 𝑛)/𝑛, if
Red starts with 𝑛/2 + 𝑐/𝑝 members, then it wins with probability at least 1 βˆ’ πœ€.
    The technical changes needed to prove this theorem require rewriting entire proofs with
new computations, so we leave the proof to our future paper [17]. Additional information
such as the length of the process and the explicit relation between the bound with 𝑝 and 𝑐
will also be discussed there. Notice that when 𝑝 is a constant, this result covers the β€œPower of
Few” phenomenon as a special case, albeit with 𝑐 much larger than 5. Thus, the techniques and
results in this paper still have merit since they achieve a specific, surprisingly small constant.
Theorem 1.5 no longer holds for 𝑝 < (log 𝑛)/𝑛 as in this case there are, with high probability,
isolated vertices. Any of these vertices keeps its original color forever. In this case, the number
of Blue vertices converges with time, and we obtain a bound on the limit in [17].
    One can use Theorem 1.5 to derive a β€œdelayed” version (in which Red may need more than 4
days) of Theorem 1.4, by first proving that with high probability, one side gains an advantage
of size at least πΆβˆšπ‘› after the first day, for some constant 𝐢. This β€œmajority side” then wins
with high probability given 𝑝 β‰₯ πœ†π‘›^βˆ’1/2 (which satisfies the requirement 𝑝 β‰₯ (1 + Ξ©(1))(log 𝑛)/𝑛)
with πœ† sufficiently large so that πœ†πΆ = π‘πΆβˆšπ‘› is large. The detailed argument is in Appendix A.

1.5     Notation
Below is the list of notations used throughout the paper.

      β€’ 1𝑆 : membership function of the set 𝑆 βŠ‚ 𝑉, so that 1𝑆 (𝑣) := 1π‘£βˆˆπ‘† .
      β€’ 𝑅 𝑑 , 𝐡𝑑 : Respectively the sets of Red and Blue vertices after day 𝑑. (At this point everyone
        has updated their color 𝑑 times.)


      β€’ I𝑑 (𝑒) := 1{π‘’βˆˆπ‘…π‘‘ } : indicator of the event that 𝑒 is Red after day 𝑑, taking values in {0, 1}.

      β€’ J𝑑 (𝑒) := 2I𝑑 (𝑒) βˆ’ 1: indicator of the same event, but taking values in {βˆ’1, 1}.

      β€’ 𝑒 ∼ 𝑣 ≑ (𝑒, 𝑣) ∈ 𝐸: Event that 𝑒 and 𝑣 are adjacent.

      β€’ 𝑁(𝑣) := {𝑒 : 𝑒 ∼ 𝑣}: The neighborhood of 𝑣.

      β€’ π‘Šπ‘’π‘£ := 1{π‘’βˆΌπ‘£} - Indicator of the adjacency between 𝑒 and 𝑣.
                     ∫ 𝑏 βˆ’π‘₯2 /2
      β€’ Ξ¦(π‘Ž, 𝑏) :=    π‘Ž
                          e√
                               𝑑π‘₯ and Ξ¦(π‘Ž) := Ξ¦(βˆ’βˆž, π‘Ž), Ξ¦0 (π‘Ž) := Ξ¦(0, π‘Ž). Note that Ξ¦(π‘Ž) = Ξ¦0 (π‘Ž)+1/2.
                            2πœ‹

      β€’ Given a random variable 𝑋, we let supp(𝑋) be the support of 𝑋, namely the smallest
        subset 𝑆 of ℝ such that P(𝑋 βˆ‰ 𝑆) = 0. When 𝑋 is discrete, supp(𝑋) is simply the set of
        possible values of 𝑋.
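The half-normal integral Ξ¦0 above is used in all later estimates. It can be evaluated with the standard error function; the identity below is elementary calculus, not specific to the paper, and the function names are ours:

```python
from math import erf, sqrt

def Phi(a):
    """Standard normal CDF: Phi(a) = Phi(-inf, a)."""
    return 0.5 * (1.0 + erf(a / sqrt(2.0)))

def Phi0(a):
    """Phi0(a) = Phi(0, a); note the identity Phi(a) = Phi0(a) + 1/2."""
    return 0.5 * erf(a / sqrt(2.0))
```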

1.6     Main Theorem
The main theorem concerns dense graphs, where 𝑝 is at least a constant. When given appropriate
values for the parameters, it implies the β€œPower of Few” phenomenon in Theorem 1.2.

Theorem 1.6. Assume 𝑝 is a real number in (0, 1) and 𝑛 and 𝑐 are positive integers such that

        𝐹(𝑛, 𝑝, 𝑐) := βˆšπ‘› Ξ¦0((2𝑝𝑐 + min{𝑝, 1 βˆ’ 𝑝})/√(𝑝(1 βˆ’ 𝑝)𝑛)) βˆ’ .6(1 βˆ’ 2𝑝 + 2𝑝²)/√(𝑝(1 βˆ’ 𝑝)) βˆ’ .8/𝑝 > 0.   (1.1)

Consider the election process on 𝐺 ∼ 𝐺(𝑛, 𝑝) with |𝐡0| ≀ 𝑛/2 βˆ’ 𝑐. We have:

    β€’ P( |𝐡1| ≀ 𝑛/2 βˆ’ .8βˆšπ‘›/𝑝 ) β‰₯ 1 βˆ’ 𝑃1, where

          𝑃1 = 𝑃1(𝑛, 𝑝, 𝑐) := (.25 + .7(1 βˆ’ 2𝑝 + 2𝑝²)Β²)(1 + 4𝑐²𝑛^βˆ’2) / 𝐹(𝑛, 𝑝, 𝑐)Β².

    β€’ P( |𝐡2| ≀ .4𝑛 | |𝐡1| ≀ 𝑛/2 βˆ’ .8βˆšπ‘›/𝑝 ) β‰₯ 1 βˆ’ 𝑃2, where

          𝑃2 = 𝑃2(𝑛) := 𝑛^βˆ’1 exp(βˆ’(.076𝑛 βˆ’ 1.38βˆšπ‘›)).

    β€’ P( |𝐡3| ≀ 𝑝(𝑛 βˆ’ 1)/3 | |𝐡2| ≀ .4𝑛 ) β‰₯ 1 βˆ’ 𝑃3, where

          𝑃3 = 𝑃3(𝑛, 𝑝) := 𝑛^βˆ’1 exp(βˆ’.02𝑛(𝑝³𝑛 βˆ’ 80)).

    β€’ P( |𝐡4| = 0 | |𝐡3| ≀ 𝑝(𝑛 βˆ’ 1)/3 ) β‰₯ 1 βˆ’ 𝑃4, where

          𝑃4 = 𝑃4(𝑛, 𝑝) := 𝑛 exp(βˆ’(2/9)𝑝²(𝑛 βˆ’ 1)).

Consequently, 𝑅4 = 𝑉(𝐺) with probability at least 1 βˆ’ (𝑃1 + 𝑃2 + 𝑃3 + 𝑃4).
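The quantities in Theorem 1.6 are elementary to evaluate numerically. As a quick sketch (function names ours, Ξ¦0 computed via the error function), one can check that condition (1.1) holds at 𝑛 = 1000, 𝑝 = 1/2, 𝑐 = 5, the setting of Theorem 1.2, and that 𝑃1 is then below .1:

```python
from math import erf, sqrt

def Phi0(a):
    # Phi0(a) = Phi(0, a), the standard normal mass on [0, a]
    return 0.5 * erf(a / sqrt(2.0))

def F(n, p, c):
    # The quantity of condition (1.1) in Theorem 1.6
    return (sqrt(n) * Phi0((2 * p * c + min(p, 1 - p)) / sqrt(p * (1 - p) * n))
            - 0.6 * (1 - 2 * p + 2 * p * p) / sqrt(p * (1 - p))
            - 0.8 / p)

def P1(n, p, c):
    # Day-1 failure probability bound of Theorem 1.6
    num = (0.25 + 0.7 * (1 - 2 * p + 2 * p * p) ** 2) * (1 + 4 * c * c / n ** 2)
    return num / F(n, p, c) ** 2
```

For instance, F(1000, 0.5, 5) is positive, so (1.1) holds there; and since F grows with 𝑛, the condition persists for all larger 𝑛, matching the observation made in the proof of Theorem 1.2 below.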

    Note that the condition (1.1) is asymptotically the same as 𝑐 = Ξ©(𝑝^βˆ’3/2).
    The proof of this theorem has two main parts, corresponding to the next two sections.

   1. On day 1, the numbers of Red and Blue neighbors of each node are binomial with means
      roughly 𝑛/2 + 𝑐 and 𝑛/2 βˆ’ 𝑐 respectively. The central limit theorem then implies that
      most of their masses are concentrated within an interval of length Θ(βˆšπ‘›) around their
      respective expectations. A subinterval of constant length in this interval has Θ(𝑛^βˆ’1/2)
      mass. We thus expect that the probability that Reds outnumber Blues in the neighborhood
      of a given node is 1/2 + Ξ©(𝑛^βˆ’1/2). Thus, we expect the number of Red nodes after the
      first day to be 𝑛/2 + Ξ©(𝑛^1/2), meaning 𝑛/2 βˆ’ Ξ©(𝑛^1/2) Blue nodes.ΒΉ We use this intuition to
      find the right bound for 𝐡1 in Theorem 1.6, and prove it in Section 2.

   2. On subsequent days, the coloring depends on the graph, so the argument for day 1 is
      no longer available. We instead show that the number of Blues decreases regardless of
      the coloring after day 1, as long as Red has Ξ©(βˆšπ‘›) extra nodes. Using the Hoeffding bound
      (Theorem 3.1), we show that for a given coloring, the probability that one can find β€œmany”
      nodes with a majority of neighbors in Blue is small, sufficiently so to beat the union bound
      over all colorings. This bypasses the dependency issue after day 1. The details are in
      Section 3.

    From Theorem 1.6, one can deduce Theorems 1.2 and 1.3 in a few steps.

Proof of Theorem 1.2. Assume Theorem 1.6. Observe that if Equation (1.1) holds for some value
of 𝑛, then it holds for all larger values of 𝑛. For 𝑛 β‰₯ 1000, 𝑝 = 1/2 and 𝑐 = 5, (1.1) is
satisfied. A routine calculation then shows that

                    𝑃1(𝑛, 𝑝, 𝑐) ≀ .0902,   max{𝑃2(𝑛), 𝑃3(𝑛, 𝑝), 𝑃4(𝑛, 𝑝)} < 10^βˆ’10,

which implies that P(𝐡4 β‰  βˆ…) < .1, or equivalently that Red wins on the fourth day with
probability at least .9 (conditioned on the event |𝐡0| ≀ 𝑛/2 βˆ’ 𝑐).                        β–‘

Proof of Theorem 1.3. In this proof, only 𝑛 and 𝑐 = 𝑐_𝑛 can vary. We can assume, without
loss of generality, that 𝑐_𝑛 ≀ 𝑛/2. Assuming Theorem 1.6, a routine calculation shows that

    ΒΉWhile this paper was under review, Sah and Sawhney in a recent paper [14] showed that 𝐡1 obeys a Gaussian
limit law with mean 𝑛/2 βˆ’ Θ(π‘βˆšπ‘π‘›) and variance Θ(𝑛) when 𝑛 β†’ ∞ for 𝑝 > (log 𝑛)^βˆ’1/16.



𝑃2, 𝑃3, 𝑃4 = o(𝑛^βˆ’2), and so 𝑃1 + 𝑃2 + 𝑃3 + 𝑃4 = 𝑃1 + o(𝑛^βˆ’2). By Theorem 1.6, we then have

    P(𝑅4 = 𝑉(𝐺)) β‰₯ 1 βˆ’ (𝑃1 + o(𝑛^βˆ’2))
                 = 1 βˆ’ (.25 + .7(1 βˆ’ 2𝑝 + 2𝑝²)Β²)(1 + 4𝑐_𝑛²𝑛^βˆ’2) / (βˆšπ‘› Ξ¦0((2𝑝𝑐_𝑛 + min{𝑝, 1 βˆ’ 𝑝})/√(𝑝(1 βˆ’ 𝑝)𝑛)) βˆ’ .6(1 βˆ’ 2𝑝 + 2𝑝²)/√(𝑝(1 βˆ’ 𝑝)) βˆ’ .8/𝑝)Β² βˆ’ o(𝑛^βˆ’2).

    We make use of the fact that Ξ¦0 is increasing, while π‘₯ ↦→ Ξ¦0(π‘₯)/π‘₯ is decreasing, which implies
Ξ¦0(π‘₯) β‰₯ min{1, π‘₯/𝑦} Ξ¦0(𝑦) for all π‘₯, 𝑦 > 0. Then:

    βˆšπ‘› Ξ¦0((2𝑝𝑐_𝑛 + min{𝑝, 1 βˆ’ 𝑝})/√(𝑝(1 βˆ’ 𝑝)𝑛)) β‰₯ βˆšπ‘› min{1, 𝑐_𝑛/√((1 βˆ’ 𝑝)𝑛)} Ξ¦0(2βˆšπ‘)
                                               β‰₯ min{βˆšπ‘›, 𝑐_𝑛/√(1 βˆ’ 𝑝)} Ξ¦0(2βˆšπ‘) β‰₯ min{βˆšπ‘›, 𝑐_𝑛} Ξ¦0(2βˆšπ‘).
When 𝑛 and 𝑐_𝑛 are both sufficiently large and 𝑝 is a constant, we have

    min{βˆšπ‘›, 𝑐_𝑛} Ξ¦0(2βˆšπ‘) β‰₯ 2 [ .6(1 βˆ’ 2𝑝 + 2𝑝²)/√(𝑝(1 βˆ’ 𝑝)) + .8/𝑝 ].


Therefore we can give a lower bound on the denominator of 𝑃1:

    βˆšπ‘› Ξ¦0((2𝑝𝑐_𝑛 + min{𝑝, 1 βˆ’ 𝑝})/√(𝑝(1 βˆ’ 𝑝)𝑛)) βˆ’ .6(1 βˆ’ 2𝑝 + 2𝑝²)/√(𝑝(1 βˆ’ 𝑝)) βˆ’ .8/𝑝 β‰₯ (Ξ¦0(2βˆšπ‘)/2) min{βˆšπ‘›, 𝑐_𝑛}.

Now we use the trivial facts 𝑐_𝑛 ≀ 𝑛/2 and 1 βˆ’ 2𝑝 + 2𝑝² ≀ 1 to give an upper bound on the
numerator of 𝑃1:

    (.25 + .7(1 βˆ’ 2𝑝 + 2𝑝²)Β²)(1 + 4𝑐_𝑛²𝑛^βˆ’2) ≀ (.25 + .7 Β· 1)(1 + 1) < 2.

Combining the above, we get

    𝑃1 ≀ 2 / ((min{𝑐_𝑛, βˆšπ‘›} Ξ¦0(2βˆšπ‘))Β²/2) = 4Ξ¦0(2βˆšπ‘)^βˆ’2 / min{βˆšπ‘›, 𝑐_𝑛}Β² = 4Ξ¦0(2βˆšπ‘)^βˆ’2 max{1/𝑛, 1/𝑐_𝑛²}.

The term π‘œ(𝑛 βˆ’2 ) is also absorbed into this bound, since 𝑛 βˆ’2 < 𝑛 βˆ’1 . Therefore we have
                                                                                                         
                                                                   1 1
                                       P(𝑅4 = 𝑉(𝐺)) β‰₯ 1 βˆ’ 𝐾(𝑝) max  ,     .
                                                                   𝑛 𝑐 𝑛2
                                                             √
which is the desired bound where 𝐾(𝑝) can be chosen as 4Ξ¦0 (2 𝑝)βˆ’2 + 1.                                                                            


1.7   Open questions
Let 𝜌(𝑝, π‘˜, 𝑛) be the probability that Red wins if its camp has size 𝑛/2 + π‘˜ in the beginning.
Theorem 1.2 shows that 𝜌(.5, 5, 𝑛) β‰₯ .9 (given that 𝑛 is sufficiently large). In other words,
five defectors guarantee Red’s victory with overwhelming odds when 𝑝 = 1/2. In fact, we
have 𝜌(.5, 4, 𝑛) β‰₯ .77 by plugging in the same values for πœ€1 and πœ€2 with 𝑐 = 4 in the proof of
Theorem 1.2. We conjecture that one defector already brings a non-trivial advantage.

Conjecture 1.7 (The power of one). There is a constant 𝛿(𝑝) > 0 for each constant 𝑝 > 0 such that
𝜌(𝑝, 1, 𝑛) β‰₯ 1/2 + 𝛿(𝑝) for all sufficiently large 𝑛.ΒΉ

    In the following numerical experiment, we run 𝑇 = 10000 independent trials. In each trial,
we fix a set of 𝑁 = 10000 nodes with 5001 Red and 4999 Blue (meaning 𝑐 = 1), generate a
graph from 𝐺(𝑁, 1/2), and simulate the process on the resulting graph. We record the winner
and the number of days needed to win, with their frequencies in percentages, in Table 1. Among
other things, we see that Red wins within 3 days with frequency more than .9. The source code
for the simulation along with execution instructions can be found online at https://github.com/
thbl2012/majority-dynamics-simulation.

                 T      p    Red      Blue    Winner      Last day     Count     Frequency
                 10⁴    1/2   5001     4999     Blue          3           496        4.96 %
                 10⁴    1/2   5001     4999     Blue          4            77        0.77 %
                 10⁴    1/2   5001     4999     Blue          5             3        0.03 %
                 10⁴    1/2   5001     4999     Blue          7             1        0.01 %
                 10⁴    1/2   5001     4999     Red           2            25        0.25 %
                 10⁴    1/2   5001     4999     Red           3          9313       93.13 %
                 10⁴    1/2   5001     4999     Red           4            85        0.85 %

                      Table 1: Winners and winning days with their frequencies

     Imagine that people defect from the Blue camp to the Red camp one by one. The value of the
𝑖-th defector is defined as 𝑣(𝑝, 𝑖, 𝑛) := 𝜌(𝑝, 𝑖, 𝑛) βˆ’ 𝜌(𝑝, 𝑖 βˆ’ 1, 𝑛) (where we take 𝜌(𝑝, 0, 𝑛) = 1/2).
It is intuitive to think that the value of each extra defector decreases. (Clearly defector number
𝑛/2 adds no value.)

Conjecture 1.8 (Values of defectors). For any fixed 𝑝, 𝑖, and 𝑛 large enough, 𝑣(𝑝, 𝑖, 𝑛) β‰₯ 𝑣(𝑝, 𝑖 + 1, 𝑛).Β²

    It is clear that Conjecture 1.8 implies Conjecture 1.7, with 𝛿 = .4/5 = .08, although the
simulation results above suggest that 𝛿 can be at least .43.
   In the next two sections we give the proof of our main result, Theorem 1.6.

    ΒΉBoth Conjectures 1.7 and 1.8 are proven for 𝑝 > (log 𝑛)^βˆ’1/16 in [14]. See footnote 1 a few paragraphs
below Theorem 1.6.
    Β²See footnote 1 after Conjecture 1.7.



2     Day One
Firstly, let us recall a few terms defined in Theorem 1.6.
        𝐹(𝑛, 𝑝, 𝑐) := βˆšπ‘› Ξ¦0((2𝑝𝑐 + min{𝑝, 1 βˆ’ 𝑝})/√(𝑝(1 βˆ’ 𝑝)𝑛)) βˆ’ .6(1 βˆ’ 2𝑝 + 2𝑝²)/√(𝑝(1 βˆ’ 𝑝)) βˆ’ .8/𝑝,

        𝑃1(𝑛, 𝑝, 𝑐) := (.25 + .7(1 βˆ’ 2𝑝 + 2𝑝²)Β²)(1 + 4𝑐²𝑛^βˆ’2) / 𝐹(𝑛, 𝑝, 𝑐)Β².

Lemma 2.1. Given 𝑝 ∈ (0, 1) and 𝑛, 𝑐 ∈ β„• such that 𝐹(𝑛, 𝑝, 𝑐) > 0, the election process on 𝐺 ∼ 𝐺(𝑛, 𝑝)
with |𝐡0| ≀ 𝑛/2 βˆ’ 𝑐 satisfies

                                P( |𝐡1| > 𝑛/2 βˆ’ .8βˆšπ‘›/𝑝 ) ≀ 𝑃1(𝑛, 𝑝, 𝑐).

   The core of the proof relies on some preliminary results regarding the difference of two
binomial random variables, which we discuss next.


2.1   Background on the difference of Binomial Random Variables
The difference of two binomial random variables with the same probability 𝑝 can be written as a
sum of independent random variables, each of which is either a Bin(1, 𝑝) variable or the negative
of one. A natural way to bound this sum is via a Berry–Esseen normal approximation.

Theorem 2.2 (Berry–Esseen). Let 𝑛 be any positive integer. If 𝑋1, 𝑋2, . . . , 𝑋𝑛 are independent random
variables with zero means, variances 𝜎1Β², 𝜎2Β², . . . , πœŽπ‘›Β² > 0, and absolute third moments E[|𝑋𝑖|Β³] = πœŒπ‘– < ∞,
we have:

                  sup_{π‘₯βˆˆβ„} | P(Ξ£_{𝑖=1}^{𝑛} 𝑋𝑖 ≀ π‘₯) βˆ’ Ξ¦(π‘₯/πœŽπ‘‹) | ≀ 𝐢0 Β· (Ξ£_{𝑖=1}^{𝑛} πœŒπ‘–) / πœŽπ‘‹Β³,

where πœŽπ‘‹ = (Ξ£_{𝑖=1}^{𝑛} πœŽπ‘–Β²)^{1/2} and 𝐢0 is a constant.

The original proof by Esseen [7] yielded 𝐢0 = 7.59, and this constant has been improved a
number of times. The latest work by Shevtsova [15] achieved 𝐢0 = .56, which will be used for
the rest of the paper. A direct application of this theorem gives the following lemma.
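Before the lemma, Theorem 2.2 with 𝐢0 = .56 is easy to check numerically for Bernoulli summands, the case relevant here. The sketch below is ours (function names are not from the paper); it compares the exact Kolmogorov distance between Bin(𝑛, 𝑝) and its normal approximation against the Berry–Esseen bound:

```python
from math import comb, erf, sqrt

def Phi(a):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(a / sqrt(2.0)))

def berry_esseen_gap(n, p):
    """Exact sup_x |P(Bin(n, p) <= x) - Phi((x - np)/sigma)|,
    together with the Berry-Esseen bound using C0 = .56."""
    sigma = sqrt(n * p * (1 - p))
    cdf, gap = 0.0, 0.0
    for k in range(n + 1):
        z = Phi((k - n * p) / sigma)
        gap = max(gap, abs(cdf - z))   # just below the jump at k
        cdf += comb(n, k) * p ** k * (1 - p) ** (n - k)
        gap = max(gap, abs(cdf - z))   # at the jump
    # For Bernoulli(p) summands: E|X_i - p|^3 = p(1-p)(1 - 2p + 2p^2)
    rho_sum = n * p * (1 - p) * (1 - 2 * p + 2 * p * p)
    return gap, 0.56 * rho_sum / sigma ** 3
```

For example, `berry_esseen_gap(30, 0.3)` returns an exact distance that sits below the .56 bound, as the theorem guarantees.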

Lemma 2.3. For 𝑝 ∈ (0, 1), 𝜎 = √(𝑝(1 βˆ’ 𝑝)) and 𝑛1, 𝑛2 ∈ β„• such that 𝑛1 > 𝑛2, let π‘Œ1 ∼ Bin(𝑛1, 𝑝),
π‘Œ2 ∼ Bin(𝑛2, 𝑝) be independent random variables. Then for any 𝑑 ∈ ℝ,

      P(π‘Œ1 > π‘Œ2 + 𝑑) β‰₯ 1/2 + Ξ¦0((𝑝(𝑛1 βˆ’ 𝑛2) βˆ’ 𝑑)/√(𝑝(1 βˆ’ 𝑝)(𝑛1 + 𝑛2))) βˆ’ .56(1 βˆ’ 2𝑝 + 2𝑝²)/√(𝑝(1 βˆ’ 𝑝)(𝑛1 + 𝑛2)).


Proof. Let 𝑋 = π‘Œ1 βˆ’ π‘Œ2 . By definition, we have 𝑋 = 𝑋1 + 𝑋2 + 𝑋3 + Β· Β· Β· + 𝑋𝑛1 +𝑛2 , where all the 𝑋𝑖
are independent and either 𝑋𝑖 or βˆ’π‘‹π‘– is Bin(1, 𝑝). Then E [𝑋] = 𝑖 E [𝑋𝑖 ] = 𝑝(𝑛1 βˆ’ 𝑛2 ). For all 𝑖,
                                                                Í


  Var [𝑋𝑖 ] = 𝑝(1 βˆ’ 𝑝) and E |𝑋𝑖 βˆ’ E [𝑋𝑖 ] | 3 = 𝑝(1 βˆ’ 𝑝)3 + (1 βˆ’ 𝑝)𝑝 3 = 𝑝(1 βˆ’ 𝑝)(1 βˆ’ 2𝑝 + 2𝑝 2 ).
                                                       


Applying Theorem 2.2, we have

                  P (π‘Œ1 ≀ π‘Œ2 + 𝑑) = P (𝑋 βˆ’ E [𝑋] ≀ 𝑑 βˆ’ 𝑝(𝑛1 βˆ’ 𝑛2 ))
                                              !                                             
                         𝑑 βˆ’ 𝑝(𝑛1 βˆ’ 𝑛2 )
                                                            Í
                                                                𝑖E       |𝑋𝑖 βˆ’ E [𝑋𝑖 ] | 3
                   ≀ Ξ¦      p                     + .56
                                Var [𝑋]                              Var [𝑋]3/2
                                                    !
                           𝑑 βˆ’ 𝑝(𝑛1 βˆ’ 𝑛2 )                           𝑝(1 βˆ’ 𝑝)(1 βˆ’ 2𝑝 + 2𝑝 2 )(𝑛1 + 𝑛2 )
                   = Ξ¦ p                                + .56
                           𝑝(1 βˆ’ 𝑝)(𝑛1 + 𝑛2 )                                (𝑝(1 βˆ’ 𝑝)(𝑛1 + 𝑛2 ))3/2
                                                                !
                    1       𝑝(𝑛1 βˆ’ 𝑛2 ) βˆ’ 𝑑      .56(1 βˆ’ 2𝑝 + 2𝑝 2 )
                   = βˆ’ Ξ¦0 p                    +p                    ,
                    2       𝑝(1 βˆ’ 𝑝)(𝑛1 + 𝑛2 )    𝑝(1 βˆ’ 𝑝)(𝑛1 + 𝑛2 )

and the claim follows by taking the complement event.                                                                  
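As a quick sanity check (not part of the proof), the bound of Lemma 2.3 can be compared against a Monte Carlo estimate. The sketch below is ours; the parameters 𝑛1 = 60, 𝑛2 = 50, 𝑝 = .5, 𝑑 = 0 are arbitrary choices, and Ξ¦0(π‘₯) = Ξ¦(π‘₯) βˆ’ 1/2 is computed via the error function.

```python
import math
import random

def phi0(x):
    # Phi0(x) = Phi(x) - 1/2, Phi being the standard normal CDF.
    return 0.5 * math.erf(x / math.sqrt(2))

def lemma_2_3_lower_bound(n1, n2, p, t):
    # Right-hand side of Lemma 2.3, with Shevtsova's constant C0 = .56.
    s = math.sqrt(p * (1 - p) * (n1 + n2))
    return 0.5 + phi0((p * (n1 - n2) - t) / s) - 0.56 * (1 - 2 * p + 2 * p * p) / s

def monte_carlo(n1, n2, p, t, trials=20000, seed=0):
    # Empirical frequency of {Y1 > Y2 + t} for independent binomials.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        y1 = sum(rng.random() < p for _ in range(n1))
        y2 = sum(rng.random() < p for _ in range(n2))
        hits += y1 > y2 + t
    return hits / trials

lb = lemma_2_3_lower_bound(60, 50, 0.5, 0)
est = monte_carlo(60, 50, 0.5, 0)
```

In this range the guaranteed lower bound sits a few percentage points below the empirical frequency, as expected from the worst-case Berry–Esseen penalty.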

Lemma 2.4. Let 𝑝 ∈ (0, 1) be a constant, and let 𝑋1 ∼ Bin(𝑛1, 𝑝) and 𝑋2 ∼ Bin(𝑛2, 𝑝) be independent
random variables. Then for any integer 𝑑,

        P(𝑋1 = 𝑋2 + 𝑑) ≀ 1.12(1 βˆ’ 2𝑝 + 2𝑝²) / √(𝑝(1 βˆ’ 𝑝)(𝑛1 + 𝑛2)).

Proof. Let 𝑛 = 𝑛1 + 𝑛2 and πœ‡ = E[𝑋1] βˆ’ E[𝑋2] = 𝑝(𝑛1 βˆ’ 𝑛2). Fix πœ€ ∈ (0, 1). By the same
computations as in Lemma 2.3, we have

        P(𝑋1 βˆ’ 𝑋2 ≀ 𝑑 βˆ’ πœ€) β‰₯ Ξ¦( (𝑑 βˆ’ πœ‡ βˆ’ πœ€) / √(𝑝(1 βˆ’ 𝑝)𝑛) ) βˆ’ .56(1 βˆ’ 2𝑝 + 2𝑝²) / √(𝑝(1 βˆ’ 𝑝)𝑛),
        P(𝑋1 βˆ’ 𝑋2 < 𝑑 + πœ€) ≀ Ξ¦( (𝑑 βˆ’ πœ‡ + πœ€) / √(𝑝(1 βˆ’ 𝑝)𝑛) ) + .56(1 βˆ’ 2𝑝 + 2𝑝²) / √(𝑝(1 βˆ’ 𝑝)𝑛).

It follows that

        P(𝑋1 = 𝑋2 + 𝑑) ≀ P(𝑑 βˆ’ πœ€ < 𝑋1 βˆ’ 𝑋2 < 𝑑 + πœ€)
            ≀ Ξ¦( (𝑑 βˆ’ πœ‡ + πœ€) / √(𝑝(1 βˆ’ 𝑝)𝑛) ) βˆ’ Ξ¦( (𝑑 βˆ’ πœ‡ βˆ’ πœ€) / √(𝑝(1 βˆ’ 𝑝)𝑛) ) + 1.12(1 βˆ’ 2𝑝 + 2𝑝²) / √(𝑝(1 βˆ’ 𝑝)𝑛).

Letting πœ€ β†’ 0, we obtain the desired claim.                                                    β–‘


Proof of Lemma 2.1
Recall that 𝑅1 = 𝑛 βˆ’ 𝐡1. Our goal is to give an upper bound on the probability that
𝑅1 < 𝑛/2 + π‘‘βˆšπ‘› for any given 𝑑. Recall that 𝑅1 = Ξ£π‘£βˆˆπ‘‰ I1(𝑣), where I1(𝑣) is 1 if 𝑣 is Red
after Day 1 and 0 otherwise. Since the indicators are not independent, a natural choice
for bounding their sum is Chebyshev's inequality. We proceed in two steps:

   1. Give a lower bound on E[𝑅1] by lower-bounding each term E[I1(𝑣)].

   2. Give an upper bound on Var[𝑅1] by upper-bounding each Var[I1(𝑣)] and each covariance Cov[I1(𝑣), I1(𝑣′)].


Let 𝜎 := √(𝑝(1 βˆ’ 𝑝)), which appears in most equations in this proof. Note that 1 βˆ’ 2𝑝 + 2𝑝² = 1 βˆ’ 2𝜎².

Claim 2.5. E[𝑅1] β‰₯ 𝑛/2 + 𝐹1(𝑛, 𝑝, 𝑐)βˆšπ‘›, where

        𝐹1(𝑛, 𝑝, 𝑐) = βˆšπ‘› Ξ¦0( (2𝑝𝑐 + min{𝑝, 1 βˆ’ 𝑝}) / (πœŽβˆšπ‘›) ) βˆ’ .6(1 βˆ’ 2𝜎²)/𝜎 = 𝐹(𝑛, 𝑝, 𝑐) + .8/𝑝.

Proof. For 𝑣 ∈ 𝑉 and 𝑆 βŠ‚ 𝑉, let 𝑑𝑆(𝑣) be the number of neighbors 𝑣 has in 𝑆. By the majority
and tie-breaking rules, we have for each 𝑣 ∈ 𝑉,

                                𝑣 ∈ 𝑅1 ⇔ 𝑑𝑅0(𝑣) > 𝑑𝐡0(𝑣) βˆ’ I0(𝑣).                                (2.1)

Note that 𝑑𝑅0(𝑣) ∼ Bin(𝑅0 βˆ’ I0(𝑣), 𝑝) and 𝑑𝐡0(𝑣) ∼ Bin(𝐡0 + I0(𝑣) βˆ’ 1, 𝑝). By Lemma 2.3, we
have:

        E[I1(𝑣)] = P(𝑣 ∈ 𝑅1) = P( 𝑑𝑅0(𝑣) > 𝑑𝐡0(𝑣) βˆ’ I0(𝑣) )
            β‰₯ 1/2 + Ξ¦0( (𝑝(𝑅0 βˆ’ 𝐡0 + 1 βˆ’ 2I0(𝑣)) + I0(𝑣)) / (πœŽβˆš(𝑛 βˆ’ 1)) ) βˆ’ .56(1 βˆ’ 2𝜎²) / (πœŽβˆš(𝑛 βˆ’ 1))
            = 1/2 + Ξ¦0( (2𝑝𝑐 + 𝑝𝑣) / (πœŽβˆš(𝑛 βˆ’ 1)) ) βˆ’ .56(1 βˆ’ 2𝜎²) / (πœŽβˆš(𝑛 βˆ’ 1))
            β‰₯ 1/2 + Ξ¦0( (2𝑝𝑐 + min{𝑝, 1 βˆ’ 𝑝}) / (πœŽβˆšπ‘›) ) βˆ’ .6(1 βˆ’ 2𝜎²) / (πœŽβˆšπ‘›),

where 𝑝𝑣 := 𝑝(1 βˆ’ I0(𝑣)) + (1 βˆ’ 𝑝)I0(𝑣) β‰₯ min{𝑝, 1 βˆ’ 𝑝}, hence the last inequality. Summing over
all 𝑣 ∈ 𝑉, we get
        E[𝑅1] β‰₯ 𝑛/2 + [ βˆšπ‘› Ξ¦0( (2𝑝𝑐 + min{𝑝, 1 βˆ’ 𝑝}) / (πœŽβˆšπ‘›) ) βˆ’ .6(1 βˆ’ 2𝜎²) / √(𝑝(1 βˆ’ 𝑝)) ] βˆšπ‘›.

The proof is complete.                                                                                                                             
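Claim 2.5 can be illustrated empirically by sampling Day 1 directly. The sketch below is our own illustration, not part of the argument: it assumes 𝑛 = 400, 𝑝 = 1/2, 𝑐 = 5 and the tie-breaking rule of Eq. (2.1) (a vertex keeps its color on a tie), and compares the average of 𝑅1 over a few samples against 𝑛/2 + 𝐹1(𝑛, 𝑝, 𝑐)βˆšπ‘› with a small slack for Monte Carlo noise.

```python
import math
import random

def day_one_red_count(n, p, c, rng):
    # Sample G(n, p); vertices 0 .. n/2+c-1 start Red, the rest Blue.
    red = [v < n // 2 + c for v in range(n)]
    dr, db = [0] * n, [0] * n
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                dr[u] += red[v]; db[u] += not red[v]
                dr[v] += red[u]; db[v] += not red[u]
    # Majority rule with ties keeping the current color, as in Eq. (2.1).
    return sum(dr[v] > db[v] - red[v] for v in range(n))

n, p, c, trials = 400, 0.5, 5, 20
rng = random.Random(1)
mean_r1 = sum(day_one_red_count(n, p, c, rng) for _ in range(trials)) / trials

# F1(n, p, c) as in Claim 2.5 (our reading of the formula).
sigma = math.sqrt(p * (1 - p))
phi0 = lambda x: 0.5 * math.erf(x / math.sqrt(2))
f1 = math.sqrt(n) * phi0((2 * p * c + min(p, 1 - p)) / (sigma * math.sqrt(n))) \
     - 0.6 * (1 - 2 * sigma**2) / sigma
claim_lower = n / 2 + f1 * math.sqrt(n)
```

Typically mean_r1 lands a handful of vertices above claim_lower, consistent with the claim being a slightly lossy worst-case bound.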
Claim 2.6. Var[𝑅1] ≀ .25𝑛 + .7(1 βˆ’ 2𝜎²)Β²(𝑛 + 4𝑐²𝑛⁻¹).




Proof. We first have:

        Var[𝑅1] = Ξ£π‘£βˆˆπ‘‰ Var[I1(𝑣)] + 2 Σ𝑣1<𝑣2 Cov[I1(𝑣1), I1(𝑣2)].

The variances Var[I1(𝑣)] are easy to handle since I1(𝑣) is a Bernoulli variable:

        Var[I1(𝑣)] = E[I1(𝑣)](1 βˆ’ E[I1(𝑣)]) = .25 βˆ’ (E[I1(𝑣)] βˆ’ .5)Β² ≀ .25.                      (2.2)

Bounding the covariance Cov[I1(𝑣1), I1(𝑣2)] for two distinct vertices 𝑣1, 𝑣2 requires a bit more
care, as the indicators are dependent. By definition,

        Cov[I1(𝑣1), I1(𝑣2)] = P(𝑣1, 𝑣2 ∈ 𝑅1) βˆ’ P(𝑣1 ∈ 𝑅1) P(𝑣2 ∈ 𝑅1).

Consider the event {𝑣1, 𝑣2 ∈ 𝑅1}; P(𝑣1, 𝑣2 ∈ 𝑅1) can be written as

        P(𝑣1, 𝑣2 ∈ 𝑅1 | 𝑣1 ∼ 𝑣2) P(𝑣1 ∼ 𝑣2) + P(𝑣1, 𝑣2 ∈ 𝑅1 | 𝑣1 ≁ 𝑣2) P(𝑣1 ≁ 𝑣2).

Notice that after we reveal the adjacency between 𝑣1 and 𝑣2, the remaining vertices in the
neighborhoods of 𝑣1 and 𝑣2 are independent. Letting π‘Žπ‘– = P(𝑣𝑖 ∈ 𝑅1 | 𝑣1 ∼ 𝑣2) and 𝑏𝑖 = P(𝑣𝑖 ∈ 𝑅1 | 𝑣1 ≁ 𝑣2), we have

        P(𝑣1, 𝑣2 ∈ 𝑅1) = π‘π‘Ž1π‘Ž2 + (1 βˆ’ 𝑝)𝑏1𝑏2.

Splitting up the other two events similarly gives P(𝑣𝑖 ∈ 𝑅1) = π‘π‘Žπ‘– + (1 βˆ’ 𝑝)𝑏𝑖 for 𝑖 = 1, 2. Putting
the above together, we can write Cov[I1(𝑣1), I1(𝑣2)] as

        π‘π‘Ž1π‘Ž2 + (1 βˆ’ 𝑝)𝑏1𝑏2 βˆ’ (π‘π‘Ž1 + (1 βˆ’ 𝑝)𝑏1)(π‘π‘Ž2 + (1 βˆ’ 𝑝)𝑏2)
            = 𝑝(1 βˆ’ 𝑝)(π‘Ž1 βˆ’ 𝑏1)(π‘Ž2 βˆ’ 𝑏2) = 𝜎²(π‘Ž1 βˆ’ 𝑏1)(π‘Ž2 βˆ’ 𝑏2).                               (2.3)

To bound π‘Ž 𝑖 βˆ’ 𝑏 𝑖 , we use logical reasoning to express it as something separate from the current
context. We start with 𝑖 = 1. The analysis for π‘Ž 2 βˆ’ 𝑏2 is analogous. Assume 𝑣1 ∼ 𝑣 2 for now, then

                                   π‘Ž 1 = P (𝑑𝑅0 (𝑣 1 ) β‰₯ 𝑑𝐡0 (𝑣 1 ) βˆ’ I0 (𝑣1 ) + 1) ,

by Eq. (2.1). Define two binomial random variables:
                                                                                                            
      𝑋 = 𝑑𝑅0 (𝑣 1 ) βˆ’ I0 (𝑣 2 ) = 𝑁(𝑣 1 ) ∩ 𝑅 0 \ {𝑣 1 , 𝑣2 } ∼ Bin 𝑅 0 βˆ’ I0 (𝑣 1 ) βˆ’ I0 (𝑣 2 ), 𝑝 ,
                                                                                                                
       π‘Œ = 𝑑𝐡0 (𝑣 1 ) + I0 (𝑣 2 ) βˆ’ 1 = 𝑁(𝑣1 ) ∩ 𝐡0 \ {𝑣1 , 𝑣2 } ∼ Bin 𝐡0 + I0 (𝑣 1 ) + I0 (𝑣 2 ) βˆ’ 2, 𝑝 .

We have

          π‘Ž 1 = P (𝑋 + I0 (𝑣 2 ) + I0 (𝑣 1 ) β‰₯ π‘Œ + 2 βˆ’ I0 (𝑣2 )) = P (𝑋 βˆ’ π‘Œ β‰₯ 1 βˆ’ I0 (𝑣 1 ) βˆ’ J0 (𝑣 2 )) .

Similarly, when conditioning on 𝑣1 6∼ 𝑣 2 , we get 𝑏1 = P (𝑋 βˆ’ π‘Œ β‰₯ 1 βˆ’ I0 (𝑣 1 )) .


Let us perform a case analysis on the initial color of 𝑣2.

  𝑣2 ∈ 𝑅0 β‡’ π‘Ž1 βˆ’ 𝑏1 = P(𝑋 βˆ’ π‘Œ β‰₯ βˆ’I0(𝑣1)) βˆ’ P(𝑋 βˆ’ π‘Œ β‰₯ 1 βˆ’ I0(𝑣1)) = P(𝑋 βˆ’ π‘Œ = βˆ’I0(𝑣1)).
  𝑣2 ∈ 𝐡0 β‡’ π‘Ž1 βˆ’ 𝑏1 = P(𝑋 βˆ’ π‘Œ β‰₯ 2 βˆ’ I0(𝑣1)) βˆ’ P(𝑋 βˆ’ π‘Œ β‰₯ 1 βˆ’ I0(𝑣1)) = βˆ’P(𝑋 βˆ’ π‘Œ = 1 βˆ’ I0(𝑣1)).

Both cases give us

        π‘Ž1 βˆ’ 𝑏1 = J0(𝑣2) P( 𝑋 βˆ’ π‘Œ = 1 βˆ’ I0(𝑣1) βˆ’ I0(𝑣2) ),

where 𝑋 ∼ Bin(𝑅0 βˆ’ I0(𝑣1) βˆ’ I0(𝑣2), 𝑝) and π‘Œ ∼ Bin(𝐡0 + I0(𝑣1) + I0(𝑣2) βˆ’ 2, 𝑝).
We apply the same analysis to π‘Ž2 and 𝑏2 and use Eq. (2.3) to get

        Cov[I1(𝑣1), I1(𝑣2)] = 𝜎² J0(𝑣1) J0(𝑣2) P( 𝑋 βˆ’ π‘Œ = 1 βˆ’ I0(𝑣1) βˆ’ I0(𝑣2) )Β².

If the initial colors of 𝑣1 and 𝑣2 are different, then Cov[I1(𝑣1), I1(𝑣2)] ≀ 0, so we can exclude
such pairs from the upper bound. When 𝑣1 and 𝑣2 have the same initial color, Lemma 2.4 gives

        Cov[I1(𝑣1), I1(𝑣2)] ≀ 𝜎² ( 1.12(1 βˆ’ 2𝜎²) / (𝜎√(𝑛 βˆ’ 2)) )Β² ≀ 1.4(1 βˆ’ 2𝜎²)Β² / 𝑛.

Hence

        Σ𝑣1<𝑣2 Cov[I1(𝑣1), I1(𝑣2)] ≀ ( 𝑅0(𝑅0 βˆ’ 1)/2 + 𝐡0(𝐡0 βˆ’ 1)/2 ) Β· 1.4(1 βˆ’ 2𝜎²)Β²/𝑛
            = (1.4/𝑛)(𝑛²/4 + 𝑐² βˆ’ 𝑛/2)(1 βˆ’ 2𝜎²)Β²
            ≀ .35(1 βˆ’ 2𝜎²)Β²(𝑛 + 4𝑐²/𝑛).
Combining Equation (2.2) and the covariance bound above with the variance decomposition yields

        Var[𝑅1] ≀ .25𝑛 + .7(1 βˆ’ 2𝜎²)Β²(𝑛 + 4𝑐²𝑛⁻¹).                                               (2.4)

The proof is complete.                                                                                                             
From Claims 2.5 and 2.6, Chebyshev's inequality gives

        P( 𝐡1 > 𝑛/2 βˆ’ (.8/𝑝)βˆšπ‘› ) = P( 𝑅1 < 𝑛/2 + (.8/𝑝)βˆšπ‘› )
            ≀ Var[𝑅1] / ( E[𝑅1] βˆ’ 𝑛/2 βˆ’ (.8/𝑝)βˆšπ‘› )Β²
            ≀ [ .25𝑛 + .7(1 βˆ’ 2𝜎²)Β²(𝑛 + 4𝑐²𝑛⁻¹) ] / [ 𝑛 ( βˆšπ‘› Ξ¦0( (2𝑝𝑐 + min{𝑝, 1 βˆ’ 𝑝})/(πœŽβˆšπ‘›) ) βˆ’ .6(1 βˆ’ 2𝜎²)/√(𝑝(1 βˆ’ 𝑝)) βˆ’ .8/𝑝 )Β² ].


Dividing both the numerator and the denominator by 𝑛 and substituting 𝜎 = √(𝑝(1 βˆ’ 𝑝)), we get back
𝑃1(𝑛, 𝑝, 𝑐) and finish the proof of Lemma 2.1. Part one of Theorem 1.6 is complete.


3    Day Two and after
Next, we analyze the situation after the first day. As mentioned in Section 1.6, the previous
argument is unusable due to the loss of independence from day 2 onwards. Instead, we use
β€œshrinking arguments” to argue that it is likely for the Blue camp to repeatedly shrink to empty,
regardless of the choice of its members, due to the structure of 𝐺. The core of our shrinking
argument is Hoeffding’s inequality, a classical result that gives exponentially small probability
tails for sums of independent random variables.
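Before the formal lemmas, the dynamics is easy to observe empirically. The following sketch is our own illustration (not used in the proofs): it runs the synchronous majority update, with ties keeping the current color, on fresh 𝐺(𝑛, 𝑝) samples with 𝑛 = 200, 𝑝 = 1/2 and an initial Red surplus of 𝑐 = 5, and records how often Red reaches consensus.

```python
import random

def run_election(n, p, c, rng, max_days=50):
    # Sample G(n, p) as adjacency sets.
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    red = set(range(n // 2 + c))        # Red starts with n/2 + c members.
    for _ in range(max_days):
        new_red = set()
        for v in range(n):
            dr = sum(u in red for u in adj[v])
            db = len(adj[v]) - dr
            # Majority rule; on an exact tie a vertex keeps its color.
            if dr > db or (dr == db and v in red):
                new_red.add(v)
        red = new_red
        if len(red) in (0, n):
            break
    return len(red)

rng = random.Random(7)
red_wins = sum(run_election(200, 0.5, 5, rng) == 200 for _ in range(20))
```

With these parameters Red essentially always wins within a few days, matching the "power of few" phenomenon the analysis below makes rigorous.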

Theorem 3.1 (Hoeffding's inequality). Let 𝑋1, …, 𝑋𝑛 be independent random variables and let
π‘Ž1, …, π‘Žπ‘› and 𝑏1, …, 𝑏𝑛 be real numbers such that supp(𝑋𝑖) βŠ† [π‘Žπ‘–, 𝑏𝑖] for all 𝑖 = 1, …, 𝑛. Then for
𝑋 = 𝑋1 + 𝑋2 + Β· Β· Β· + 𝑋𝑛, we have

        max{ P(𝑋 β‰₯ E[𝑋] + 𝑑), P(𝑋 ≀ E[𝑋] βˆ’ 𝑑) } ≀ exp( βˆ’2𝑑² / Σ𝑛𝑖=1 (𝑏𝑖 βˆ’ π‘Žπ‘–)Β² ).


    The proof of Hoeffding’s inequality is available in several graduate level probability textbooks,
e. g., [18]. The original proof given by Hoeffding appeared in [9].
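As a quick numeric illustration (ours, not from the paper), applying the theorem to a Bernoulli sum with π‘Žπ‘– = 0 and 𝑏𝑖 = 1 gives the familiar tail exp(βˆ’2𝑑²/𝑛), which comfortably dominates the empirical tail:

```python
import math
import random

def hoeffding_tail(n, t):
    # Theorem 3.1 specialized to n independent [0, 1]-valued summands.
    return math.exp(-2 * t * t / n)

n, p, t, trials = 500, 0.3, 40, 5000
rng = random.Random(3)
exceed = sum(
    sum(rng.random() < p for _ in range(n)) >= n * p + t for _ in range(trials)
)
empirical = exceed / trials
```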
    Before discussing our shrinking arguments, we quickly record a useful lemma to simplify
binomial coefficients that may appear in union bounds.
Lemma 3.2. For any integers 𝑛 β‰₯ 1 and 0 ≀ π‘˜ ≀ 𝑛, we have (𝑛 choose π‘˜) < 2^𝑛/βˆšπ‘›.

Proof. This is well known. Observing that (𝑛 choose π‘˜) ≀ (𝑛 choose βŒŠπ‘›/2βŒ‹), the case of even 𝑛 is stated in [11, 2.6.2
Proposition]. The case of odd 𝑛 = 2π‘š βˆ’ 1 follows immediately:

        (2π‘š βˆ’ 1 choose π‘š βˆ’ 1) = (1/2)(2π‘š choose π‘š) < (1/2) Β· 2^{2π‘š}/√(2π‘š) = 2^{2π‘šβˆ’1}/√(2π‘š) < 2^{2π‘šβˆ’1}/√(2π‘š βˆ’ 1).      β–‘
    A simple yet useful shrinking argument is that, in the 𝐺(𝑛, 𝑝) model, with high probability
all vertices of 𝐺 have many neighbors, so a small enough Blue camp cannot form a majority in any
vertex's neighborhood and thus inevitably vanishes the next day.

Lemma 3.3 (Part 4 of Theorem 1.6). For 𝑝 ∈ (0, 1) and 𝑛 ∈ β„•>1, with probability at least

        1 βˆ’ 𝑛 exp( βˆ’(2/9)𝑝²(𝑛 βˆ’ 1) ) = 1 βˆ’ 𝑃4(𝑛, 𝑝),

𝐺 is such that all vertices have more than (2/3)𝑝(𝑛 βˆ’ 1) neighbors, thus any choice of the Blue camp of at
most (1/3)𝑝(𝑛 βˆ’ 1) members vanishes the next day.

Proof. In a 𝐺(𝑛, 𝑝) graph, 𝑑(𝑣) is a sum of (𝑛 βˆ’ 1) Bin(1, 𝑝) random variables, so Theorem 3.1
implies that for any 𝑒 ∈ 𝑉,

        P( 𝑑(𝑒) ≀ (2/3)𝑝(𝑛 βˆ’ 1) ) ≀ P( 𝑑(𝑒) βˆ’ E[𝑑(𝑒)] ≀ βˆ’(1/3)𝑝(𝑛 βˆ’ 1) ) ≀ exp( βˆ’(2/9)𝑝²(𝑛 βˆ’ 1) ).


By a union bound, the probability that all vertices have more than (2/3)𝑝(𝑛 βˆ’ 1) neighbors is at least
1 βˆ’ 𝑛 exp( βˆ’(2/9)𝑝²(𝑛 βˆ’ 1) ) = 1 βˆ’ 𝑃4(𝑛, 𝑝). Given this, a Blue camp of size at most (1/3)𝑝(𝑛 βˆ’ 1) surely vanishes
the next day, since it cannot form a majority in the neighborhood of any vertex. The result then
follows.                                                                                             β–‘
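The minimum-degree statement is easy to observe on samples; the sketch below (an illustration with 𝑛 = 300 and 𝑝 = 1/2 chosen by us) checks a few draws of 𝐺(𝑛, 𝑝).

```python
import random

def min_degree_gnp(n, p, rng):
    # Sample G(n, p) and return its minimum degree.
    deg = [0] * n
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                deg[u] += 1
                deg[v] += 1
    return min(deg)

n, p = 300, 0.5
rng = random.Random(11)
threshold = 2 * p * (n - 1) / 3      # (2/3) p (n - 1), here about 99.7
samples = [min_degree_gnp(n, p, rng) for _ in range(3)]
```

For these parameters the failure probability bound 𝑛 exp(βˆ’(2/9)𝑝²(𝑛 βˆ’ 1)) is already far below 10⁻⁴, so every sampled graph should clear the threshold.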

    This simple lemma completes the fourth part of Theorem 1.6. The arguments for Parts 2 and
3 are slightly more involved, but both follow directly from the next lemma.

Lemma 3.4. Let 𝑝 ∈ (0, 1) and 𝑛, 𝑛0 ∈ β„• with 𝑛0 < 𝑛/2. Then for all π‘š ∈ β„• with π‘š ≀ 𝑛, with probability at least

        1 βˆ’ (4^𝑛/𝑛) exp( βˆ’2𝑝²(𝑛 βˆ’ 2𝑛0 βˆ’ 1)Β²π‘š / (𝑛 + π‘š βˆ’ 2) ),

𝐺 is such that any choice of the Blue camp of at most 𝑛0 members shrinks to below π‘š in the next day.

Proof of Lemma 3.4. Consider a subset 𝑆 of 𝑉 with π‘š elements. We first bound the probability
that 𝑆 entirely turns Blue the next day. Let (𝑅, 𝐡) be the initial coloring with |𝐡| = 𝑛0 < 𝑛/2.
Similarly to the usual J𝑑, let J(𝑣) = 1 if 𝑣 ∈ 𝑅 and βˆ’1 otherwise. For each 𝑣 ∈ 𝑉, let dif(𝑣) :=
|𝑁(𝑣) ∩ 𝑅| βˆ’ |𝑁(𝑣) ∩ 𝐡| = Ξ£π‘’βˆˆπ‘‰ J(𝑒)π‘Šπ‘’π‘£, and let dif(𝑆) := Ξ£π‘£βˆˆπ‘† dif(𝑣). We break down dif(𝑆) as
follows:

        dif(𝑆) = Ξ£π‘£βˆˆπ‘‰ [ 1𝑆(𝑣) Ξ£π‘’βˆˆπ‘‰ J(𝑒)π‘Šπ‘’π‘£ ] = Σ𝑒≠𝑣 [ J(𝑒)1𝑆(𝑣) + J(𝑣)1𝑆(𝑒) ] π‘Šπ‘’π‘£.              (3.1)

This is now a sum of independent variables, so we can apply Theorem 3.1:

        P(𝑆 βŠ† 𝐡1 | 𝐡0) ≀ P( dif(𝑆) ≀ 0 ) ≀ exp[ βˆ’2 E[dif(𝑆)]Β² / Σ𝑒≠𝑣 ( sup supp(π‘‹π‘’π‘£π‘Šπ‘’π‘£) βˆ’ inf supp(π‘‹π‘’π‘£π‘Šπ‘’π‘£) )Β² ],     (3.2)

where 𝑋𝑒𝑣 := J(𝑒)1𝑆(𝑣) + J(𝑣)1𝑆(𝑒). The following table sums up the respective values of 𝑋𝑒𝑣 for
(𝑒, 𝑣) among the sets 𝑅 ∩ 𝑆, 𝐡 ∩ 𝑆, 𝑅 ∩ 𝑆^𝑐 and 𝐡 ∩ 𝑆^𝑐.

                                 𝑒 \ 𝑣      𝑅 ∩ 𝑆   𝐡 ∩ 𝑆   𝑅 ∩ 𝑆^𝑐   𝐡 ∩ 𝑆^𝑐
                                 𝑅 ∩ 𝑆        2       0        1        βˆ’1
                                 𝐡 ∩ 𝑆        0      βˆ’2        1        βˆ’1                              (3.3)
                                 𝑅 ∩ 𝑆^𝑐      1       1        0         0
                                 𝐡 ∩ 𝑆^𝑐     βˆ’1      βˆ’1        0         0

Substituting back into Equation (3.1), we get

        dif(𝑆) = Ξ£{𝑒,𝑣}βŠ‚π‘†βˆ©π‘… 2π‘Šπ‘’π‘£ βˆ’ Ξ£{𝑒,𝑣}βŠ‚π‘†βˆ©π΅ 2π‘Šπ‘’π‘£ + Ξ£π‘’βˆˆπ‘† Ξ£π‘£βˆˆπ‘…\𝑆 π‘Šπ‘’π‘£ βˆ’ Ξ£π‘’βˆˆπ‘† Ξ£π‘£βˆˆπ΅\𝑆 π‘Šπ‘’π‘£.

For convenience in the following computations, let π‘šπ‘… := |𝑆 ∩ 𝑅| and π‘šπ΅ := |𝑆 ∩ 𝐡|.


Note that |𝐡 \ 𝑆| = 𝑛0 βˆ’ π‘šπ΅ and |𝑅 \ 𝑆| = 𝑛 βˆ’ 𝑛0 βˆ’ π‘šπ‘…. Let π‘Š be a Bin(1, 𝑝) random variable.
Observe that dif(𝑆) is a sum of π‘šπ‘…(π‘šπ‘… βˆ’ 1)/2 copies of 2π‘Š, π‘šπ΅(π‘šπ΅ βˆ’ 1)/2 copies of βˆ’2π‘Š,
π‘š(𝑛 βˆ’ 𝑛0 βˆ’ π‘šπ‘…) copies of π‘Š, and π‘š(𝑛0 βˆ’ π‘šπ΅) copies of βˆ’π‘Š, all independent. Therefore

        E[dif(𝑆)] = 𝑝( 2 Β· π‘šπ‘…(π‘šπ‘… βˆ’ 1)/2 βˆ’ 2 Β· π‘šπ΅(π‘šπ΅ βˆ’ 1)/2 + π‘š(𝑛 βˆ’ 𝑛0 βˆ’ π‘šπ‘…) βˆ’ π‘š(𝑛0 βˆ’ π‘šπ΅) )
            = 𝑝[ π‘šπ‘…Β² βˆ’ π‘šπ΅Β² βˆ’ π‘šπ‘… + π‘šπ΅ + π‘š(𝑛 βˆ’ 2𝑛0 βˆ’ π‘šπ‘… + π‘šπ΅) ]
            = 𝑝[ (π‘šπ‘… βˆ’ π‘šπ΅)(π‘šπ‘… + π‘šπ΅) βˆ’ π‘š(π‘šπ‘… βˆ’ π‘šπ΅) + π‘š(𝑛 βˆ’ 2𝑛0) βˆ’ (π‘šπ‘… βˆ’ π‘šπ΅) ]     (using π‘šπ‘… + π‘šπ΅ = π‘š)
            = 𝑝[ π‘š(𝑛 βˆ’ 2𝑛0) βˆ’ π‘š + 2π‘šπ΅ ] β‰₯ 𝑝[ π‘š(𝑛 βˆ’ 2𝑛0) βˆ’ π‘š ] = π‘π‘š(𝑛 βˆ’ 2𝑛0 βˆ’ 1).

Moreover, supp(π‘Šπ‘’π‘£) = {0, 1}, so supp(π‘‹π‘’π‘£π‘Šπ‘’π‘£) = {0, 𝑋𝑒𝑣}, thus

        sup supp(π‘‹π‘’π‘£π‘Šπ‘’π‘£) βˆ’ inf supp(π‘‹π‘’π‘£π‘Šπ‘’π‘£) = |𝑋𝑒𝑣|   for all 𝑒 β‰  𝑣.

Therefore the denominator of the exponent in Equation (3.2) is

        Σ𝑒≠𝑣 ( sup supp(π‘‹π‘’π‘£π‘Šπ‘’π‘£) βˆ’ inf supp(π‘‹π‘’π‘£π‘Šπ‘’π‘£) )Β² = Σ𝑒≠𝑣 𝑋𝑒𝑣²
            = Ξ£{𝑒,𝑣}βŠ‚π‘†βˆ©π‘… 2Β² + Ξ£{𝑒,𝑣}βŠ‚π‘†βˆ©π΅ (βˆ’2)Β² + Ξ£π‘’βˆˆπ‘† Ξ£π‘£βˆˆπ‘…\𝑆 1Β² + Ξ£π‘’βˆˆπ‘† Ξ£π‘£βˆˆπ΅\𝑆 (βˆ’1)Β²
            = 4 Β· π‘šπ‘…(π‘šπ‘… βˆ’ 1)/2 + 4 Β· π‘šπ΅(π‘šπ΅ βˆ’ 1)/2 + π‘š(𝑛 βˆ’ 𝑛0 βˆ’ π‘šπ‘…) + π‘š(𝑛0 βˆ’ π‘šπ΅)
            = 2π‘šπ‘…Β² + 2π‘šπ΅Β² βˆ’ 2π‘šπ‘… βˆ’ 2π‘šπ΅ + π‘š(𝑛 βˆ’ π‘šπ‘… βˆ’ π‘šπ΅) = 2(π‘šπ‘… + π‘šπ΅)Β² βˆ’ 4π‘šπ‘…π‘šπ΅ βˆ’ 2π‘š + π‘š(𝑛 βˆ’ π‘š)
            ≀ 2(π‘šπ‘… + π‘šπ΅)Β² βˆ’ 2π‘š + π‘š(𝑛 βˆ’ π‘š) = 2π‘šΒ² βˆ’ 2π‘š + π‘š(𝑛 βˆ’ π‘š) = π‘š(𝑛 + π‘š βˆ’ 2).

Substituting this bound into Equation (3.2), we get

        P(𝑆 βŠ† 𝐡1 | 𝐡0 = 𝐡) ≀ exp[ βˆ’2( π‘π‘š(𝑛 βˆ’ 2𝑛0 βˆ’ 1) )Β² / (π‘š(𝑛 + π‘š βˆ’ 2)) ] = exp[ βˆ’2𝑝²(𝑛 βˆ’ 2𝑛0 βˆ’ 1)Β²π‘š / (𝑛 + π‘š βˆ’ 2) ].

Applying a double union bound over the choices of 𝑆 and 𝐡, and noting that by Lemma 3.2 the number
of choices (𝑛 choose 𝑛0)(𝑛 choose π‘š) is at most 4^𝑛/𝑛, we have

        P( βˆƒπ΅0 ∈ (𝑉 choose 𝑛0), βˆƒπ‘† ∈ (𝑉 choose π‘š), 𝑆 βŠ‚ 𝐡1 ) ≀ (4^𝑛/𝑛) exp( βˆ’2𝑝²(𝑛 βˆ’ 2𝑛0 βˆ’ 1)Β²π‘š / (𝑛 + π‘š βˆ’ 2) ).
Taking the complement event, we get the desired result.                                                       
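Even though the 4^𝑛 term looks alarming, the exponent dominates once 𝑛 βˆ’ 2𝑛0 and π‘š are proportional to 𝑛. A log-space evaluation (our illustration, with the arbitrary proportions 𝑛0 = .4𝑛 and π‘š = .4𝑛) makes this concrete:

```python
import math

def log_failure_bound(n, n0, m, p):
    # log of (4^n / n) * exp(-2 p^2 (n - 2 n0 - 1)^2 m / (n + m - 2)),
    # i.e. the log of the failure probability in Lemma 3.4.
    return (
        n * math.log(4)
        - math.log(n)
        - 2 * p**2 * (n - 2 * n0 - 1) ** 2 * m / (n + m - 2)
    )

# With n0 = m = .4 n the exponent beats n log 4 by a growing margin.
values = [log_failure_bound(n, int(0.4 * n), int(0.4 * n), 0.5) for n in (1000, 10000)]
```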
    Lemma 3.4 is sufficient to prove parts 2 and 3 of Theorem 1.6. The following lemmas are
direct corollaries of Lemma 3.4.


Lemma 3.5 (Part 2 of Theorem 1.6). In the election process on 𝐺 ∼ 𝐺(𝑛, 𝑝), with probability at least

        1 βˆ’ 𝑛⁻¹ exp(βˆ’.025𝑛) = 1 βˆ’ 𝑃2(𝑛),                                                       (3.4)

𝐺 is such that any choice of the Blue camp of at most 𝑛/2 βˆ’ (.8/𝑝)βˆšπ‘› = 𝑛1^𝐡 members shrinks to size at
most .4𝑛 = 𝑛2^𝐡 the next day.
                                            √ 
Proof of Lemma 3.5. Let 𝑛2 := βŒŠπ‘›/2 βˆ’ (.8/𝑝)βˆšπ‘›βŒ‹ and π‘š := ⌈.4π‘›βŒ‰. By Lemma 3.4, 𝐺 is such that
every Blue set of at most 𝑛2 vertices shrinks to size at most π‘š βˆ’ 1 with probability at least

        1 βˆ’ (4^𝑛/𝑛) exp( βˆ’2𝑝²(𝑛 βˆ’ 2𝑛2 βˆ’ 1)Β²π‘š/(𝑛 + π‘š βˆ’ 2) ) = 1 βˆ’ (1/𝑛) exp( βˆ’[ 2𝑝²(𝑛 βˆ’ 2𝑛2 βˆ’ 1)Β²π‘š/(𝑛 + π‘š βˆ’ 2) βˆ’ 2𝑛 log 2 ] ).

We have

        π‘š/(𝑛 + π‘š βˆ’ 2) β‰₯ π‘š/(𝑛 + π‘š) β‰₯ 2/7,   and   𝑛 βˆ’ 2𝑛2 βˆ’ 1 β‰₯ (1.6/𝑝)βˆšπ‘› βˆ’ 1.
Therefore we can bound the exponent on the right-hand side above as follows:

        2𝑝²(𝑛 βˆ’ 2𝑛2 βˆ’ 1)Β²π‘š/(𝑛 + π‘š βˆ’ 2) βˆ’ 2𝑛 log 2 β‰₯ (4/7)𝑝²( (1.6/𝑝)βˆšπ‘› βˆ’ 1 )Β² βˆ’ 2𝑛 log 2
            = (10.24/7 βˆ’ 2 log 2)𝑛 βˆ’ (12.8/7)π‘βˆšπ‘› + (4/7)𝑝² β‰₯ .076𝑛 βˆ’ 1.83βˆšπ‘›.

The proof is complete.                                                                                    β–‘

Lemma 3.6 (Part 3 of Theorem 1.6). In the election process on $G \sim G(n, p)$, with probability at least
\[
  1 - n^{-1} \exp\left( -.02n \left( p^3 n - 80 \right) \right) = 1 - P_3(n, p),
\]
$G$ is such that any choice of the Blue camp with at most $.4n$ members shrinks to at most $\frac{1}{3} p(n-1)$ members the next day.

Proof of Lemma 3.6. Let $n_2 := \lfloor .4n \rfloor$ and $m := \lceil \frac{1}{3} p(n-1) \rceil$. By Lemma 3.4, $G$ satisfies that every Blue set of at most $n_2$ vertices shrinks to size at most $m - 1$ with probability at least
\[
  1 - \frac{4^n}{n} \exp\left( -\frac{2p^2 (n - 2n_2 - 1)^2 m}{n + m - 2} \right)
    = 1 - \frac{1}{n} \exp\left( -\left( \frac{2p^2 (n - 2n_2 - 1)^2 m}{n + m - 2} - 2n \log 2 \right) \right).   \tag{3.5}
\]
Since $n_2 \le .4n$, we have $(n - 2n_2 - 1)^2 \ge (.2n - 1)^2$. Furthermore, $m \ge \frac{1}{3} p(n-1)$, so
\[
  \frac{m}{n + m - 2} \ge \frac{p(n-1)/3}{n + p(n-1)/3 - 1} = \frac{p}{3 + p} \ge \frac{p}{4}.
\]


Therefore we can bound the exponent in the RHS of Equation (3.5) as follows:
\[
  \frac{2p^2 (n - 2n_2 - 1)^2 m}{n + m - 2} - 2n \log 2
    \ge \frac{2p^2 (.2n - 1)^2 m}{n + m - 2} - 2n \log 2
    \ge \frac{p^3 (.2n - 1)^2}{2} - 2n \log 2
\]
\[
    = p^3 \left( .02n^2 - .2n + \frac{1}{2} \right) - 2n \log 2
    = .02 p^3 n^2 - \left( .2 p^3 + 2 \log 2 \right) n + \frac{p^3}{2}
    \ge .02n \left( p^3 n - 10 p^3 - 100 \log 2 \right) \ge .02n \left( p^3 n - 80 \right),
\]
where the last inequality is due to $10 p^3 + 100 \log 2 \le 10 + 100 \log 2 < 80$. The result follows.        β–‘
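As a quick numeric sanity check of the constants appearing in the two proofs above (not part of the proofs; the threshold $n \ge 1288$ below is our inference, left implicit in the text):

```python
import math

# Lemma 3.5: exponent bound (10.24/7 - 2 log 2) n - (12.8/7) p sqrt(n) + (4/7) p^2.
coef_n = 10.24 / 7 - 2 * math.log(2)   # coefficient of n
coef_sqrt = 12.8 / 7                   # coefficient of p*sqrt(n); p <= 1
assert coef_n >= 0.076 and coef_sqrt <= 1.83

# The lemma's probability uses exp(-.025 n); the derived bound
# .076 n - 1.83 sqrt(n) dominates .025 n once sqrt(n) >= 1.83/.051,
# i.e. n >= 1288 (our inferred threshold).
assert (1.83 / (0.076 - 0.025)) ** 2 < 1288

# Lemma 3.6: the last step needs 10 p^3 + 100 log 2 < 80 for p <= 1.
assert 10 + 100 * math.log(2) < 80
print("constant checks pass")
```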
    These lemmas and Lemma 3.3, together with Lemma 2.1, form the complete β€œchain of
shrinking” for the number of Blue vertices to reach 0 in four days, hence wrapping up the proof
of Theorem 1.6.


4   Conclusion
The majority dynamics scheme on a network is a process in which each of the 𝑛 individuals is
assigned an initial color, which changes daily to match the majority among their neighbors.
When the network is a 𝐺(𝑛, 𝑝) random graph, our main result, Theorem 1.6, yields an explicit
lower bound, in terms of 𝑛, 𝑝 and 𝑐, for the probability that the side with the initial majority
wins. It has two important implications. The first is a surprising phenomenon, which we call the
Power of Few phenomenon (Theorem 1.2): when 𝑝 = 1/2 and πœ€ = .1, 𝑐 can be set to just 5,
meaning five extra people are all it takes to win a large election with overwhelming odds. The
second is an asymptotic dependency among πœ€, 𝑛 and 𝑐 (Theorem 1.3): for any fixed 𝑝, there is a
constant 𝐾(𝑝) such that choosing 𝑛 and 𝑐 both large enough that $K(p) \max\{n^{-1}, c^{-2}\} < \varepsilon$
will ensure that the winning probability is at least 1 βˆ’ πœ€.
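The dynamics just described is easy to simulate. The sketch below is our illustration, not code from the paper: it runs majority dynamics on 𝐺(𝑛, 1/2) with a Red head start of 5; the parameter values, and the convention that a vertex keeps its color on a tied neighborhood, are our assumptions.

```python
import random

def majority_dynamics_win(n=300, p=0.5, extra=5, max_days=10, rng=None):
    """One run of majority dynamics on G(n, p): returns True iff the camp
    starting with n/2 + extra members (Red, +1) reaches unanimity.
    Convention (our assumption): a tied neighborhood keeps its color."""
    rng = rng or random.Random()
    adj = [[] for _ in range(n)]
    for i in range(n):                      # sample G(n, p)
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    colors = [1] * (n // 2 + extra) + [-1] * (n - n // 2 - extra)
    rng.shuffle(colors)
    for _ in range(max_days):
        # Simultaneous update: everyone looks at yesterday's colors.
        s = [sum(colors[j] for j in adj[i]) for i in range(n)]
        colors = [1 if si > 0 else -1 if si < 0 else ci
                  for si, ci in zip(s, colors)]
        if all(c == 1 for c in colors):
            return True
        if all(c == -1 for c in colors):
            return False
    return False

rng = random.Random(0)
wins = sum(majority_dynamics_win(rng=rng) for _ in range(30))
print(f"Red (5 extra members) won {wins}/30 runs")
```

In line with the Power of Few phenomenon, the Red camp should win the large majority of runs despite its tiny initial advantage.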
    Although the results in this paper only apply to dense 𝐺(𝑛, 𝑝) graphs, we do cover sparse
graphs in a separate in-progress paper [17], where we obtain the Power of Few phenomenon
for 𝑝 = Ξ©((log 𝑛)/𝑛), and discuss the end result (other than a win) for lower values of 𝑝. We
nevertheless mention one of the main results proved in the upcoming paper (Theorem 1.5),
and use it to prove the main theorem of the paper [8] by Fountoulakis, Kang and Makai in
Appendix A.


A    Proof of Fountoulakis et al.'s Theorem from Theorem 1.5
In this appendix we give a proof of the main theorem of Fountoulakis et al. [8] (Theorem 1.4)
using the main theorem of our upcoming paper (Theorem 1.5).
Proof. Assume Theorem 1.5. Let $R_0$ and $B_0$ respectively be the initial Red and Blue camps. Fix a constant $0 < c_0 \le \varepsilon/6$. $R_0 \sim \mathrm{Bin}(n, 1/2)$ since it is a sum of $\mathrm{Bin}(1, 1/2)$ variables. An application of the Berry–Esseen theorem (Theorem 2.2, with constant .56) implies that
\[
  \mathbf{P}\left( R_0 - \frac{n}{2} \le c_0 \sqrt{n} \right) \le \Phi(2 c_0) + \frac{.56}{\sqrt{n}},
  \qquad\text{and}\qquad
  \mathbf{P}\left( R_0 - \frac{n}{2} \le -c_0 \sqrt{n} \right) \ge \Phi(-2 c_0) - \frac{.56}{\sqrt{n}}.
\]


Thus
\[
  \mathbf{P}\left( \left| R_0 - \frac{n}{2} \right| \le c_0 \sqrt{n} \right)
    \le \Phi(2 c_0) + \frac{.56}{\sqrt{n}} - \left( \Phi(-2 c_0) - \frac{.56}{\sqrt{n}} \right)
    = \Phi(-2 c_0, 2 c_0) + \frac{2 \cdot .56}{\sqrt{n}}
    \le \frac{4 c_0}{\sqrt{2\pi}} + \frac{2 \cdot .56}{\sqrt{n}}
    \le \frac{\varepsilon}{3} + \frac{2 \cdot .56}{\sqrt{n}}
    \le \varepsilon/2,
\]
for sufficiently large 𝑛.
    On the other hand, if $|R_0 - n/2| > c_0 \sqrt{n}$, then one of the sides has more than $n/2 + c_0 \sqrt{n}$
initial members, which we call the majority side. Now we apply Theorem 1.5 with πœ€ replaced by
πœ€/2. Notice that in the setting of Theorem 1.4, if we have $p = \lambda n^{-1/2}$ for πœ† sufficiently large,
then $c_0 \sqrt{n} \ge c/p$, where 𝑐 is the constant in Theorem 1.5. Thus, by this theorem, the probability
for the majority side to win is at least 1 βˆ’ πœ€/2, and we are done by the union bound.            β–‘
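The Berry–Esseen step above can be checked numerically for a concrete population size. The sketch below is our illustration; the values $n = 1000$ and $c_0 = .1$ are our arbitrary choices, and we compare the exact binomial CDF against the normal bound with Shevtsova's constant .56.

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

n, c0 = 1000, 0.1                      # illustrative values
sqrt_n = math.sqrt(n)

def binom_cdf(k):
    """Exact P(R0 <= k) for R0 ~ Bin(n, 1/2)."""
    return sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n

# R0 has standard deviation sqrt(n)/2, so the threshold n/2 + c0*sqrt(n)
# corresponds to 2*c0 on the normalized scale.
upper = binom_cdf(math.floor(n / 2 + c0 * sqrt_n))
lower = binom_cdf(math.floor(n / 2 - c0 * sqrt_n))
assert upper <= phi(2 * c0) + 0.56 / sqrt_n
assert lower >= phi(-2 * c0) - 0.56 / sqrt_n
print(f"upper = {upper:.4f} <= {phi(2 * c0) + 0.56 / sqrt_n:.4f}")
```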



References
 [1] Dan Alistarh, James Aspnes, David Eisenstat, Rati Gelashvili, and Ronald L. Rivest: Time-
     space trade-offs in population protocols. In Proc. 28th Ann. ACM–SIAM Symp. on Discrete
     Algorithms (SODA’17), pp. 2560–2579. SIAM, 2017. [doi:10.1137/1.9781611974782.169,
     arXiv:1602.08032] 3

 [2] Dan Alistarh, Rati Gelashvili, and Milan Vojnović: Fast and exact majority in population
     protocols. In Proc. ACM Symp. Principles Distrib. Computing (PODC’15), pp. 47–56. ACM
     Press, 2015. [doi:10.1145/2767386.2767429] 3

 [3] Itai Benjamini, Siu-On Chan, Ryan O’Donnell, Omer Tamuz, and Li-Yang Tan: Convergence,
     unanimity and disagreement in majority dynamics on unimodular graphs and random
     graphs. Stoch. Proc. Appl., 126(9):2719–2733, 2016. [doi:10.1016/j.spa.2016.02.015] 3, 4

 [4] BΓ©la BollobΓ‘s: Models of random graphs. In Random Graphs, chapter 2, pp. 34–50.
     Cambridge Univ. Press, 2001. [doi:10.1017/CBO9780511814068.004] 2

 [5] Peter Clifford and Aidan Sudbury: A model for spatial conflict. Biometrika, 60(3):581–588,
     1973. [doi:10.1093/biomet/60.3.581] 3

 [6] Morris H. DeGroot: Reaching a consensus. J. Amer. Statistical Assoc., 69(345):118–121, 1974.
     [doi:10.2307/2285509] 3

 [7] Carl-Gustav Esseen: A moment inequality with an application to the central limit theorem.
     Scandinavian Actuarial J., 1956(2):160–170, 1956. [doi:10.1080/03461238.1956.10414946] 9

 [8] Nikolaos Fountoulakis, Mihyun Kang, and TamΓ‘s Makai: Resolution of a conjecture on
     majority dynamics: Rapid stabilisation in dense random graphs. Random Struct. Algor.,
     57(4):1134–1156, 2020. [doi:10.1002/rsa.20970, arXiv:1910.05820] 3, 4, 18


 [9] Wassily Hoeffding: Probability inequalities for sums of bounded random variables. J.
     Amer. Statistical Assoc., 58(301):13–30, 1963. [doi:10.1080/01621459.1963.10500830] 14

[10] Svante Janson, Tomasz Łuczak, and Andrzej Rucinski: Random Graphs. Wiley, 2011.
     Chapter 1 (Preliminaries), pages 1–23. [doi:10.1002/9781118032718.ch1] 2

[11] JiΕ™i MatouΕ‘ek and Jaroslav NeΕ‘etΕ™il: Invitation to Discrete Mathematics. Oxford Univ. Press,
     1998. LINK. 14

[12] Elchanan Mossel, Joe Neeman, and Omer Tamuz: Majority dynamics and aggregation of
     information in social networks. Autonomous Agents and Multi-Agent Systems, 28(3):408–429,
     2014. [doi:10.1007/s10458-013-9230-4] 3, 4

[13] Elchanan Mossel and Omer Tamuz: Opinion exchange dynamics. Probability Surveys,
     14:155–204, 2017. [doi:10.1214/14-PS230] 2, 3, 4

[14] Ashwin Sah and Mehtaab Sawhney: Majority dynamics: The power of one, 2021.
     [arXiv:2105.13301] 6, 8

[15] Irina Gennad’evna Shevtsova: An improvement of convergence rate estimates in the
     Lyapunov theorem. Doklady Math., 82(3):862–864, 2010. [doi:10.1134/S1064562410060062]
     9

[16] Linh Tran and Van Vu: Reaching a consensus on random networks: The power
     of few.    In Proc. 24th Internat. Conf. on Randomization and Computation (RAN-
     DOM’20), pp. 20:1–15. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2020.
     [doi:10.4230/LIPIcs.APPROX/RANDOM.2020.20] 1

[17] Linh Tran and Van Vu: The β€œpower of few” phenomenon: The sparse case, 2023.
     [arXiv:2302.05605] 3, 4, 18

[18] Roman Vershynin: Concentration of sums of independent random variables. In High-
     Dimensional Probability: An Introduction with Applications in Data Science, chapter 2, pp. 11–37.
     Cambridge Univ. Press, 2018. [doi:10.1017/9781108231596.005] 14


AUTHORS

      Linh Tran
      Graduate Student
      Department of Mathematics
      Yale University
      New Haven, CT, USA
      l.tran yale edu
      https://math.yale.edu/people/linh-tran




    Van Vu
    Percey F. Smith Professor of Mathematics and Professor of Statistics & Data Science
    Yale University
    New Haven, CT, USA
    van.vu yale edu
    https://campuspress.yale.edu/vanvu/


ABOUT THE AUTHORS

    Linh Tran is currently a fifth year Ph. D. student in the Math department at Yale
       University, working under the supervision of Professor Van Vu. He went to
       special math classes for gifted children at the high school division of the Vietnam
       National University in Ho Chi Minh City. Encouraged and nurtured by his high
       school teachers, Tran found his passion for Math and was determined to follow
       this path, especially after having had little success in other subjects. Under the
       guidance of his advisor, Prof. Van Vu, he does research in random graph theory,
       particularly majority dynamics and generalizations, and random matrix theory,
       particularly its application in data analysis and machine learning methods. He is
       also interested in combinatorics problems involving the probabilistic method,
       applications of geometry in combinatorics, e. g., random fractals, and theoretical
       computer science.


    Van Vu is the P. F. Smith Professor of Mathematics and Professor of Data Science at
      Yale University. Vu was born in Hanoi (Vietnam) in 1970. He went to special
       math classes for gifted children at Chu Van An and Hanoi-Amsterdam high
       schools. In 1987, he went to Hungary for his undergraduate studies, and obtained
       his M. Sc. in mathematics at EΓΆtvΓΆs University, Budapest, in 1994. He received
       his Ph. D. at Yale University, in 1998 under the direction of LΓ‘szlΓ³ LovΓ‘sz. His
       research interest includes random structures (random matrices, random graphs,
       random functions), probabilistic and additive combinatorics, and randomized
       algorithms. Recently, he has also been interested in real-life applications of
       data science (such as NLP and computer vision).



