Author: Emanuele Viola
License: CC-BY-3.0
Theory of Computing, Volume 18 (10), 2022, pp. 1–12
www.theoryofcomputing.org
Pseudorandom Bits and Lower Bounds
for Randomized Turing Machines
Emanuele Viola∗
Received August 5, 2019; Revised January 24, 2022; Published May 20, 2022
Abstract. We exhibit a pseudorandom generator with nearly quadratic stretch for
randomized Turing machines, which have a one-way random tape and a two-way
work tape. This is the first generator for this model. Its stretch is essentially the
best possible given current lower bounds. We use the generator to prove a time
lower bound in the above Turing machine model extended with a two-way read-only
input tape. The lower bound is of the form n^{1+Ω(1)} and is for a function computable
in linear time with two quantifier alternations. Previously, lower bounds were not
known even for functions computable in simply exponential time.
1 Introduction
Turing machines (TMs) are one of the most studied models of computation. More than fifty
years ago, Hennie proved quadratic lower bounds [9] for solving simple problems on one-tape
TMs. While this remains the best time lower bound for one-tape machines, lower bounds for
stronger models have been established in many papers, including [23, 15, 16, 12, 24, 19, 28],
some of which are discussed below.
∗ Supported by NSF CCF awards 1813930 and 2114116.
ACM Classification: F.2.3
AMS Classification: 68Q17
Key words and phrases: lower bound, Turing machine, pseudorandom generator, tape
© 2022 Emanuele Viola
Licensed under a Creative Commons Attribution License (CC-BY). DOI: 10.4086/toc.2022.v018a010
In 1994, Impagliazzo, Nisan, and Wigderson constructed a pseudorandom generator [11]
that fools one-tape TMs. More precisely, they show how to efficiently map a seed of length O(√t)
into a string of length t that cannot be distinguished from uniform by a one-tape Turing machine
running in time t.
However, their result does not apply to randomized Turing machines, which can βtoss coins.β
The model in [11] corresponds to randomized machines that begin by filling the tape with
coin tosses in a one-way fashion, and then operate deterministically. By contrast, randomized
machines can toss coins at any time during a computation. Equivalently, we can think of
randomized machines as having an extra one-way tape with random bits on it. Note that in
[11] the Turing machine receives the output of the generator on its (only) tape. A randomized
machine instead receives it on a separate tape. This allows the machine to perform several tasks
in linear time, such as copying the random bits on the work tape in reverse order, comparing the
first half of the random bits with the second half, etc. All of these operations require quadratic
time on a basic one-tape machine. And in fact, an instantiation of the [11] generator can be
broken by a randomized TM (see Section 5).
In this paper we give the first generator for randomized TMs. Since there are several variants
of TMs let us first define the model precisely and then state our result.
Definition 1.1. A randomized Turing machine (RTM) is a Turing machine with two tapes:
- A two-way, read-write work tape,
- and a one-way random tape.
Theorem 1.2. There are explicit pseudorandom generators with seed length O(√t · log³t · log q)
and error ε = (tq)^{−√t} that fool RTMs with q states running in time t. That is, no such RTM can
tell whether its random tape is filled with uniform bits or with the output of the generator on a
uniform random seed, except with advantage ε.
The seed length is, up to the polylogarithmic factors, the best possible short of proving
super-quadratic lower bounds for TMs.
One motivation for constructing pseudorandom generators is proving lower bounds for
stronger models. The RTM model above is the natural randomized extension of the basic
one-tape machine model. Now we consider a different model to extend: Turing machines with a
work tape and a two-way read-only input tape. This is one of the strongest models for which we
can prove lower bounds. Time lower bounds of the form n^{1+Ω(1)} were shown in [16, 19, 28]. The
question of extending these lower bounds to hold even for randomized machines was asked by
several authors. In particular it was asked in [26, 27] (by the author), where lower bounds for an
intermediate model are proved, and in a blog post [14] by Lipton.
In this paper we prove a lower bound for this randomized model. Again, let us first define
the model.
Definition 1.3. An RTM2 is a Turing machine with three tapes:
- A two-way, read-write work tape,
- a one-way random tape, and
- a two-way, read-only input tape.
We prove a lower bound for a problem in Σ₃Time(n), that is, linear time with two quantifier
alternations. By standard completeness results the lower bound holds for QSAT₃, the problem
of deciding the validity of a formula with 2 quantifier alternations. This for example follows
from the proof of Theorem 7 in [4]. Previously, lower bounds were not known even for functions
computable in exponential time E = Time(2^{O(n)}).
Theorem 1.4. There is a function computable in Σ₃Time(n) which requires time n^{1+Ω(1)} on any
RTM2 with bounded error probability.
The lower bound holds for general two-sided error. Moreover, as in previous work, see for
example [19], it in fact holds even if the machine has direct access (a. k. a. random access) to the
input. This means that there is a separate index tape and in one time step the machine can move
the input head to the position written in binary on the index tape.
2 Overview of techniques
The proof of Theorem 1.2 relies on a new simulation of RTMs by a family of space-efficient
branching programs, henceforth called programs for brevity. It is critical that each program in the
family is allowed to read bits in a different order, so let us first define such programs.
Definition 2.1. A (branching) program with space s on t bits consists of a permutation π of the
bits and a layered directed graph with t + 1 layers. Each layer has 2^s nodes, except the first layer,
which has 1, and the last layer, which has 2. Each node in layer i ≤ t has two edges, labeled with
0 and 1, connecting to nodes in layer i + 1. Nodes on layer t + 1 are labeled with 0 and 1. On
a t-bit input, the program follows the path corresponding to reading the input in a one-way
fashion according to the permutation π: layer i reads input bit π(i). It then outputs the label of
the last node.
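To make Definition 2.1 concrete, here is a minimal evaluation sketch (ours, not from the paper), assuming a simple adjacency-list encoding: perm[i] is the input bit read at layer i, edges[i][node] gives a node's two successors, and labels marks the last layer.

```python
# Hypothetical encoding (assumed, not from the paper):
#   perm[i]          -- index of the input bit read at layer i
#   edges[i][node]   -- pair (successor on 0, successor on 1)
#   labels[node]     -- 0/1 label of each node in the last layer

def eval_program(perm, edges, labels, x):
    """Evaluate a branching program on input bits x, read in the order perm."""
    node = 0                       # the unique node of the first layer
    for i in range(len(x)):
        bit = x[perm[i]]           # layer i reads input bit perm(i)
        node = edges[i][node][bit]
    return labels[node]            # label of the node reached in layer t+1

# A width-2 program computing the XOR of 2 bits, reading them in reverse order:
perm = [1, 0]
edges = [[(0, 1)],                 # layer 1: one node, branch on the bit read
         [(0, 1), (1, 0)]]         # layer 2: the node tracks the running XOR
labels = [0, 1]
```

For instance, eval_program(perm, edges, labels, [1, 0]) reads bit 1 first and returns 1, the XOR of the two input bits; the point is that the read order is part of the program, exactly as the definition allows.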
If M is an RTM we write M(X) for the output of M when its random tape is initialized with X
(and the work tape is blank). We can now state our simulation.
Theorem 2.2. Let M be an RTM running in time t and with q states. There is a family of (tq)^{O(√t)}
programs P_α using space O(√t · log tq) such that for any X:
- if M(X) = 1 then there is exactly one α such that P_α(X) = 1,
- if M(X) = 0 then P_α(X) = 0 for every α.
The string α in the simulation is used to guess short crossing sequences which partition the
work tape into small blocks; the program then simulates the machine one block at a time.
This idea goes back at least to the work by Maass and Schorr [16], see also the paper by van
Melkebeek and Raz [19]. We show that it can be applied to RTMs if the random bits can be
permuted. Here is where we use that the programs read bits in any order. This step is inspired
by the works of Maass [15] and Kalyanasundaram and Schnitger [12] who give explicit functions
(of the random bits) that are hard for RTMs. Our partition of the work tape has the additional
property that it can be "verified" by the program, a property which is not apparent in previous
simulations. This allows us to guarantee that P_α(X) = 1 for at most one α, which is used in
obtaining pseudorandom generators.
Given the simulation, we can use pseudorandom generators for programs that read bits in
any order. However, we need a good dependence on the number of states and the error. Using
a result from [7] (Corollary 20 and surrounding discussion) gives a pseudorandom generator
for RTMs with seed length t^{5/6+o(1)}. This is sufficient for Theorem 1.4. Using the generator in
the follow-up by Forbes and Kelley [5] we improve 5/6 to 1/2.
Proof of Theorem 1.2 from Theorem 2.2. The generator in [5], Corollary 4.3, fools branching
programs on t bits using space s with error δ and seed length O(log(t2^s/δ) · log²t) =
O((s + log(1/δ)) · log²t). We set δ to (tq)^{−O(√t)}, with a large enough constant in the exponent,
and recall s = O(√t · log tq) to obtain seed length at most O(√t · log³t · log q). To show
correctness, let U be the uniform distribution and D the output distribution of the generator.
We have

|E[M(U)] − E[M(D)]| = |E[Σ_α P_α(U)] − E[Σ_α P_α(D)]| ≤ (tq)^{O(√t)} · δ = (tq)^{−Ω(√t)},

concluding the proof. □
The proof of Theorem 1.4 is in Section 4. It follows closely the proof of Theorem 6.2 in [27]
which establishes a lower bound for a different model of Turing machines. Theorem 1.4 in this
paper is to Theorem 6.2 in [27] as Theorem 1.2 in this paper is to the generator in [11]: Theorem
6.2 in [27] does not allow for a separate random-bit tape, but instead concerns Turing machines
whose work tape is initially filled with random bits.
In turn, the proof from [27] relies on a number of other results in the literature, which are
also required for the proof in this paper. We use the arguments mentioned earlier in [16, 19]
to simulate TMs using little space and non-determinism. We then follow the lead of Diehl
and van Melkebeek [4] who proved time-space lower bounds for randomized space-bounded
computation. They used Nisan's pseudorandom generator [22] and the Sipser–Gács–Lautemann
[25, 13] simulation of BPP in the polynomial-time hierarchy. We use instead the pseudorandom
generator given by Theorem 1.2. Here it is critical that the seed length has a good dependence on
the number of states, since we will hard-wire the input to an RTM2 in the states. Turning to the
Sipser–Gács–Lautemann simulation, we note that this simulation (which has two quantifiers) is
too slow for our purpose; we use the faster simulation in [27] (with three quantifiers). Finally,
we rely on the machinery of time-space tradeoffs; for a survey of those see van Melkebeek [18]
or Williams' thesis [29].
3 Proof of Theorem 2.2
The reader may want to refer to Table 1 for an example of an RTM computation and some of the
parameters in this proof. For integers a and b we write [a, b] for the set {a, a + 1, . . . , b}. Because
the machine runs in time t it can only use t + 1 work cells. We identify these cells with [1, t + 1].
There are t boundaries between these cells. Boundary i ∈ [t] is between work cells i and i + 1.
[Table 1 is not reproducible in this text version. Each of its rows shows the position of the
work-tape head H over 9 tape cells at one time step, with ● marking the steps at which a random
bit is read; the six random bits are read while the head is in blocks 1, 3, 3, 2, 3, 3 respectively.
The bottom rows show the block partition 1 2 2 2 3 3 3 3 3 of the work cells and the three
boundaries b₁, b₂, b₃.]

Table 1: Computation table of an RTM with 9 work-tape cells reading 6 random bits. Each row
corresponds to a different time stamp and shows the position of the head H on the work tape.
The symbol ● indicates when a random bit is read. We have three boundaries shown at the
bottom. The "block" row shows the partition of work cells into blocks. The induced partition of
the random-bit tape is 133233.
The string α. The string α contains √t boundaries b₁, b₂, . . . , b_{√t} ∈ [t]. These boundaries
partition the work cells into √t + 1 blocks, where block i is the cells [b_{i−1} + 1, b_i], with b₀ = 0
and b_{√t+1} = t + 1. Each boundary b_i belongs to the set

B_i := [√t(i − 1) + 1, √t · i].

This implies that each block has ≤ 2√t cells.
The string α also contains √t crossings c₁, c₂, . . . , c_{√t}. A crossing c is a tuple (i → i′, h, u)
which means that the machine is crossing from block i to block i′ with the head on the random
tape in position h and in state u. The length of α is √t · O(log tq), as required.
The property of P_α. We shall give programs P_α such that P_α(X) = 1 if and only if
(1) M(X) = 1,
(2) for every i the boundary b_i is the one among B_i such that the computation M(X) crosses
it the least number of times, picking the smallest b_i in case of ties, and
(3) the crossings c_i are those induced by the b_i and the computation M(X).
Concluding the proof assuming the property. Assuming we have P_α as above, we conclude
the proof as follows. First, we need to verify that there are boundaries with respect to which the
computation has ≤ √t crossings. This would be true even if we picked the b_i in a progression
b_i = √t(i − 1) + j: because different values of j give disjoint crossings, and the total number of
crossings is at most the computation time t, there is a value j ∈ [1, √t] for which such b_i induce
≤ √t crossings. A fortiori it holds for our definition of the b_i.
Also, there cannot be α ≠ α′ such that P_α(X) = P_{α′}(X) = 1. This is true because α is uniquely
specified by the computation M(X).
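The counting step can be checked empirically. The sketch below (our illustration; the head trajectory is an assumed list of cell positions, one per time step) counts, for each offset j, the crossings of the boundaries √t(i − 1) + j and confirms that the best offset sees at most t/√t of them.

```python
import math
import random

# Each unit move between cells a and a+1 crosses exactly boundary a, and
# boundary k belongs to exactly one of the sqrt(t) offset classes; so the
# classes partition at most t crossings and the best class gets <= sqrt(t).

def best_offset_crossings(trajectory, t):
    """Min over offsets j of the number of crossings of the boundaries
    b_i = r*(i-1) + j, where r = floor(sqrt(t))."""
    r = math.isqrt(t)
    counts = [0] * r                 # counts[j-1] = crossings at offset j
    for a, b in zip(trajectory, trajectory[1:]):
        if a == b:
            continue                 # head did not move: no crossing
        k = min(a, b)                # a unit move a<->a+1 crosses boundary a
        counts[(k - 1) % r] += 1     # boundary k has offset ((k-1) % r) + 1
    return min(counts)

# Random head walk of t steps on cells [1, t+1]:
t = 400
pos, walk = 1, [1]
for _ in range(t):
    pos = max(1, min(t + 1, pos + random.choice((-1, 1))))
    walk.append(pos)
assert best_offset_crossings(walk, t) <= t // math.isqrt(t)   # <= sqrt(t)
```

The final assertion holds for every trajectory, not just random ones: the total crossing count never exceeds the number of moves, which is at most t.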
Designing the P_α. It remains to exhibit the programs P_α. The crossings c_i = (j_i → j_{i+1}, h_i, u_i)
induce a partition of the random tape into at most √t + 1 intervals. Interval i is [h_{i−1} + 1, h_i],
corresponding to the computation in block j_i from crossing i − 1 to crossing i, where crossing 0
is the beginning of the computation and h₀ = 0, and h_{√t+1} = t. We use the convention that
[x + 1, x] is the empty interval, which may arise if the machine goes from one crossing to the
next without reading random bits. These intervals can have any length.
The program simulates the machine one block at a time, reusing space across the blocks.
The program will read the random bits in the following order: first it reads all the random
bits in the intervals corresponding to work-tape block 1, then all the intervals corresponding to
work-tape block 2, and so on.
For each block j, the program goes through the crossings involving block j in order and
simulates the machine starting when a crossing enters block j until it leaves it. While doing
so, it verifies that the crossings leaving the block are correct, assuming those entering it are. If
not, the program aborts and outputs zero. For this the program just needs memory for the
work cells in one block, and the state of the machine, overall O(√t + log q) bits. Moreover, the
program computes the number of crossings for each boundary in the block j it simulates. This
takes additional O(√t · log t) bits. At the end of the simulation of one block, the program can use
this information to verify that the boundaries match the intended values. Recall that block i
is [b_{i−1} + 1, b_i] and that each b_i ∈ B_i. The computation on block i can verify "one side" of the
requirements for both b_{i−1} and b_i. Specifically, the program verifies that for each b′ ∈ B_{i−1} that
is bigger than b_{i−1}, the number of crossings at b′ is at least as big as the number of crossings
at b_{i−1}, and that for each b′ ∈ B_i that is smaller than b_i, the number of crossings at b′ is strictly
bigger than the number of crossings at b_i. If one of these checks does not pass, the program
aborts and outputs zero. If all checks pass for every block then the b_i are correct.
Overall, the space required by the program is O(√t + log q) + O(√t · log t) = O(√t · log tq).
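Condition (2) and the one-sided checks just described amount to a small predicate. The sketch below is our illustration only: the paper's program performs these checks in small space during the simulation, whereas here the crossing counts are handed over up front as a dictionary, and the two one-sided checks are merged per chosen boundary.

```python
# Certify that each chosen boundary bs[i-1] is the boundary of
# B_i = [r*(i-1)+1, r*i] crossed least often, smallest index on ties.

def verify_boundaries(cross, bs, r):
    """cross[k] = number of crossings at boundary k; bs = chosen b_1..b_n."""
    for i in range(1, len(bs) + 1):
        b = bs[i - 1]
        B = range(r * (i - 1) + 1, r * i + 1)
        # boundaries of B_i after b_i must be crossed at least as often...
        if any(cross[k] < cross[b] for k in B if k > b):
            return False
        # ...and boundaries of B_i before b_i strictly more often.
        if any(cross[k] <= cross[b] for k in B if k < b):
            return False
    return True
```

For example, with r = 3 and counts {1: 3, 2: 1, 3: 1, 4: 2, 5: 0, 6: 4}, the choice [2, 5] passes: boundary 2 attains the minimum of B₁ = {1, 2, 3} with the smaller index of the tie {2, 3}, and boundary 5 attains the minimum of B₂ = {4, 5, 6}; the choice [3, 5] fails the strictness check at boundary 2.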
4 Proof of Theorem 1.4
In this section we present the proof of Theorem 1.4. The proof involves several different
computational models, so we need a definition.
Definition 4.1. We denote by RTM2Time(t) the class of problems that can be solved by an
RTM2 machine running in time t.
We denote by TiSp(t, s) the class of problems that can be solved in time t and simultaneously
space s. The precise machine model is not important here; we can assume a RAM machine.
For a complexity class C and a function b we denote by ∃^b C the class of problems that can
be solved in C with an ∃ quantifier on b bits: ∃^b C = {L′ : ∃L ∈ C such that x ∈ L′ ⟺ ∃y ∈
{0, 1}^{b(|x|)} : (x, y) ∈ L}. We use the corresponding definition for the ∀ and probabilistic BP
quantifiers.
Towards a contradiction, assume that Σ₃Time(n) ⊆ RTM2Time(n^{1+o(1)}). Let m := n². We
derive the following contradiction as in [27]:

Σ₃Time(m) ⊆ RTM2Time(m^{1+o(1)})   (4.1)
 ⊆ BP^{m^{1−Ω(1)}} ∃^{m^{1−Ω(1)}} TiSp(poly(m), m^{1−Ω(1)})   (4.2)
 ⊆ ∃^{m^{1−Ω(1)}} ∀^{m^{1−Ω(1)}} ∃^{m^{1−Ω(1)}} TiSp(poly(m), m^{1−Ω(1)})   (4.3)
 ⊆ Σ_{O(1)}Time(m^{1−Ω(1)})   (4.4)
 ⊆ Σ₃Time(o(m))   (4.5)
contradiction.   (4.6)
The only inclusion that requires a new proof is (4.2). Before providing that proof we review
why the other inclusions hold.
(4.1) is by padding.
(4.3) is by the time-efficient simulation of probabilistic time with three alternations, proved
in [27] (Theorem 1.3).
(4.4) follows by the fact, usually attributed to [21], that one can trade alternations for time in
sublinear-space computations. The required inclusion can be found in Section 3.2 in [17], or in
[6].
(4.5) follows by noting that the assumption Σ₃Time(n) ⊆ RTM2Time(n^{1+o(1)}), plus the
simulation in [27] (Theorem 1.3), implies that Σ₃Time(n) ⊆ Π₃Time(n^{1+o(1)}). This allows us to
efficiently collapse the polynomial hierarchy at the third level.
The contradiction in (4.6) is with the time hierarchy, see for example Section 3.1 in [17].
4.1 Proving (4.2)
In this subsection we give the proof of Inclusion (4.2). First we need that the generator is
computable in small space and polynomial time.
Claim 4.2. Given an index i of an output bit, and a seed, we can compute output bit i of the generator
in Theorem 1.2 in space t^{1−Ω(1)} and simultaneously polynomial time.
Proof. [Sketch] The generator in [5], Corollary 4.3, involves computing small-bias generators
[20, 2] and bounded-independence generators [3, 1]. Those generators are in fact computable
even in logarithmic space, see Theorem 14 in [8]. The rest of the computation amounts to taking
bit-wise ANDs and XORs of strings, which can be done efficiently in small space.
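The point of the sketch is that bit-wise combinations preserve local computability. A generic illustration (ours, not the actual construction of [5]; a_bit, b_bit, c_bit are assumed bit oracles standing in for the ingredient generators):

```python
# If each ingredient generator exposes a bit oracle bit(seed, i) running in
# small space, then any bit-wise AND/XOR combination is computable bit by
# bit, never materializing the full t-bit output strings.

def combined_bit(a_bit, b_bit, c_bit, seed, i):
    """Output bit i of (A(seed) AND B(seed)) XOR C(seed)."""
    return (a_bit(seed, i) & b_bit(seed, i)) ^ c_bit(seed, i)

# Toy bit oracles over an integer seed, for illustration only:
a = lambda s, i: (s >> i) & 1          # A(s): the seed itself
b = lambda s, i: (s >> (i + 1)) & 1    # B(s): the seed shifted by one
c = lambda s, i: 1 - ((s >> i) & 1)    # C(s): the complement of the seed
```

For instance, combined_bit(a, b, c, 0b1011, 0) returns 1. The space used is the maximum over the oracles, plus O(log t) bits for the index, mirroring the accounting in the claim.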
We can now prove Inclusion (4.2). We obtain the following variant of Claim 6.3 in [27]. For
an RTM2 machine M we write M(x; R) for the output of M on input x and random bits R. Recall
that m = m(n) = n².

Claim 4.3. Let M be an RTM2 machine with n^{O(1)} states running in time n^{1+o(1)}. Let
G : {0, 1}^{n^{1−Ω(1)}} → {0, 1}^{n^{1+Ω(1)}} be the generator from Claim 4.2. Then the language
{(x; s) : M(x; G(s)) = 1} is in ∃^{n^{1−Ω(1)}} TiSp(poly(n), n^{1−Ω(1)}). In particular,

RTM2Time(n^{1+o(1)}) ⊆ BP^{n^{1−Ω(1)}} ∃^{n^{1−Ω(1)}} TiSp(poly(n), n^{1−Ω(1)}).
Proof. The "in particular" part follows from the first part by the correctness of the generator,
using n^{1−Ω(1)} uniform bits as seed for the generator. Here note that we hardwire the input x by
multiplying the number of states by |x|, and think of the resulting machine as an RTM.
To prove the first part, denote by w the work tape of M. Let us divide w into consecutive
blocks, where the first block is of size f and all the others are of size, say, √t. The parameter f
will be specified later. Note that f completely specifies this subdivision into blocks. For fixed f,
similarly to before, let us define a crossing to be a tuple

(i → i′, h_in, h_rd, u)

where i is an index to a block of w, i′ = i ± 1, h_in is the position of the head on the input tape,
h_rd is the position of the head on the random tape, and u is the state. A crossing (i → i′, h_in, h_rd, u)
corresponds to M crossing the boundary of block i towards block i′ with state u and input-tape
and random-tape head positions h_in and h_rd (the position of the head on the work tape is
determined by i and i′).
Since different values of f ∈ {0, 1, . . . , √t − 1} give rise to disjoint sets of blocks, there is
f < √t such that the number of crossings induced by the computation M(x; G(s)) is at most
t/√t ≤ n^{0.51}.
Simulation: We simulate the machine M as follows. First we guess a shift f and a sequence
of t/√t crossings. Then we check consistency of the computation one block at a time. For
every block number i, we use √t bits of space to simulate the machine on that block. First we
initialize those bits to 0. Then we scan, in order, the list of guessed crossings. Whenever we
encounter a crossing of the form (j → i, h_in, h_rd, u) we do the following. We move the input head
to h_in and we initialize a counter c to h_rd. We simulate M starting in state u until it crosses the
block boundary towards some block i′. Whenever M asks for a random bit, we reply with the c-th
output bit of G(s) and increase c by one. Then we look at the next crossing (α → α′, h′_in, h′_rd, u′)
in the guessed list and check that α = i, α′ = i′, and c = h′_rd, and that h′_in and u′ match too. We
continue in this way until the list is over. Finally, without loss of generality we can check that
the last crossing has the accepting state of M.
Complexity: Note that each crossing can be specified by O(log n) bits, and therefore we
guess at most n^{0.51} · O(log n) = n^{1−Ω(1)} bits in total. The rest of the computation can be done
simultaneously in time poly(n) and space n^{1−Ω(1)}. To see this, note that the generator is
computable within these resources by Claim 4.2, while the rest of the simulation only uses space for
one block, which takes √t bits, plus O(log n) bits for various counters. □
5 RTM breaks INW
In this section we show that an instantiation of the generator in [11] for Turing machines does
not fool RTMs. Let s be a parameter and G be a graph on vertex set {0, 1}^s. The basic step of
the [11] generator is showing the pseudorandomness of the distribution (x, y, z), where (x, z)
is a uniform edge in G and y is a uniform string in {0, 1}^s. The strength of the generator is
related to the expansion of G. A simple graph with nearly maximum expansion is the one
where (x, z) is an edge when the bit-wise XOR v of x and z has zero inner product modulo 2,
that is, v₁v₂ ⊕ v₃v₄ ⊕ · · · ⊕ v_{s−1}v_s = 0. This corresponds to the distribution (x, y, x ⊕ v) where
x and y are uniform and v is a uniform string with zero inner product modulo 2. (It can be
verified directly that the distribution v has bias 2^{−Ω(s)} in the sense of Naor and Naor [20], and it
is folklore that this equals the normalized second largest eigenvalue of G, which is a well-known
algebraic measure of expansion, see [10].)
We now claim that an RTM can distinguish this distribution from uniform in time O(s). The
RTM first copies x onto the work tape, then moves the work-tape head back to the beginning.
Then, by XORing corresponding random bits and work-tape bits, the RTM can recover the bits of
v and compute the inner product of v. Note we use one-way access to the random tape, but
two-way access to the work tape.
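The distinguisher can be sketched as follows (our illustration, an assumption-laden model rather than a tape-level simulation): read (x, y, z) left to right, buffer x as the "work tape," skip y, and on reading z recover v = x ⊕ z bit by bit while folding in the inner product.

```python
import random

# bits = x || y || z, each of length s; on the pseudorandom distribution
# z = x XOR v with v of zero inner product, so the output is always 0.

def distinguisher(bits, s):
    x, z = bits[:s], bits[2 * s:3 * s]   # y = bits[s:2*s] is never used
    ip = 0
    for j in range(0, s, 2):
        v1 = x[j] ^ z[j]                 # recover v in a single left-to-right
        v2 = x[j + 1] ^ z[j + 1]         # pass over z, as the RTM does
        ip ^= v1 & v2                    # running inner product mod 2
    return ip

def sample_pseudo(s):
    x = [random.randint(0, 1) for _ in range(s)]
    y = [random.randint(0, 1) for _ in range(s)]
    while True:                          # uniform v with zero inner product
        v = [random.randint(0, 1) for _ in range(s)]
        if sum(v[j] & v[j + 1] for j in range(0, s, 2)) % 2 == 0:
            break
    return x + y + [xi ^ vi for xi, vi in zip(x, v)]
```

On every pseudorandom sample the output is 0, while on a uniform 3s-bit string it is 1 with probability close to 1/2, giving the claimed constant advantage.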
By contrast, a one-tape TM cannot distinguish this distribution from uniform unless it runs
in time Ω(s²) [11].
Acknowledgments. We thank the anonymous referees for the detailed and helpful feedback.
References
[1] Noga Alon, László Babai, and Alon Itai: A fast and simple randomized algorithm for the
maximal independent set problem. J. Algorithms, 7(4):567–583, 1986. [doi:10.1016/0196-
6774(86)90019-2] 8
[2] Noga Alon, Oded Goldreich, Johan Håstad, and René Peralta: Simple constructions of
almost k-wise independent random variables. Random Struct. Algor., 3(3):289–304, 1992.
[doi:10.1002/rsa.3240030308] 8
[3] Benny Chor and Oded Goldreich: On the power of two-point based sampling. J. Complexity,
5(1):96–106, 1989. [doi:10.1016/0885-064X(89)90015-0] 8
[4] Scott Diehl and Dieter van Melkebeek: Time-space lower bounds for the polynomial-
time hierarchy on randomized machines. SIAM J. Comput., 36(3):563–594, 2006.
[doi:10.1137/050642228] 3, 4
[5] Michael A. Forbes and Zander Kelley: Pseudorandom generators for read-once branch-
ing programs, in any order. In Proc. 59th FOCS, pp. 946–955. IEEE Comp. Soc., 2018.
[doi:10.1109/FOCS.2018.00093] 4, 8
[6] Lance Fortnow, Richard Lipton, Dieter van Melkebeek, and Anastasios Viglas: Time-space
lower bounds for satisfiability. J. ACM, 52(6):835–865, 2005. [doi:10.1145/1101821.1101822]
8
[7] Elad Haramaty, Chin Ho Lee, and Emanuele Viola: Bounded independence plus noise
fools products. SIAM J. Comput., 47(2):493–523, 2018. Preliminary version in CCC'17.
[doi:10.1137/17M1129088] 4
[8] Alexander Healy and Emanuele Viola: Constant-depth circuits for arithmetic in finite
fields of characteristic two. In Proc. 23rd Symp. Theoret. Aspects of Comp. Sci. (STACS'06), pp.
672–683. Springer, 2006. [doi:10.1007/11672142_55] 8
[9] Frederick C. Hennie: One-tape, off-line Turing machine computations. Inform. Control,
8(6):553–578, 1965. [doi:10.1016/S0019-9958(65)90399-2] 1
[10] Shlomo Hoory, Nathan Linial, and Avi Wigderson: Expander graphs and their applica-
tions. Bull. AMS, 43(4):439–561, 2006. [doi:10.1090/S0273-0979-06-01126-8] 9
[11] Russell Impagliazzo, Noam Nisan, and Avi Wigderson: Pseudorandomness for network
algorithms. In Proc. 26th STOC, pp. 356–364. ACM Press, 1994. [doi:10.1145/195058.195190]
2, 4, 9, 10
[12] Bala Kalyanasundaram and Georg Schnitger: The probabilistic communication complex-
ity of set intersection. SIAM J. Discr. Math., 5(4):545–557, 1992. [doi:10.1137/0405044] 1,
3
[13] Clemens Lautemann: BPP and the polynomial hierarchy. Inform. Process. Lett., 17(4):215–217,
1983. [doi:10.1016/0020-0190(83)90044-3] 4
[14] Richard Lipton: Old ideas and lower bounds on SAT. Gödel's lost letter and P=NP, 2010.
Author's website. 2
[15] Wolfgang Maass: Combinatorial lower bound arguments for deterministic and nonde-
terministic Turing machines. Trans. AMS, 292(2):675–693, 1985. [doi:10.2307/2000238] 1,
3
[16] Wolfgang Maass and Amir Schorr: Speed-up of Turing machines with one work tape and
a two-way input tape. SIAM J. Comput., 16(1):195–202, 1987. [doi:10.1137/0216016] 1, 2, 3, 4
[17] Dieter van Melkebeek: Time-space lower bounds for NP-complete problems. Current
Trends Theor. Comp. Sci., pp. 265–291, 2004. [doi:10.1142/9789812562494_0015] 8
[18] Dieter van Melkebeek: A survey of lower bounds for satisfiability and related problems.
Found. Trends Theor. Comp. Sci., 2(3):197–303, 2006. [doi:10.1561/0400000012] 4
[19] Dieter van Melkebeek and Ran Raz: A time lower bound for satisfiability. Theoret. Comput.
Sci., 348(2–3):311–320, 2005. [doi:10.1016/j.tcs.2005.09.020] 1, 2, 3, 4
[20] Joseph Naor and Moni Naor: Small-bias probability spaces: Efficient constructions and
applications. SIAM J. Comput., 22(4):838–856, 1993. [doi:10.1137/0222053] 8, 9
[21] Valery A. Nepomnyashchii: Rudimentary predicates and Turing calculations. Dokl. Akad.
Nauk SSSR (Russian), 195(2):282–284, 1970. MathNet.ru (Russian); English translation: Soviet
Math. Dokl. 11(6), 1970, pp. 1462–1465. 8
[22] Noam Nisan: Pseudorandom generators for space-bounded computation. Combinatorica,
12(4):449–461, 1992. [doi:10.1007/BF01305237] 4
[23] Wolfgang J. Paul, Nicholas Pippenger, Endre Szemerédi, and William T. Trotter: On
determinism versus non-determinism and related problems (preliminary version). In Proc.
24th FOCS, pp. 429–438. IEEE Comp. Soc., 1983. [doi:10.1109/SFCS.1983.39] 1
[24] Rahul Santhanam: On separators, segregators and time versus space. In Proc. 16th
IEEE Conf. on Comput. Complexity (CCC'01), pp. 286–294. IEEE Comp. Soc., 2001.
[doi:10.1109/CCC.2001.933895] 1
[25] Michael Sipser: A complexity theoretic approach to randomness. In Proc. 15th STOC, pp.
330–335. ACM Press, 1983. [doi:10.1145/800061.808762] 4
[26] Emanuele Viola: The Complexity of Hardness Amplification and Derandomization. Ph.D. thesis,
Harvard Univ., 2006. Author's website. 2
[27] Emanuele Viola: On approximate majority and probabilistic time. Comput. Complexity,
18(3):337–375, 2009. Preliminary version in CCC'07. [doi:10.1007/s00037-009-0267-3] 2, 4,
7, 8
[28] Ryan Williams: Inductive time-space lower bounds for SAT and related problems. Comput.
Complexity, 15(4):433–470, 2006. [doi:10.1007/s00037-007-0221-1] 1, 2
[29] Ryan Williams: Algorithms and Resource Requirements for Fundamental Problems. Ph.D. thesis,
Carnegie Mellon Univ., 2007. Author's website. 4
AUTHOR
Emanuele Viola
Associate professor
Khoury College of Computer Sciences
Northeastern University
Boston, MA
viola ccs neu edu
http://www.ccs.neu.edu/~viola
ABOUT THE AUTHOR
Emanuele Viola, called "Manu" by friends and colleagues, has been thinking about
these problems (on and off) for at least 15 years.