
Modal FRP for all: Functional reactive programming without space leaks in Haskell

Authors Patrick Bahr

License CC-BY-4.0

                 JFP 32, e15, 48 pages, 2022. © The Author(s), 2022. Published by Cambridge University Press. This is an Open
                 Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/
                 licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is
                 properly cited.
                 doi:10.1017/S0956796822000132



                         Modal FRP for all: Functional reactive
                       programming without space leaks in Haskell
                                                             PATRICK BAHR
                               Computer Science Department, IT University of Copenhagen, Copenhagen, Denmark
                                                          (e-mail: paba@itu.dk)



                                                                       Abstract

                 Functional reactive programming (FRP) provides a high-level interface for implementing reactive
                 systems in a declarative manner. However, this high-level interface has to be carefully reined in
                 to ensure that programs can in fact be executed in practice. Specifically, one must ensure that FRP
                 programs are causal and can be implemented without introducing space leaks. In recent years, modal
                 types have been demonstrated to be an effective tool to ensure these operational properties. In this
                 paper, we present Rattus, a modal FRP language that extends and simplifies previous modal FRP
                 calculi while still maintaining the operational guarantees of productivity, causality, and absence of space leaks.
                 The simplified type system makes Rattus a practical programming language that can be integrated
                 with existing functional programming languages. To demonstrate this, we have implemented a shal-
                 low embedding of Rattus in Haskell that allows the programmer to write Rattus code in familiar
                 Haskell syntax and seamlessly integrate it with regular Haskell code. Thus, Rattus combines the
                 benefits enjoyed by FRP libraries such as Yampa, namely access to a rich library ecosystem (e.g.,
                 for graphics programming), with the strong operational guarantees offered by a bespoke type sys-
                 tem. To establish the productivity, causality, and memory properties of the language, we prove type
                 soundness using a logical relations argument fully mechanised in the Coq proof assistant.




                                                                  1 Introduction
                 Reactive systems perform an ongoing interaction with their environment, receiving inputs
                 from the outside, changing their internal state, and producing output. Examples of such sys-
                 tems include GUIs, web applications, video games, and robots. Programming such systems
                 with traditional general-purpose imperative languages can be challenging: the components
                 of the reactive system are put together via a complex and often confusing web of call-
                 backs and shared mutable state. As a consequence, individual components cannot be easily
                 understood in isolation, which makes building and maintaining reactive systems in this
                 manner difficult and error-prone (Parent, 2006; Järvi et al., 2008).
                    Functional reactive programming (FRP), introduced by Elliott and Hudak (1997), tries
                 to remedy this problem by introducing time-varying values (called behaviours or signals)
                 and events as a means of communication between components in a reactive system instead
                 of shared mutable state and callbacks. Crucially, signals and events are first-class values
                 in FRP and can be freely combined and manipulated. These high-level abstractions not




https://doi.org/10.1017/S0956796822000132 Published online by Cambridge University Press

                      only provide a rich and expressive programming model but also make it possible for us to
                      reason about FRP programs by simple equational methods.
                         Elliott and Hudak’s original conception of FRP is an elegant idea that allows for direct
                      manipulation of time-dependent data but also immediately raises the question of what the
                      interface for signals and events should be. A naive approach would be to model discrete
                      signals as streams defined by the following Haskell data type:1
                      data Str a = a ::: (Str a)
                      A stream of type Str a thus consists of a head of type a and a tail of type Str a. The type
                      Str a encodes a discrete signal of type a, where each element of a stream represents the
                      value of that signal at a particular time.
                         Combined with the power of higher-order functional programming, we can easily
                      manipulate and compose such signals. For example, we may apply a function to the values
                      of a signal:
                      map :: (a → b) → Str a → Str b
                      map f (x ::: xs) = f x ::: map f xs
                         However, this representation is too permissive and allows the programmer to write non-
                      causal programs, that is, programs where the present output depends on future input such
                      as the following:
                      tomorrow :: Str Int → Str Int
                      tomorrow (x ::: xs) = xs
                      At each time step, this function takes the input of the next time step and returns it in the
                      current time step. In practical terms, this reactive program cannot be effectively executed
                      since we cannot compute the current value of the signal that it defines.
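                       The non-causality of tomorrow can be observed directly in plain Haskell (the helper names fromN and takeStr below are illustrative, not part of any library): the first element of the output is the second element of the input.

```haskell
-- Plain-Haskell sketch of the naive stream type and the non-causal 'tomorrow'.
data Str a = a ::: Str a
infixr 5 :::

-- The output at time 0 is the input at time 1: 'tomorrow' looks into the future.
tomorrow :: Str a -> Str a
tomorrow (_ ::: xs) = xs

-- Hypothetical helpers for experimentation:
fromN :: Int -> Str Int          -- the stream n, n+1, n+2, ...
fromN n = n ::: fromN (n + 1)

takeStr :: Int -> Str a -> [a]   -- observe a finite prefix of a stream
takeStr 0 _          = []
takeStr n (x ::: xs) = x : takeStr (n - 1) xs
```

                       For example, takeStr 3 (tomorrow (fromN 0)) yields [1,2,3]: each output element is the input value one step ahead.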
                         Much of the research in FRP has been dedicated to addressing this problem by ade-
                      quately restricting the interface that the programmer can use to manipulate signals. This
                      can be achieved by exposing only a carefully selected set of combinators to the program-
                      mer or by using a more sophisticated type system. The former approach has been very
                      successful in practice, not least because it can be readily implemented as a library in exist-
                      ing languages. This library approach also immediately integrates the FRP language with
                      a rich ecosystem of existing libraries and inherits the host language’s compiler and tools.
                      The most prominent example of this approach is Arrowised FRP (Nilsson et al., 2002),
                      as implemented in the Yampa library for Haskell (Hudak et al., 2004), which takes signal
                      functions as primitive rather than signals themselves. However, this library approach for-
                      feits some of the simplicity and elegance of the original FRP model as it disallows direct
                      manipulation of signals.
                         More recently, an alternative to this library approach has been developed (Jeffrey, 2014;
                      Krishnaswami and Benton, 2011; Krishnaswami et al., 2012; Krishnaswami, 2013; Jeltsch,
                       2013; Bahr et al., 2019, 2021) that uses a modal type operator ○, which captures the notion
                       of time. Following this idea, an element of type ○a represents data of type a arriving in
                       the next time step. Signals are then modelled by the type of streams defined instead as
                       follows:
                      1   Here ::: is a data constructor written as a binary infix operator.





                  data Str a = a ::: (○(Str a))
                 That is, a stream of type Str a is an element of type a now and a stream of type Str a later,
                 thus separating consecutive elements of the stream by one time step. Combining this modal
                 type with guarded recursion (Nakano, 2000) in the form of a fixed point operator of type
                  (○a → a) → a gives a powerful type system for reactive programming that guarantees not
                 only causality, but also productivity, that is, the property that each element of a stream can
                 be computed in finite time.
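                  To give a rough intuition (this is a plain-Haskell sketch, not modal FRP code): if the ○ modality is erased, the guarded fixed point operator collapses to the ordinary fixed point combinator, and it is the guarding of the recursive occurrence behind a constructor that makes definitions productive.

```haskell
-- Plain-Haskell sketch: with the modality erased, the guarded fixpoint
-- is just the ordinary 'fix'.
data Str a = a ::: Str a
infixr 5 :::

fix :: (a -> a) -> a
fix f = f (fix f)

-- Productive: the recursive occurrence is guarded by the constructor ':::'.
ones :: Str Int
ones = fix (\s -> 1 ::: s)

takeStr :: Int -> Str a -> [a]   -- hypothetical helper to observe a prefix
takeStr 0 _          = []
takeStr n (x ::: xs) = x : takeStr (n - 1) xs
```

                  In plain Haskell nothing stops us from writing fix (\s -> s), which diverges; the guarded type (○a → a) → a rules such definitions out statically.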
                    Causality and productivity of an FRP program means that it can be effectively imple-
                 mented and executed. However, for practical purposes, it is also important whether it can
                 be implemented with given finite resources. If a reactive program requires an increasing
                 amount of memory or computation time, it will eventually run out of resources to make
                 progress or take too long to react to input. It will grind to a halt. Since FRP programs
                 operate on a high level of abstraction, it is typically quite difficult to reason about their
                 space and time cost. A reactive program that exhibits a gradually slower response time,
                 that is, its computations take longer and longer as time progresses, is said to have a time
                 leak. Similarly, we say that a reactive program has a space leak, if its memory use is gradu-
                 ally increasing as time progresses, for example, if it holds on to memory while continually
                 allocating more.
                    Within both lines of work – the library approach and the modal types approach – there
                 has been an effort to devise FRP languages that avoid implicit space leaks. We say that
                 a space leak is implicit if it is caused not by explicit memory allocations intended by the
                 programmer but rather by the implementation of the FRP language holding on to old data.
                 This is difficult to prevent in a higher-order language as closures may capture references
                 to old data, which consequently must remain in memory for as long as the closure might
                 be invoked. In addition, the language has to carefully balance eager and lazy evaluation:
                 While some computations must necessarily be delayed to wait for input to arrive, we run
                 the risk of needing to keep intermediate values in memory for too long unless we perform
                  computations as soon as all required data have arrived. Ploeg and Claessen (2015) devised
                  an FRP library for Haskell that avoids implicit space leaks by carefully restricting the API
                  for manipulating events and signals. Based on the modal operator ○ described above,
                  Krishnaswami (2013) has devised a modal FRP calculus that permits an aggressive garbage
                  collection strategy that rules out implicit space leaks.

                 Contributions. In this paper, we present Rattus, a practical modal FRP language that
                 takes its ideas from the modal FRP calculi of Krishnaswami (2013) and Bahr et al. (2019,
                 2021) but with a simpler and less restrictive type system that makes it attractive to use
                 in practice. Like the Simply RaTT calculus of Bahr et al., we use a Fitch-style type sys-
                 tem (Clouston, 2018), which extends typing contexts with tokens to avoid the syntactic
                 overhead of the dual-context-style type system of Krishnaswami (2013). In addition, we
                 further simplify the typing system by (1) only requiring one kind of token instead of two,
                 (2) allowing tokens to be introduced without any restrictions, and (3) generalising the
                 guarded recursion scheme. The resulting calculus is simpler and more expressive, yet still
                 retains the operational guarantees of the earlier calculi, namely productivity, causality,
                 and admissibility of an aggressive garbage collection strategy that prevents implicit space





                      leaks. We have proved these properties by a logical relations argument formalised using
                      the Coq theorem prover (see supplementary material and Appendix B).
                         To demonstrate its use as a practical programming language, we have implemented
                      Rattus as an embedded language in Haskell. This implementation consists of a library that
                      implements the primitives of the language along with a plugin for the GHC Haskell com-
                      piler. The latter is necessary to check the more restrictive variable scope rules of Rattus
                      and to ensure the eager evaluation strategy that is necessary to obtain the operational prop-
                      erties. Both components are bundled in a single Haskell library that allows the programmer
                      to seamlessly write Rattus code alongside Haskell code. We further demonstrate the use-
                      fulness of the language with a number of case studies, including an FRP library based
                      on streams and events as well as an arrowised FRP library in the style of Yampa. We
                      then use both FRP libraries to implement a primitive game. The two libraries implemented
                      in Rattus also demonstrate different approaches to FRP libraries: discrete time (streams)
                      versus continuous time (Yampa); and first-class signals (streams) versus signal functions
                      (Yampa). The Rattus Haskell library and all examples are included in the supplementary
                      material.

                      Overview of paper. Section 2 gives an overview of the Rattus language introducing the
                      main concepts and their intuitions. Section 3 presents a case study of a simple FRP library
                      based on streams and events, as well as an arrowised FRP library. Section 4 presents the
                      underlying core calculus of Rattus including its type system, its operational semantics,
                      and our main metatheoretical results: productivity, causality, and absence of implicit space
                      leaks. We then reflect on these results and discuss the language design of Rattus. Section 5
                      gives an overview of the proof of our metatheoretical results. Section 6 describes how
                      Rattus has been implemented as an embedded language in Haskell. Section 7 reviews
                      related work and Section 8 discusses future work.


                                                                 2 Introduction to Rattus
                      To illustrate Rattus, we will use example programs written in the embedding of the
                      language in Haskell. The type of streams is at the centre of these example programs:
                       data Str a = !a ::: !(○(Str a))
                      The annotation with bangs (!) ensures that the constructor ::: is strict in both its arguments.
                      We will have a closer look at the evaluation strategy of Rattus in Section 2.2.
                         The simplest stream one can define just repeats the same value indefinitely. Such a
                      stream is constructed by the constInt function below, which takes an integer and produces
                      a constant stream that repeats that integer at every step:
                      constInt :: Int → Str Int
                      constInt x = x ::: delay (constInt x)
                       Because the tail of a stream of integers must be of type ○(Str Int), we have to use delay,
                       which is the introduction form for the type modality ○. Intuitively speaking, delay moves
                       a computation one time step into the future. We could think of delay having type a → ○a,
                       but this type is too permissive as it can cause space leaks. It would allow us to move





                 arbitrary computations – and the data they depend on – into the future. Instead, the typing
                  rule for delay is formulated as follows:

                                                    Γ, ✓ ⊢ t :: A
                                                 ──────────────────────
                                                  Γ ⊢ delay t :: ○A

                  This is a characteristic example of a Fitch-style typing rule (Clouston, 2018): it introduces
                  the token ✓ (pronounced ‘tick’) in the typing context Γ. A typing context consists of type
                  assignments of the form x :: A, but it can also contain several occurrences of ✓. We can
                  think of ✓ as denoting the passage of one time step, that is, all variables to the left of ✓
                  are one time step older than those to the right. In the above typing rule, the term t does not
                  have access to these ‘old’ variables in Γ. There is, however, an exception: if a variable in
                  the typing context is of a type that is time-independent, we still allow t to access them –
                  even if the variable is one time step old. We call these time-independent types stable types,
                  and in particular all base types such as Int and Bool are stable. We will discuss stable types
                  in more detail in Section 2.1.
                     Formally, the variable introduction rule of Rattus reads as follows:

                                                 Γ′ tick-free or A stable
                                               ────────────────────────────
                                                Γ, x :: A, Γ′ ⊢ x :: A

                  That is, if x is not of a stable type and appears to the left of a ✓, then it is no longer in
                  scope.
                    Turning back to our definition of the constInt function, we can see that the recursive
                  call constInt x must be of type Str Int in the context Γ, ✓, where Γ contains x :: Int. So x
                  remains in scope because it is of type Int, which is a stable type. This would not be the
                 case if we were to generalise constInt to arbitrary types:
                 leakyConst :: a → Str a
                 leakyConst x = x ::: delay (leakyConst x) -- the rightmost occurrence of x is out of scope
                  In this example, x is of type a and therefore goes out of scope under delay: since a is
                  not necessarily stable, x :: a is blocked by the ✓ introduced by delay. We can see that
                 leakyConst would indeed cause a space leak by instantiating it to the type leakyConst ::
                 Str Int → Str (Str Int): at each time step n, it would have to store all previously observed
                 input values from time step 0 to n − 1, thus making its memory usage grow linearly with
                 time. To illustrate this on a concrete example, assume that leakyConst is fed the stream
                 of numbers 0, 1, 2, . . . as input. Then, the resulting stream of type Str (Str Int) contains at
                 each time step n the same stream 0, 1, 2, . . . . However, the input stream arrives one integer
                 at a time. So at time n, the input stream would have advanced to n, n + 1, n + 2, . . . , that
                 is, the next input to arrive is n. Consequently, the implementation of leakyConst would
                 need to have stored the previous values 0, 1, . . . n − 1 of the input stream.
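                  The history retention can be simulated in plain Haskell, where lazy streams make the leaky definition legal (fromN, headStr and dropStr are illustrative helpers, not library functions): every element of the output stream is the entire input stream from time 0, so an implementation receiving the input one value at a time would have to buffer all past values.

```haskell
-- Plain-Haskell simulation of the leaky constant stream.
data Str a = a ::: Str a
infixr 5 :::

leakyConst :: a -> Str a
leakyConst x = x ::: leakyConst x

fromN :: Int -> Str Int              -- the input stream n, n+1, n+2, ...
fromN n = n ::: fromN (n + 1)

headStr :: Str a -> a
headStr (x ::: _) = x

dropStr :: Int -> Str a -> Str a
dropStr 0 s          = s
dropStr n (_ ::: xs) = dropStr (n - 1) xs

-- At time step 5 the output element still begins with the value from time 0,
-- so all earlier input values must have been kept in memory.
atStep5 :: Int
atStep5 = headStr (headStr (dropStr 5 (leakyConst (fromN 0))))
```

                  Here atStep5 evaluates to 0: even five steps in, the output still refers back to the very first input value.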
                    The definition of constInt also illustrates the guarded recursion principle used in Rattus.
                  For a recursive definition to be well-typed, all recursive calls have to occur in the presence
                  of a ✓ – in other words, recursive calls have to be guarded by delay. This restriction ensures
                 that all recursive functions are productive, which means that each element of a stream can
                 be computed in finite time. If we did not have this restriction, we could write the following
                 obviously unproductive function:





                      loop :: Str Int
                      loop = loop -- unguarded recursive call to loop is not allowed
                      The recursive call to loop does not occur under a delay and is thus rejected by the type
                      checker.
                         Let’s consider an example program that transforms streams. The function inc below
                      takes a stream of integers as input and increments each integer by 1:
                      inc :: Str Int → Str Int
                      inc (x ::: xs) = (x + 1) ::: delay (inc (adv xs))
                       Here we have to use adv, the elimination form for ○, to convert the tail of the input stream
                       from type ○(Str Int) into type Str Int. Again we could think of adv having type ○a → a,
                       but this general type would allow us to write non-causal functions such as the tomorrow
                      function we have seen in the introduction:
                      tomorrow :: Str Int → Str Int
                      tomorrow (x ::: xs) = adv xs -- adv is not allowed here
                      This function looks one time step ahead so that the output at time n depends on the input
                      at time n + 1.
                          To ensure causality, adv is restricted to contexts containing a ✓:

                                             Γ ⊢ t :: ○A        Γ′ tick-free
                                           ──────────────────────────────────
                                               Γ, ✓, Γ′ ⊢ adv t :: A

                       Not only does adv require a ✓, but it also causes all bound variables to the right of the ✓
                       to go out of scope. Intuitively speaking, delay looks ahead one time step and adv then
                       allows us to go back to the present. Variable bindings made in the future are therefore not
                       accessible once we have returned to the present.
                          Note that adv causes the variables to the right of the ✓ to go out of scope forever, whereas
                       it brings variables back into scope that were previously blocked by the ✓. That is, variables
                       that go out of scope due to delay can be brought back into scope by adv.


                                                                       2.1 Stable types
                       We haven’t yet made precise what stable types are. To a first approximation, types are
                       stable if they do not contain ○ or function types. Intuitively speaking, ○ expresses a
                       temporal aspect and thus types containing ○ are not time-invariant. Moreover, functions
                       can implicitly have temporal values in their closure and are therefore also excluded from
                       stable types.
                         However, as a consequence, we cannot implement the map function that takes a function
                      f :: a → b and applies it to each element of a stream of type Str a, because it would require
                      us to apply the function f at any time in the future. We cannot do this because a → b is not
                      a stable type (even if a and b were stable) and therefore f cannot be transported into the
                       future. However, Rattus has the type modality □, pronounced ‘box’, that turns any type A
                       into a stable type □A. Using the □ modality, we can implement map as follows:





                  map :: □(a → b) → Str a → Str b
                  map f (x ::: xs) = unbox f x ::: delay (map f (adv xs))
                  Instead of a function of type a → b, map takes a boxed function f of type □(a → b) as its
                  argument. That means, f is still in scope under the delay because it is of a stable type. To
                  use f , it has to be unboxed using unbox, which is the elimination form for the □ modality
                  and simply has type □a → a, without any restrictions.
                    The corresponding introduction form for □ does come with some restrictions. It has to
                  make sure that boxed values only refer to variables of a stable type:

                                                     Γ□ ⊢ t :: A
                                                 ───────────────────
                                                  Γ ⊢ box t :: □A

                  Here, Γ□ denotes the typing context that is obtained from Γ by removing all variables of
                  non-stable types and all ✓ tokens:

                                            ⎧
                                            ⎨ Γ□, x :: A   if A stable
                    ·□ = ·    (Γ, x :: A)□ =                              (Γ, ✓)□ = Γ□
                                            ⎩ Γ□           otherwise

                  Thus, for a well-typed term box t, we know that t only accesses variables of stable type.
                   For example, we can implement the inc function using map as follows:
                 inc :: Str Int → Str Int
                 inc = map (box (+1))
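                  As a rough plain-Haskell model (Box and map' are illustrative names; in real Rattus, □ is a primitive whose scoping rules are checked by the compiler plugin), the □ modality can be erased to a simple wrapper, which shows how box and unbox fit together operationally:

```haskell
-- Plain-Haskell sketch: the box modality erased to a newtype wrapper.
data Str a = a ::: Str a
infixr 5 :::

newtype Box a = Box a

box :: a -> Box a        -- in Rattus, box additionally restricts free variables
box = Box

unbox :: Box a -> a      -- corresponds to unbox, which has no restrictions
unbox (Box x) = x

-- map over streams, taking a boxed function as in the text.
map' :: Box (a -> b) -> Str a -> Str b
map' f (x ::: xs) = unbox f x ::: map' f xs

fromN :: Int -> Str Int          -- hypothetical helper: the stream n, n+1, ...
fromN n = n ::: fromN (n + 1)

takeStr :: Int -> Str a -> [a]   -- hypothetical helper to observe a prefix
takeStr 0 _          = []
takeStr n (x ::: xs) = x : takeStr (n - 1) xs
```

                  For instance, takeStr 3 (map' (box (+1)) (fromN 0)) yields [1,2,3]. What this erased model cannot show is precisely what the type system adds: that the boxed function may not capture temporal values.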
                  Using the □ modality, we can also generalise the constant stream function to arbitrary
                  boxed types:
                  constBox :: □a → Str a
                 constBox a = unbox a ::: delay (constBox a)
                 Alternatively, we can make use of the Stable type class, to constrain type variables to stable
                 types:
                 const :: Stable a ⇒ a → Str a
                 const x = x ::: delay (const x)
                 Since the type of streams is not stable, the restriction to stable types disallows the instan-
                 tiation of the const function to the type Str Int → Str (Str Int), which as we have seen
                  earlier would cause a memory leak. By contrast, constBox can be instantiated to the type
                  □(Str Int) → Str (Str Int). This is unproblematic since a value of type □(Str Int) is a sus-
                  pended, time-invariant computation that produces an integer stream. In other words, this
                 computation is independent of any external input and can thus be executed at any time in
                 the future without keeping old temporal values in memory.
                    So far, we have only looked at recursive definitions at the top level. Recursive definitions
                 can also be nested, but we have to be careful how such nested recursion interacts with the
                 typing environment. Below is an alternative definition of map that takes the boxed function
                 f as an argument and then calls the run function that recurses over the stream:
                  map :: □(a → b) → Str a → Str b
                 map f = run





                          where run :: Str a → Str b
                                run (x ::: xs) = unbox f x ::: delay (run (adv xs))
                       Here, run is type-checked in a typing environment Γ that contains f :: □(a → b). Since run
                       is defined by guarded recursion, we require that its definition must type-check in the typing
                       context Γ□. Because f is of a stable type, it remains in Γ□ and is thus in scope in the
                       definition of run. That is, guarded recursive definitions interact with the typing environment
                      in the same way as box, which ensures that such recursive definitions are stable and can
                      thus safely be executed at any time in the future. As a consequence, the type checker will
                      prevent us from writing the following leaky version of map:
                       leakyMap :: □(a → b) → Str a → Str b
                      leakyMap f = run
                         where run :: Str a → Str b
                               run (x ::: xs) = f x ::: delay (run (adv xs)) -- f is no longer in scope here
                      The type of f is not stable, and thus it is not in scope in the definition of run.
                         Note that top-level defined identifiers such as map and const are in scope in any context
                       after they are defined regardless of whether there is a ✓ or whether they are of a stable
                      type. One can think of top-level definitions being implicitly boxed when they are defined
                      and implicitly unboxed when they are used later on.


                                                                2.2 Operational semantics
                       As we have seen in the examples above, the purpose of the type modalities ○ and □ is
                       to ensure that Rattus programs are causal, productive and without implicit space leaks. In
                       simple terms, the latter means that temporal values, that is, values of type ○A, are safe to
                       be garbage collected after two time steps. In particular, input from a stream can be safely
                      garbage collected one time step after it has arrived. This memory property is made precise
                      later in Section 4 along with a precise definition of the operational semantics of Rattus.
                         To obtain this memory property, Rattus uses an eager evaluation strategy except for
                      delay and box. That is, arguments are evaluated to values before they are passed on to
                      functions, but special rules apply to delay and box. In addition, we only allow strict data
                      types in Rattus, which explains the use of strictness annotations in the definition of Str.
                      This eager evaluation strategy ensures that we do not have to keep intermediate values in
                      memory for longer than one time step.
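                       The concern addressed by eager evaluation can be illustrated with a standard plain-Haskell example (unrelated to Rattus's API, but exhibiting the same phenomenon): a lazy accumulator builds a growing chain of unevaluated thunks, while a strict one keeps only the current value in memory.

```haskell
{-# LANGUAGE BangPatterns #-}

-- Lazy accumulator: builds the thunk (((0 + 1) + 2) + ...) and only
-- forces it at the very end, retaining the whole chain in memory.
sumLazy :: [Int] -> Int
sumLazy = go 0
  where go acc []       = acc
        go acc (x : xs) = go (acc + x) xs

-- Strict accumulator: the bang pattern forces acc at every step,
-- so only a single machine integer is retained.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where go !acc []       = acc
        go !acc (x : xs) = go (acc + x) xs
```

                       Both functions compute the same result, but their memory behaviour differs: Rattus's eager evaluation strategy and strict data types make the second behaviour the default.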
                         Following the temporal interpretation of the  modality, its introduction form delay
                      does not eagerly evaluate its argument since we may have to wait until input data arrives.
                      For example, in the following function, we cannot evaluate adv x + 1 until the integer value
                      of x :: ○Int arrives, which is one time step from now:
                      delayInc :: ○Int → ○Int
                      delayInc x = delay (adv x + 1)
                      However, evaluation is only delayed by one time step, and this delay is reversed by adv.
                      For example, adv (delay (1 + 1)) evaluates immediately to 2.




https://doi.org/10.1017/S0956796822000132 Published online by Cambridge University Press

                     Turning to box, we can see that it needs to lazily evaluate its argument in order to
                  maintain the memory property of Rattus: in the expression box (delay 1) of type □(○Int),
                  we should not evaluate delay 1 right away. As mentioned above, values of type ○Int should
                  be garbage collected after two time steps. However, boxed types are stable and can thus
                  be moved arbitrarily into the future. Hence, by the time this boxed value is unboxed in the
                  future, we might have already garbage collected the value of type ○Int it contains.
                    The modal FRP calculi of Krishnaswami (2013) and Bahr et al. (2019, 2021) have a
                  similar operational semantics to achieve the same memory property as Rattus. However,
                 Rattus uses a slightly more eager evaluation strategy for delay: recall that delay t delays
                 the computation t by one time step and that adv reverses such a delay. The operational
                 semantics of Rattus reflects this intuition by first evaluating every term t that occurs
                 as delay (. . . adv t . . . ) before evaluating delay. In other words, delay (. . . adv t . . . ) is
                 equivalent to
                                                        let x = t in delay (. . . adv x . . . )
                    This adjustment of the operational semantics of delay is important, as it allows us to lift
                 the restrictions present in previous calculi (Krishnaswami, 2013; Bahr et al., 2019, 2021)
                 that disallow guarded recursive definitions to ‘look ahead’ more than one time step. In the
                  Fitch-style calculi of Bahr et al. (2019, 2021), this can be seen in the restriction to allow
                  at most one ✓ in the typing context. For the same reason, these two calculi also disallow
                  function definitions in the context of a ✓. As a consequence, terms like delay(delay 0) and
                  delay(λx.x) do not type-check in the calculi of Bahr et al. (2019, 2021).
                     The extension in expressive power afforded by Rattus's slightly more eager evaluation
                  strategy has immediate practical benefits: most importantly, there are no restrictions
                  on where one can define functions. Secondly, we can write recursive functions that look
                  several steps into the future:
                 stutter :: Int → Str Int
                 stutter n = n ::: delay (n ::: delay (stutter (n + 1)))
                 Applying stutter to 0 would construct a stream of numbers 0, 0, 1, 1, 2, 2, . . .. In order to
                 implement stutter in the more restrictive language of Krishnaswami (2013) and Bahr et al.
                 (2019, 2021), we would need to decompose it into two mutually recursive functions (Bahr
                  et al., 2021). A more detailed comparison of the expressive power of these calculi can be
                  found in Section 4.5.
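To make the look-ahead behaviour concrete, here is a plain-Haskell sketch of stutter, under the assumption that ordinary lazy lists stand in for Rattus streams and laziness plays the role of delay (the modal types are erased):

```haskell
-- Plain-Haskell sketch: lazy lists model Rattus streams; the tail of (:) is
-- implicitly delayed by laziness, so the two-step look-ahead is just two conses.
stutter :: Int -> [Int]
stutter n = n : n : stutter (n + 1)
```

Here take 6 (stutter 0) yields the prefix 0, 0, 1, 1, 2, 2 described above.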
                     At first glance, one might think that allowing multiple ticks could be accommodated
                  without changing the evaluation of delay, by instead extending the time one has to keep
                  temporal values in memory accordingly, that is, we may safely garbage collect temporal
                  values after n + 1 time steps, if we allow at most n ticks. However, as we will demonstrate
                 in Section 4.3.3, this is not enough. Even allowing just two ticks would require us to keep
                 temporal values in memory indefinitely, that is, it would permit implicit space leaks. On
                 the other hand, given the more eager evaluation strategy for delay, we can still garbage
                 collect all temporal values after two time steps, no matter how many ticks were involved
                 in type-checking the program.








                                                       Fig. 1. Small library for streams and events.



                                                         3 Reactive programming in Rattus
                      In this section, we showcase how Rattus can be used for reactive programming. To this
                      end, we implement a small library of combinators for programming with streams and
                      events. We then use this library to implement a simple game. Finally, we implement a
                      Yampa-style library and re-implement the game using that library instead. The full sources
                      of both implementations of the game are included in the supplementary material along with
                      a third variant that uses a combinator library based on monadic streams (Perez et al., 2016).


                                                     3.1 Programming with streams and events
                      To illustrate how Rattus facilitates working with streams and events, we have implemented
                      a small set of combinators, as shown in Figure 1. The map function should be familiar by
                      now. The zip function combines two streams similarly to Haskell’s zip function defined on
                      lists. Note however that instead of the normal pair type, we use a strict pair type:
                      data a ⊗ b = !a ⊗ !b
                      It is like the normal pair type (a, b), but when constructing a strict pair s ⊗ t, the two
                      components s and t are evaluated to weak head normal form.
                         The scan function is similar to Haskell’s scanl function on lists: given a stream of values
                      v0 , v1 , v2 , ..., the expression scan (box f ) v computes the stream
                                                  f v v0 ,     f (f v v0 ) v1 ,    f (f (f v v0 ) v1 ) v2 ,   ...
                       If one wants a variant of scan that is closer to Haskell's scanl, that is, the result starts
                       with the value v instead of f v v0 , one can simply replace the first occurrence of acc′ in the





                 definition of scan with acc. Note that the type b has to be stable in the definition of scan
                 so that acc :: b is still in scope under delay.
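As a cross-check of this specification, the following list-based sketch mirrors scan with the boxing removed (plain Haskell functions need no stability assumption, and lists model streams):

```haskell
-- List-based sketch of scan: acc' is the running accumulator that is both
-- emitted and threaded to the rest of the stream.
scanStr :: (b -> a -> b) -> b -> [a] -> [b]
scanStr f acc (x : xs) = let acc' = f acc x in acc' : scanStr f acc' xs
scanStr _ _   []       = []
```

For example, scanStr (+) 0 [1, 2, 3] yields [1, 3, 6]; replacing the first acc' in the body with acc gives the scanl-like variant that starts with the initial value instead.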
                    A central component of FRP is that it must provide a way to react to events. In particular,
                 it must support the ability to switch behaviour in response to the occurrence of an event.
                 There are different ways to represent events. The simplest representation defines events of
                 type a as streams of type Maybe a. However, we will use the strict variant of the Maybe
                 type:
                 data Maybe a = Just !a | Nothing
                    We can then devise a simple switch combinator that reacts to events. Given an initial
                 stream xs and an event e that may produce a stream, switch xs e initially behaves as xs but
                 changes to the new stream provided by the occurrence of an event. In this implementation,
                 the behaviour changes every time an event occurs, not only the first time. For a one-shot
                 variant of switch, we would just have to change the second equation to
                  switch _ (Just as ::: _) = as
                      In the definition of switch, we use the applicative operator ⊛ defined as follows:
                  (⊛) :: ○(a → b) → ○a → ○b
                  f ⊛ x = delay ((adv f ) (adv x))
                  Instead of using ⊛, we could also have written delay (switch (adv xs) (adv fas)).
                     Finally, switchTrans is a variant of switch that switches to a new stream function rather
                  than just a stream. It is implemented using the variant switchTrans′, which takes just a
                  stream as its first argument instead of a stream function.
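The switching combinators can likewise be sketched with lists, where an event stream is a list of Maybe values; this is a simplification of the Rattus definitions (no delay, no strict types, and switching repeats on every event, as described above):

```haskell
-- List sketch of switch: at each step, either continue with the current
-- stream (Nothing) or abandon it for the newly supplied one (Just).
switch :: [a] -> [Maybe [a]] -> [a]
switch xs       []                   = xs
switch (x : xs) (Nothing : es)       = x : switch xs es
switch _        (Just (a : as) : es) = a : switch as es
switch _        _                    = []
```

For instance, switch [1,2,3,4] [Nothing, Just [9,8,7], Nothing] yields [1,9,8,7]: the event at the second step replaces the remainder of the original stream.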


                                                        3.2 A simple reactive program
                 To put our bare-bones FRP library to use, let’s implement a simple single-player variant
                 of the classic game Pong: the player has to move a paddle at the bottom of the screen to
                 bounce a ball and prevent it from falling.2 The core behaviour is described by the following
                 stream function:
                 pong :: Str Input → Str (Pos ⊗ Float)
                 pong inp = zip ball pad where
                   pad :: Str Float
                   pad = padPos inp
                   ball :: Str Pos
                   ball = ballPos (zip pad inp)
                 It receives a stream of inputs (button presses and how much time has passed since the
                 last input) and produces a stream of pairs consisting of the 2D position of the ball and
                 the x coordinate of the paddle. Its implementation uses two helper functions to compute
                 these two components. The position of the paddle only depends on the input, whereas the
                 position of the ball also depends on the position of the paddle (since it may bounce off it):

                 2   So it is rather like Breakout, but without the bricks.





                       padPos :: Str Input → Str Float
                      padPos = map (box fst ) ◦ scan (box padStep) (0 ⊗ 0)
                      padStep :: (Float ⊗ Float) → Input → (Float ⊗ Float)
                      padStep (pos ⊗ vel) inp = ...
                      ballPos :: Str (Float ⊗ Input) → Str Pos
                      ballPos = map (box fst ) ◦ scan (box ballStep) ((0 ⊗ 0) ⊗ (20 ⊗ 50))
                      ballStep :: (Pos ⊗ Vel) → (Float ⊗ Input) → (Pos ⊗ Vel)
                      ballStep (pos ⊗ vel) (pad ⊗ inp) = ...
                      Both auxiliary functions follow the same structure. They use a scan to compute the position
                      and the velocity of the object, while consuming the input stream. The velocity is only
                      needed to compute the position and is therefore projected away afterward using map. Here,
                      fst is the first projection for the strict pair type. We can see that the ball starts at the centre
                      of the screen (at coordinates (0, 0)) and moves toward the upper right corner (with velocity
                      (20, 50)).
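A one-dimensional analogue of this scan-based structure (hypothetical helper names step and positions; plain pairs instead of strict pairs) shows how the velocity is threaded through the accumulator and projected away at the end:

```haskell
-- State is (position, velocity); each input is the elapsed time step dt.
step :: (Double, Double) -> Double -> (Double, Double)
step (pos, vel) dt = (pos + vel * dt, vel)

-- Fold the steps over the input stream and keep only the positions,
-- mirroring the  map (box fst) . scan (box ...)  pattern above.
positions :: (Double, Double) -> [Double] -> [Double]
positions s = map fst . tail . scanl step s
```

For example, positions (0, 2) [1, 1, 1] yields [2, 4, 6].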
                         This simple game only requires a static dataflow network. To demonstrate the ability to
                      express dynamically changing dataflow networks in Rattus, we briefly discuss three refine-
                      ments of the pong game, each of which introduces different kinds of dynamic behaviours.
                      In each case, we reuse the existing stream functions that describe the basic behaviour of
                      the ball and the paddle, but we combine them using different combinators. Thus, these
                      examples also demonstrate the modularity that Rattus affords.

                      First refinement. We change the implementation of pong so that it allows the player to
                      reset the game, for example, after the ball has fallen off the screen:
                      pong1 :: Str Input → Str (Pos ⊗ Float)
                      pong1 inp = zip ball pad where
                        pad = padPos inp
                        ball = switchTrans ballPos                                         -- starting ball behaviour
                                            (map (box ballTrig) inp)                       -- trigger restart on pressing reset button
                                            (zip pad inp)                                  -- input to the switch
                      ballTrig :: Input → Maybe (Str (Float ⊗ Input) → Str Pos)
                      ballTrig inp = if reset inp then Just ballPos else Nothing
                      To achieve this behaviour, we use the switchTrans combinator, which we initialise
                       with the original behaviour of the ball. The event that will trigger the switch is constructed
                       by mapping ballTrig over the input stream, which creates an event of type
                       Event (Str (Float ⊗ Input) → Str Pos) that is triggered every time the player hits
                       the reset button.

                      Second refinement. 3 We can further refine this behaviour of the game so that it automat-
                      ically resets once the ball has fallen off the screen. This requires a feedback loop since the
                      behaviour of the ball depends on the current position of the ball. Such a feedback loop can

                      3   See Haskell-library/examples/game directory in the supplementary material.





                 be constructed using a variant of the switchTrans combinator that takes a delayed event as
                 argument:
                  dswitchTrans :: (Str a → Str b) → ○(Event (Str a → Str b)) → (Str a → Str b)
                 Thus, the event we pass on to dswitchTrans can be constructed from the output of
                 dswitchTrans by using guarded recursion, which closes the feedback loop:
                 pong2 :: Str Input → Str (Pos ⊗ Float)
                 pong2 inp = zip ball pad where
                   pad = padPos inp
                   ball = dswitchTrans ballPos
                                        (delay (map (box ballTrig′) (pong2 (adv (tl inp)))))
                                        (zip pad inp)
                  ballTrig′ :: Pos ⊗ Float → Maybe (Str (Float ⊗ Input) → Str Pos)
                  ballTrig′ ((_ ⊗ y) ⊗ _) = if y < bottomScreenEdge then Just ballPos else Nothing


                 Final refinement. 4 Finally, we change the game a bit so as to allow the player to spawn
                 new balls at any time and remove balls that are currently in play. To this end, we introduce
                 a type ObjAction a b that represents events that may remove or add objects defined by a
                 stream function of type Str a → Str b:
                 data ObjAction a b = Remove | Add !(Str a → Str b)
                 To keep it simple, we only have two operations: we may add an object by giving its stream
                 function, or we may remove the oldest object. Given the strict list type
                 data List a = Nil | Cons !a !(List a)
                 we can implement a combinator that can react to such events:
                 objTrans :: Event (ObjAction a b) → List (Str b) → Str a → Str (List b)
                 This combinator takes three arguments: an event that may trigger operations to manipulate
                 the list of active objects, the initial list of active objects, and the input stream. The result is
                 a stream that at each step contains a list with the current value of each active object.
                    To use this combinator, we revise the ballTrig function so that it produces events to add
                 or remove balls in response to input by the player, and we initialise the list of objects as
                 the empty list:
                 pong3 :: Str Input → Str (List Pos ⊗ Float)
                 pong3 inp = zip balls pad where
                   pad :: Str Float
                   pad = padPos inp
                   balls :: Str (List Pos)
                   balls = objTrans (map (box ballTrig ) inp) Nil (zip pad inp)
                 ballTrig :: Input → Maybe (ObjAction (Float ⊗ Input) Pos)
                 ballTrig inp | addBall inp   = Just (Add ballPos)
                 4   See Haskell-library/examples/game-multi directory in the supplementary material.





                                        | removeBall inp = Just Remove
                                        | otherwise      = Nothing
                      Instead of a stream of type Str Pos to describe a single ball, we now have a stream of type
                      Str (List Pos) to describe multiple balls.
                         The objTrans combinator demonstrates that Rattus can accommodate dataflow net-
                      works that dynamically grow and shrink. But more realistic variants of this combinator
                      can be implemented as well. For example, by replacing List with a Map data structure,
                      objects may have identifiers and can be removed or manipulated based on that identi-
                      fier. Orthogonal to that, we may also allow objects to destroy themselves and spawn new
                      objects. This can be achieved by describing objects using trees instead of streams:
                       data Tree a = End | Next !a !(○(Tree a)) | Branch !a !(○(Tree a)) !(○(Tree a))
                      A variant of objTrans can then be implemented where the type Str b in objTrans and
                      ObjAction a b is replaced by Tree b. For example, this can be used in the pong game to
                      make balls destroy themselves once they disappear from screen or make a ball split into
                      two balls when it hits a certain surface.
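A simplified list-based sketch of objTrans (assumption: objects are modelled directly as streams of outputs rather than stream functions, and removal drops the oldest object, as in the text):

```haskell
-- An event may remove the oldest object or add a new one.
data ObjAction b = Remove | Add [b]

-- At each step: apply the event to the object list, emit every object's
-- current head, and continue with the tails.
objTrans :: [Maybe (ObjAction b)] -> [[b]] -> [[b]]
objTrans []       _    = []
objTrans (e : es) objs =
  let objs' = case e of
        Just Remove  -> drop 1 objs
        Just (Add o) -> objs ++ [o]
        Nothing      -> objs
  in map head objs' : objTrans es (map tail objs')
```

For instance, objTrans [Nothing, Just (Add (repeat 2)), Just Remove] [repeat 1] yields [[1], [1,2], [2]]: an object is added at the second step and the oldest is removed at the third.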


                                                                     3.3 Arrowised FRP
                      The benefit of a modal FRP language is that we can directly interact with signals and events
                      in a way that guarantees causality. A popular alternative to ensure causality is arrowised
                      FRP (Nilsson et al., 2002), which takes signal functions as primitive and uses Haskell’s
                      arrow notation (Paterson, 2001) to construct them. By implementing an arrowised FRP
                      library in Rattus instead of plain Haskell, we can not only guarantee causality but also
                      productivity and the absence of implicit space leaks. As we will see, this forces us to
                      slightly restrict the API of arrowised FRP compared to Yampa. Furthermore, this exercise
                      demonstrates that Rattus can also be used to implement a continuous-time FRP library, in
                      contrast to the discrete-time FRP library from Section 3.1.
                          At the centre of arrowised FRP is the Arrow type class shown in Figure 2. If we can
                      implement a signal function type SF a b that implements the Arrow class, we can bene-
                      fit from the convenient notation Haskell provides for it. For example, assuming we have
                      signal functions ballPos :: SF (Float ⊗ Input) Pos and padPos :: SF Input Float describing
                      the positions of the ball and the paddle from our game in Section 3.2, we can combine
                      these as follows:
                      pong :: SF Input (Pos ⊗ Float)
                      pong = proc inp → do pad ← padPos −≺ inp
                                             ball ← ballPos −≺(pad ⊗ inp)
                                             returnA −≺(ball ⊗ pad)
                        The Rattus definition of SF is almost identical to the original Haskell definition from
                       Nilsson et al. (2002). The only difference is the use of strict types and the insertion of the
                       ○ modality to make it a guarded recursive type:
                       data SF a b = SF !(Float → a → (○(SF a b) ⊗ b))








                                                              Fig. 2. Arrow type class.


                 This implements a continuous-time signal function using sampling, where the additional
                 argument of type Float indicates the time passed since the previous sample.
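Erasing the modality, the sampled signal-function type and a driver for it can be sketched in plain Haskell (runSF and arrSF are hypothetical helper names; laziness stands in for ○ and ordinary pairs for strict pairs):

```haskell
-- A signal function consumes the elapsed time and the current sample and
-- yields its continuation together with an output sample.
newtype SF a b = SF (Float -> a -> (SF a b, b))

-- Drive a signal function with a finite list of (dt, input) samples.
runSF :: SF a b -> [(Float, a)] -> [b]
runSF _      []               = []
runSF (SF f) ((dt, a) : rest) = let (sf', b) = f dt a in b : runSF sf' rest

-- Lift a pure function pointwise; in plain Haskell no boxing is needed.
arrSF :: (a -> b) -> SF a b
arrSF f = SF (\_ a -> (arrSF f, f a))
```

For example, runSF (arrSF (+1)) [(0.1, 1), (0.1, 2)] yields [2, 3].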
                   Implementing the methods of the Arrow type class is straightforward with the exception
                 of the arr method. In fact, we cannot implement arr in Rattus at all. Because the first
                 argument is not stable, it falls out of scope in the recursive call:
                  arr :: (a → b) → SF a b
                  arr f = SF (λ_ a → delay (arr f ) ⊗ f a)                -- f is not in scope under delay
                 The situation is similar to the map function, and we must box the function argument so
                 that it remains available at all times in the future:
                  arrBox :: □(a → b) → SF a b
                  arrBox f = SF (λ_ a → delay (arrBox f ) ⊗ unbox f a)
                 That is, the arr method could be a potential source for space leaks. However, in practice,
                 arr does not seem to cause space leaks and thus its use in conventional arrowised FRP
                 libraries should be safe. Nonetheless, in Rattus, we have to replace arr with the more
                 restrictive variant arrBox. But fortunately, this does not prevent us from using the arrow
                  notation: Rattus treats arr f as shorthand for arrBox (box f ), which allows us to use the
                 arrow notation while making sure that box f is well-typed, that is, f only refers to variables
                 of stable type.
                    The Arrow type class only provides a basic interface for constructing static signal
                 functions. To permit dynamic behaviour, we need to provide additional combinators,
                 for example, for switching signals and for recursive definitions. The rSwitch combinator
                 corresponds to the switchTrans combinator from Figure 1:
                 rSwitch :: SF a b → SF (a ⊗ Maybe (SF a b)) b
                 This combinator allows us to implement our game so that it resets to its start position if we
                 hit the reset button:5
                 pong1 :: SF Input (Pos ⊗ Float)
                 pong1 = proc inp → do pad ← padPos −≺ inp
                                         let event = if reset inp then Just ballPos else Nothing
                                         ball ← rSwitch ballPos −≺((pad ⊗ inp) ⊗ event)
                                         returnA −≺(ball ⊗ pad)
                   Arrows can be endowed with a very general recursion principle by instantiating the
                 loop method in the ArrowLoop type class shown in Figure 2. However, loop cannot
                 5   See Haskell-library/examples/game-arrow directory in the supplementary material.





                      be implemented in Rattus as it would break the productivity property. Moreover, loop
                      depends crucially on lazy evaluation and is thus a source for space leaks.
                         Instead of loop, we implement a different recursion principle that corresponds to guarded
                      recursion:
                       loopPre :: d → SF (b ⊗ d) (c ⊗ ○d) → SF b c
                      Intuitively speaking, this combinator constructs a signal function from b to c with the
                      help of an internal state of type d. The first argument initialises the state, and the second
                      argument is a signal function that turns input of type b into output of type c while also
                       updating the internal state. Apart from the addition of the ○ modality and strict pair types,
                       this definition has the same type as Yampa's loopPre. Alternatively, we could drop the ○
                       modality and constrain d to be stable. The use of loopPre instead of loop introduces a delay
                       of one sampling step (as indicated by the ○ modality) and thus ensures productivity.
                      However, in practice, such a delay can be avoided by refactoring the underlying signal
                      function.
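Under the same erasure, loopPre can be sketched as a state-threading combinator (the SF type and runSF driver are repeated from the earlier sketch so the fragment is self-contained; sumStep and counter are hypothetical examples):

```haskell
newtype SF a b = SF (Float -> a -> (SF a b, b))

runSF :: SF a b -> [(Float, a)] -> [b]
runSF _      []               = []
runSF (SF f) ((dt, a) : rest) = let (sf', b) = f dt a in b : runSF sf' rest

-- Feed the internal state d back into the signal function, one step delayed.
loopPre :: d -> SF (b, d) (c, d) -> SF b c
loopPre d (SF f) = SF (\dt b ->
  let (sf', (c, d')) = f dt (b, d)
  in (loopPre d' sf', c))

-- Example: a running sum, where the internal state is the total so far.
sumStep :: SF (Int, Int) (Int, Int)
sumStep = SF (\_ (x, acc) -> (sumStep, (acc + x, acc + x)))

counter :: SF Int Int
counter = loopPre 0 sumStep
```

Here runSF counter [(0, 1), (0, 2), (0, 3)] yields [1, 3, 6].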
                         Using the loopPre combinator, we can implement the signal function of the ball:
                      ballPos :: SF (Float ⊗ Input) Pos
                      ballPos = loopPre (20 ⊗ 50) run where
                         run :: SF ((Float ⊗ Input) ⊗ Vel) (Pos ⊗ ○Vel)
                        run = proc ((pad ⊗ inp) ⊗ v) → do p ← integral (0 ⊗ 0) −≺ v
                                                            returnA −≺(p ⊗ delay (calcNewVelocity pad p v))
                      Here, we also use the integral combinator that computes the integral of a signal using a
                      simple approximation that sums up rectangles under the curve:
                      integral :: (Stable a, VectorSpace a s) ⇒ a → SF a a
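The rectangle rule underlying integral can be sketched on lists of (dt, sample) pairs (hypothetical name integralL; the vector space is specialised to Float):

```haskell
-- Sum up rectangles under the curve: each sample v contributes v * dt.
integralL :: Float -> [(Float, Float)] -> [Float]
integralL _   []                = []
integralL acc ((dt, v) : rest) = let acc' = acc + v * dt
                                 in acc' : integralL acc' rest
```

For example, integralL 0 [(0.5, 2), (0.5, 2)] yields [1.0, 2.0].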
                          The signal function for the paddle can be implemented in a similar fashion. The complete
                       code of the case studies presented in this section can be found in the supplementary
                       material.



                                                                       4 Core calculus
                       In this section, we present the core calculus of Rattus, which we call λ✓. The purpose of
                       λ✓ is to formally present the language's Fitch-style typing rules, its operational semantics,
                       and to formally prove the central operational properties, that is, productivity, causality,
                       and absence of implicit space leaks. To this end, the calculus is stripped down to its
                       essence: a simply typed lambda calculus extended with guarded recursive types Fix α.A,
                       a guarded fixed point combinator, and the two type modalities ○ and □. Since general
                      inductive types and polymorphic types are orthogonal to the issue of operational properties
                      in reactive programming, we have omitted these for the sake of clarity.


                                                                       4.1 Type system
                       Figure 3 defines the syntax of λ✓. Besides guarded recursive types, guarded fixed points,
                      and the two type modalities, we include standard sum and product types along with unit








                 Fig. 3. Syntax of (stable) types, terms, and values. In typing rules, only closed types are considered
                 (i.e., each occurrence of a type variable α is in the scope of a Fix α).




                                                           Fig. 4. Well-formed contexts.



                 and integer types. The type Str A of streams of type A is represented as the guarded
                  recursive type Fix α.A × α. Note the absence of ○ in this type. When unfolding guarded
                  recursive types such as Fix α.A × α, the ○ modality is inserted implicitly: Fix α.A × α ≅
                  A × ○(Fix α.A × α). This ensures that guarded recursive types are by construction always
                  guarded by the ○ modality.
                    For the sake of the operational semantics, the syntax also includes heap locations l.
                 However, as we shall see, heap locations cannot be used by the programmer directly,
                 because there is no typing rule for them. Instead, heap locations are allocated and returned
                 by delay in the operational semantics. This is a standard approach for languages with
                 references (see e.g., Abramsky et al. (1998); Krishnaswami (2013)).
                     Typing contexts, defined in Figure 4, consist of variable typings x : A and ✓ tokens. If a
                  typing context contains no ✓, we call it tick-free. The complete set of typing rules for λ✓
                  is given in Figure 5. The typing rules for Rattus presented in Section 2 appear in the same
                 form also here, except for replacing Haskell’s :: operator with the more standard notation.
                 The remaining typing rules are entirely standard, except for the typing rule for the guarded
                 fixed point combinator fix.
                    The typing rule for fix follows Nakano’s fixed point combinator and ensures that the
                  calculus is productive. In addition, following Krishnaswami (2013), the rule enforces the
                  body t of the fixed point to be stable by strengthening the typing context to Γ□. Moreover,
                  we follow Bahr et al. (2021) and assume x to be of type □(○A) instead of □A. As a
                 consequence, recursive calls may occur at any time in the future, that is, not necessarily
                 in the very next time step. In conjunction with the more general typing rule for delay,
                 this allows us to write recursive function definitions that, like stutter in Section 2.2, look
                 several steps into the future.
                    To see how the recursion syntax of Rattus translates into the fixed point combinator of
                 λ , let us reconsider the constInt function:
                 constInt :: Int → Str Int
                 constInt x = x ::: delay (constInt x)
                  For readability of the corresponding λ✓ term, we use the shorthand s ::: t for the λ✓ term
                  into ⟨s, t⟩. Recall that Str A is represented as Fix α.A × α in λ✓. That is, given s : A and








                                                                     Fig. 5. Typing rules.



                       t : ○(Str A), we have that s ::: t is of type Str A. Using this notation, the above definition
                       translates into the following λ✓ term:
                                                    constInt = fix r.λx.x ::: delay(adv (unbox r) x)
                 The recursive notation is simply translated into a fixed point fix r.t where the recursive
                 occurrence of constInt is replaced by adv (unbox r). The variable r is of type □(○(Int →
                 Str Int)) and applying unbox followed by adv turns it into type Int → Str Int. Moreover,
                 the restriction that recursive calls must occur in a context with a ✓ makes sure that this
                 transformation from recursion notation to fixed point combinator results in a well-typed
                 λ✓ term.
                    The typing rule for fix x.t also explains the treatment of recursive definitions that are
                 nested inside a top-level definition. The typing context Γ is turned into Γ□ when type-
                 checking the body t of the fixed point.
                         For example, reconsider the following ill-typed definition of leakyMap:
                      leakyMap :: (a → b) → Str a → Str b
                      leakyMap f = run
                         where run :: Str a → Str b
                               run (x ::: xs) = f x ::: delay (leakyMap (adv xs))




                 Translated into λ✓, the definition looks like this:
                                   leakyMap = λf .fix r.λs.let x = head s in let xs = tail s
                                                                   in f x ::: delay((adv (unbox r)) (adv xs))
                 The pattern matching syntax is translated into projection functions head and tail that
                 decompose a stream into its head and tail, respectively, that is,
                                               head = λx.π1 (out x)              tail = λx.π2 (out x)
                 More importantly, the variable f bound by the outer lambda abstraction is of a function
                 type and thus not stable. Therefore, it is not in scope in the body of the fixed point.
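By contrast, a well-typed formulation boxes the function argument so that it is stable and may be used under the fixed point. Below is a hedged, modality-erased plain-Haskell sketch of this idea (our own illustration: Box is simulated by a newtype and the delay/adv annotations are erased; in Rattus, Box is the □ modality and unbox its eliminator):

```haskell
-- Plain-Haskell stand-ins for the modal types (illustration only).
newtype Box a = Box { unbox :: a }

data Str a = a ::: Str a
infixr 5 :::

-- The boxed function argument is stable, so it may be passed on
-- to the recursive call without causing a space leak.
smap :: Box (a -> b) -> Str a -> Str b
smap f (x ::: xs) = unbox f x ::: smap f xs

-- A helper stream of consecutive integers, for demonstration.
fromN :: Int -> Str Int
fromN n = n ::: fromN (n + 1)

takeStr :: Int -> Str a -> [a]
takeStr k (x ::: xs)
  | k <= 0    = []
  | otherwise = x : takeStr (k - 1) xs
```

For example, takeStr 3 (smap (Box (+ 1)) (fromN 0)) yields [1, 2, 3].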


                                                           4.2 Operational semantics
                 The operational semantics is given in two steps: to execute a λ program, it is first trans-
                 lated into a more restrictive variant of λ , which we call λ . The resulting λ program
                 is then executed using an abstract machine that ensures the absence of implicit space leaks
                 by construction. By presenting the operational semantics in two stages, we avoid the more
                 complicated set-up of an abstract machine that is capable of directly executing λ pro-
                 grams. As we show in the example in Section 4.3.3, such a machine would need to perform
                 some restricted form of partial evaluation under delay and under lambda abstractions.

                                                              4.2.1 Translation to λ✓⋆
                 The λ✓⋆ calculus has the same syntax as λ✓, but the former has a more restrictive type
                 system. It restricts typing contexts to contain at most one ✓ and restricts the typing rules
                 for lambda abstraction and delay as follows:

                       |Γ|, x : A ⊢ t : B            |Γ|, ✓ ⊢ t : A            |Γ| = Γ           if Γ is tick-free
                       ──────────────────            ─────────────────
                       Γ ⊢ λx.t : A → B              Γ ⊢ delay t : ○A          |Γ, ✓, Γ′| = Γ□, Γ′

                 The construction |Γ| turns Γ into a tick-free context, which ensures that we have at most
                 one ✓ – even for nested occurrences of delay – and that the body of a lambda abstraction
                 is not in the scope of a ✓. All other typing rules are the same as for λ✓. The λ✓⋆ calculus
                 is a fragment of the λ✓ calculus in the sense that a typing Γ ⊢ t : A in λ✓⋆ implies
                 Γ ⊢ t : A in λ✓.
                    Any closed λ✓ term can be translated into a corresponding λ✓⋆ term by exhaustively
                 applying the following rewrite rules:
                               delay(K[adv t]) −→ let x = t in delay(K[adv x])    if t is not a variable
                                    λx.K[adv t] −→ let y = adv t in λx.(K[y])
                 where K is a term with a single hole that does not occur in the scope of delay, adv, box,
                 fix, or lambda abstraction; and K[t] is the term obtained from K by replacing its hole with
                 the term t.
                    For example, consider the closed λ✓ term λf .delay(λx.adv (unbox f ) (x + 1)) of type
                 □(○(Int → Int)) → ○(Int → Int). Applying the second rewrite rule followed by the first





                       rewrite rule, this term rewrites to a λ✓⋆ term of the same type as follows:
                                                λf .delay(λx.adv (unbox f ) (x + 1))
                                          −→ λf .delay(let y = adv (unbox f ) in λx.y (x + 1))
                                          −→ λf .let z = unbox f in delay(let y = adv z in λx.y (x + 1))
                          In a well-typed λ✓ term, each subterm adv t must occur in the scope of a corresponding
                       delay. The above rewrite rules make sure that the subterm t is evaluated before the delay.
                       This corresponds to the intuition that delay moves ahead in time and adv moves back in
                       time – thus the two cancel out one another.
                          One can show that the rewrite rules are strongly normalising and type-preserving (in
                       λ✓). Moreover, any closed term in λ✓ that cannot be further rewritten is also well-typed
                       in λ✓⋆. As a consequence, we can exhaustively apply the rewrite rules to a closed term of
                       λ✓ to transform it into a closed term of λ✓⋆:

                       Theorem 4.1. For each ⊢ t : A, we can effectively construct a term t′ with t −→∗ t′ and
                       ⊢ t′ : A in λ✓⋆.

                        Below we give a brief overview of the three components of the proof of Theorem 4.1.
                      For the full proof, we refer the reader to Appendix A.

                       Strong normalisation. To show that −→ is strongly normalising, we define for each term
                       t a natural number d(t) such that, whenever t −→ t′, then d(t) > d(t′). We define d(t) to be
                       the sum of the depths of all redex occurrences in t (i.e., subterms that match the left-hand
                       side of a rewrite rule). Since each rewrite step t −→ t′ removes a redex or replaces a redex
                       with a new redex at a strictly smaller depth, we have that d(t) > d(t′).
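To illustrate the measure, here is a toy plain-Haskell reconstruction (our own sketch; the actual definition is in Appendix A). For simplicity we only count redexes of the first rewrite rule, i.e., subterms adv t where t is not a variable, and we count the root at depth 1 so that every redex contributes a positive amount:

```haskell
-- A toy term language covering the constructors relevant to the rewrite rules.
data Tm
  = Var String
  | App Tm Tm
  | Lam String Tm
  | Let String Tm Tm
  | Delay Tm
  | Adv Tm

-- Immediate subterms of a term.
children :: Tm -> [Tm]
children (Var _)     = []
children (App s t)   = [s, t]
children (Lam _ t)   = [t]
children (Let _ s t) = [s, t]
children (Delay t)   = [t]
children (Adv t)     = [t]

-- Redexes of the first rewrite rule: adv applied to a non-variable.
isRedex :: Tm -> Bool
isRedex (Adv (Var _)) = False
isRedex (Adv _)       = True
isRedex _             = False

-- d(t): sum of the depths of all redex occurrences (root at depth 1).
d :: Tm -> Int
d = go 1
  where
    go k t = (if isRedex t then k else 0) + sum (map (go (k + 1)) (children t))
```

For instance, delay(adv (f y)) has measure 2, while its contractum let x = f y in delay(adv x) has measure 0, so the rewrite step strictly decreases d.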

                       Subject reduction. We want to prove that Γ ⊢ s : A and s −→ t implies Γ ⊢ t : A. To
                       this end, we proceed by induction on s −→ t. In case the reduction s −→ t is due to con-
                       gruence closure, Γ ⊢ t : A follows immediately by the induction hypothesis. Otherwise,
                       s matches the left-hand side of one of the rewrite rules. Each of these two cases follows
                       from the induction hypothesis and one of the following two properties:

                          (1) If Γ, ✓, Γ′ ⊢ K[adv t] : A and Γ′ is tick-free, then Γ ⊢ t : B and Γ, x :
                              B, ✓, Γ′ ⊢ K[adv x] : A for some B.
                          (2) If Γ, Γ′ ⊢ K[adv t] : A and Γ′ is tick-free, then Γ ⊢ adv t : B and Γ, x : B, Γ′ ⊢
                              K[x] : A for some B.

                       Both properties can be proved by a straightforward induction on K. The proofs rely on the
                       fact that, due to the typing rule for adv, we know that if Γ, ✓, Γ′ ⊢ K[adv t] : A for a
                       tick-free Γ′, then all of t’s free variables must be in Γ.

                       Exhaustiveness. Finally, we need to show that ⊢ t : A with t ↛ implies ⊢ t : A in λ✓⋆,
                       that is, if we cannot rewrite t any further, it must be typable in λ✓⋆ as well. In order to
                       prove this property by induction, we must generalise it to open terms: if Γ ⊢ t : A for a
                       context Γ with at most one tick and t ↛, then Γ ⊢ t : A in λ✓⋆. We prove this implication
                       by induction on





                  t and a case distinction on the last typing rule in the derivation of Γ ⊢ t : A. For almost
                  all cases, the λ✓⋆ typing Γ ⊢ t : A follows from the induction hypothesis since we find a
                  corresponding typing rule in λ✓⋆ that is either the same as in λ✓ or has a side condition
                  that is satisfied by our assumption that Γ has at most one tick. We are thus left with two
                  interesting cases: the typing rules for delay and lambda abstraction, given that Γ contains
                  exactly one tick (the zero-tick cases are trivial). Each of these two cases follows from the
                  induction hypothesis and one of the following two properties:

                     (1) If Γ₁, ✓, Γ₂ ⊢ t : A, Γ₂ contains a tick, and delay t ↛, then Γ₁, Γ₂ ⊢ t : A.
                     (2) If Γ₁, ✓, Γ₂ ⊢ t : A and λx.t ↛, then Γ₁, Γ₂ ⊢ t : A.

                  Both properties can be proved by a straightforward induction on the typing derivation.
                  The proof of (1) uses the fact that t cannot have nested occurrences of adv and thus any
                  occurrence of adv only needs the tick that is already present in Γ₂. In turn, (2) holds due
                  to the fact that all occurrences of adv in t must be guarded by an occurrence of delay in t
                  itself, and thus the tick between Γ₁ and Γ₂ is not needed. Note that (2) is about λ✓⋆ as we
                  first apply the induction hypothesis and then apply (2).

                                                         4.2.2 Abstract machine for λ✓⋆
                 To prove the absence of implicit space leaks, we devise an abstract machine that after
                  each time step deletes all data from the previous time step. This means that the operational
                 semantics is by construction free of implicit space leaks. This approach, pioneered by
                 Krishnaswami (2013), allows us to reduce the proof of no implicit space leaks to a proof
                 of type soundness.
                    At the centre of this approach is the idea to execute programs in a machine that has
                 access to a store consisting of up to two separate heaps: a ‘now’ heap from which we can
                 retrieve delayed computations, and a ‘later’ heap where we must store computations that
                 should be performed in the next time step. Once the machine advances to the next time
                 step, it will delete the ‘now’ heap and the ‘later’ heap will become the new ‘now’ heap.
                    The machine consists of two components: the evaluation semantics, presented in
                  Figure 6, which describes the operational behaviour of λ✓⋆ within a single time step; and
                 the step semantics, presented in Figure 7, which describes the behaviour of a program over
                 time, for example, how it consumes and constructs streams.
                     The evaluation semantics is given as a deterministic big-step operational semantics,
                  where we write ⟨t; σ⟩ ⇓ ⟨v; σ′⟩ to indicate that, starting with the store σ, the term t evalu-
                  ates to the value v and the new store σ′. A store σ can be of one of two forms: either it
                  consists of a single heap ηL, that is, σ = ηL, or it consists of two heaps ηN and ηL, written
                  σ = ηN ✓ηL. The ‘later’ heap ηL contains delayed computations that may be retrieved and
                 executed in the next time step, whereas the ‘now’ heap ηN contains delayed computations
                 from the previous time step that can be retrieved and executed now. We can only write
                 to ηL and only read from ηN . However, when one time step passes, the ‘now’ heap ηN is
                 deleted and the ‘later’ heap ηL becomes the new ‘now’ heap. This shifting of time is part
                 of the step semantics in Figure 7, which we turn to shortly.
                                                                 Fig. 6. Evaluation semantics.

                                                              Fig. 7. Step semantics for streams.

                     Heaps are simply finite mappings from heap locations to terms. To allocate fresh heap
                  locations, we assume a function alloc (·) that takes a store σ of the form ηL or ηN ✓ηL and
                  returns a heap location l that is not in the domain of ηL. Given such a fresh heap location
                  l and a term t, we write σ, l → t to denote the store η′L or ηN ✓η′L, respectively, where
                  η′L = ηL, l → t, that is, η′L is obtained from ηL by extending it with a new mapping l → t.
                          Applying delay to a term t stores t in a fresh location l on the ‘later’ heap and then returns
                      l. Conversely, if we apply adv to such a delayed computation, we retrieve the term from
                      the ‘now’ heap and evaluate it.
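The two-heap store discipline can be sketched in a few lines of plain Haskell (our own toy model, not the actual machine from Figure 6: heap locations are Ints, and the stored ‘delayed computations’ are just strings):

```haskell
import qualified Data.Map as Map

type Loc  = Int
type Heap = Map.Map Loc String

-- A store is a single 'later' heap, or a 'now' heap together with a 'later' heap.
data Store = L Heap | NL Heap Heap

laterH :: Store -> Heap
laterH (L hL)    = hL
laterH (NL _ hL) = hL

nowH :: Store -> Heap
nowH (NL hN _) = hN
nowH (L _)     = Map.empty

-- alloc: a location not in the domain of the 'later' heap.
alloc :: Store -> Loc
alloc s = maybe 0 ((+ 1) . fst) (Map.lookupMax (laterH s))

-- delay writes only to the 'later' heap ...
store :: Store -> Loc -> String -> Store
store (L hL)     l t = L (Map.insert l t hL)
store (NL hN hL) l t = NL hN (Map.insert l t hL)

-- ... and advancing one time step deletes the 'now' heap:
-- the 'later' heap becomes the new 'now' heap.
tick :: Store -> Store
tick s = NL (laterH s) Map.empty
```

After one tick, a delayed computation can still be read from the ‘now’ heap; after a second tick, it has been garbage collected.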


                                                                       4.3 Main results
                       In this section, we present the main metatheoretical results, namely that the core calculus
                       λ✓ enjoys the desired productivity, causality, and memory properties (see Theorems 4.2
                       and 4.3 below). The proof of these results is sketched in Section 5 and is
                       fully mechanised in the accompanying Coq proofs. In order to formulate and prove these





                 metatheoretical results, we devise a step semantics that describes the behaviour of reac-
                 tive programs. Here, we consider two kinds of reactive programs: terms of type Str A
                 and terms of type Str A → Str B. The former just produce infinite streams of values of
                 type A, whereas the latter are reactive processes that produce a value of type B for each
                 input value of type A. The purpose of the step semantics is to formulate clear metatheo-
                 retical properties and to subsequently prove them using the fundamental property of our
                 logical relation (Theorem 5.1). In principle, we could formulate similar step semantics
                 for the Yampa-style signal functions from Section 3.3 or other basic FRP types such as
                 resumptions (Krishnaswami, 2013) and then derive similar metatheoretical results.

                                                          4.3.1 Productivity of streams
                  The step semantics =⇒^{v} from Figure 7 describes the unfolding of streams of type Str A.
                  Given a closed term ⊢ t : Str A, it produces an infinite reduction sequence

                        ⟨t; ∅⟩ =⇒^{v0} ⟨t1; η1⟩ =⇒^{v1} ⟨t2; η2⟩ =⇒^{v2} · · ·

                  where ∅ denotes the empty heap and each vi has type A. In each step, we have a term
                  ti and the corresponding heap ηi of delayed computations. According to the definition
                  of the step semantics, we evaluate ⟨ti; ηi ✓ ∅⟩ ⇓ ⟨vi ::: l; η′i ✓ ηi+1⟩, where η′i is ηi but
                  possibly extended with some additional delayed computations and ηi+1 is the new heap
                  with delayed computations for the next time step. Crucially, the old heap η′i is thrown
                  away. That is, by construction, old data is not implicitly retained but garbage collected
                  immediately after we have completed the current time step.
                    To see this garbage collection strategy in action, consider the following definition of the
                 stream of consecutive numbers starting from some given number:
                 from :: Int → Str Int
                 from n = n ::: delay (from (n + 1))
                  This definition translates to the λ✓ term fix r.λn.n ::: delay(adv (unbox r) (n + 1)), which,
                  in turn, rewrites into the following λ✓⋆ term:
                                       from = fix r.λn.n ::: let r′ = unbox r in delay(adv r′ (n + 1))
                  Let’s see how the term from 0 of type Str Int is executed on the machine:

                                   ⟨from 0; ∅⟩ =⇒^{0} ⟨adv l′1; l1 → from, l′1 → adv l1 (0 + 1)⟩
                                               =⇒^{1} ⟨adv l′2; l2 → from, l′2 → adv l2 (1 + 1)⟩
                                               =⇒^{2} ⟨adv l′3; l3 → from, l′3 → adv l3 (2 + 1)⟩
                                                 ..
                                                  .

                  In each step, the heap contains at location li the fixed point from and at location l′i the
                  delayed computation produced by the occurrence of delay in the body of the fixed point.
                  The old versions of the delayed computations are garbage collected after each step and
                  only the most recent version survives.
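Erasing the modalities, from is the familiar coinductive stream of consecutive integers, and productivity means that every finite prefix of it can be computed. A plain-Haskell sketch (our own illustration, not Rattus syntax):

```haskell
-- Modality-erased stream type: in Rattus the tail would be delayed.
data Str a = a ::: Str a
infixr 5 :::

-- The stream of consecutive integers starting at n.
from :: Int -> Str Int
from n = n ::: from (n + 1)

-- Observe a finite prefix; productivity guarantees this terminates.
takeStr :: Int -> Str a -> [a]
takeStr k (x ::: xs)
  | k <= 0    = []
  | otherwise = x : takeStr (k - 1) xs
```

For example, takeStr 4 (from 0) yields [0, 1, 2, 3].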
                    Our main result is that the execution of λ✓⋆ terms by the machine described in Figures 6
                  and 7 is safe. To describe the type of the produced values precisely, we need to restrict





                      ourselves to streams over types whose evaluation is not suspended, which excludes func-
                      tion and modal types. This idea is expressed in the notion of value types, defined by the
                      following grammar:
                                                    Value types        V, W ::= 1 | Int | V × W | V + W
                      We can then prove the following theorem, which both expresses the fact that the aggressive
                      garbage collection strategy of Rattus is safe, and that stream programs are productive:

                       Theorem 4.2 (productivity of streams). Given a term ⊢ t : Str A with A a value type, there
                       is an infinite reduction sequence

                             ⟨t; ∅⟩ =⇒^{v0} ⟨t1; η1⟩ =⇒^{v1} ⟨t2; η2⟩ =⇒^{v2} · · ·        such that ⊢ vi : A for all i ≥ 0.

                         The restriction to value types is only necessary for showing that each output value vi has
                      the correct type.

                                               4.3.2 Productivity and causality of stream transducers
                       The step semantics =⇒^{v/v′} from Figure 7 describes how a term of type Str A → Str B trans-
                       forms a stream of inputs into a stream of outputs in a step-by-step fashion. Given a closed
                       term ⊢ t : Str A → Str B and an infinite stream of input values ⊢ vi : A, it produces an
                       infinite reduction sequence

                             ⟨t (adv l∗); ∅⟩ =⇒^{v0/v′0} ⟨t1; η1⟩ =⇒^{v1/v′1} ⟨t2; η2⟩ =⇒^{v2/v′2} · · ·

                       where each output value v′i has type B.
                                                 v/v 
                         The definition of =⇒ assumes that we have some fixed heap location l∗ , which acts both
                      as interface to the currently available input value and as a stand-in for future inputs that
                      are not yet available. As we can see above, this stand-in value l∗ is passed on to the stream
                      function in the form of the argument adv l∗ . Then, in each step, we evaluate the current
                      term ti in the current heap ηi :
                                                                                                       
                                          ti ; ηi , l∗ → vi ::: l∗ l∗ → ⇓ vi ::: l; ηi ηi+1 , l∗ →
                      which produces the output vi and the new heap ηi+1 . Again the old heap ηi is simply
                      dropped. In the ‘later’ heap, the operational semantics maps l∗ to the placeholder value
                        , which is safe since the machine never reads from the ‘later’ heap. Then in the next
                      reduction step, we replace that placeholder value with vi+1 ::: l∗ which contains the newly
                      received input value vi+1 .
                          For an example, consider the following function that takes a stream of integers and
                       produces the stream of prefix sums:
                       sum :: Str Int → Str Int
                       sum = run 0 where
                         run :: Int → Str Int → Str Int
                         run acc (x ::: xs) = let acc′ = acc + x
                                              in acc′ ::: delay (run acc′ (adv xs))
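Erasing the modalities, sum is an ordinary accumulating stream function; the plain-Haskell sketch below (our own illustration, not Rattus syntax) reproduces the accumulator values 2, 13 and 18 for the inputs 2, 11 and 5 used in the machine trace in this section:

```haskell
data Str a = a ::: Str a
infixr 5 :::

-- Modality-erased version of sum: delay/adv dropped, Str an ordinary stream.
sumStr :: Str Int -> Str Int
sumStr = run 0
  where
    run acc (x ::: xs) = let acc' = acc + x in acc' ::: run acc' xs

-- Turn a finite list into a stream by padding with zeros (streams are infinite).
fromList :: [Int] -> Str Int
fromList (x : xs) = x ::: fromList xs
fromList []       = 0 ::: fromList []

takeStr :: Int -> Str a -> [a]
takeStr k (x ::: xs)
  | k <= 0    = []
  | otherwise = x : takeStr (k - 1) xs
```

For example, takeStr 3 (sumStr (fromList [2, 11, 5])) yields [2, 13, 18].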





                     This function definition translates to the following term sum in the λ✓ calculus:

                              run = fix r.λacc.λs. let x = head s in let xs = tail s in let acc′ = acc + x in
                                                        let r′ = unbox r in acc′ ::: delay(adv r′ acc′ (adv xs))
                              sum = run 0

                     Let’s look at the first three steps of executing the sum function with 2, 11 and 5 as its
                  first three input values:

                                        ⟨sum (adv l∗); ∅⟩
                                            =⇒^{2/2}   ⟨adv l′1; l1 → run, l′1 → adv l1 2 (adv l∗)⟩
                                            =⇒^{11/13} ⟨adv l′2; l2 → run, l′2 → adv l2 13 (adv l∗)⟩
                                            =⇒^{5/18}  ⟨adv l′3; l3 → run, l′3 → adv l3 18 (adv l∗)⟩
                                                 ..
                                                  .

                     In each step of the computation, li stores the fixed point run and l′i stores the computation
                  that calls that fixed point with the new accumulator value (2, 13 and 18, respectively) and
                  the tail of the current input stream.
                    We can prove the following theorem, which again expresses the fact that the garbage col-
                 lection strategy of Rattus is safe, and that stream processing functions are both productive
                 and causal:

                  Theorem 4.3 (causality and productivity of stream transducers). Given a term ⊢ t :
                  Str A → Str B with B a value type, and an infinite sequence of values ⊢ vi : A, there is
                  an infinite reduction sequence

                        ⟨t (adv l∗); ∅⟩ =⇒^{v0/v′0} ⟨t1; η1⟩ =⇒^{v1/v′1} ⟨t2; η2⟩ =⇒^{v2/v′2} · · ·        such that ⊢ v′i : B for all i ≥ 0.

                     Since the operational semantics is deterministic, in each step ⟨ti; ηi⟩ =⇒^{vi/v′i} ⟨ti+1; ηi+1⟩,
                  the resulting output v′i and new state of the computation ⟨ti+1; ηi+1⟩ are uniquely determined
                  by the previous state ⟨ti; ηi⟩ and the input vi. Thus, v′i and ⟨ti+1; ηi+1⟩ are independent of
                  future inputs vj with j > i.
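Causality can be observed concretely: two input streams that agree on their first i values yield outputs that agree on their first i values. A self-contained plain-Haskell check using the modality-erased prefix-sum function (our own illustration, not Rattus syntax):

```haskell
data Str a = a ::: Str a
infixr 5 :::

-- Modality-erased prefix sums, as in the sum example above.
sumStr :: Str Int -> Str Int
sumStr = run 0
  where
    run acc (x ::: xs) = let acc' = acc + x in acc' ::: run acc' xs

fromList :: [Int] -> Str Int
fromList (x : xs) = x ::: fromList xs
fromList []       = 0 ::: fromList []

takeStr :: Int -> Str a -> [a]
takeStr k (x ::: xs)
  | k <= 0    = []
  | otherwise = x : takeStr (k - 1) xs

-- The first two outputs do not depend on the third input.
causal :: Bool
causal =
  takeStr 2 (sumStr (fromList [2, 11, 5]))
    == takeStr 2 (sumStr (fromList [2, 11, 100]))
```

Both runs produce [2, 13] as their first two outputs, regardless of the differing third input.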

                                           4.3.3 Space leak in the naive operational semantics
                  Theorems 4.2 and 4.3 show that λ✓ terms can be executed without implicit space leaks
                  after they have been translated into λ✓⋆ terms. To demonstrate the need for the translation
                  step, we give an example that illustrates what would happen if we were to skip it.
                     Since both λ✓ and λ✓⋆ share the same syntax, the abstract machine from Section 4.2.2
                  could in principle be used for λ✓ terms directly, without transforming them to λ✓⋆ first.
                  However, we can construct a well-typed λ✓ term for which the machine will try to deref-
                  erence a heap location that has previously been garbage collected. We might conjecture
                  that we could accommodate direct execution of λ✓ by increasing the number of available
                  heaps in the abstract machine so that it matches the number of ticks that were necessary
                  to type-check the term we wish to execute. But this is not the case: we can construct a
                  λ✓ term that can be type-checked with only two ticks but requires an unbounded number





                       of heaps to safely execute directly. In other words, programs in λ✓ may exhibit implicit
                       space leaks if we just run them using the evaluation strategy of the abstract machine from
                       Section 4.2.2.
                          Such implicit space leaks can occur in λ✓ for two reasons: (1) a lambda abstraction that
                       appears in the scope of a ✓, and (2) a term that requires more than one ✓ to type-check.
                       Bahr et al. (2019) give an example of (1), which translates to λ✓. The following term of
                       type ○(Str Int) → ○(Str Int) is an example of (2):
                                    t = fix r.λx.delay(head(adv x) ::: adv(unbox r)(delay(adv(tail(adv x)))))
                       The abstract machine would get stuck trying to dereference a heap location that was pre-
                       viously garbage collected. The problem is that the second occurrence of x is nested below
                       two occurrences of delay. As a consequence, when adv x is evaluated, the heap location
                       bound to x is two time steps old and has been garbage collected already.
                          To see this behaviour on a concrete example, recall the λ✓⋆ term from : Int → Str Int
                       from Section 4.3.1, which produces a stream of consecutive integers. Using t and from, we
                       can construct the λ✓ term 0 ::: t (delay (from 1)) of type Str Int, which we can run on the
                       abstract machine:6

                             ⟨0 ::: t (delay (from 1)); ∅⟩ =⇒^{0} ⟨adv l2; η1⟩ =⇒^{1} ⟨adv l4; η2⟩ ⇏

                           where     η1 = l1 → from 1,
                                            l2 → head(adv l1 ) ::: adv(unbox(box(delay t)))(delay(adv(tail(adv l1 ))))
                                     η2 = l3 → adv (tail(adv l1 )),
                                            l4 → head(adv l3 ) ::: adv(unbox(box(delay t)))(delay(adv(tail(adv l3 ))))
                      In the last step, the machine would get stuck trying to evaluate adv l4 , since this in turn
                      requires evaluating head(adv l3 ) and adv (tail(adv l1 )), which would require looking up l1
                      in η1 , which has already been garbage collected. Assuming we modified the machine so
                      that it does not perform garbage collection, but instead holds on to the previous heaps
                      η1 , η2 , it would continue as follows:
                                     ⟨adv l2; η1⟩ =⇒^{1} ⟨adv l4; η1 η2⟩ =⇒^{2} ⟨adv l6; η1 η2 η3⟩

                           where     η3 = l5 → adv (tail(adv l3 )),
                                            l6 → head(adv l5 ) ::: adv(unbox(box(delay t)))(delay(adv(tail(adv l5 ))))
                       The next execution step would require the machine to look up l6, which in turn requires l5,
                       l3, and eventually l1. We could continue this arbitrarily far into the future, and in each step
                       the machine would need to look up l1 in the very first heap η1.
                           This example suggests that the cause of this space leak is the fact that the machine
                       leaves the nested terms adv l1, adv l3, etc. unevaluated on the heap. As our metatheoretical
                       results in this section show, the translation into λ✓⋆ avoids this problem. Indeed, if we apply
                       the rewrite rules from Section 4.2.1 to t, we obtain the λ✓⋆ term 0 ::: t′ (delay (from 1)),
                       where

                      6   To help make the examples in this section more readable, we elide heap locations if they are not referenced
                          anywhere in the term or other parts of the heap.




https://doi.org/10.1017/S0956796822000132 Published online by Cambridge University Press
                                                               Modal FRP for all                                     27

t = fix r.λx.let r′ = unbox r
                in delay(head(adv x) ::: adv r′ (let y = tail(adv x) in delay(adv y)))
                 This term can then be safely executed on the abstract machine:
                 ⟨0 ::: t (delay (from 1)); ∅⟩ =⇒⁰ ⟨adv l3 ; η1 ⟩ =⇒¹ ⟨adv l8 ; η2 ⟩ =⇒² · · ·



where     η1 = l1 ↦ from 1, l2 ↦ t
                 l3 ↦ head(adv l1 ) ::: adv l2 (let y = tail(adv l1 ) in delay(adv y))
          η2 = l4 ↦ from, l5 ↦ adv l4 (1 + 1), l6 ↦ adv l5 , l7 ↦ t
                 l8 ↦ head(adv l6 ) ::: adv l7 (let y = tail(adv l6 ) in delay(adv y))
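The difference between the two machine runs — chains of unevaluated adv terms retaining old heaps versus terms forced as they are produced — mirrors a classic phenomenon in plain Haskell. The following sketch is an analogy only and is not part of the Rattus development:

```haskell
import Data.List (foldl')

-- foldl builds a left-nested chain of unevaluated additions
-- (((0 + 1) + 2) + ...), analogous to the chain adv l1, adv l3, ...
-- above: forcing the final result must walk the entire chain of thunks.
leakySum :: Integer
leakySum = foldl (+) 0 [1 .. 100000]

-- foldl' forces the accumulator at every step, so no chain is retained,
-- much like the translated term, which forces its input as it goes.
strictSum :: Integer
strictSum = foldl' (+) 0 [1 .. 100000]
```

Both compute the same number; only the shape of the intermediate heap differs.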


                                                                  4.4 Limitations
                 Now that we have formally precise statements about the operational properties of Rattus’
                 core calculus, we should make sure that we understand what they mean in practice and
                 what their limitations are. In simple terms, the productivity and causality properties estab-
                 lished by Theorems 4.2 and 4.3 state that reactive programs in Rattus can be executed
                 effectively – they always make progress and never depend on data that is not yet avail-
                 able. However, Rattus allows calling general Haskell functions, for which we can make
                 no such operational guarantees. This trade-off is intentional as we wish to make Haskell’s
                 rich library ecosystem available to Rattus. Similar trade-offs are common in foreign func-
                 tion interfaces that allow function calls into another language. For instance, Haskell code
                 may call C functions.
                    In addition, by virtue of the operational semantics, Theorems 4.2 and 4.3 also imply that
                 programs can be executed without implicitly retaining memory – thus avoiding implicit
                 space leaks. This follows from the fact that in each step, the step semantics (cf. Figure 7)
discards the ‘now’ heap and only retains the ‘later’ heap for the next step. However, similarly to the calculi of Krishnaswami (2013) and Bahr et al. (2019, 2021), Rattus still
                 allows explicit space leaks (we may still construct data structures to hold on to an increas-
                 ing amount of memory) as well as time leaks (computations may take an increasing amount
                 of time). Below we give some examples of these behaviours.
                    Given a strict list type List (cf. Section 3.2), we can construct a function that buffers the
                 entire history of its input stream:
                 buffer :: Stable a ⇒ Str a → Str (List a)
                 buffer = scan (box (λxs x → Cons x xs)) Nil
                 Given that we have a function sum :: List Int → Int that computes the sum of a list of num-
                 bers, we can write the following alternative implementation of the sums function using
                 buffer:
                 leakySums1 :: Str Int → Str Int
                 leakySums1 = map (box sum) ◦ buffer
                 At each time step, this function adds the current input integer to the buffer of type List Int
                 and then computes the sum of the current value of that buffer. This function exhibits both




                      28                                                       P. Bahr

                      a space leak (buffering a steadily growing list of numbers) and a time leak (the time to
                      compute each element of the resulting stream increases at each step). However, these leaks
                      are explicit in the program.
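These behaviours can be simulated in plain Haskell by modelling streams as infinite lazy lists. This is a sketch only — the names Stream, bufferS, and leakySums1S are hypothetical, and none of Rattus's guarantees apply to it:

```haskell
-- Streams modelled as infinite lazy lists; scan becomes scanl.
type Stream a = [a]

-- Accumulate the entire input history, most recent element first.
bufferS :: Stream a -> Stream [a]
bufferS = tail . scanl (flip (:)) []

-- Re-sum the ever-growing buffer at every step: O(n) space and
-- O(n) work at step n -- the space and time leak described above.
leakySums1S :: Stream Int -> Stream Int
leakySums1S = map sum . bufferS
```

For the input 1, 2, 3, … this produces the running sums 1, 3, 6, … while retaining the whole input history.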
                         An example of a time leak is found in the following alternative implementation of the
                      sums function:
                      leakySums2 :: Str Int → Str Int
                      leakySums2 (x ::: xs) = x ::: delay (map (box (+x)) (leakySums2 (adv xs)))
                      In each step, we add the current input value x to each future output. The closure (+x),
                      which is Haskell shorthand notation for λy → y + x, stores each input value x.
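In the same lazy-list simulation (leakySums2S is a hypothetical stand-in for leakySums2), the time leak shows up as a growing tower of map (+x) applications wrapped around the recursive call:

```haskell
-- Each step wraps the entire remainder of the stream in another
-- map (+ x); computing the n-th output therefore passes through
-- n-1 nested closures -- a time leak.
leakySums2S :: [Int] -> [Int]
leakySums2S (x : xs) = x : map (+ x) (leakySums2S xs)
leakySums2S []       = []
```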
                         None of the above space and time leaks are prevented by Rattus. The space leaks in
                      buffer and leakySums1 are explicit, since the desire to buffer the input is explicitly stated
in the program. That is, while Rattus prevents implicit space leaks, it still allows programmers to allocate memory as they see fit. The other example is more subtle: its leaky behaviour is rooted in a time leak, as the program constructs a growing computation in each step. This shows that the programmer still has to be careful about time leaks. Note that
                      these leaky functions can also be implemented in the calculi of Krishnaswami (2013) and
                      Bahr et al. (2019, 2021), although some reformulation is necessary for the Simply RaTT
                      calculus of Bahr et al. (2019).


                                                         4.5 Language design considerations
                      The design of Rattus and its core calculus is derived from the calculi of Krishnaswami
                      (2013) and Bahr et al. (2019, 2021), which are the only other modal FRP calculi with a
garbage collection result similar to ours. In the following, we review the differences between Rattus and these calculi, with the aim of illustrating how the design of Rattus
                      follows from our goal to simplify previous calculi while still maintaining their strong
                      operational properties.
Like the present work, the Simply RaTT calculus of Bahr et al. (2019) uses a Fitch-style type system, which provides lighter syntax for interacting with the ○ and □ modalities compared to Krishnaswami’s use of qualifiers in his calculus. In addition, Simply RaTT also dispenses with the allocation tokens of Krishnaswami’s calculus by making the box primitive call-by-name. By contrast, Krishnaswami’s calculus is closely related to dual-context systems and requires the use of pattern matching as elimination forms for the modalities. The Lively RaTT calculus of Bahr et al. (2021) extends Simply RaTT with temporal inductive types to express liveness properties. But otherwise, Lively RaTT is similar to Simply RaTT, and the discussion below applies equally to Lively RaTT.
As discussed in Section 2.2, Simply RaTT restricts where ticks may occur, which disallows terms like delay(delay 0) and delay(λx.x). In addition, Simply RaTT has a more complicated typing rule for guarded fixed points (cf. Figure 8). In addition to ✓, Simply RaTT uses the token ♯ to serve the role that stabilisation of a context Γ to Γ□ serves in Rattus. But Simply RaTT only allows one such ♯ token, and ✓ may only appear to the right of ♯. Moreover, fixed points in Simply RaTT produce terms of type □A rather than just A. Taken together, this makes the syntax and typing for guarded recursive function








                                             Fig. 8. Selected typing rules from Bahr et al. (2019).



                 definitions more complicated and less intuitive. For example, the map function would be
                 defined as follows in Simply RaTT:
map : □(a → b) → □(Str a → Str b)
map f # (a :: as) = unbox f a :: (map f ⊛ as)
Here, # is used to indicate that the argument f is to the left of the ♯ token, and only because of the presence of this token can we use the unbox combinator on f (cf. Figure 8). Additionally, the typing of recursive definitions is somewhat confusing: map has return type □(Str a → Str b), but when used in a recursive call as seen above, map f is of type ○(Str a → Str b) instead. Moreover, we cannot call map recursively on its own. All recursive calls must be of the form map f , the exact pattern that appears to the left of the #. This
                 last restriction rules out the following variant of map that is meant to take two functions
                 and alternately apply them to a stream:
alterMap : □(a → b) → □(a → b) → □(Str a → Str b)
alterMap f g # (a :: as) = unbox f a :: (alterMap g f ⊛ as)
Only alterMap f g is allowed as a recursive call, but not alterMap g f . By contrast, alterMap
                 can be implemented in Rattus without problems:
alterMap : □(a → b) → □(a → b) → Str a → Str b
alterMap f g (a :: as) = unbox f a :: (delay (alterMap g f ) ⊛ as)
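The intended behaviour of alterMap — applying two functions alternately along a stream — can be sketched in plain Haskell on lazy lists. Here alterMapS is a hypothetical name, and this simulation sidesteps the modal typing entirely:

```haskell
-- Swap the two functions on every recursive call, so f is applied at
-- even positions of the stream and g at odd positions.
alterMapS :: (a -> b) -> (a -> b) -> [a] -> [b]
alterMapS f g (a : as) = f a : alterMapS g f as
alterMapS _ _ []       = []
```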
                    In addition, because guarded recursive functions always have a boxed return type, def-
                 initions in Simply RaTT are often littered with calls to unbox. For example, the function
                 pong1 from Section 3.2 would be implemented as follows in Simply RaTT:
pong1 :: □(Str Input → Str (Pos ⊗ Float))
                 pong1 # inp = unbox zip ball pad where
                   pad = unbox padPos inp
                   ball = unbox switchTrans (unbox ballPos inp)
                                              (unbox (triggerMap ballTrig) inp)
                                              (unbox zip pad inp)
By making the type of guarded fixed points A rather than □A, we avoid all of the above issues related to guarded recursive definitions. Moreover, the unrestricted unbox combinator found in Rattus follows directly from this change in the guarded fixed point typing. If we were to change the typing rule for fix in Simply RaTT so that fix has type A instead of □A, we would be able to define an unrestricted unbox combinator λx.fix y.unbox x of type □A → A.
Conversely, if we keep the unbox combinator of Simply RaTT but lift some of the restrictions regarding the ✓ token, such as allowing ✓ not only to the right of a ♯ or allowing more than one ✓ token, then we would break the garbage collection property and thus





permit leaky programs. In such a modified version of Simply RaTT, we would be able to type-check the following term:

                 Γ ⊢ (λx.delay(fix y.unbox(adv x) ::: y))(delay(box 0)) : ○(Str Nat)

where Γ = ✓ if we were to allow two ✓ tokens, or Γ empty if we were to allow the ✓ to occur left of the ♯. The above term stores the value box 0 on the heap and then constructs a stream, which at each step tries to read this value from the heap location. Hence, in order to maintain the garbage collection property in Rattus, we had to change the typing rule for unbox.
                         In addition, Rattus permits recursive functions that look more than one time step into
                      the future (e.g., stutter from Section 2.2), which is not possible in Krishnaswami (2013)
                      and Bahr et al. (2019, 2021). However, we conjecture that Krishnaswami’s calculus can
                      be adapted to allow this by changing the typing rule for fix in a similar way as we did for
                      Rattus.
We should note that the ♯ token found in Simply RaTT has some benefit over the Γ□ construction. It allows the calculus to reject some programs with time leaks, for example, leakySums2 from Section 4.4, because of the condition in the rule for unbox requiring that Γ′ be token-free. However, we can easily write a program that is equivalent to leakySums2 and is well-typed in Simply RaTT, using tupling, which corresponds to defining leakySums2 mutually recursively with map. Moreover, this side condition for unbox was dropped in Lively RaTT as it is incompatible with the extension by temporal inductive types.
                         Finally, there is an interesting trade-off in all four calculi in terms of their syntactic prop-
                      erties such as eta-expansion and local soundness/completeness. The potential lack of these
                      syntactic properties has no bearing on the semantic soundness results for these calculi, but
                      it may be counterintuitive to a programmer using the language.
For example, typing in Simply RaTT is closed under certain eta-expansions involving □, which are no longer well-typed in λ✓ because of the typing rule for unbox. For example, we have

                 f : A → □B ⊢ λ x.box(unbox(f x)) : A → □B

in Simply RaTT but not in λ✓. However, λ✓ has a slightly different eta-expansion for this type instead:

                 f : A → □B ⊢ λ x.let x′ = unbox(f x) in box x′ : A → □B

which matches the eta-expansion in Krishnaswami’s calculus:

                 f : A → □B now ⊢ λ x.let stable(x′ ) = f x in stable x′ : A → □B
On the other hand, because λ✓ lifts Simply RaTT’s restrictions on tokens, λ✓ is closed under several types of eta-expansions that are not well-typed in Simply RaTT. For example,

                 x : □(□A) ⊢ box(box(unbox(unbox x))) : □(□A)
                 x : ○(○A) ⊢ delay(delay(adv(adv x))) : ○(○A)
                 f : ○(A → B) ⊢ delay(λ x.adv f x) : ○(A → B)





In return, both Krishnaswami’s calculus and λ✓ lack local soundness and completeness for the □ type. For instance, from Γ ⊢ t : □A we can obtain Γ ⊢ unbox t : A in λ✓, but from Γ ⊢ t : A, we cannot construct a term Γ ⊢ t′ : □A. By contrast, the use of the ♯ token ensures that Simply RaTT enjoys these local soundness and completeness properties. However, Simply RaTT can only allow one such token and must thus trade off eta-expansion as shown above in order to avoid space leaks (cf. the example term above).
                    In summary, we argue that our typing system and syntax are simpler than both the work
                 of Krishnaswami (2013) and Bahr et al. (2019, 2021). These simplifications are meant to
                 make the language easier to use and integrate more easily with mainstream languages like
                 Haskell, while still maintaining the strong memory guarantees of the earlier calculi.



                                                                   5 Metatheory
                 In the previous section, we have presented Rattus’s core calculus and stated its three cen-
                 tral operational properties in Theorems 4.2 and 4.3: productivity, causality, and absence
                 of implicit space leaks. Note that the absence of space leaks follows from these theo-
                 rems because the operational semantics already ensures this memory property by means of
                 garbage collecting the ‘now’ heap after each step. Since the proof of Theorems 4.2 and 4.3
                 is fully formalised in the accompanying Coq proofs, we only give a high-level overview
                 of the underlying proof technique here.
We prove the above-mentioned theorems by establishing a semantic soundness property. For productivity, our soundness property must imply that the evaluation semantics ⟨t; σ ⟩ ⇓ ⟨v; σ ′ ⟩ converges for each well-typed term t, and for causality, the soundness property must imply that this is also the case if t contains references to heap locations in σ .
To obtain such a soundness result, we construct a Kripke logical relation that incorporates these properties. Generally speaking, a Kripke logical relation constructs for each type A a relation ⟦A⟧w indexed over some world w, with closure conditions that apply when the index w changes. In our case, ⟦A⟧w is a set of terms. Moreover, the index w consists of three components: a number ν to act as a step index (Appel and McAllester, 2001), a store σ to establish the safety of garbage collection, and an infinite sequence η of future heaps in order to capture the causality property.
A crucial ingredient of a Kripke logical relation is the ordering on the indices. The ordering on the number ν is the standard ordering on numbers. For heaps, we use the standard ordering on partial maps: η ⊑ η′ iff η(l) = η′ (l) for all l ∈ dom (η). Infinite sequences of heaps are ordered pointwise according to ⊑. Moreover, we extend the ordering to stores in two different ways:

                  ηN ⊑ η′N     ηL ⊑ η′L             σ ⊑ σ′            η ⊑ η′
                  ─────────────────────            ────────         ──────────
                     ηN ηL ⊑ η′N η′L                σ ≼ σ′           η ≼ η″ η′

That is, ⊑ is the pointwise extension of the order on heaps to stores, whereas ≼ is more general and permits introducing an arbitrary ‘now’ heap if none is present.
Given these orderings, we define two logical relations, the value relation Vν ⟦A⟧η σ and the term relation Tν ⟦A⟧η σ . Both are defined in Figure 9 by well-founded recursion according to








                                                                    Fig. 9. Logical relation.


the lexicographic ordering on the triple (ν, |A| , e), where |A| is the size of A defined below, and e = 1 for the term relation and e = 0 for the value relation:
                                      |α| = |○A| = |Int| = |1| = 1
                                |A × B| = |A + B| = |A → B| = 1 + |A| + |B|
                                    |□A| = |Fix α.A| = 1 + |A|
                         In the definition of the logical relation, we use the notation η; η to denote an infi-
                      nite sequence of heaps that starts with the heap η and then continues as the sequence η.
                      Moreover, we use the notation σ (l) to denote ηL (l) if σ is of the form ηL or ηN ηL .
The crucial part of the logical relation that ensures both causality and the absence of space leaks is the case for ○A. The value relation of ○A at store index σ is defined as all heap locations that map to computations in the term relation of A but at the store index gc(σ )η. Here, gc(σ ) denotes the garbage collection of the store σ as defined in Figure 9. It simply drops the ‘now’ heap if present. To see how this definition captures causality, we have to look at the index η; η of future heaps. It changes to the index η, that is, all future heaps are one time step closer, and the very first future heap η becomes the new ‘later’ heap in the store index gc(σ )η, whereas the old ‘later’ heap in σ becomes the new ‘now’ heap.
                         The central theorem that establishes type soundness is the so-called fundamental prop-
                      erty of the logical relation. It states that well-typed terms are in the term relation. For the
                      induction proof of this property, we also need to consider open terms and to this end, we
also need a corresponding context relation Cν ⟦Γ⟧η σ , which is given in Figure 9.

Theorem 5.1 (Fundamental Property). If Γ ⊢ t : A, and γ ∈ Cν ⟦Γ⟧η σ , then tγ ∈ Tν ⟦A⟧η σ .








                                                 Fig. 10. Implementation of Rattus primitives.


The proof of the fundamental property is a lengthy but entirely standard induction on the typing relation Γ ⊢ t : A. Both Theorems 4.2 and 4.3 are then proved using the above
                 theorem.


                                                      6 Embedding Rattus in Haskell
                 Our goal with Rattus is to combine the operational guarantees provided by modal FRP
                 with the practical benefits of FRP libraries. Because of its Fitch-style typing rules, we can-
                 not implement Rattus as just a library of combinators. Instead, we rely on a combination
                 of a very simple library that implements the primitives of the language together with a
                 compiler plugin that performs additional checks. In addition, we also have to implement
                 the operational semantics of Rattus, which is by default call-by-value and thus different
from Haskell’s. This discrepancy in the operational semantics suggests a deep embedding
                 of the language. However, in order to minimise syntactic overhead and to seamlessly inte-
                 grate Rattus with its host language, we chose a shallow embedding and instead rely on the
                 compiler plugin to perform the necessary transformations to ensure the correct operational
                 behaviour of Rattus programs.
                    We start with a description of the implementation followed by an illustration of how the
                 implementation is used in practice.


                                                         6.1 Implementation of Rattus
                 At its core, our implementation consists of a very simple library that implements the prim-
                 itives of Rattus (delay, adv, box and unbox) so that they can be readily used in Haskell
code. The library is given in its entirety in Figure 10. Both ○ and □ are simple wrapper types, each with their own wrap and unwrap function. The constructors Delay and Box are not exported by the library, that is, ○ and □ are treated as abstract types.
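A minimal sketch of what such a library looks like is given below. The primitives follow the description of Figure 10, but the details (strictness annotations, module structure, export lists) are assumptions rather than the actual Rattus source:

```haskell
-- The "later" modality O and the "always" modality Box as plain
-- wrapper types. Without the compiler plugin enforcing the typing
-- rules, these provide no operational guarantees by themselves.
newtype O a = Delay a
newtype Box a = Box a

delay :: a -> O a
delay = Delay

adv :: O a -> a
adv (Delay x) = x

box :: a -> Box a
box = Box

unbox :: Box a -> a
unbox (Box x) = x
```

In the real library, the constructors Delay and Box would not be exported, so both types remain abstract to user code.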
                    If we were to use these primitives as provided by the library, we would end up with the
                 problems illustrated in Section 2. Such an implementation of Rattus would enjoy none of
                 the operational properties we have proved. To make sure that programs use these primitives
                 according to the typing rules of Rattus, our implementation has a second component: a
                 plugin for the GHC Haskell compiler that enforces the typing rules of Rattus.
                    The design of this plugin follows the simple observation that any Rattus program is
                 also a Haskell program but with more restrictive rules for variable scope and for where
                 Rattus’s primitives may be used. So type-checking a Rattus program boils down to first
                 type-checking it as a Haskell program and then checking that it follows the stricter variable
                 scope rules. That means, we must keep track of when variables fall out of scope due to the








                       Fig. 11. Compiler phases of GHC (simplified) extended with Rattus plugin (highlighted in bold).



                      use of delay, adv and box, but also due to guarded recursion. Additionally, we must make
sure that both guarded recursive calls and adv only appear in a context where a ✓ is present.
                         To enforce these additional simple scope rules, we make use of GHC’s plugin API
                      which allows us to customise part of GHC’s compilation pipeline. The different phases of
                      GHC are illustrated in Figure 11 with the additional passes performed by the Rattus plugin
                      highlighted in bold.
                         After type-checking the Haskell abstract syntax tree (AST), GHC passes the resulting
                      typed AST on to the scope-checking component of the Rattus plugin, which checks the
                      above-mentioned stricter scoping rules. GHC then desugars the typed AST into the inter-
                      mediate language Core. GHC then performs a number of transformation passes on this
                      intermediate representation, and the first two of these are provided by the Rattus plu-
                      gin: first, we exhaustively apply the two rewrite rules from Section 4.2.1 to transform the
program into a single-tick form according to the typing rules of λ✓. Then we transform
                      the resulting code so that Rattus programs adhere to the call-by-value semantics. To this
                      end, the plugin’s strictness pass transforms all lambda abstractions so that they evaluate
                      their arguments to weak head normal form, and all let bindings so that they evaluate the
                      bound expression into weak head normal form. This is achieved by transforming lambda
                      abstractions and let bindings as follows:
                                λx.t is replaced by λx.case x of _ → t, and
                                let x = s in t is replaced by case s of x → t
                      In Haskell Core, case expressions always evaluate the scrutinee to weak head normal form
                      even if there is only a default clause. Hence, this transformation will force the evaluation
                      of x in the lambda abstraction λx.t, and the evaluation of s in the let binding let x = s in t.
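The effect of this transformation can be mimicked at the source level with seq — a sketch of the intended semantics rather than the actual Core-to-Core pass:

```haskell
-- Forcing to weak head normal form, as the strictness pass arranges:
-- the lambda's argument and the let-bound expression are evaluated
-- before the body runs.
strictLam :: (a -> b) -> (a -> b)
strictLam f = \x -> x `seq` f x

strictLet :: a -> (a -> b) -> b
strictLet s body = s `seq` body s
```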
                      In addition, this strictness pass also checks that Rattus code only uses strict data types
                      and issues a warning if lazy data types are used, for example, Haskell’s standard list and
                      pair types. Taken together, the transformations and checks performed by the strictness
pass ensure that lambda abstractions, let bindings and data constructors follow the operational semantics of λ✓. The remaining components of the language are either implemented
                      directly as Haskell functions (delay, adv, box and unbox) and thus require no transforma-
                      tion or use Haskell’s recursion syntax that matches the semantics of the corresponding
fixed point combinator in λ✓.
   However, the Haskell implementations of delay and adv do not match the operational semantics of λ✓ exactly. Instead of explicit allocation of delayed computations on a heap that then enables the eager garbage collection strategy of the λ✓ operational semantics,








                                                       Fig. 12. Complete Rattus program.



                 we rely on Haskell’s lazy evaluation for delay and GHC’s standard garbage collection
                 implementation. Since our metatheoretical results show that old temporal data is no longer
                 referenced after each time step, such data will indeed be garbage collected by the GHC
                 runtime system.
                    Not pictured in Figure 11 is a second scope-checking pass that is performed after the
                 strictness pass. After the single tick pass and thus also after the strictness pass, we expect
that the code is typable according to the more restrictive typing rules of λ✓. This second scope-checking pass checks this invariant for the purpose of catching implementation
                 bugs in the Rattus plugin. The Core intermediate language is much simpler than the full
                 Haskell language, so this second scope-checking pass is much easier to implement and
                 much less likely to contain bugs. In principle, we could have saved ourselves the trou-
                 ble of implementing the much more complicated scope-checking at the level of the typed
                 Haskell AST. However, by checking at this earlier stage of the compilation pipeline, we
                 can provide much more helpful type error messages.
                    One important component of checking variable scope is checking whether types are
stable. This is a simple syntactic check: a type τ is stable if all occurrences of ○ or function types in τ are nested under a □. However, Rattus also supports polymorphic types with
                 type constraints such as in the const function:
                 const :: Stable a ⇒ a → Str a
                 const x = x ::: delay (const x)
                 The Stable type class is defined as a primitive in the Rattus library (see Figure 10). The
                 library does not export the underlying StableInternal type class so that the user cannot
                 declare any instances for Stable. Our library does not declare instances of the Stable class
                 either. Instead, such instances are derived by the Rattus plugin, which uses GHC’s type
                 checker plugin API; this API allows us to provide limited customisation of GHC’s type-
                 checking phase (see Figure 11). Using this API, one can give GHC a custom procedure
                 for resolving type constraints. Whenever GHC’s type checker finds a constraint of the
                 form Stable τ , it will send it to the Rattus plugin, which will resolve it by performing the
                 above-mentioned syntactic check on τ .
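The stability check described above can be illustrated with a small, self-contained sketch. The Ty datatype and the isStable function below are hypothetical stand-ins for the plugin's actual check, which operates on GHC's internal representation of types:

```haskell
-- A hypothetical model of Rattus's stability check. The Ty datatype is a
-- simplified stand-in for GHC's internal representation of types, on which
-- the actual plugin operates.
data Ty
  = TInt          -- a base type; stable
  | TFun Ty Ty    -- function type; not stable unless under a box
  | TDelay Ty     -- the delay modality ○; not stable unless under a box
  | TBox Ty       -- the box modality □; anything boxed is stable
  | TProd Ty Ty   -- product type; stable iff both components are

-- A type is stable if all occurrences of the delay modality or of
-- function types are nested under a box.
isStable :: Ty -> Bool
isStable TInt        = True
isStable (TFun _ _)  = False
isStable (TDelay _)  = False
isStable (TBox _)    = True
isStable (TProd a b) = isStable a && isStable b
```

For instance, under this model Int and Int ⊗ □(Int → Int) count as stable, whereas ○Int and Int → Int do not.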


                                                                 6.2 Using Rattus
                 To write Rattus code inside Haskell, one must use GHC with the flag
                 -fplugin=Rattus.Plugin, which enables the Rattus plugin described above. Figure 12





                      shows a complete program that illustrates the interaction between Haskell and Rattus. The
                      language is imported via the Rattus module, with the Rattus.Stream module providing a
                      stream library (of which we have seen an excerpt in Figure 1). The program contains only
                      one Rattus function, summing, which is indicated by an annotation. This function uses the
                      scan combinator to define a stream transducer that sums up its input stream. Finally, we
                      use the runTransducer function that is provided by the Rattus.ToHaskell module. It turns
                      a stream function of type Str a → Str b into a Haskell value of type Trans a b defined as
                      follows:
                      data Trans a b = Trans (a → (b, Trans a b))
                      This allows us to run the stream function step by step, as illustrated in the main function: it
                      reads an integer from the console, passes it on to the stream function, prints the response,
                      and then repeats the process.
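The step-by-step execution of a Trans value can be sketched as follows. The runList driver and the sums transducer below are illustrative stand-ins (the paper's main function reads its inputs from the console instead):

```haskell
-- The Trans type from the paper: a step function that, given an input,
-- produces an output and the transducer to use for the next step.
data Trans a b = Trans (a -> (b, Trans a b))

-- A hypothetical driver that steps the transducer once per list element,
-- collecting the outputs.
runList :: Trans a b -> [a] -> [b]
runList _ [] = []
runList (Trans step) (x:xs) =
  let (y, next) = step x in y : runList next xs

-- A simple transducer that sums its inputs, mimicking the behaviour of
-- the summing example (here written directly, without Rattus).
sums :: Int -> Trans Int Int
sums acc = Trans (\x -> let acc' = acc + x in (acc', sums acc'))
```

Feeding the inputs 1, 2, 3 to sums 0 then produces the running sums 1, 3, 6, one output per step.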
                         Alternatively, if a module contains only Rattus definitions, we can use the annotation
                        {-# ANN module Rattus #-}
                      to declare that all definitions in a module are to be interpreted as Rattus code.



                                                                       7 Related work
                      Embedded FRP languages. The central ideas of FRP were originally developed for the
                      language Fran (Elliott and Hudak, 1997) for reactive animation. These ideas have since
                      been developed into general-purpose libraries for reactive programming, most prominently
                      the Yampa library (Nilsson et al., 2002) for Haskell, which has been used in a variety of
                      applications including games, robotics, vision, GUIs and sound synthesis.
                         More recently, van der Ploeg and Claessen (2015) have developed the FRPNow! library for
                      Haskell, which – like Fran – uses behaviours and events as FRP primitives (as opposed to
                      signal functions) but carefully restricts the API to guarantee causality and the absence of
                      implicit space leaks. To argue for the latter, the authors construct a denotational model and
                      show, using a logical relation, that their combinators are not ‘inherently leaky’. This property
                      does not imply the absence of space leaks, but rather that the library can, in principle, be
                      implemented without space leaks.
                         While these FRP libraries do not allow direct manipulation of signals since they lack a
                      bespoke type system to make that safe, they do have a practical advantage: their reliance
                      on the host language’s type system makes their implementation and maintenance markedly
                      easier. Moreover, by embedding FRP libraries in host languages with a richer type system,
                      such as full dependent types, one can still obtain some operational guarantees, including
                      productivity (Sculthorpe and Nilsson, 2009).

                      Modal FRP calculi. The idea of using modal type operators for reactive programming goes
                      back to Jeffrey (2012), Krishnaswami and Benton (2011), and Jeltsch (2013). One of the
                      inspirations for Jeffrey (2012) was to use linear temporal logic (Pnueli, 1977) as a program-
                      ming language through the Curry–Howard isomorphism. The work of Jeffrey and Jeltsch
                      has mostly been based on denotational semantics, and to our knowledge Krishnaswami and
                      Benton (2011), Krishnaswami et al. (2012), Krishnaswami (2013), Cave et al. (2014), and





                 Bahr et al. (2019, 2021) are the only works giving operational guarantees. In addition, the
                 calculi of Cave et al. (2014) and Bahr et al. (2021) distinguish between data (least fixed
                 points, e.g., events that must happen within a finite number of steps) and codata (greatest
                 fixed points, e.g., streams). With the help of both greatest and least fixed points, one
                 can express liveness properties of programs in their types (e.g., a server that will always
                 respond within a finite number of time steps).
                 Temporal logic has also been used directly as a specification language for property-based
                 testing and runtime verification of FRP programs (Perez and Nilsson, 2020).
                    Guarded recursive types and the guarded fixed point combinator originate with Nakano
                 (2000) but have since been used for constructing logics for reasoning about advanced pro-
                 gramming languages (Birkedal et al., 2011) using an abstract form of step-indexing (Appel
                 and McAllester, 2001). The Fitch-style approach to modal types (Fitch, 1952; Clouston,
                 2018) has been used for guarded recursion in Clocked Type Theory (Bahr et al.,
                 2017), where contexts can contain multiple, named ticks. Ticks can be used for rea-
                 soning about guarded recursive programs. The denotational semantics of Clocked Type
                 Theory (Mannaa and Møgelberg, 2018) reveals the difference from the more standard
                 dual-context approaches to modal logics, such as Dual Intuitionistic Linear Logic (Barber,
                 1996): in the latter, the modal operator is implicitly applied to the types of all variables in
                 one context, whereas in the Fitch style, placing a tick in a context corresponds to applying
                 a left adjoint of the modal operator to the context. Guatto (2018) introduced the notion of time
                 warp and the warping modality, generalising the delay modality in guarded recursion, to
                 allow for a more direct style of programming for programs with complex input–output
                 dependencies.

                 Space leaks. The work by Krishnaswami (2013) and Bahr et al. (2019, 2021) is the clos-
                 est to the present work. Both present a modal FRP language with a garbage collection
                 result similar to ours. Krishnaswami (2013) pioneered this approach to prove the absence
                 of implicit space leaks but also implemented a compiler for his language, which translates
                 FRP programs into JavaScript. Bahr et al. (2019) were the first to mechanise the metathe-
                 ory for a modal FRP calculus, and our mechanisation is based on their work. For a more
                 detailed comparison with these calculi, see Section 4.5.
                    Krishnaswami et al. (2012) approached the problem of space leaks with an affine type
                 system that keeps track of permission tokens for allocating a stream cons cell. This typ-
                 ing discipline ensures that space usage is bounded by the number of provided permission
                 tokens and thus provides more granular static checks of space usage.
                    Synchronous dataflow languages, such as Esterel (Berry and Cosserat, 1985), Lustre (Caspi
                 et al., 1987) and Lucid Synchrone (Pouzet, 2006), provide even stronger static guarantees –
                 not only on space usage but also on time usage. This feature has made these languages
                 attractive in resource-constrained environments such as hardware synthesis and embed-
                 ded control software. Their computational model is based on a fixed network of stream
                 processing nodes where each node consumes and produces a statically known number of
                 primitive values at each discrete time step. As a trade-off for these static guarantees, syn-
                 chronous dataflow languages support neither time-varying values of arbitrary types nor
                 dynamic switching.





                                                             8 Discussion and future work
                      We have shown that modal FRP with strong operational guarantees can be seamlessly
                      integrated into the Haskell programming language. Two main ingredients are central to
                      achieving this integration: (1) the use of Fitch-style typing to simplify the syntax for inter-
                      acting with the two modalities and (2) lifting some of the restrictions found in previous
                      work on Fitch-style typing systems.
                         This paper opens up several avenues for future work both on the implementation side
                      and the underlying theory. We chose Haskell as our host language as it has a compiler
                      extension API that makes it easy for us to implement Rattus and convenient for program-
                      mers to start using Rattus with little friction. However, we think that implementing Rattus
                      in call-by-value languages like OCaml or F# should be easily achieved by a simple post-
                      processing step that checks the Fitch-style variable scope. This can be done by an external
                      tool (not unlike a linter) that does not need to be integrated into the compiler. Moreover,
                      while the use of the type class Stable is convenient, it is not necessary, as we can always
                      use the □ modality instead (cf. const versus constBox). When a program transformation
                      approach is not feasible or not desirable, one can also use the more restrictive single-tick
                      calculus rather than the multi-tick calculus as the underlying calculus. We suspect that
                      most function definitions do not need the flexibility of the multi-tick calculus, and those
                      that do can be transformed by the programmer with only a little syntactic clutter. One could
                      imagine that the type checker could suggest these transformations to the programmer rather
                      than silently performing them itself.
                         FRP is not the only possible application of Fitch-style type systems. However, most of
                      the interest in Fitch-style systems has been in logics and dependent type theory (Clouston,
                      2018; Birkedal et al., 2018; Bahr et al., 2017; Borghuis, 1994) as opposed to programming
                      languages. Rattus is, to our knowledge, the first implementation of a Fitch-style program-
                      ming language. We would expect that programming languages for information flow
                      control (Kavvos, 2019) and recent work on modalities for pure computations by Chaudhury and
                      Krishnaswami (2020) admit a Fitch-style presentation and could be implemented similarly
                      to Rattus.
                         While Fitch-style modal FRP languages promise strong static guarantees with only mod-
                      erate syntactic overhead, we still lack empirical evidence that they help practical adoption
                      of FRP. Our goal with Rattus is to provide an implementation with a low entry barrier
                      and with access to a mature and practical software development ecosystem. We hope that
                      this enables experimentation with Rattus not only by researchers, but also students and
                      practitioners.
                         For example, we have looked at only a small fragment of Yampa’s comprehensive arro-
                      wised FRP library. A thorough re-implementation of Yampa in Rattus could provide a
                      systematic comparison of their relative expressiveness. For such an effort, we expect that
                      the signal function type be refined to include the ○ modality:
                      data SF a b = SF !(Float → a → (○(SF a b) ⊗ b))
                      This enables the implementation of Yampa’s powerful switch combinators in Rattus,
                      which among other things can be used to stop a signal function and then resume it (with all
                      its internal state) at a later point. By contrast, it is not clear how such a switch combinator
                      can be provided in an FRP library with first-class streams as presented in Section 3.1.





                     Part of the success of FRP libraries such as Yampa and FRPNow! is due to the fact
                  that they provide rich and highly optimised APIs that integrate well with their host
                  language. In this paper, we have shown that Rattus can be seamlessly embedded in Haskell,
                 but more work is required to design a comprehensive library and to perform the low-level
                 optimisations that are often necessary to obtain good real-world performance. For exam-
                 ple, our definition of signal functions in Section 3.3 resembles the semantics of Yampa’s
                 signal functions, but Yampa’s signal functions are implemented as generalised algebraic
                 data types (GADTs) that can handle some special cases much more efficiently and enable
                 dynamic optimisations.
                    In order to devise sound optimisations for Rattus, we also require suitable coinductive
                 reasoning principles. For example, we might want to make use of Haskell’s rewrite rules
                 to perform optimisations such as map fusion as expressed in the following equation:
                                          map f (map g xs) = map (box (unbox f ◦ unbox g)) xs
                 Such an equation could be proved sound using step-indexed logical relations, not unlike
                 the one we presented in Section 5. For this to scale, however, we need more high-level
                 reasoning principles such as a sound coinductive axiomatisation of bisimilarity or more
                 generally a suitable type theory with guarded recursion (Møgelberg and Veltri, 2019).
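The shape of the map fusion law above can be illustrated on a toy model of streams. The Box and Str types below are simplified, lazy stand-ins for Rattus's primitives, so this sketch only demonstrates the equation on finite prefixes, not its proof:

```haskell
-- A toy, lazy model of boxed functions and streams; Rattus's actual Box
-- and Str are primitives with a strict, modal semantics.
newtype Box a = Box { unbox :: a }

data Str a = a ::: Str a
infixr 5 :::

-- Stream map over the toy model, applying a boxed function pointwise.
smap :: Box (a -> b) -> Str a -> Str b
smap f (x ::: xs) = unbox f x ::: smap f xs

-- Observe a finite prefix of a stream.
takeStr :: Int -> Str a -> [a]
takeStr 0 _          = []
takeStr n (x ::: xs) = x : takeStr (n - 1) xs

-- An example input stream: the natural numbers.
nats :: Str Int
nats = go 0 where go n = n ::: go (n + 1)
```

On any finite prefix, smap f (smap g xs) and smap (Box (unbox f . unbox g)) xs then agree, which is exactly what the rewrite rule would exploit.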



                                                               Conflicts of interest
                 None.



                                                            Supplementary material
                 For supplementary material for this article, please visit https://doi.org/10.1017/
                 S0956796822000132.



                                                                     References
                 Abramsky, S., Honda, K. & McCusker, G. (1998) A fully abstract game semantics for general refer-
                   ences. In Proceedings of the Thirteenth Annual IEEE Symposium on Logic in Computer Science
                   (Cat. No. 98CB36226). Indianapolis, IN, USA: IEEE Computer Society, pp. 334–344.
                 Appel, A. W. & McAllester, D. (2001) An indexed model of recursive types for foundational proof-
                    carrying code. ACM Trans. Program. Lang. Syst. 23(5), 657–683.
                 Bahr, P., Grathwohl, H. B. & Møgelberg, R. E. (2017) The clocks are ticking: No more delays!
                   In 32nd Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2017, Reykjavik,
                   Iceland, June 20-23, 2017. Washington, DC, USA: IEEE Computer Society, pp. 1–12.
                 Bahr, P., Graulund, C. U. & Møgelberg, R. E. (2019) Simply RaTT: A fitch-style modal calculus for
                   reactive programming without space leaks. Proc. ACM Program. Lang. 3(ICFP), 1–27.
                 Bahr, P., Graulund, C. U. & Møgelberg, R. E. (2021) Diamonds are not forever: Liveness in reactive
                   programming with guarded recursion. In POPL 2021, to appear.
                 Barber, A. (1996) Dual intuitionistic linear logic. Technical report. University of Edinburgh.
                   Edinburgh, UK.
                 Berry, G. & Cosserat, L. (1985) The esterel synchronous programming language and its mathemat-
                   ical semantics. In Seminar on Concurrency. Berlin, Heidelberg, DE: Springer Berlin Heidelberg,
                   pp. 389–448.





                      Birkedal, L., Clouston, R., Mannaa, B., Møgelberg, R. E., Pitts, A. M. & Spitters, B. (2018)
                          Modal dependent type theory and dependent right adjoints. arXiv:1804.05236 [cs].
                      Birkedal, L., Møgelberg, R. E., Schwinghammer, J. & Støvring, K. (2011) First steps in syn-
                         thetic guarded domain theory: Step-indexing in the topos of trees. In Proceedings of of LICS.
                         Washington, DC, USA: IEEE Computer Society, pp. 55–64.
                      Borghuis, V. A. J. (1994) Coming to Terms with Modal Logic: On the Interpretation of Modalities
                          in Typed Lambda-Calculus. PhD Thesis. Technische Universiteit Eindhoven.
                      Caspi, P., Pilaud, D., Halbwachs, N. & Plaice, J. A. (1987) Lustre: A declarative language for real-
                         time programming. In Proceedings of the 14th ACM SIGACT-SIGPLAN Symposium on Principles
                         of Programming Languages. New York, NY, USA: ACM, pp. 178–188.
                      Cave, A., Ferreira, F., Panangaden, P. & Pientka, B. (2014) Fair reactive programming. In
                         Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming
                         Languages. San Diego, California, USA: ACM, pp. 361–372.
                      Chaudhury, V. & Krishnaswami, N. (2020) Recovering purity with comonads and capabilities. In
                         ICFP 2020, to appear.
                      Clouston, R. (2018) Fitch-style modal lambda calculi. In Foundations of Software Science and
                         Computation Structures. Cham: Springer. Springer International Publishing, pp. 258–275.
                      Elliott, C. & Hudak, P. (1997) Functional reactive animation. In Proceedings of the Second ACM
                         SIGPLAN International Conference on Functional Programming. New York, NY, USA: ACM,
                         pp. 263–273.
                      Fitch, F. B. (1952) Symbolic Logic, An Introduction. New York, NY, USA: Ronald.
                      Guatto, A. (2018) A generalized modality for recursion. In Proceedings of the 33rd Annual
                         ACM/IEEE Symposium on Logic in Computer Science. ACM, pp. 482–491.
                      Hudak, P., Courtney, A., Nilsson, H. & Peterson, J. (2004) Arrows, robots, and functional reactive
                         programming. In Advanced Functional Programming. Springer Berlin/Heidelberg.
                      Jeffrey, A. (2012) LTL types FRP: Linear-time temporal logic propositions as types, proofs as func-
                         tional reactive programs. In Proceedings of the sixth workshop on Programming Languages meets
                         Program Verification, PLPV 2012, Philadelphia, PA, USA, January 24, 2012. Philadelphia, PA,
                         USA: ACM,. pp. 49–60.
                      Jeffrey, A. (2014) Functional reactive types. In Proceedings of the Joint Meeting of the Twenty-
                         Third EACSL Annual Conference on Computer Science Logic (CSL) and the Twenty-Ninth Annual
                         ACM/IEEE Symposium on Logic in Computer Science (LICS). New York, NY, USA: ACM,
                         pp. 54:1–54:9.
                      Jeltsch, W. (2013) Temporal logic with “until”, functional reactive programming with processes,
                         and concrete process categories. In Proceedings of the 7th Workshop on Programming Languages
                         Meets Program Verification. New York, NY, USA: ACM, pp. 69–78.
                      Järvi, J., Marcus, M., Parent, S., Freeman, J. & Smith, J. N. (2008) Property Models: From Incidental
                         Algorithms to Reusable Components. In Proceedings of the 7th International Conference on
                         Generative Programming and Component Engineering. New York, NY, USA: ACM, pp. 89–98.
                      Kavvos, G. A. (2019) Modalities, cohesion, and information flow. Proc. ACM Program. Lang.
                          3(POPL), 20:1–20:29.
                      Krishnaswami, N. R. (2013) Higher-order functional reactive programming without spacetime leaks.
                         In Proceedings of the 18th ACM SIGPLAN International Conference on Functional Programming.
                         Boston, Massachusetts, USA: ACM, pp. 221–232.
                      Krishnaswami, N. R. & Benton, N. (2011) Ultrametric semantics of reactive programs. In 2011 IEEE
                         26th Annual Symposium on Logic in Computer Science. Washington, DC, USA: IEEE Computer
                         Society, pp. 257–266.
                      Krishnaswami, N. R., Benton, N. & Hoffmann, J. (2012) Higher-order functional reactive pro-
                         gramming in bounded space. In Proceedings of the 39th ACM SIGPLAN-SIGACT Symposium on
                         Principles of Programming Languages, POPL 2012, Philadelphia, Pennsylvania, USA, January
                         22–28, 2012. Philadelphia, PA, USA: ACM, pp. 45–58.





                 Mannaa, B. & Møgelberg, R. E. (2018) The clocks they are adjunctions: Denotational semantics for
                   clocked type theory. In 3rd International Conference on Formal Structures for Computation and
                   Deduction, FSCD 2018, July 9–12, 2018, Oxford, UK. New York, NY, USA, pp. 23:1–23:17.
                 Møgelberg, R. E. & Veltri, N. (2019) Bisimulation as path type for guarded recursive types. Proc.
                   ACM Program. Lang. 3(POPL), 4:1–4:29.
                 Nakano, H. (2000) A modality for recursion. In Proceedings Fifteenth Annual IEEE Symposium
                   on Logic in Computer Science (Cat. No. 99CB36332). Washington, DC, USA: IEEE Computer
                   Society, pp. 255–266.
                 Nilsson, H., Courtney, A. & Peterson, J. (2002) Functional reactive programming, continued. In
                   Proceedings of the 2002 ACM SIGPLAN Workshop on Haskell. New York, NY, USA: ACM,
                   pp. 51–64.
                 Parent, S. (2006) A possible future for software development. Keynote talk at the Workshop of
                   Library-Centric Software Design at OOPSLA’06.
                  Paterson, R. (2001) A new notation for arrows. ACM SIGPLAN Not. 36(10), 229–240.
                 Perez, I., Bärenz, M. & Nilsson, H. (2016) Functional reactive programming, refactored. In
                   Proceedings of the 9th International Symposium on Haskell. New York, NY, USA: Association
                   for Computing Machinery, pp. 33–44.
                 Perez, I. & Nilsson, H. (2020) Runtime verification and validation of functional reactive systems.
                   J. Funct. Program. 30.
                 van der Ploeg, A. & Claessen, K. (2015) Practical principled FRP: forget the past, change the future,
                   FRPNow! In Proceedings of the 20th ACM SIGPLAN International Conference on Functional
                    Programming. Vancouver, BC, Canada: Association for Computing Machinery, pp. 302–314.
                 Pnueli, A. (1977) The temporal logic of programs. In Proceedings of the 18th Annual Symposium on
                   Foundations of Computer Science. Washington, DC, USA: IEEE Computer Society, pp. 46–57.
                 Pouzet, M. (2006) Lucid synchrone, version 3. Tutorial and Reference Manual. Université Paris-
                   Sud, LRI. 1, 25.
                 Sculthorpe, N. & Nilsson, H. (2009) Safe functional reactive programming through dependent types.
                   In Proceedings of the 14th ACM SIGPLAN International Conference on Functional Programming.
                   New York, NY, USA: ACM, pp. 23–34.




                                                              A Multi-tick calculus
                  In this appendix, we give the proof of Theorem 4.1, that is, we show that the program
                  transformation described in Section 4.2.1 indeed transforms any closed term of the
                  multi-tick calculus into a closed term of the single-tick calculus.
                     Figure 13 gives the context formation rules of the single-tick calculus; the only
                  difference compared to the multi-tick calculus is the rule for adding ticks, which has a
                  side condition so that there may be no more than one tick. Figure 14 lists the full set of
                  typing rules of the single-tick calculus. Compared to the multi-tick calculus (cf. Figure 5),
                  the only rules that have changed are the rule for lambda abstraction and the rule for delay.
                  Both rules transform the context Γ to |Γ|, which removes the ✓ in Γ if it has one:

                                            |Γ| = Γ                if Γ is tick-free
                                            |Γ, ✓, Γ'| = Γ, Γ'

                   We define the rewrite relation −→ as the least relation that is closed under congruence
                 and the following rules:








                                                           Fig. 13. Well-formed contexts of the single-tick calculus.




                                                                 Fig. 14. Typing rules of the single-tick calculus.



                                    delay(K[adv t]) −→ let x = t in delay(K[adv x])              if t is not a variable
                                         λx.K[adv t] −→ let y = adv t in λx.(K[y])
                      where K is a term with a single occurrence of a hole [] that is not in the scope of delay, adv,
                      box, fix, or a lambda abstraction. Formally, K is generated by the following grammar:
                        K ::= [] | K t | t K | let x = K in t | let x = t in K | K, t | t, K | in1 K | in2 K | π1 K | π2 K
                              | case K of in1 x.s; in2 x.t | case s of in1 x.K; in2 x.t | case s of in1 x.t; in2 x.K
                              | K + t | t + K | into K | out K | unbox K
                       We write K[t] for the result of substituting the term t for the unique hole [] in K.
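To illustrate the first rewrite rule, the following sketch implements it for a hypothetical miniature term language (the actual transformation operates on GHC's Core representation):

```haskell
-- A toy term language with just enough constructors to state the rule.
data Tm = Var String | App Tm Tm | Delay Tm | Adv Tm | Let String Tm Tm
  deriving (Eq, Show)

-- One application of the rule
--   delay (K[adv t])  −→  let x = t in delay (K[adv x])   (t not a variable),
-- using the fixed fresh name "x" for simplicity.
step :: Tm -> Maybe Tm
step (Delay body) = case extract body of
  Just (t, rebuild) -> Just (Let "x" t (Delay (rebuild (Adv (Var "x")))))
  Nothing           -> Nothing
step _ = Nothing

-- Find the first 'adv t' with t not a variable; return t together with a
-- function rebuilding the context K. We do not descend under a nested
-- delay, mirroring the side condition on K (lambda, box, fix omitted here).
extract :: Tm -> Maybe (Tm, Tm -> Tm)
extract (Adv t) | not (isVar t) = Just (t, id)
extract (App a b) =
  case extract a of
    Just (t, k) -> Just (t, \h -> App (k h) b)
    Nothing     -> fmap (\(t, k) -> (t, \h -> App a (k h))) (extract b)
extract (Let y a b) =
  case extract a of
    Just (t, k) -> Just (t, \h -> Let y (k h) b)
    Nothing     -> fmap (\(t, k) -> (t, \h -> Let y a (k h))) (extract b)
extract _ = Nothing

isVar :: Tm -> Bool
isVar (Var _) = True
isVar _       = False
```

For example, delay (adv (f s)) rewrites in one step to let x = f s in delay (adv x), hoisting the non-variable argument of adv out of the delay.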
                        In the following, we show that for each closed term t with ⊢ t : A in the multi-tick
                      calculus, exhaustively applying the above rewrite rules to t yields a term t' with ⊢ t' : A
                      in the single-tick calculus. We prove this by proving each of the following properties in turn:

                         (1) Subject reduction: If Γ ⊢ s : A in the multi-tick calculus and s −→ t, then Γ ⊢ t : A.
                         (2) Exhaustiveness: If t is a normal form for −→, then ⊢ t : A in the multi-tick calculus
                             implies ⊢ t : A in the single-tick calculus.
                         (3) Strong normalisation: There is no infinite −→-reduction sequence.





                                                              A.1 Subject reduction
                 We first show subject reduction (cf. Proposition A.4 below). To this end, we need a number
                 of lemmas:

                  Lemma A.1 (weakening). Let Γ1, Γ2 ⊢ t : A and let Γ' be tick-free. Then Γ1, Γ', Γ2 ⊢ t : A.

                  Proof By straightforward induction on Γ1, Γ2 ⊢ t : A.                                        □

                  Lemma A.2. Given Γ, ✓, Γ' ⊢ K[adv t] : A with Γ' tick-free, then there is some type B
                  such that Γ ⊢ t : B and Γ, x : B, ✓, Γ' ⊢ K[adv x] : A.

                 Proof We proceed by induction on the structure of K.

     • []: Γ, ✓, Γ′ ⊢′ adv t : A and Γ′ tick-free implies that Γ ⊢′ t : ○A. Moreover, given
       a fresh variable x, we have that Γ, x : ○A ⊢′ x : ○A, and thus Γ, x : ○A, ✓, Γ′ ⊢′
       adv x : A; that is, we may choose B = ○A.
     • K s: Γ, ✓, Γ′ ⊢′ K[adv t] s : A implies that there is some A′ with Γ, ✓, Γ′ ⊢′ s :
       A′ and Γ, ✓, Γ′ ⊢′ K[adv t] : A′ → A. By induction hypothesis, the latter implies
       that there is some B with Γ ⊢′ t : B and Γ, x : B, ✓, Γ′ ⊢′ K[adv x] : A′ → A.
       Hence, Γ, x : B, ✓, Γ′ ⊢′ s : A′, by Lemma A.1, and thus Γ, x : B, ✓, Γ′ ⊢′
       K[adv x] s : A.
     • s K: Γ, ✓, Γ′ ⊢′ s K[adv t] : A implies that there is some A′ with Γ, ✓, Γ′ ⊢′ s :
       A′ → A and Γ, ✓, Γ′ ⊢′ K[adv t] : A′. By induction hypothesis, the latter implies
       that there is some B with Γ ⊢′ t : B and Γ, x : B, ✓, Γ′ ⊢′ K[adv x] : A′. Hence,
       Γ, x : B, ✓, Γ′ ⊢′ s : A′ → A, by Lemma A.1, and thus Γ, x : B, ✓, Γ′ ⊢′
       s K[adv x] : A.
     • let y = s in K: Γ, ✓, Γ′ ⊢′ let y = s in K[adv t] : A implies that there is some A′
       with Γ, ✓, Γ′ ⊢′ s : A′ and Γ, ✓, Γ′, y : A′ ⊢′ K[adv t] : A. By induction hypothesis,
       the latter implies that there is some B with Γ ⊢′ t : B and Γ, x : B, ✓, Γ′, y :
       A′ ⊢′ K[adv x] : A. Hence, Γ, x : B, ✓, Γ′ ⊢′ s : A′, by Lemma A.1, and thus Γ, x :
       B, ✓, Γ′ ⊢′ let y = s in K[adv x] : A.
     • let y = K in s: Γ, ✓, Γ′ ⊢′ let y = K[adv t] in s : A implies that there is some A′
       with Γ, ✓, Γ′, y : A′ ⊢′ s : A and Γ, ✓, Γ′ ⊢′ K[adv t] : A′. By induction hypothesis,
       the latter implies that there is some B with Γ ⊢′ t : B and Γ, x : B, ✓, Γ′ ⊢′
       K[adv x] : A′. Hence, Γ, x : B, ✓, Γ′, y : A′ ⊢′ s : A, by Lemma A.1, and thus
       Γ, x : B, ✓, Γ′ ⊢′ let y = K[adv x] in s : A.

The remaining cases follow by induction hypothesis and Lemma A.1 in a manner similar
to the cases above. ∎

Lemma A.3. Let Γ, Γ′ ⊢′ K[adv t] : A and Γ′ tick-free. Then there is some type B such
that Γ ⊢′ adv t : B and Γ, x : B, Γ′ ⊢′ K[x] : A.

                 Proof We proceed by induction on the structure of K:





     • []: Γ, Γ′ ⊢′ adv t : A and Γ′ tick-free implies that there must be Γ₁ and Γ₂ such
       that Γ₂ is tick-free, Γ = Γ₁, ✓, Γ₂, and Γ₁ ⊢′ t : ○A. Hence, Γ₁, ✓, Γ₂ ⊢′ adv t :
       A. Moreover, Γ, x : A, Γ′ ⊢′ x : A follows immediately by the variable introduction
       rule.
     • K s: Γ, Γ′ ⊢′ K[adv t] s : A implies that there is some A′ with Γ, Γ′ ⊢′ s : A′ and
       Γ, Γ′ ⊢′ K[adv t] : A′ → A. By induction hypothesis, the latter implies that there is
       some B with Γ ⊢′ adv t : B and Γ, x : B, Γ′ ⊢′ K[x] : A′ → A. Hence, Γ, x : B, Γ′ ⊢′
       s : A′, by Lemma A.1, and thus Γ, x : B, Γ′ ⊢′ K[x] s : A.
     • let y = s in K: Γ, Γ′ ⊢′ let y = s in K[adv t] : A implies that there is some A′
       with Γ, Γ′ ⊢′ s : A′ and Γ, Γ′, y : A′ ⊢′ K[adv t] : A. By induction hypothe-
       sis, the latter implies that there is some B with Γ ⊢′ adv t : B and Γ, x :
       B, Γ′, y : A′ ⊢′ K[x] : A. Hence, Γ, x : B, Γ′ ⊢′ s : A′, by Lemma A.1, and thus
       Γ, x : B, Γ′ ⊢′ let y = s in K[x] : A.

The remaining cases follow by induction hypothesis and Lemma A.1 in a manner similar
to the cases above. ∎

Proposition A.4 (subject reduction). If Γ ⊢′ s : A and s −→ t, then Γ ⊢′ t : A.

                      Proof We proceed by induction on s −→ t.

     • Let s −→ t be due to congruence closure. Then Γ ⊢′ t : A follows by the induction
       hypothesis. For example, if s = s₁ s₂, t = t₁ s₂, and s₁ −→ t₁, then we know that Γ ⊢′
       s₁ : B → A and Γ ⊢′ s₂ : B for some type B. By induction hypothesis, we then have
       that Γ ⊢′ t₁ : B → A and thus Γ ⊢′ t : A.
     • Let delay(K[adv t]) −→ let x = t in delay(K[adv x]) and Γ ⊢′ delay(K[adv t]) : A.
       That is, A = ○A′ and Γ, ✓ ⊢′ K[adv t] : A′. Then, by Lemma A.2, we obtain some
       type B such that Γ ⊢′ t : B and Γ, x : B, ✓ ⊢′ K[adv x] : A′. Hence, Γ ⊢′
       let x = t in delay(K[adv x]) : A.
     • Let λx.K[adv t] −→ let y = adv t in λx.(K[y]) and Γ ⊢′ λx.K[adv t] : A. Hence, A =
       A₁ → A₂ and Γ, x : A₁ ⊢′ K[adv t] : A₂. Then, by Lemma A.3, there is some type
       B such that Γ ⊢′ adv t : B and Γ, y : B, x : A₁ ⊢′ K[y] : A₂. Hence, Γ ⊢′ let y =
       adv t in λx.(K[y]) : A. ∎


                                                                    A.2 Exhaustiveness
Secondly, we show that any closed λ✓′ term that cannot be rewritten any further is also a
closed λ✓ term (cf. Proposition A.9 below).

                      Definition A.5.

                           (i) We say that a term t is weakly adv-free iff whenever t = K[adv s] for some K and s,
                               then s is a variable.





                    (ii) We say that a term t is strictly adv-free iff there are no K and s such that t = K[adv s].

                    Clearly, any strictly adv-free term is also weakly adv-free.
In the following, we use the notation t ↛ to denote the fact that there is no term t′ with
t −→ t′; in other words, t is a normal form.
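The two adv-freeness predicates of Definition A.5 are purely syntactic and can be checked by a simple traversal. The following Haskell sketch illustrates them on a hypothetical toy fragment of the term syntax; the data type Tm and the assumption that the evaluation contexts K never descend under delay, λ, box, or adv are ours, introduced only for illustration.

```haskell
-- Illustrative toy fragment of the term syntax (hypothetical, not the
-- paper's actual grammar).
data Tm = Var String | App Tm Tm | Lam String Tm
        | Let String Tm Tm | Delay Tm | Adv Tm | Box Tm
  deriving (Eq, Show)

-- Arguments s of all decompositions t = K[adv s], assuming evaluation
-- contexts K ::= [] | K t | t K | let x = K in t | let x = t in K,
-- i.e. K never descends under delay, λ, box, or adv.
advArgs :: Tm -> [Tm]
advArgs (Adv s)     = [s]
advArgs (App s t)   = advArgs s ++ advArgs t
advArgs (Let _ s t) = advArgs s ++ advArgs t
advArgs _           = []  -- Var, Lam, Delay, Box: no further decompositions

isVar :: Tm -> Bool
isVar (Var _) = True
isVar _       = False

-- Definition A.5 (i): every decomposition t = K[adv s] has a variable s.
weaklyAdvFree :: Tm -> Bool
weaklyAdvFree = all isVar . advArgs

-- Definition A.5 (ii): there is no decomposition t = K[adv s] at all.
strictlyAdvFree :: Tm -> Bool
strictlyAdvFree = null . advArgs
```

For instance, adv x is weakly but not strictly adv-free, whereas delay (adv x) is both, since the adv occurrence is hidden under the delay and hence not of the form K[adv s].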

                 Lemma A.6.

    (i) If delay t ↛, then t is weakly adv-free.
   (ii) If λx.t ↛, then t is strictly adv-free.

Proof Immediate, by the definition of weakly/strictly adv-free and −→. ∎

Lemma A.7. Let Γ′ contain at least one tick, Γ, ✓, Γ′ ⊢′ t : A, t ↛, and t weakly adv-
free. Then Γ□, Γ′ ⊢′ t : A.

Proof We proceed by induction on Γ, ✓, Γ′ ⊢′ t : A:

     • Γ, ✓, Γ′ ⊢′ adv t : A: Then there are Γ′₁, Γ′₂ such that Γ′₂ is tick-free, Γ′ = Γ′₁, ✓, Γ′₂,
       and Γ, ✓, Γ′₁ ⊢′ t : ○A. Since adv t is by assumption weakly adv-free, we know
       that t is some variable x. Since ○A is not stable, we thus know that x : ○A ∈ Γ′₁.
       Hence, Γ□, Γ′₁ ⊢′ t : ○A, and therefore Γ□, Γ′ ⊢′ adv t : A.
     • Γ, ✓, Γ′ ⊢′ delay t : ○A: Hence, Γ, ✓, Γ′, ✓ ⊢′ t : A. Moreover, since delay t ↛,
       we have by Lemma A.6 that t is weakly adv-free. We may thus apply the induction
       hypothesis to obtain that Γ□, Γ′, ✓ ⊢′ t : A. Hence, Γ□, Γ′ ⊢′ delay t : ○A.
     • Γ, ✓, Γ′ ⊢′ box t : □A: Hence, (Γ, ✓, Γ′)□ ⊢′ t : A, which is the same as
       (Γ□, Γ′)□ ⊢′ t : A. Hence, Γ□, Γ′ ⊢′ box t : □A.
     • Γ, ✓, Γ′ ⊢′ λx.t : A → B: That is, Γ, ✓, Γ′, x : A ⊢′ t : B. Since, by assumption,
       λx.t ↛, we know by Lemma A.6 that t is strictly adv-free, and thus also weakly
       adv-free. Hence, we may apply the induction hypothesis to obtain that Γ□, Γ′, x :
       A ⊢′ t : B, which in turn implies Γ□, Γ′ ⊢′ λx.t : A → B.
     • Γ, ✓, Γ′ ⊢′ fix x.t : □A: Hence, (Γ, ✓, Γ′)□, x : ○A ⊢′ t : A, which is the same as
       (Γ□, Γ′)□, x : ○A ⊢′ t : A. Hence, Γ□, Γ′ ⊢′ fix x.t : □A.
     • Γ, ✓, Γ′ ⊢′ x : A: Then either x : A ∈ Γ and A is stable, or x : A ∈ Γ′. In either case,
       Γ□, Γ′ ⊢′ x : A follows.
     • The remaining cases follow by the induction hypothesis in a straightforward manner.
       For example, if Γ, ✓, Γ′ ⊢′ s t : A, then there is some type B with Γ, ✓, Γ′ ⊢′ s :
       B → A and Γ, ✓, Γ′ ⊢′ t : B. Since s t is weakly adv-free, so are s and t, and we may
       apply the induction hypothesis to obtain that Γ□, Γ′ ⊢′ s : B → A and Γ□, Γ′ ⊢′ t :
       B. Hence, Γ□, Γ′ ⊢′ s t : A. ∎

Lemma A.8. Let Γ, ✓, Γ′ ⊢ t : A, t ↛, and t strictly adv-free. Then Γ□, Γ′ ⊢ t : A.
  (Note that this Lemma is about λ✓.)





Proof We proceed by induction on Γ, ✓, Γ′ ⊢ t : A.

     • Γ, ✓, Γ′ ⊢ adv t : A: Impossible, since adv t is not strictly adv-free.
     • Γ, ✓, Γ′ ⊢ delay t : ○A: Hence, Γ□, Γ′, ✓ ⊢ t : A, which in turn implies that
       Γ□, Γ′ ⊢ delay t : ○A.
     • Γ, ✓, Γ′ ⊢ box t : □A: Hence, (Γ, ✓, Γ′)□ ⊢ t : A, which is the same as
       (Γ□, Γ′)□ ⊢ t : A. Hence, Γ□, Γ′ ⊢ box t : □A.
     • Γ, ✓, Γ′ ⊢ λx.t : A → B: That is, Γ, ✓, Γ′, x : A ⊢ t : B. Since, by assumption,
       λx.t ↛, we know by Lemma A.6 that t is strictly adv-free. Hence, we may apply
       the induction hypothesis to obtain that Γ□, Γ′, x : A ⊢ t : B, which in turn implies
       Γ□, Γ′ ⊢ λx.t : A → B.
     • Γ, ✓, Γ′ ⊢ fix x.t : □A: Hence, (Γ, ✓, Γ′)□, x : ○A ⊢ t : A, which is the same as
       (Γ□, Γ′)□, x : ○A ⊢ t : A. Hence, Γ□, Γ′ ⊢ fix x.t : □A.
     • Γ, ✓, Γ′ ⊢ x : A: Then either x : A ∈ Γ and A is stable, or x : A ∈ Γ′. In either case,
       Γ□, Γ′ ⊢ x : A follows.
     • The remaining cases follow by the induction hypothesis in a straightforward manner.
       For example, if Γ, ✓, Γ′ ⊢ s t : A, then there is some type B with Γ, ✓, Γ′ ⊢ s :
       B → A and Γ, ✓, Γ′ ⊢ t : B. Since s t is strictly adv-free, so are s and t, and we may
       apply the induction hypothesis to obtain that Γ□, Γ′ ⊢ s : B → A and Γ□, Γ′ ⊢ t : B.
       Hence, Γ□, Γ′ ⊢ s t : A. ∎
  In order for the induction argument to go through, we generalise the exhaustiveness
property from closed λ✓′ terms to open λ✓′ terms in a context with at most one tick.

Proposition A.9 (exhaustiveness). Let Γ ⊢′ t : A, where Γ contains at most one tick,
and t ↛. Then Γ ⊢ t : A.

Proof We proceed by induction on the structure of t and by case distinction on the last
typing rule in the derivation of Γ ⊢′ t : A.

     • Γ ⊢′ delay t : ○A, derived from the premise Γ, ✓ ⊢′ t : A:
       We consider two cases:
            – Γ is tick-free: Hence, Γ, ✓ contains at most one tick, and thus Γ, ✓ ⊢ t : A by
              induction hypothesis. Since |Γ|✓ = Γ, we thus have that Γ ⊢ delay t : ○A.
            – Γ = Γ₁, ✓, Γ₂ and Γ₂ tick-free: By definition, t ↛ and, by Lemma A.6, t is
              weakly adv-free. Hence, by Lemma A.7, we have that Γ₁□, Γ₂, ✓ ⊢′ t : A. We
              can thus apply the induction hypothesis to obtain that Γ₁□, Γ₂, ✓ ⊢ t : A. Since
              |Γ|✓ = Γ₁□, Γ₂, we thus have that Γ ⊢ delay t : ○A.
     • Γ ⊢′ λx.t : A → B, derived from the premise Γ, x : A ⊢′ t : B:
       By induction hypothesis, we have that Γ, x : A ⊢ t : B. There are two cases to
       consider:





            – Γ is tick-free: Then |Γ|✓ = Γ, and we thus obtain that Γ ⊢ λx.t : A → B.
            – Γ = Γ₁, ✓, Γ₂ with Γ₁, Γ₂ tick-free: Since t ↛ by definition and t is strictly
              adv-free by Lemma A.6, we may apply Lemma A.8 to obtain that Γ₁□, Γ₂, x :
              A ⊢ t : B. Since |Γ|✓ = Γ₁□, Γ₂, we thus have that Γ ⊢ λx.t : A → B.
     • All remaining cases follow immediately from the induction hypothesis, because all
       other rules are either the same for both calculi, or the λ✓ typing rule merely has an
       additional side condition that Γ have at most one tick, which holds by assumption. ∎


                                                           A.3 Strong normalisation
Proposition A.10 (strong normalisation). The rewrite relation −→ is strongly normalising.

Proof To show that −→ is strongly normalising, we define for each term t a natural number
d(t) such that, whenever t −→ t′, then d(t) > d(t′). A redex is a term of the form
delay(K[adv t]) with t not a variable, or a term of the form λx.K[adv s]. For each redex
occurrence in a term t, we can calculate its depth as the length of the unique path that
goes from the root of the abstract syntax tree of t to the occurrence of the redex. Define
d(t) as the sum of the depths of all redex occurrences in t. Since each rewrite step t −→ t′
removes a redex or replaces a redex with a new redex at a strictly smaller depth, we have
that d(t) > d(t′). ∎
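The measure d used in this proof is easy to compute directly. The following Haskell sketch does so on a hypothetical toy fragment of the term syntax (the data type and helper names are ours, and we assume, as above, that evaluation contexts K never descend under delay, λ, box, or adv):

```haskell
-- Illustrative toy fragment of the term syntax (hypothetical).
data Tm = Var String | App Tm Tm | Lam String Tm
        | Let String Tm Tm | Delay Tm | Adv Tm | Box Tm

-- Arguments s of all decompositions t = K[adv s], where K does not
-- descend under delay, λ, box, or adv.
advArgs :: Tm -> [Tm]
advArgs (Adv s)     = [s]
advArgs (App s t)   = advArgs s ++ advArgs t
advArgs (Let _ s t) = advArgs s ++ advArgs t
advArgs _           = []

isVar :: Tm -> Bool
isVar (Var _) = True
isVar _       = False

-- A redex is delay(K[adv t]) with t not a variable, or λx.K[adv s].
isRedex :: Tm -> Bool
isRedex (Delay b) = any (not . isVar) (advArgs b)
isRedex (Lam _ b) = not (null (advArgs b))
isRedex _         = False

children :: Tm -> [Tm]
children (Var _)     = []
children (App s t)   = [s, t]
children (Lam _ b)   = [b]
children (Let _ s t) = [s, t]
children (Delay b)   = [b]
children (Adv s)     = [s]
children (Box b)     = [b]

-- d(t): the sum of the depths of all redex occurrences in t.
d :: Tm -> Int
d = go 0
  where
    go n t = (if isRedex t then n else 0) + sum (map (go (n + 1)) (children t))
```

For example, g (delay (adv (f v))) has a single delay-redex at depth 1, while its rewritten form g (let x = f v in delay (adv x)) has none, so the measure drops from 1 to 0.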

Theorem A.11. For each ⊢′ t : A, we can effectively construct a term t′ with t −→∗ t′ and
⊢ t′ : A.

Proof We construct t′ from t by repeatedly applying the rewrite rules of −→. By
Proposition A.10, this procedure terminates, and we obtain a term t′ with t −→∗ t′ and
t′ ↛. According to Proposition A.4, this means that ⊢′ t′ : A, which in turn implies, by
Proposition A.9, that ⊢ t′ : A. ∎
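The effective construction of Theorem A.11 is plain exhaustive rewriting. The following self-contained Haskell sketch implements the two rewrite rules and normalisation on the same illustrative toy syntax used above (all names are ours; fresh variables are handled naively with reserved identifiers, and variable capture is ignored, which the type system rules out):

```haskell
import Control.Applicative ((<|>))

-- Illustrative toy fragment of the term syntax (hypothetical).
data Tm = Var String | App Tm Tm | Lam String Tm
        | Let String Tm Tm | Delay Tm | Adv Tm | Box Tm
  deriving (Eq, Show)

isVar :: Tm -> Bool
isVar (Var _) = True
isVar _       = False

-- Decompose t as K[adv s]: return s together with a function that plugs
-- a replacement for the whole occurrence of adv s back into K.
findAdv :: Tm -> Maybe (Tm, Tm -> Tm)
findAdv (Adv s)     = Just (s, id)
findAdv (App s t)   = fmap (\(a, k) -> (a, \u -> App (k u) t)) (findAdv s)
                  <|> fmap (\(a, k) -> (a, \u -> App s (k u))) (findAdv t)
findAdv (Let x s t) = fmap (\(a, k) -> (a, \u -> Let x (k u) t)) (findAdv s)
                  <|> fmap (\(a, k) -> (a, \u -> Let x s (k u))) (findAdv t)
findAdv _           = Nothing  -- K does not descend under delay, λ, or box

-- One rewrite step: try the two hoisting rules at the root, else congruence.
step :: Tm -> Maybe Tm
step (Delay b) | Just (s, k) <- findAdv b, not (isVar s)
  = Just (Let "#x" s (Delay (k (Adv (Var "#x")))))     -- delay(K[adv t]) rule
step (Lam x b) | Just (s, k) <- findAdv b
  = Just (Let "#y" (Adv s) (Lam x (k (Var "#y"))))     -- λx.K[adv s] rule
step t = descend t
  where
    descend (App s u)   = ((\s' -> App s' u) <$> step s) <|> (App s <$> step u)
    descend (Let x s u) = ((\s' -> Let x s' u) <$> step s) <|> (Let x s <$> step u)
    descend (Lam x b)   = Lam x <$> step b
    descend (Delay b)   = Delay <$> step b
    descend (Adv b)     = Adv <$> step b
    descend (Box b)     = Box <$> step b
    descend (Var _)     = Nothing

-- Rewrite to normal form; termination is Proposition A.10.
normalize :: Tm -> Tm
normalize t = maybe t normalize (step t)
```

For instance, normalising delay (adv (f v) w) hoists the non-variable argument f v out of the delay as a let-binding, exactly as in the first rewrite rule.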



                                                              B Coq formalisation
The supplementary material for this article contains a Coq formalisation of our meta-
theoretical results. Below we give a high-level overview of the structure of the
formalisation.

                 Main results. The main results are proved in the following three Coq files:

                      • FundamentalProperty.v proves Theorem 5.1, the fundamental property of the
                        logical relation.
                      • Productivity.v proves Theorem 4.2, that is, the stream semantics is safe.
                      • Causality.v proves Theorem 4.3, that is, the stream transducer semantics is safe.





Language definition. The following files define the λ✓ calculus and prove auxiliary
lemmas:

     •   RawSyntax.v defines the syntax of λ✓.
                           •   Substitutions.v defines substitutions of types and terms.
     •   Typing.v defines the type system of λ✓.
                           •   Reduction.v defines the big-step operational semantics.
                           •   Streams.v defines the small-step operational semantics for streams and stream
                               transducers.

                      Logical relation. The following files contain the central definitions and lemmas for the
                      logical relation argument:

                           • Heaps.v defines heaps.
                           • Worlds.v defines stores and the various orderings that are used in the logical
                             relation.
     • LogicalRelation.v defines the logical relation and proves characterisation
       lemmas that correspond to the definition of the logical relation in Figure 9.
                           • LogicalRelationAux.v proves lemmas about the logical relation that are used in
                             the proof of the fundamental property.

                      Further details about the formalisation can be found in the accompanying README file.



