Cellular Automata Musification Using Python and Magenta
Austin Franklin
Louisiana State University
afran84@lsu.edu
ABSTRACT

Life Like is an audiovisual project exploring the generation and musification 1 of Life-Like Cellular Automata modeled after John H. Conway's Game of Life using Magenta's GANSynth Colab Notebook. This cloud-based Jupyter notebook contains pre-trained models for synthesizing timbre and audio files. A video and MIDI file of "live" cells are created from variable birth and survival conditions set before generation in Python.

1. INTRODUCTION

A cellular automaton is a mathematical model consisting of an array (usually two-dimensional) of cells that "evolve" according to the state of neighboring cells and a set of birth and survival conditions. The utility of cellular automata lies in their ability to produce complex patterns and shapes from simple rule sets, and these models can be used to simulate various complex real-world processes. "They were invented in the 1940s by American mathematicians John von Neumann and Stanislaw Ulam at Los Alamos National Laboratory [2]."

A cellular automaton is considered life-like if it meets the following criteria:

• The array of cells of the automaton has two dimensions.

• Each cell of the automaton has two states (conventionally referred to as "alive" and "dead").

• The neighborhood of each cell is the Moore neighborhood, consisting of the eight cells surrounding the one under consideration.

• In each step of the automaton, the new state of a cell can be expressed as a function of the number of adjacent cells that are alive and of the cell's own state.

2. VIDEO ANIMATION AND PYTHON

The video component of Life Like is written in Python and visualized using the matplotlib library to display, iterate, and animate subsequent generations. 2 It consists of a binary matrix representing the cells, where 1 stands for alive and 0 for dead. The rules of the game can be specified in the format 'BXSY', where X and Y are the birth and survival conditions. Once the game begins, the number of living neighbors of each cell is computed before the game iterates one time. This is done for each 'frame', or iteration of the game (Figure 1).

Figure 1. A single generated frame from the Life Like video

Both the original Game of Life and Life Like use the rule set 'B3S23', meaning that if the game is run according to this rule set the outcome will be the same provided the grid size does not change. Several other rule sets were explored, including common alternatives such as Day and Night (B3678S34678) and Fractal-Like (B1S123), but these were eventually abandoned due to their increased likelihood of collapsing into a combination of still lifes, oscillators, and spaceships.
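Rule strings in this 'BXSY' form are straightforward to split into their birth and survival conditions. The sketch below is illustrative, not the project's code; the function name parse_rule is an assumption:

```python
def parse_rule(rule: str):
    """Split a Life-like rule string such as 'B3S23' into two sets:
    the neighbor counts that cause birth and those that allow survival.
    'B3678S34678' (Day and Night) and 'B1S123' (Fractal-Like) parse
    the same way."""
    # Drop the leading 'B', then split on 'S' into the two digit runs.
    birth_part, survival_part = rule.upper().lstrip("B").split("S")
    return {int(c) for c in birth_part}, {int(c) for c in survival_part}
```

For example, parse_rule("B3S23") yields the birth set {3} and the survival set {2, 3}.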
The color map 'cubehelix' was used for each frame of Life Like because of its minimal and simplistic aesthetic. It also offers incredible contrast between the live and dead cells.

1 "The methodology of musification is concerned with both the absolute and programmatic elements of a work. It is a tool employed to assist with translation and organization of the symbols of a data set into organized musical sound. [1]"
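The per-generation update described in this section can be sketched in Python with numpy. This is a minimal illustrative sketch rather than the project's actual code, and it assumes wrap-around (toroidal) edges:

```python
import numpy as np

def step(grid, birth=(3,), survival=(2, 3)):
    """One generation of a Life-like automaton (default rule B3S23).
    grid is a 2-D 0/1 array; edges wrap around (toroidal grid)."""
    # Count the eight Moore neighbors by shifting the grid in every direction.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    born = (grid == 0) & np.isin(neighbors, birth)
    survives = (grid == 1) & np.isin(neighbors, survival)
    return (born | survives).astype(int)

# A 'blinker' oscillates with period 2 under B3S23: a horizontal bar
# of three live cells becomes a vertical bar, then flips back.
blinker = np.zeros((5, 5), dtype=int)
blinker[2, 1:4] = 1
```

Animating successive calls to step() with matplotlib (e.g. matplotlib.animation.FuncAnimation) produces frames like the one in Figure 1.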
Copyright: ©2022 Austin Franklin et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License 3.0 Unported, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

2 https://matplotlib.org/

3. AUDIO AND MIDI

The gridSize, frameNumber, and frameSpeed (in ms) of the video are settings that can be changed to produce a greater
number of possibilities. For Life Like, however, a grid size of 24 is used (the number of cells in both the X and Y dimensions) to map a pitch range of three octaves using an A Lydian scale [1]. For each iteration, the number of 'live' cells in each row is counted and their index positions are stored in a new array using the numpy.where() function. 3 This is done for all rows in each frame and for each frame in the entire game.

This new array is used to create a .mid file consisting of all of the 'live' cells in each row of the matrix as a single chord. The MIDI file is written entirely in Python using the MIDIUTIL library [3]. The tempo of the MIDI file is calculated in beats per minute (BPM) from the initial settings of the game, where each chord is a quarter note, using this formula:

BPM = 60000 / (frameSpeed / gridSize + 1)    (1)
Creating MIDI files in Python takes a fraction of the time it takes to generate audio files. Given that the duration of Life Like is approximately 6 minutes, the decision to generate .mid files as opposed to .wav files was made primarily for convenience. MIDI is also an extremely flexible file format in which note pitches and lengths can be adjusted and manipulated much more easily than in audio files after being created.

3.1 MIDI

The MIDI file is then imported into Logic Pro X and represented using the John Cage Prepared Piano Soundset by Big Fish Audio. 4 This is a sample library of the original prepared piano and the only samples ever to be authorized by the estate of the world-renowned composer John Cage. This particular collection of sounds was created for use in his magnum opus composition, Sonatas & Interludes (February 1946-March 1948). Additionally, a copy of the MIDI file is also used with the built-in Steinway Grand Piano instrument (Figure 2).

Figure 2. Piano Roll in Logic Pro X of the MIDI generated in Python

3.2 Audio in Magenta

The MIDI file was also used in Magenta's GANSynth Colab Notebook, which generates audio using Generative Adversarial Networks. 5 The model takes the pitch from the MIDI as a conditional attribute and learns to use its latent space to synthesize different instrument timbres by encoding numerous attributes related to that timbre [4]. These timbres can be constant, or they can interpolate linearly over time to create smooth transitions and intermediary timbres [5].

In particular, the GANSynth demo was run using a random instrument interpolation set to 10 seconds per instrument. This means that every 10th second would generate a single instrument timbre, and the time in between would be used to interpolate from the previous instrument to the next.

Interestingly, the final generated audio file did not contain every pitch from the file that was uploaded to the model. Many gaps, or 'rests', were present throughout the entirety of the file. This is most likely because the MIDI file contains 10,962 individual MIDI events and the model was simply not able to analyze or generate polyphonic density at this level.

4. FINAL EDITING

The video and newly created audio and MIDI are then imported into Reaper and layered together to create the finished product. The various audio tracks are layered together and faded in and out, causing the sounds to gradually transition between one another and each take a 'leading role' throughout one of the major sections of the piece. The final step is rotating the video -90 degrees inside of the DAW. This moves the 'live' cell indexes in each row to columns, allowing pitch register to be visualized low to high on the new Y axis and time along the new X axis. This turns the created video with the layered audio into a beat sequencer.

5. CONCLUSIONS

Life Like is a project that explores the musification of John H. Conway's Game of Life. The video animation and MIDI were generated in Python according to the original rule set of the game (B3S23) and an A Lydian scale spanning three octaves. The MIDI file was then used with Magenta's GANSynth to generate and interpolate instrument timbres. The final product was created by layering all of the video, audio, and MIDI files together in Reaper.

6. REFERENCES

[1] A. Coop, "Sonification, Musification, and Synthesis of Absolute Program Music," 2016.

[2] The Editors of Encyclopedia Britannica, "Cellular Automata." Encyclopedia Britannica, 2021.

[3] M. C. Wirt, "MIDIUtil Documentation," 2018.

[4] T. P. Van, T. M. Nguyen, N. N. Tran, H. V. Nguyen, L. B. Doan, H. Q. Dao, and T. T. Minh, "Interpreting the Latent Space of Generative Adversarial Networks using Supervised Learning," 2021.

[5] A. Brock, J. Donahue, and K. Simonyan, "Large Scale GAN Training for High Fidelity Natural Image Synthesis," 2019.

3 https://numpy.org/doc/stable/reference/generated/numpy.where.html
4 https://www.bigfishaudio.com/John-Cage-Prepared-Piano
5 https://colab.research.google.com/notebooks/magenta/gansynth/gansynth_demo.ipynb