Discovering Creative Commons Sounds in
Live Coding
ANNA XAMBÓ SEDÓ
Music, Technology and Innovation – Institute for Sonic Creativity (MTI2), De Montfort University, Leicester, UK
Email: anna.xambo@dmu.ac.uk
This article reports on a study to identify the new sonic challenges and opportunities for live coders, computer musicians and sonic artists using MIRLCa, a live-coding environment powered by an artificial intelligence (AI) system. MIRLCa works as a customisable worldwide sampler, with sounds retrieved from the collective online Creative Commons (CC) database Freesound. The live-coding environment was developed in SuperCollider by the author in conversation with the live-coding community through a series of workshops and by observing its use by 16 live coders, including the author, in work-in-progress sessions, impromptu performances and concerts. This article presents a qualitative analysis of the workshops, work-in-progress sessions and performances. The findings identify (1) the advantages and disadvantages, and (2) the different compositional strategies that result from manipulating a digital sampler of online CC sounds in live coding. A prominent advantage of using sound samples in live coding is its low-entry access suitable for music improvisation. The article concludes by highlighting future directions relevant to performance, composition, musicology and education.

1. INTRODUCTION: SOUND-BASED LIVE CODING AND AI

Live coding was initially characterised as a new form of expression in computer music based on ‘real-time scripting during laptop music performance’ (Collins, McLean, Rohrhuber and Ward 2003: 321). Live coding has greatly evolved, becoming an established artistic and cultural practice (Blackwell, Cocker, Cox, McLean and Magnusson 2022) that welcomes underrepresented communities to be involved in music technology, including women, non-binary individuals (Armitage and Thornham 2021) and disabled identities (Skuse 2020).

We find a flourishing use of artificial intelligence (AI) algorithms, unlocking its musical potential, as previously studied by the author (Xambó 2022). Machine learning (ML) algorithms allow the creation of computer programs that can learn and make predictions from experience through the training of datasets, which can be supervised or unsupervised by humans, to build models that can make predictions when faced with new data. Despite the musical potential, ML in live coding adds a layer of complexity. Interactive machine learning (IML) (Fails and Olsen Jr 2003) is a human-centred approach to human–computer interaction (HCI) that allows users to tune the results of the training process towards their expectations. Efforts towards bringing IML concepts into the design of digital musical instruments as creative musical tools have been made by Rebecca Fiebrink and colleagues (Fiebrink and Caramiaux 2018). Notably, the Wekinator (Fiebrink, Trueman and Cook 2009) allows artists to understand ML algorithms and embrace them in their practice.

Sound-based music has been identified as an inclusive approach to making music with novel technologies. The concept was envisioned by Leigh Landy as ‘sound-based music 4 all’, with a lower barrier of entry than traditional note-based music (Landy 2009), where ‘people of all ages, abilities and backgrounds will be able to share sounds and sound-based works as well as participate together in sound-based performance’ (ibid.: 532). By analogy, the use of sound samples in live coding can also lower the entry point to this coding musical practice.

Among the common strategies to lower the entry access to live coding is the use of design constraints. Design constraints in digital musical systems were considered by Thor Magnusson as a mechanism to promote creativity (Magnusson 2010). A remarkable example of a constrained live-coding system is Magnusson’s ixi lang (Magnusson 2011), a system built in SuperCollider (McCartney 2002) that allows the user to manipulate musical patterns by using a syntax that is simple to operate and understand.

In this vein, the author has developed the live-coding system Music Information Retrieval in Live Coding (MIRLC), a SuperCollider extension that offers a constrained and easy-to-use live-coding environment (Xambó, Lerch and Freeman 2018a). The code is publicly available.1 The module MIRLCRep accesses the online sound database Freesound (Font, Roma and Serra 2013) in real time. Freesound started in 2005 and currently has more than 500,000 sounds.

1 https://github.com/axambo/MIRLC.
Freesound has been designed to promote the use of sounds among sound researchers, developers and artists, who can both upload and download sounds. The types of sounds include recorded and created sounds (e.g., soundscapes, ambience, electronic, loops, effects, noise and voice). The sounds are licensed under Creative Commons (CC) (Lessig 2004), which promotes the remix culture.

Accessing Freesound in live coding can lower the barrier of entry to live coding because it allows the live coder to focus on the live-coding experience of manipulating sounds with no need for sampling. However, this approach can also have drawbacks related to the heterogeneous nature of the sound database. The most prominent challenge is to retrieve undesired sounds. To overcome this issue, the author has developed the follow-up live-coding system Music Information Retrieval in Live Coding Auto (MIRLCa).

MIRLCa is another SuperCollider extension that allows users to customise a worldwide sampler by training models to retrieve only ‘desired’ sounds from Freesound using a constrained live-coding interface. This approach promotes a hands-on understanding of ML and IML. The system has been used in international workshops along with performances and was developed following participatory design methodologies (Xambó, Roma, Roig and Solaz 2021). The code is publicly available.2

2 https://github.com/mirlca.

This article identifies the new sonic challenges and opportunities brought to live coders, computer musicians and sonic artists by the use of MIRLCa, a live-coding environment powered by an AI system that works as a customisable sampler of CC sounds. Previously, we analysed two workshops and a concert with two performances using the system (Xambó et al. 2021). We found that the workshop participants and live coders took ownership of the code as well as trained and used the models. Here, we complete the analysis with two more workshops, two more concerts and four impromptu performances. Our analysis centres on (1) identifying the advantages and disadvantages of this live-coding approach, and (2) examining different compositional strategies involving manipulating a digital sampler of online CC sounds in live coding. Overall, we found this to be a novel approach for live coding.

2. THE WORLD AS A SAMPLER

This section revises foundational work on how the use of sounds from crowdsourced libraries and the internet has influenced performance practice.

2.1. The turntable as an instrument

Musique concrète, pioneered by Pierre Schaeffer and others, established the use of sound samples as raw materials. This was realised by manipulating turntables in the 1950s and continuing later with tape recorders and digital techniques (Schaeffer [1966] 2017: 38). A follow-up relates to the creative use of turntables and other devices in Jamaican dub in the late 1960s–early 1970s (Toop 1995: 115–18) and early hip hop in the 1970s–1980s (White 1996) to produce popular music. However, it was not until the dawn of affordable digital samplers in the 1980s that we find a popularisation of the use of sound samples as a common musical practice (Rodgers 2003; Harkins 2019). This was generally linked to the production of pop music and later electronic music.

Harkins provides a useful working definition of digital sampling as ‘the use of digital technologies to record, store, and reproduce any sound’ (Harkins 2019: 4). Tara Rodgers describes the processes involved in electronic music production as ‘selecting, recording, editing and processing sound pieces to be incorporated into a larger musical work’ (Rodgers 2003: 313). Rodgers also points to the characteristic of electronic musicians of taking the dual role of ‘producers–consumers of pre-recorded sounds and patterns that are transformed by a digital instrument that itself is an object of consumption and transformation’ (ibid.: 315).

2.2. The world as an instrument

Sound maps connect sound samples with their geolocation and have been widely explored since the advent of online databases and geolocation technologies. A prominent precursor is Murray Schafer’s acoustic ecology (Schafer 1977) and the related World Soundscape Project (Schafer 1977; Truax 2002) in the 1970s at Simon Fraser University in Vancouver, Canada. This collective brought international attention to the sonic environment and raised awareness about noise pollution through soundscapes and environmental sounds.

Portable digital audio recorders and the internet brought about the possibility of creating crowdsourced sound maps of specific locations. In 2006, the composer Francisco López presented ‘The World as an Instrument’ at MACBA in Barcelona, a workshop where different artists’ work was introduced to reflect the new popular practices of soundscape composition using the ‘real world’ as a sonic palette.3

3 www.macba.cat/en/exhibitions-activities/activities/world-instrument.
Embedded devices have allowed the creation of worldwide open live streams, such as the Locustream Open Microphone Project, which developed streamboxes and mobile apps for streaming remote soundscapes (Sinclair 2018). Started in 2005, the project offers a live sound map, the Locustream Soundmap,4 showcasing a collection of live open microphones across the globe. Liveshout (Chaves and Rebelo 2011) is a mobile app that turns the phone into an open, on-the-move mic that can broadcast and contribute to the Locus Sonus soundmap with live collaborative performances (Sinclair 2018).

4 https://locusonus.org/soundmap.

2.3. Audio Commons for music composition and performance

The so-called Web 2.0, also known as the social web, represented a tipping point in the history of the internet, when, in the early 2000s, websites began to emphasise user-generated dynamic online content instead of traditional top-down static web pages (Bleicher 2006). This included a range of online services that provide access to databases with hundreds of thousands of digital media files. The legal framework of CC licences (Lessig 2004) was devised to promote creativity by legally allowing for new ways of managing this digital content (e.g., sharing, reusing, remixing). This has led to a new community of prosumers, still relevant today, who both produce and consume online digital content (Ritzer and Jurgenson 2010), which aligns with the acknowledged prosumption existent in the electronic music culture (Rodgers 2003). Navigating through these large databases is not an easy task and requires suitable tools.

The Audio Commons Initiative was conceived to promote the use of open audio content as well as to develop relevant technologies to support an ecosystem of databases, media production tools and users (Font et al. 2016). The use of CC sounds in linear media production has been discussed with use cases in composition and performance (Xambó, Font, Fazekas and Barthet 2019).

Online databases such as Freesound offer an application programming interface (API), or software interface, to allow communication between software applications. The Freesound API5 allows developers to communicate with the Freesound database using a provided set of tools and services to browse, search and retrieve information from Freesound, which works in conjunction with the audio analysis of the sounds using Essentia (Bogdanov et al. 2013). Freesound Labs6 features several projects that foster creative ways of interacting with the CC sound database.

5 https://freesound.org/docs/api.
6 https://labs.freesound.org.
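To give a flavour of the kind of request such tools wrap, the following minimal sketch issues a Freesound text search from SuperCollider by shelling out to curl. It is not part of MIRLC or MIRLCa; the query term and field list are illustrative, and YOUR_API_KEY stands for a personal Freesound API token:

// Minimal sketch: a text search against the Freesound API v2.
// The response is a JSON list of matching sounds (ids, names, preview URLs).
"curl 'https://freesound.org/apiv2/search/text/?query=rain&fields=id,name,previews&token=YOUR_API_KEY'".unixCmd;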
We can also find musical instruments that leverage cloud computing and CC sounds. For example, the smart mandolin generates a soundscape-based accompaniment using Freesound (Turchet and Barthet 2018). Smart musical instruments (SMIs), such as the smart mandolin, were defined by Luca Turchet as ‘devices dedicated to playing music, which are equipped with embedded intelligence and are able to communicate with external devices’ (Turchet 2018: 8948). SMIs are an instance of using AI principles in new interfaces for musical expression (NIMEs).

The use of CC sounds in hardware digital samplers has previously been explored. For example, SAMPLER allows users to load and play sounds from Freesound (Font 2021). As with any traditional music sampler, users need to take several design decisions (e.g., query the sound, shape the sound with an envelope or trim the start/end) before playing samples. Although it is possible to work with multiple sounds at once, there is a limit to the sounds that can be loaded due to the hardware capacity.

These hardware limitations are overcome with the system presented in this article by using software. The system takes a reacTable-like approach (Jordà, Geiger, Alonso and Kaltenbrunner 2007), whereby there is no distinction between designing a sound and performing with it. The sound is shaped while performed, aligning with process music proposed by Steve Reich, in which the process becomes the piece of music (Reich 1965). Another key difference is that, compared with hardware samplers, which tend to be connected to musical instrument digital interface (MIDI) notes mapping, such as in SAMPLER, our approach explores other avenues. This way, the creativity is not confined to music-note input but, instead, the samples can lead to other musical spaces with their own rhythms, an approach inspired by Gerard Roma and colleagues’ seminal work on the Freesound Radio (Roma, Herrera and Serra 2009). Our focus is on providing easy-to-use software with a live-coding interface that fosters sound-based algorithmic music fed by CC sounds.

2.4. CC sounds in live coding

There exist multiple live-coding environments, which typically support different ways of performing audio synthesis, including sample operations. However, access to online crowdsourced databases of sounds is less common. Gibber is one browser-based live-coding environment that allows the live coder to perform audio synthesis with oscillators, synthesisers, audio effects and samples, among others (Roberts, Wright and Kuchera-Morin 2015). Amid the different options for manipulating samples, it is possible to retrieve, load, play back and manipulate sounds by using the object Freesound (ibid.).
EarSketch is a web-based learning environment that takes a sample-based approach to algorithmic music composition using code and a digital audio workstation (DAW) interface (Freeman, Magerko, Edwards, Mcklin, Lee and Moore 2019). With an emphasis on musical genres, students can work with curated samples, personal samples or sounds from Freesound (ibid.). The audio repurposing of CC sounds from Freesound using music information retrieval (MIR) has been explored by the author and colleagues (Xambó et al. 2018a) and forms the foundation for this work.

While the manipulation of sounds from crowdsourced libraries in live coding has potential, it also has its limitations (see section 4). The main challenge is navigating unknown sound databases and finding appropriate sounds in live performance. One approach to overcoming the limitations is to combine personal and crowdsourced databases, which was explored by the author and colleagues, obtaining promising results (Xambó, Roma, Lerch, Barthet and Fazekas 2018b). Ordiales and Bruno investigated the use of CC sounds from RedPanal.org and Freesound combined with sounds from local databases using a hardware interface for live coding (Ordiales and Bruno 2017). Another approach uses AI to enhance the retrieval of CC sounds (see section 3). It is out of the scope of this article to review the use of AI in live coding. An overview of several approaches to live coding using AI was presented by the author in a previous publication (Xambó 2022).

3. THE ENVIRONMENT OF AI-EMPOWERED MUSIC INFORMATION RETRIEVAL IN LIVE CODING

This section presents the research context, research question and research methods that guide this research as well as the nature of the data collection and data analysis from workshops, work-in-progress sessions, concerts and impromptu performances.

3.1. The MIRLCAuto project

In a nutshell, the system MIRLCAuto (MIRLCa)7 was built on top of MIRLCRep (Xambó et al. 2018a), a user-friendly live-coding environment designed within SuperCollider and the Freesound quark8 to query CC sounds from the Freesound online database applying MIR techniques. A detailed technical account of MIRLCa and some early findings were published in 2021 (Xambó et al. 2021).

7 https://mirlca.dmu.ac.uk.
8 http://github.com/g-roma/Freesound.sc.

MIRLCa uses supervised ML algorithms provided by the Fluid Corpus Manipulation (FluCoMa) toolkit (Tremblay, Roma and Green 2022). Based on the live coder’s musical preferences, the system learns and predicts the type of sounds the live coder prefers to be retrieved from Freesound. The aim is to offer a flexible and tailorable live-coding CC sound-based music environment. This approach allows ‘taming’ the heterogeneous nature of sounds from crowdsourced libraries towards the live coder’s needs, enhanced with the algorithmic music possibilities brought about by live coding.

Started in 2020, MIRLCa was built on SuperCollider and is in ongoing development. The author is the main developer, informed by conversations with the live-coding community, typically in the workshops, following a participatory design process. Its development has also been informed by observing and discussing its use by 16 live coders, including the author, as early adopter users in work-in-progress sessions, impromptu performances and concerts.

3.2. Research question and research methods

This article aims to identify the new sonic challenges and opportunities brought to live coders, computer musicians and sonic artists by the use of MIRLCa, a live-coding environment powered by AI working as a customisable sampler of sounds from around the globe. Here, a reflective retrospection is undertaken to look at the challenges and opportunities of manipulating CC sounds in live coding, focusing on (1) the advantages vs disadvantages, and (2) live-coding compositional strategies.

We analysed text (e.g., interview blog posts, workshop attendees’ feedback) and video (e.g., work-in-progress sessions, concerts, impromptu performances), with most of the information publicly available in the form of blog posts and videos (see References section). A total of four workshops, three concerts with eight performances, four work-in-progress video sessions and four impromptu performances with four groups of one solo, two duos and one trio of live coders were analysed (see sections 3.3 and 3.4). These online and onsite activities involved more than 60 workshop participants and 16 live coders, including the author. We sought permission from the individuals named in the article.

To identify patterns of behaviour, the research methods are inspired by qualitative ethnographic analysis (Rosaldo 1993) and thematic analysis (Clarke and Braun 2006). Given the full-length video material of the concerts, work-in-progress sessions and impromptu performances, we also used video analysis techniques (Xambó, Laney, Dobbyn and Jordà 2013).
This research is a follow-up of our previous findings from two of the workshops and one concert with two performances (Xambó et al. 2021). While in our previous publication we presented a behind-the-scenes look at the system and explored the concept of personal musical preferences referred to as ‘situated musical actions’ (Xambó et al. 2021), here we focus on the sonic potential that this novel approach brings to live coding.

3.3. The workshops and the work-in-progress sessions

Overall, we organised four workshops, inviting both beginners and experts in programming. Three workshops were carried out online while the fourth workshop was carried out onsite. Altogether, more than 60 participants attended the workshops.

The workshop ‘Performing with a Virtual Agent: Machine Learning for Live Coding’ was delivered three times in an online format. The three workshops had originally been planned as onsite with local participants but ended up becoming online due to the COVID-19 pandemic, which also allowed the inclusion of participants from around the world. The workshop was co-organised and delivered by the author together with Sam Roig in collaboration with three different organisations and communities: IKLECTIK (London), l’ull cec (Barcelona, Spain) and Leicester Hackspace (Leicester, UK).

The first workshop was held in December 2020 and was organised in collaboration with IKLECTIK,9 a platform dedicated to supporting experimental contemporary art and music. The other two workshops were organised in January 2021. One was organised in collaboration with l’ull cec,10 an independent organisation that coordinates activities in the fields of sound art and experimental music, and TOPLAP Barcelona,11 a Barcelona-based live-coding collective. The last one was organised in collaboration with Leicester Hackspace,12 a venue for makers of digital, electronic, mechanical and creative projects.

9 www.iklectik.org.
10 https://lullcec.org.
11 https://toplap.cat.
12 https://leicesterhackspace.org.uk.

The purpose of these hands-on online workshops was to allow the participants (1) to explore the MIRLCRep2 tool (a follow-up version of MIRLCRep), and (2) to expose them to how ML could help improve the live-coding experience when using the MIRLCa tool. By the end of the workshops, the participants were able to train their ML models using our system’s live-coding training methods. We offered tutorial sessions to help the workshop participants adapt the tool to their own practice.

Thanks to the support of l’ull cec, TOPLAP Barcelona and Phonos,13 a pioneering centre in the fields of electronic and electroacoustic music in Spain, we documented a series of four interviews and work-in-progress videos related to our workshop in Barcelona, featuring Hernani Villaseñor, Ramon Casamajó, Iris Saladino and Iván Paz.

13 www.upf.edu/web/phonos.

In January 2022, the author together with Iván Paz co-organised the onsite workshop ‘Live Taming Free Sounds’. The workshop was part of the weekend event on-the-fly: Live Coding Hacklab at the Center for Art and Media Karlsruhe (ZKM) in Germany.14 In the Hacklab, we acted as mentors for the topic of ML in live coding (Figure 1).

14 https://zkm.de/en/event/2022/01/on-the-fly-live-coding-hacklab.

Figure 1. Video screenshot of the workshop ‘Live Taming Free Sounds’ at on-the-fly: Live Coding Hacklab on 29–30 January 2022, ZKM, Karlsruhe, Germany. Video by Mate Bredan.

The purpose of this hands-on workshop was to allow the participants (1) to get a quick overview of some different approaches to applying ML in live coding, (2) to do a hands-on inspection of how to classify CC sounds to use as a live digital worldwide sampler, and (3) to carry out an aesthetic incursion into sound-based music in live coding. By the end of the workshop, the participants were able to perform in a group with their ML models using our system’s live-coding training methods, as explained in the next section.

3.4. The concerts and impromptu performances

As a follow-up to the online workshops, we adapted our original idea of hosting three public onsite concerts to do what was possible under the pandemic circumstances. Consequently, the first concert, ‘Similar Sounds: A Virtual Agent in Live Coding’, hosted by IKLECTIK in December 2020 in London, was delivered online. The concert consisted of two solo performances by Gerard Roma and the author, followed by a Q&A panel with the two live coders together with Iván Paz, a live coder expert in live coding and AI, and moderated by Sam Roig.

The second concert, ‘Different Similar Sounds: A Live Coding Evening “From Scratch”’, hosted by Phonos in April 2021 in Barcelona and organised in collaboration with TOPLAP Barcelona and l’ull cec, was delivered with an audience limitation of 15 people due to the pandemic restrictions. The concert comprised four live coders associated with TOPLAP Barcelona (Ramon Casamajó, Roger Pibernat, Iván Paz and Chigüire), who used MIRLCa ‘from scratch’, adapting the library to their particular approaches and aesthetics. The concert ended with a group improvisation ‘from scratch’ by the four performers.

The third concert, ‘Dirty Dialogues’, was organised by the Music, Technology and Innovation – Institute for Sonic Creativity (MTI2) in collaboration with l’ull cec. This concert was pre-recorded in May 2021 due to the COVID-19 restrictions and premiered online. The concert was an encounter of 11 musicians from the Dirty Electronics Ensemble led by John Richards together with Jon.Ogara and the author in a free music improvisation session (Figure 2). Apart from an online release of the performance and interview with the musicians, a live album was also released in October 2021 on the Chicago netlabel pan y rosas discos (Dirty Electronics Ensemble, Jon.Ogara and Xambó 2021).

Figure 2. A moment of the performance ‘Dirty Dialogues’ with the Dirty Electronics Ensemble, Jon.Ogara and Anna Xambó on 17 May 2021 at PACE, De Montfort University, Leicester, UK. Photo by Sam Roig.
In January 2022, and as part of the workshop at ZKM during the on-the-fly: Live Coding Hacklab weekend, the workshop attendees spent the last two hours of the workshop creating teams and preparing a showcase event, the impromptu performances (Figure 3). In total, there were four groups (one individual, two duos and one trio) together with the presentation of Naoto Hieda’s Hydra Freesound Auto,15 a self-live-coding system. Impromptu ‘from scratch’ sessions were performed by beginners in live coding together with the expert live coders Lina Bautista, Luka Frelih, Olivia Jack, Shelly Knotts and Iván Paz. The ‘from scratch’ sessions typically consisted of playing live for 9 minutes, starting from an empty screen and ending with the audience applause (Villaseñor-Ramírez and Paz 2020).

15 https://labs.freesound.org/apps/2022/02/09/hydra-freesound.html.

Figure 3. A ‘from scratch’ session with Olivia Jack (left) and the author (right) live coding with MIRLCa at on-the-fly: Live Coding Hacklab on 30 January 2022, ZKM, Karlsruhe, Germany. Photo by Antonio Roberts.

4. MANIPULATION OF ONLINE CC SOUNDS IN LIVE CODING

Our approach to live coding embraces the tradition of digital sampling in an idiosyncratic way. First, it works with sounds from a crowdsourced library, which can include sounds from the live coder uploaded to the online database. Generally, most of the sounds are recorded by others, and hence the magnitude of sounds available is much larger compared with personal sound collections. Second, the typical activities involved in digital sampling include the selection, recording, editing and processing of sounds, in which the process of collecting and preparing the samples tends to be especially time-consuming (Rodgers 2003: 314), while our approach centres on the live curation, editing and processing of the sounds, resembling a DJ’s mix. Third, the use of a digital sampler operated via live-coding commands, as opposed to hardware interface buttons, shapes the ways of interacting with the sounds. Small new programmes can be written relatively fast, and new computational possibilities can emerge.
Table 1 outlines some of the advantages and disadvantages of manipulating CC sounds in live coding experienced from the use of the MIRLCRep library (Xambó et al. 2018a, 2018b) for SuperCollider. One salient advantage is the low-entry access to digital sampling and live coding due to the use of a constrained environment with high-level live-coding commands and an emphasis on manipulating ready-to-use sounds. Second, although live coders often embrace the unknown in their algorithms, here the unknown is embraced through the sound itself, making the discovery of sounds an exciting quality. Third, noting the use of CC sounds by showing the metadata of the sound in real time (e.g., author, sound title) and generating a credit list at the end of a session raises awareness about digital commons as a valuable resource. Fourth, the sounds available are not restricted anymore to the data memory of the digital sampler but to the number of sounds available on the online sound database (e.g., more than 500K in Freesound). Fifth, the live-coding interactive access to the online database with content-based and semantic queries allows the live coder to achieve more variation together with certain control. Sixth, similar to other live-coding environments, it is possible to make music and improvise immediately and to build narratives through the use of semantic queries, assuming that the required software libraries have been installed (see the sketch below). Finally, tinkering with the code and tailoring the environment to each live coder’s needs is possible due to hosting the environment in the free and open-source SuperCollider software.
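As a minimal sketch of such a semantic narrative, using the MIRLCa functions that appear in the session excerpts of section 5 (the tag words are illustrative, and it is assumed, as in those excerpts, that the second argument of tag() sets the number of sounds):

a = MIRLCa.new
a.tag("rain", 2) // retrieve two sounds matching the tag 'rain'
a.similar // extend the group with a related sound
b = MIRLCa.new
b.tag("thunder", 1) // a second group shifts the narrative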
https://doi.org/10.1017/S1355771823000262 Published online by Cambridge University Press
8 Anna Xambó Sedó
Table 1. Manipulation of CC sounds in live coding

Pros:
• Live code using samples – easy entry point for beginners in live coding and digital sampling
• Discovery of digital commons sounds on-the-fly and with a surprise factor
• ‘Infinite’ sounds at your disposal
• Interactive access to an online database of sounds
• Improvise with the tool right away and construct semantic narratives
• Easily collaborate with others using the provided presets/tool or adapt it to your needs/integrate it into other environments

Cons:
• The sound results are not always as expected from a heterogeneous database of sounds
• The system requires being connected to the internet
• The system is more computationally expensive than using audio synthesis from a local computer
• The system depends on external libraries and APIs, which often can get modified and can affect your environment
• Any changes in the structure of the code require technical knowledge
• There is not an easy way to achieve rhythmic synchronisation among samples
However, this approach also has disadvantages. One prominent issue is that the retrieved sounds from the queries may not always be as desired, thus disrupting the musical flow or the live coder’s intention. Second, a crowdsourced sound database tends to have a wide range of sounds of different qualities, captured by different means, which makes it heterogeneous by nature. Third, the constant downloading of sounds can become computationally more expensive than working with sound synthesis, yet nowadays it might be less noticeable with a standard laptop. Fourth, the technical requirements increase with the need to be connected to the internet in order to search and download the sounds in real time. Yet, in an increasingly connected world, this may be a minor issue. Fifth, other technical prerequisites are the software dependencies on external libraries, which may require certain tech-savvy knowledge. This demand also applies if the live coder wants to customise the environment of their creative production workflow. However, the online documentation should be helpful for those who are just starting out with live coding or digital sampling. Finally, although collaboration is possible, synchronisation support between live coders’ computers has not been implemented yet. This is not seen as a priority feature given the potential for interesting rhythmic structures emerging from the combination of the sounds without synchronising.

To deal mainly with the issue of obtaining unexpected sound results in live performance, we devised the use of ML to train a binary classifier to return results closer to a serendipitous choice instead of a random choice (Xambó et al. 2021). Table 2 illustrates the pros and cons of manipulating CC sounds in live coding enhanced with ML from our experience of using MIRLCa (ibid.). Considering that MIRLCa is an environment still in ongoing development, we focus here on the overall approach and disregard particular missing features or existent failures that are expected to be addressed in the future. Thus, we talk about a generic classifier without specifying the number of categories supported. For example, there are plans for the binary classifier to be expanded to more than two categories to make it less limited.
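The article does not reproduce MIRLCa’s training code. As a rough illustration of the underlying idea, the following is a minimal sketch of a binary ‘good’/‘bad’ classifier assembled with the FluCoMa toolkit that MIRLCa builds on (see section 3.1). The analysis chain, identifiers and the ~sounds variable are illustrative assumptions and do not reproduce MIRLCa’s actual implementation:

// Minimal sketch: label a few analysed sounds 'good' or 'bad' and fit a
// small neural network classifier with FluCoMa. ~sounds is assumed to
// hold buffers of candidate sounds already downloaded from Freesound.
(
s.waitForBoot {
    var dataset = FluidDataSet(s);
    var labels = FluidLabelSet(s);
    var classifier = FluidMLPClassifier(s);
    ~sounds.do { |buf, i|
        var mfcc = Buffer(s), stats = Buffer(s), flat = Buffer(s);
        // describe each sound with pooled MFCC statistics
        FluidBufMFCC.processBlocking(s, buf, features: mfcc);
        FluidBufStats.processBlocking(s, mfcc, stats: stats);
        FluidBufFlatten.processBlocking(s, stats, destination: flat);
        dataset.addPoint("sound-" ++ i, flat);
        // stand-in: in practice the label reflects the live coder's judgement
        labels.addLabel("sound-" ++ i, ["good", "bad"].choose);
    };
    // fit the network; a new sound analysed the same way could then be
    // classified with classifier.predictPoint(flatBuffer, { |label| ... })
    classifier.fit(dataset, labels, action: { |loss| loss.postln });
};
)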
Table 2. Manipulation of curated CC sounds in live coding using ML

Pros:
• Train/customise your digital sampler to play sounds by categories, e.g. ‘good’ sounds (prompt towards serendipity)
• Train and use different neural network models (according to different situated musical actions) using a live-coding approach
• Get to grips with a classifier and IML – easy entry point for beginners in ML

Cons:
• The sound results might still not be as expected
• Training/customising your sampler can take time (on average at least 1 hour), which reduces the immediacy of improvisation
• The learning curve can become steep with the addition of ML

On the one hand, the most visible advantages of this approach include the potential of customising the environment to be prompted towards serendipity by training the system to learn and predict from a provided set of categories, such as ‘good’ versus ‘bad’ sounds. Second, it is also possible to train the system under different musical contexts or ‘situated musical actions’, such as training a session that can predict ‘good’ rhythmic sounds versus ‘bad’ rhythmic sounds for a part of a live-coding session. Last, to customise the system and prior to the performance, it is possible to create a small annotated dataset and to generate a neural network model for a particular use, which for the live coder can become an easy entry access point to ML concepts.

On the other hand, several drawbacks arise from this approach. Although effort can be made to customise the system by training the live coder’s own neural networks, the sound results can still be perceived as unwanted. Second, the musical proximity with the sound-based music helps, but it is also true that the live coder will need some time to grasp the workflow of the system and to obtain interesting sonic subspaces. Moreover, the training of the neural network model can take a certain amount of time (on average at least one hour), which requires some planning and preparation. This could affect the improvisational spirit of live coding. However, the system works with a provided model, so there is no need for training if the live coder does not want to. Altogether, while the learning curve can become steep, the potential of live ‘taming’ the free sounds should be worth it.
5. LIVE-CODING STRATEGIES

We identified several different live-coding compositional strategies from manipulating a digital sampler of online CC sounds, related to the themes: from scratch, algorithmic soundscape composition, embracing the error, tailoring and DIY, the role of training the models and collaborative constellations.

5.1. From scratch

The ‘from scratch’ live-coding technique commonly used in the live-coding scenes in Mexico City and Barcelona has been defined as ‘to write code in a blank document against time’ (Villaseñor-Ramírez and Paz 2020: 60), and more particularly as ‘a creative technique that emphasises (visualises) real-time code writing’ (ibid.: 67).

Many of the live coders started ‘from scratch’ with empty canvases in their live-coding sessions with MIRLCa. For example, Hernani Villaseñor and Iván Paz preferred to keep simple code programs, aligned with the intention of a ‘from scratch’ performance in order ‘to find more efficient code structures and syntax (e.g., concise, compact, succinct), to, with the least amount of characters, achieve developing a complete piece’ (ibid.: 66). Needless to say, one of the key principles of the TOPLAP manifesto16 is ‘Obscurantism is dangerous. Show us your screens.’

16 https://toplap.org/wiki/ManifestoDraft.

In both work-in-progress sessions, Hernani Villaseñor and Iván Paz started from a blank canvas and generally worked with two or three groups of two or three sounds each. The sounds were reminiscent of everyday sounds, which were processed and assembled in creative ways ‘à la tape music’. At first, the sounds were typically retrieved randomly with the function random(). Then, similar sounds were searched with the function similar(). After that, the sounds’ sample rates were modified with the functions play() or autochopped() to speed the sounds up or down for a period. The following example illustrates a code excerpt of this approach using two groups of sounds:

a = MIRLCa.new
a.random
a.similar
a.play(0.5)

b = MIRLCa.new
b.random
b.similar
b.play(0.2)
b.autochopped(32, 2)
b.delay

Iris Saladino took a different approach to ‘from scratch’ in her work-in-progress session. Her approach was to combine two software systems. First, she used MIRLCRep2 to search and download sounds from Freesound using tags (e.g., ‘sunrise’, ‘traffic’, ‘sunset’) that she saved in a local folder. Second, she processed the sounds ‘from scratch’ using TidalCycles,17 a live-coding environment designed for making patterns with code. The musical result resembled generative ambient music.

17 https://tidalcycles.org.

5.2. Algorithmic soundscape composition
Many of the live coders took advantage of the bespoke functions available in MIRLCa as well as the built-in functions from their live-coding environments to generate algorithmic music processes. As discussed in the previous section, the tandem random() and similar() functions are often used to retrieve sounds. Here, an initial random sound is retrieved, which is followed by a similar sound to maintain a consistent sonic space. Starting a session with a random sound expresses openness and uncertainty, as well as uniqueness, because the likeliness of two performances starting with the same sound is small. Arguably, the combination of random sounds with other ways of retrieving sounds, such as similar sounds or sounds by tag, shows that building a narrative from random sounds is possible, despite this being questioned by Barry Truax when discussing using a random collage of environmental sounds for soundscape composition (Truax 2002: 6).

SuperCollider supports algorithmic composition with a wide variety of built-in functions. Gerard Roma and Roger Pibernat started their sessions ‘from scratch’, and combined the provided algorithmic functions of MIRLCa with algorithmic functions or instructions from SuperCollider. Roger Pibernat used Tdef in his performance hosted by Phonos to create time patterns that changed parts of the code at a constant rate. For example, the sample rate of a group of sounds was instructed to change every four seconds by randomly selecting an option from a list of three values: a.play([0.25, 0.5, 1].choose).
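A minimal sketch of how such a timed pattern can be written follows; this is not Pibernat’s verbatim code, and it assumes a MIRLCa group already assigned to the interpreter variable a:

// Re-trigger the group every four seconds with a randomly chosen rate.
Tdef(\rates, {
    loop {
        a.play([0.25, 0.5, 1].choose); // one of three playback rates
        4.wait;
    }
}).play;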
In his performance at IKLECTIK, Gerard Roma accessed the buffers of the sound samples to apply SuperCollider functions to them. Using JITLib,18 a customised unit generator PlayRnd randomly played five buffers of sounds previously downloaded using the tag and similar functionality in MIRLCa. The following example shows the code:

p = ProxySpace.push
p.fadeTime = 10
~x.play
~x.source = {
0.2 * Mix.ar(PlayRnd.ar((1.5), 0.5, 1))!2;
}

a = MIRLCa.new
a.tag("snow", 5)
a.delay(10)
a.similar
a.printbuffers

18 https://doc.sccode.org/Overviews/JITLib.html.

The MIRLCa functions autochopped or playauto for playing randomly assigned sample rates, or similarauto for automatically obtaining similar sounds from a target sound, are also used frequently to give some autonomous and algorithmic behaviour to groups of sounds.
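For instance, a group can be seeded and then handed over to these autonomous functions. The following is a minimal sketch based on the function names above; the argument-free calls and the tag arguments are assumptions rather than documented usage:

b = MIRLCa.new
b.tag("water", 2) // seed the group with two sounds tagged 'water'
b.playauto // keep re-triggering with randomly assigned sample rates
b.similarauto // keep extending the group with similar sounds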
performances. On the MIRLCa side, I’ve looked for
vocals for the first part of the piece and noisy textures
18 19
https://doc.sccode.org/Overviews/JITLib.html. https://hydra.ojack.xyz.
5.5. The role of training the models

The role of training the models for the performance connects with the notion of specific musical contexts, what we termed ‘situated musical actions’ (Xambó et al. 2021), which can help understand the training of a new model. For example, Gerard Roma trained a model ‘to favour environmental sound and reject human voice and synthetic music loops, which are common in Freesound’ (ibid.: 11). This was done in order to then use it in performance to control the resulting types of sounds: ‘In each section, the results from the tag searches were extended by similarity search and the model was used to ensure that the results remained within the desired aesthetic. This allowed a more open attitude for launching similarity queries with unknown results, encouraging improvisation’ (ibid.: 11).

Likewise, Iván Paz also agreed about the improvisational nature of using a trained model, and the curational role of the live coder, commenting that the result is ‘like knowing what types of sounds you will receive and trying to organise/colour them on-the-fly’. Iván Paz commented about the trade-off relationship between unexpected results and training accuracy: ‘There’s probably a sweet balance between surprises and consistency within the training accuracy.’ Indeed, we are only at the beginning of the possibilities that this musical perspective can bring.

Some of the live coders trained multiple models for different groups of sounds. For example, Jon.Ogara envisioned a long-term project of a diary of neural nets (snapshots of a musical biography or life journey) based on how he reacts to events using singular words. In the collaborations with more than one live coder, each live coder used their trained model/s, which is discussed in the next section.

The IML approach for the training process with MIRLCa as a live-coding session blurs the division between offline and on-the-fly training. In the on-the-fly event at ZKM, Luka Frelih performed a ‘from scratch’ training session for algorave sounds, including canvassing the audience’s opinion to help him label the sounds as ‘good’ or ‘bad’. The brief for the training was to create music that you can dance to. After obtaining a decent training accuracy, the model was tested live. The proof of the success of the model was that some people indeed danced.

5.6. Collaborative constellations

We observed five collaborative performances that were all improvisational by nature. In the five collaborations, each live coder had trained their own model previously and then performed live with their trained models.

In the concert hosted by Phonos, the four live coders ended by performing a ‘from scratch’ session altogether. The layout was configured in such a way that the live coders created an outer ring with the audience inside. Each live coder had two pairs of speakers configuring an 8-multichannel system, a laptop running MIRLCa and a projector. Ramon Casamajó mentioned here the importance of listening to each other: ‘I tried to listen to quite a bit of the rest and look for reactive or complementary sounds to theirs, trying to leave empty spaces as well.’ The conversation could sometimes be difficult, though: Chigüire thought about it ‘as a conversation at a loud bar’, and Iván Paz found that ‘it was very challenging to synchronise all the sounds.’

In the concert at MTI2, we explored collaboration with a combination of analogue and digital instruments, acoustic and electronic materials, as well as live coding and DIY sound-making techniques. Both Jon.Ogara and the author performed with MIRLCa, the former as part of an acoustic and electronic ensemble while the latter performed in a live-coding style. The improvisation was an exercise of listening to each other and making a suitable call–response. For example, the improvisation started with three Dirty Electronics Ensemble members performing with different DIY circuits and found objects producing incremental tides of noise, while Jon.Ogara slowly faded in a female voice sound sample that whispered ‘come back alive’ and the author retrieved a repetitive sound sample of a printer printing.

In the showcase at ZKM, there were two duos and one trio who performed ‘from scratch’. Each live coder had a projected screen and could connect to a mixing desk with stereo output. The music ranged from algorave beats to soundscape, to glitch music, with some contrasting sounds that were handled as they appeared in the scene. In the ensembles, there was also a combination of expert live coders and beginners. For beginners, working with sound samples seemed like a suitable low-entry access point to start with live coding, because they could refer to familiar sounds; in addition, sharing the performance space with experts seemed to be an optimal learning scenario.
Offering a performer-only audio output (via headphones) would be an option to allow the live coder to test the new incoming sound before launching it. Although this feature has been explored in collaborative musical interfaces (Fencott and Bryan-Kinns 2012), it would not favour the flow of process music and the surprise factor brought by MIRLCa, where the live coder shapes the new incoming sounds that emerge unexpectedly. This connects with the remix culture already anticipated with dub music and what it meant to dub a track, ‘as if music was modelling clay rather than copyright property’ (Toop 1995: 118). The tension between control and surprise seems to work well with the MIRLCa system, which promotes the freedom and openness commonly found in music improvisation.

6. CONCLUSION

In this article, we introduced a new approach to live coding and digital sampling that promotes the on-the-fly discovery of CC sounds. We presented a bespoke system, MIRLCa, a customisable sampler of sounds from Freesound that can be enhanced with ML, and highlighted several challenges and opportunities.

We presented the feedback of live coders who tested the system and how they used the tool. The customisation of the sampler using ML invites the live coder to train new models. Although customised training models can reduce unwanted results from online sound databases, it is still an uncertain space that might not always bring the desired serendipitous sound results. In performance, the system has proven to be suitable for free improvisation and shown that it can be used in heterogeneous ensembles.

The combination of the sampler functionalities with coding results in a novel approach to dealing with ‘infinite’ sounds that emerge with a certain autonomous level. This distinctive behaviour carries a risk of the unknown, a singular characteristic that aligns well with values found in music improvisation such as freedom, openness, surprise and unexpectedness. This approach has potential but can sometimes be inconsistent given the untamed nature of crowdsourced online sound databases.

Although this article focuses on live-coding practice, we can foresee the benefits of this approach in other areas. For example, the sampler could be used in both performance and training modes to discover sounds based on semantic enquiries. This can work well with tasks that entail sound-based music composition or sound design. From a musicological standpoint, the present article contributes a detailed account of the collaborative nature of the live-coding community and describes how the knowledge is openly shared and embraced, including the musical aesthetics from the use of CC sounds. We also envision that this approach can have benefits in education, by bringing digital commons and music improvisation to the classroom using a creative and constrained environment that provides low-entry access and a flexible approach to using sound samples.

Acknowledgements

This project was funded by the EPSRC HDI Network Plus Grant (EP/R045178/1). The author would like to thank the anonymous reviewers for their time and thoughtful suggestions. The author gives special thanks to Will Adams for proofreading and Gerard Roma and Iván Paz for their help during the writing. The author is grateful to the workshop attendees and early adopters of the tool for their participation in the project and positive insights. The author thanks all the people and organisations involved in this project who helped immensely in making it a reality. The analysed footage from ZKM was filmed by Mate Bredan during the ‘on-the-fly: Live Coding Hacklab’ at ZKM | Center for Art and Media Karlsruhe in January 2022. Finally, the author thanks the live-coding and Freesound communities.

REFERENCES

Armitage, J. and Thornham, H. 2021. Don’t Touch My MIDI Cables: Gender, Technology and Sound in Live Coding. Feminist Review 127(1): 90–106.
Blackwell, A. F., Cocker, E., Cox, G., McLean, A. and Magnusson, T. 2022. Live Coding: A User’s Manual. Cambridge, MA: MIT Press.
Bleicher, P. 2006. Web 2.0 Revolution: Power to the People. Applied Clinical Trials 15(8): 34.
Bogdanov, D., Wack, N., Gómez, E., Gulati, S., Herrera, P., Mayor, O., et al. 2013. Essentia: An Open-Source Library for Sound and Music Analysis. Proceedings of the 21st ACM International Conference on Multimedia. New York: ACM, 855–8.
Chaves, R. and Rebelo, P. 2011. Sensing Shared Places: Designing a Mobile Audio Streaming Environment. Body, Space & Technology 10(1). http://doi.org/10.16995/bst.85.
Clarke, V. and Braun, V. 2006. Using Thematic Analysis in Psychology. Qualitative Research in Psychology 3(2): 77–101.
Collins, N., McLean, A., Rohrhuber, J. and Ward, A. 2003. Live Coding in Laptop Performance. Organised Sound 8(3): 321–30.
Fails, J. A. and Olsen Jr., D. R. 2003. Interactive Machine Learning. Proceedings of the 8th International Conference on Intelligent User Interfaces. Miami, FL: Association for Computing Machinery, 39–45.
Fencott, R. and Bryan-Kinns, N. 2012. Audio Delivery and Territoriality in Collaborative Digital Musical Interaction. The 26th BCS Conference on Human Computer Interaction, Birmingham, UK, 69–78.
Fiebrink, R. and Caramiaux, B. 2018. The Machine Learning Algorithm as Creative Musical Tool. In R. T. Dean and A. McLean (eds.), The Oxford Handbook of Algorithmic Music. Oxford: Oxford University Press, 518–35.
Fiebrink, R., Trueman, D. and Cook, P. R. 2009. A Meta-Instrument for Interactive, On-The-Fly Machine Learning. Proceedings of the International Conference on New Interfaces for Musical Expression, Pittsburgh, PA, 280–5.
Font, F. 2021. SOURCE: A Freesound Community Music Sampler. Audio Mostly 2021. New York: ACM, 182–7.
Font, F., Brookes, T., Fazekas, G., Guerber, M., La Burthe, A., Plans, D., et al. 2016. Audio Commons: Bringing Creative Commons Audio Content to the Creative Industries. Audio Engineering Society Conference: 61st International Conference: Audio for Games. Audio Engineering Society.
Font, F., Roma, G. and Serra, X. 2013. Freesound Technical Demo. Proceedings of the 21st ACM International Conference on Multimedia. New York: ACM, 411–12.
Freeman, J., Magerko, B., Edwards, D., Mcklin, T., Lee, T. and Moore, R. 2019. EarSketch: Engaging Broad Populations in Computing through Music. Communications of the ACM 62(9): 78–85.
Hamilton, A. and Pearson, L. (eds.) 2020. The Aesthetics of Imperfection in Music and the Arts: Spontaneity, Flaws and the Unfinished. London: Bloomsbury.
Harkins, P. 2019. Introduction. In Digital Sampling: The Design and Use of Music Technologies. Abingdon: Routledge, 1–14.
Jordà, S., Geiger, G., Alonso, M. and Kaltenbrunner, M. 2007. The reacTable: Exploring the Synergy between Live Music Performance and Tabletop Tangible Interfaces. Proceedings of the 1st International Conference on Tangible and Embedded Interaction, New York, 139–46.
Knotts, S. 2020. Live Coding and Failure. In A. Hamilton and L. Pearson (eds.), The Aesthetics of Imperfection in Music and the Arts: Spontaneity, Flaws and the Unfinished. London: Bloomsbury, 189–201.
Landy, L. 2009. Sound-based Music 4 All. In R. T. Dean (ed.), The Oxford Handbook of Computer Music. Oxford: Oxford University Press, 518–35.
Lessig, L. 2004. The Creative Commons. Montana Law Review 65. https://scholarworks.umt.edu/mlr/vol65/iss1/1.
Magnusson, T. 2010. Designing Constraints: Composing and Performing with Digital Musical Systems. Computer Music Journal 34(4): 62–73.
Magnusson, T. 2011. The ixi lang: A SuperCollider Parasite for Live Coding. Proceedings of the International Computer Music Conference. Huddersfield, UK: ICMA, 503–6.
McCartney, J. 2002. Rethinking the Computer Music Language: SuperCollider. Computer Music Journal 26(4): 61–8.
Ordiales, H. and Bruno, M. L. 2017. Sound Recycling from Public Databases: Another BigData Approach to Sound Collections. Proceedings of the International Audio Mostly Conference, Trento, Italy.
Reich, S. 1965. Music as a Gradual Process. In S. Reich (ed.), Writings on Music. Oxford: Oxford University Press, 34–6.
Ritzer, G. and Jurgenson, N. 2010. Production, Consumption, Prosumption: The Nature of Capitalism in the Age of the Digital ‘Prosumer’. Journal of Consumer Culture 10(1): 13–36.
Roberts, C., Wright, M. and Kuchera-Morin, J. 2015. Music Programming in Gibber. Proceedings of the International Computer Music Conference, ICMA, 50–7.
Rodgers, T. 2003. On the Process and Aesthetics of Sampling in Electronic Music Production. Organised Sound 8(3): 313–20.
Roma, G., Herrera, P. and Serra, X. 2009. Freesound Radio: Supporting Music Creation by Exploration of a Sound Database. Paper presented at Computational Creativity Support Workshop CHI09, Boston, MA.
Rosaldo, R. 1993. Culture & Truth: The Remaking of Social Analysis. Boston, MA: Beacon Press.
Schaeffer, P. [1966] 2017. Treatise on Musical Objects: An Essay across Disciplines. Oakland, CA: University of California Press.
Schafer, R. M. 1977. The Soundscape: Our Sonic Environment and the Tuning of the World. Rochester, VT: Destiny Books.
Sinclair, P. 2018. Locustream Open Microphone Project. Proceedings of the International Computer Music Conference. Daegu, South Korea: ICMA, 271–5.
Skuse, A. 2020. Disabled Approaches to Live Coding, Cripping the Code. Proceedings of the International Conference on Live Coding. Limerick, Ireland: ICMA, 5: 69–77.
Toop, D. 1995. Ocean of Sound: Aether Talk, Ambient Sound and Imaginary Worlds. London: Serpent’s Tail.
Tremblay, P. A., Roma, G. and Green, O. 2022. The Fluid Corpus Manipulation Toolkit: Enabling Programmatic Data Mining as Musicking. Computer Music Journal 45(2): 9–23.
Truax, B. 2002. Genres and Techniques of Soundscape Composition as Developed at Simon Fraser University. Organised Sound 7(1): 5–14.
Turchet, L. 2018. Smart Musical Instruments: Vision, Design Principles, and Future Directions. IEEE Access 7: 8944–63.
Turchet, L. and Barthet, M. 2018. Jamming with a Smart Mandolin and Freesound-Based Accompaniment. 23rd Conference of Open Innovations Association (FRUCT), IEEE, 375–81.
Villaseñor-Ramírez, H. and Paz, I. 2020. Live Coding From Scratch: The Cases of Practice in Mexico City and Barcelona. Proceedings of the 2020 International Conference on Live Coding. Limerick, Ireland: University of Limerick, 59–68.
White, M. 1996. The Phonograph Turntable and Performance Practice in Hip Hop Music. Ethnomusicology OnLine 2. www.umbc.edu/eol/2/white/ (accessed 30 December 2022).
Xambó, A. 2022. Virtual Agents in Live Coding: A Review of Past, Present and Future Directions. eContact! 21(1). https://econtact.ca/21_1/xambosedo_agents.html (accessed 19 September 2022).
Xambó, A., Font, F., Fazekas, G. and Barthet, M. 2019. Leveraging Online Audio Commons Content for Media Production. In M. Filimowicz (ed.), Foundations in Sound Design for Linear Media. Abingdon: Routledge, 248–82.
Xambó, A., Laney, R., Dobbyn, C. and Jordà, S. 2013. Video Analysis for Evaluating Music Interaction: Musical Tabletops. In S. Holland, K. Wilkie, P. Mulholland and A. Seago (eds.), Music and Human-Computer Interaction. Cham, Switzerland: Springer, 241–58.
Xambó, A., Lerch, A. and Freeman, J. 2018a. Music Information Retrieval in Live Coding: A Theoretical Framework. Computer Music Journal 42(4): 9–25.
Xambó, A., Roma, G., Lerch, A., Barthet, M. and Fazekas, G. 2018b. Live Repurposing of Sounds: MIR Explorations with Personal and Crowdsourced Databases. Proceedings of the International Conference on New Interfaces for Musical Expression. Blacksburg, VA: Virginia Tech.
Xambó, A., Roma, G., Roig, S. and Solaz, E. 2021. Live Coding with the Cloud and a Virtual Agent. Proceedings of the International Conference on New Interfaces for Musical Expression. Shanghai, China: NYU Shanghai.

INTERVIEWS

Roig, S. and Xambó, A. 28 January 2021. Different Similar Sounds: An Interview with Hernani Villaseñor. https://mirlca.dmu.ac.uk/posts/different-similar-sounds-interview-with-hernani-villasenor
Roig, S. and Xambó, A. 4 February 2021. Different Similar Sounds: An Interview with Ramon Casamajó. https://mirlca.dmu.ac.uk/posts/different-similar-sounds-interview-with-ramon-casamajo
Roig, S. and Xambó, A. 18 February 2021. Different Similar Sounds: An Interview with Iris Saladino. https://mirlca.dmu.ac.uk/posts/different-similar-sounds-interview-with-iris-saladino
Roig, S. and Xambó, A. 19 March 2021. Different Similar Sounds: An Interview with Iván Paz. https://mirlca.dmu.ac.uk/posts/different-similar-sounds-interview-with-ivan-paz
Xambó, A. and Roig, S. 14 May 2021. An Interview with Jon Ogara. https://mirlca.dmu.ac.uk/posts/interview-with-jon-ogara
Xambó, A. 28 September 2021. Different Similar Sounds ‘From Scratch’: A Conversation with Ramon Casamajó, Iván Paz, Chigüire, and Roger Pibernat. https://mirlca.dmu.ac.uk/posts/different-similar-sounds-from-scratch-a-conversation-with-ramon-casamajo-ivan-paz-chiguire-and-roger-pibernat

DISCOGRAPHY

Dirty Electronics Ensemble, Jon.Ogara and Xambó, Anna. 1 October 2021. Dirty Dialogues. pan y rosas discos, pyr313. www.panyrosasdiscos.org/pyr313-dirty-electronics-ensemble-jon-ogara-and-anna-xambo-dirty-dialogues

VIDEOGRAPHY

Different Similar Sounds: A Live Coding Evening ‘From Scratch.’ September 2021. https://youtu.be/lDVsawECK2Y
Dirty Dialogues – The Performance. October 2021. https://vimeo.com/626477944
Dirty Dialogues – The Interview. October 2021. https://vimeo.com/626564500
Similar Sounds: A Virtual Agent in Live Coding. December 2020. https://youtu.be/ZRqNfgg1HU0