Lucy Strauss
INFORMATION:
about
publications
contact

night swim
amass
signal space
improvisation
tele-improvisation
sonic response
darkroom performances
instance
video archive
project blog
Lucy Strauss is a musician and researcher drawing together viola performance, DIY machine learning using audio and bioelectric signals, and interactive performance system design. With these practices, she seeks to deepen understanding of music-making with new and old technologies.
Lucy has performed in improvised and experimental music projects at Pony Books (Gothenburg), the TD Vancouver International Jazz Festival, Mixtophonics Festival, 8EAST Cultural Center & NOW Society (Vancouver), De Tanker (Amsterdam), Hundred Years Gallery (London), and Theatre Arts (Cape Town). She also enjoys contributing to interdisciplinary collaborations with fellow artists. Notably, she was the interaction designer for Denise Onen’s sounding the body as a sight at The Oscillations Exhibition 2024 (Akademie der Künste, Berlin). She has also coded, composed, and played for installations by artist Mia Thom at Everard Read Gallery, Act of Brutal Curation Gallery, and Eclectica Contemporary (Cape Town), as well as the 2022 Biennale de l'Art Africain Contemporain (Dakar). Lucy has presented workshops on composition (University of British Columbia), improvisation (Canadian Viola Society), and interactive music technology (Bowed Electrons Festival & Symposium).
Lucy learnt to love improvising at the MusicDance021 artist residency (ZA). She further developed her improvisation practice at 8EAST & NOW Society (CA). She learnt to compose music at the University of Cape Town (BMus) and to perform viola at the University of British Columbia (MMus). For two years, Lucy was an Artist in Residence at the University of Johannesburg (ZA). She is currently a CHASE-funded PhD researcher in Arts & Computational Technology at Goldsmiths, University of London (UK). Lucy is based in Gothenburg (SE) and London (UK).
Gained In Translation
words prompt sounds
sounds prompt words
text as score
failure to copy


cross-modal translation as meta-composition
what is gained in translation?
*The Tech, Tea & Exchange residency was funded by Anthropic and Gucci.
models & datasets
YAMNet - a sound classification model from TensorFlow
The model is trained to predict the most probable sound in an audio waveform. We can predict sounds from a pre-recorded sound file (as I do for the tech demo version of Gained in Translation) or from an incoming stream of audio (as I do in the Gained in Translation performance).
dataset - "AudioSet consists of an expanding ontology of 632 audio event classes and a collection of 2,084,320 human-labeled 10-second sound clips drawn from YouTube videos" https://research.google.com/audioset/
Claude 3.7 Sonnet - a large language model created by Anthropic
dataset - "Training data includes public internet information, non-public data from third-parties, contractor-generated data, and internally created data. When Anthropic's general purpose crawler obtains data by crawling public web pages, we follow industry practices with respect to robots.txt instructions that website operators use to indicate whether they permit crawling of the content on their sites. We did not train this model on any user prompt or output data submitted to us by users or customers." - https://www.anthropic.com/transparency
dataset - "Training data includes public internet information, non-public data from third-parties, contractor-generated data, and internally created data. When Anthropic's general purpose crawler obtains data by crawling public web pages, we follow industry practices with respect to robots.txt instructions that website operators use to indicate whether they permit crawling of the content on their sites. We did not train this model on any user prompt or output data submitted to us by users or customers." - https://www.anthropic.com/transparency
Stable Audio 2.0 (text & audio-to-audio)
dataset - "AudioSparx is an industry-leading music library and stock audio web site that brings together a world of music and sound effects from thousands of independent music artists, producers, bands and publishers in a hot online marketplace." https://www.audiosparx.com/
dataset - "AudioSparx is an industry-leading music library and stock audio web site that brings together a world of music and sound effects from thousands of independent music artists, producers, bands and publishers in a hot online marketplace." https://www.audiosparx.com/
Realm of Tensors and Spruce
Realm of Tensors and Spruce is a new music project created and performed by Lucy Strauss. The project explores and extends the sonic possibilities of the viola with live acoustic playing, electroacoustics, and DIY machine learning models trained on datasets of Lucy's own playing. With this palette of practices, she builds and transforms soundworlds entirely from viola audio. The resulting performance melds between improvised and composed structures.
Spruce: a species of tonewood commonly used to make violas
Tensor: a data structure used in machine learning frameworks
Realm: an allusion to Lucy's compositional approach of building soundworlds, and to the latent (or hidden) space within the machine learning models used in this project. Lucy has implemented these models in such a way that they are entirely interactive and responsive to input, so that humans always remain in the loop, working towards a human-centred computing practice.
Past: @ Theatre Arts
Cape Town | 15 December 2024
For this show, I presented this work first in a performance setting, then as an interactive sound installation using a game controller to afford audience members the chance to explore and influence a soundworld generated in real time with bespoke neural audio synthesis.
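As a hedged sketch of what such an interaction layer can look like (not the actual installation patch), the snippet below reads a game controller with pygame and forwards normalized stick positions as OSC messages to a synthesis engine; the OSC address, host, and port are placeholders.

```python
# Minimal sketch of a game-controller-to-synthesis mapping: pygame reads the
# joystick, python-osc forwards normalized values to a synthesis engine.
# The OSC address (/soundworld/xy), host, and port are placeholders.
import pygame
from pythonosc.udp_client import SimpleUDPClient

pygame.init()
pygame.joystick.init()
joystick = pygame.joystick.Joystick(0)
joystick.init()

client = SimpleUDPClient("127.0.0.1", 9000)  # placeholder host/port

clock = pygame.time.Clock()
while True:
    pygame.event.pump()           # refresh joystick state
    x = joystick.get_axis(0)      # left stick, roughly in [-1, 1]
    y = joystick.get_axis(1)
    # Map the stick position to two synthesis parameters, e.g. a point in latent space.
    client.send_message("/soundworld/xy", [float(x), float(y)])
    clock.tick(60)                # ~60 control messages per second
```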

Past: @ Pony Books
Gothenburg | 11 October 2024
Here is a ragtag assemblage of documentation from the Pony Books performance:


The elevator pitch for this project is that every single sound comes from the viola, from the acoustic sounds in the concert space, to electroacoustic processing, to neural synthesis.
The promotional material for the Pony Books concert. Aside from the usual digital places, the concert information was shared on an actual piece of paper for people to find in the bookshop leading up to the event.

Night Swim
Check out my Project Blog page on this website for some behind-the-scenes content on the making of Night Swim.
Night Swim is a site-specific installation work by Mia Thom in collaboration with Lucy Strauss and Clare Patrick. The installation was premiered at the Art of Brutal Curation gallery (Cape Town, August - September 2021) and was selected for installation at the Everard Read gallery group show (Cape Town, December 2021).
Plunging participants into a monochromatic blue environment, this experimental installation comprises three large-scale sculptures which function as speakers to lie on. Composed and coded by Lucy Strauss, these forms emit a soundscape of strings, room tone and voice, transformed by geophysical data from the South Atlantic Ocean.
The sonic materials of the soundscape comprise audio captured from broken violin, viola, cello and contrabass strings; composed fragments for voice and viola based on the acoustic modes of the installation space; and viola improvisations. The collected acoustic materials are placed in an environment constructed from geophysical data from the South Atlantic Ocean. Machine learning algorithms in Wekinator trigger the collected audio materials and perform transformations in the time and frequency domains, in accordance with the ocean's fluctuations and developments over time.
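A hedged sketch of this control flow is shown below, using Wekinator's default OSC conventions (a list of input floats sent to /wek/inputs on port 6448, trained outputs returned on /wek/outputs on port 12000). The ocean-data file and the choice of which values become inputs are assumptions for illustration.

```python
# Hedged sketch of the control flow: ocean-derived values stream into Wekinator
# as OSC inputs, and Wekinator's trained outputs drive playback and transformations.
# Uses Wekinator's default OSC ports and addresses; the CSV of ocean data and the
# mapping of its columns to inputs are placeholders for illustration.
import csv
import threading
import time
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import ThreadingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

to_wekinator = SimpleUDPClient("127.0.0.1", 6448)  # Wekinator's default input port

def on_wekinator_output(address, *values):
    # Wekinator sends its model outputs here; in the installation these would
    # trigger audio materials and set time/frequency-domain transformation amounts.
    print(address, values)

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_wekinator_output)
server = ThreadingOSCUDPServer(("127.0.0.1", 12000), dispatcher)  # default output port
threading.Thread(target=server.serve_forever, daemon=True).start()

with open("south_atlantic.csv") as f:                    # placeholder ocean-data file
    for row in csv.reader(f):
        features = [float(v) for v in row[:3]]           # e.g. temperature, salinity, swell
        to_wekinator.send_message("/wek/inputs", features)
        time.sleep(0.5)                                  # pace the data as a slow stream
```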
Night Swim also utilizes Latent Timbre Synthesis (LTS), a new audio synthesis method using Deep Learning. We used LTS to interpolate between the timbres of the raw sonic materials, and set the interpolation curve according to fluctuations in the ocean data. This curve determines how much each sound has an effect on the resulting synthesized sound. A person experiencing Night Swim will hear the raw sonic materials, as well as synthesized audio that lies somewhere between these materials. For example, one could hear a sound that exists somewhere between two different human voices; a human voice and a viola; or a viola and a broken bass string.
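The LTS implementation itself is not reproduced here; as a conceptual sketch of what interpolating between timbres in latent space means, the snippet below encodes two sounds with a toy, untrained encoder/decoder pair (a hypothetical stand-in for the LTS model), blends their latent vectors by an interpolation weight, and decodes the blend.

```python
# Conceptual sketch of latent interpolation, the idea behind LTS: encode two
# sounds, blend their latent vectors by a curve value, decode the blend.
# The tiny untrained autoencoder below is a hypothetical stand-in for the
# actual LTS model, purely to show the mechanics.
import torch
import torch.nn as nn

FRAME = 1024   # one short audio frame, as a placeholder unit of sound
LATENT = 16

encoder = nn.Sequential(nn.Linear(FRAME, 128), nn.Tanh(), nn.Linear(128, LATENT))
decoder = nn.Sequential(nn.Linear(LATENT, 128), nn.Tanh(), nn.Linear(128, FRAME))

def interpolate(sound_a: torch.Tensor, sound_b: torch.Tensor, alpha: float) -> torch.Tensor:
    """Blend two sounds in latent space; alpha=0 gives A, alpha=1 gives B."""
    z_a, z_b = encoder(sound_a), encoder(sound_b)
    z = (1.0 - alpha) * z_a + alpha * z_b
    return decoder(z)

# alpha would follow the interpolation curve derived from the ocean data,
# e.g. a normalized swell-height value per time step.
voice_frame = torch.randn(FRAME)   # placeholders for the real sonic materials
viola_frame = torch.randn(FRAME)
blended = interpolate(voice_frame, viola_frame, alpha=0.3)
```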