thank you
My PhD is supervised by Prof. Matthew Yee-King and Dr. Prashanth Thattai Ravikumar at Goldsmiths, University of London, and is funded by a CHASE AHRC Studentship.
The dataset used in this research comprises audio and EMG signals collected from my own body, recorded during a studio residency with the kind support of the Studios für Elektroakustische Musik der Akademie der Künste, Berlin.
Statement on the use of AI: I used the Claude and Microsoft Copilot LLMs to assist with some of the ML engineering and vibe-coding in this research.
keep in touch
lstra003@gold.ac.uk
website:

- are you a dancer? participate in my user study :)
- MusicDance Cape Town performances next week