Emotion in the Signal
EEG-Driven Adaptive Music for Games, VR, and Immersive Media
This talk explores how real-time EEG can drive emotionally adaptive music in games, VR, and immersive experiences. I’ll share insights gained from building brain–computer musical interfaces (BCMIs) from scratch—including hardware selection, EEG signal processing, emotion recognition methods, and integration into game engines.
My exploration began with plant biofeedback in modular synthesis and evolved into EEG-driven audio systems built with Arduino, BITalino, and now a commercial 8-channel NeuroCONCISE headset. The primary challenges included interpreting EEG data effectively without formal neuroscience training, mapping neural activity meaningfully onto musical and spatial audio parameters, and reliably managing real-time data exchange between software environments.
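To make that last challenge concrete, here is a minimal sketch of one common bridging approach (not necessarily the exact one used in my setup): streaming a computed EEG feature from Python to an audio environment over OSC with the python-osc library. The address /eeg/alpha and port 7400 are illustrative placeholders.

    from pythonosc.udp_client import SimpleUDPClient

    # Hypothetical receiver: a Max/MSP patch (or any OSC listener) on
    # localhost:7400, e.g. a [udpreceive 7400] object feeding synthesis
    # parameters.
    client = SimpleUDPClient("127.0.0.1", 7400)

    def send_feature(alpha_power: float) -> None:
        # One message per analysis window; the address pattern is arbitrary
        # but must match whatever the receiving patch expects.
        client.send_message("/eeg/alpha", alpha_power)

Keeping the transport this thin is deliberate: all signal analysis stays on the Python side, and the audio environment only ever sees a small, named stream of control values.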
In this talk, I’ll outline my EEG workflow step by step, covering data acquisition, neural feature extraction (spectral bands, Hjorth parameters), and machine-learning approaches to classifying emotional states. We’ll examine practical audio output strategies, demonstrating how distinct neural signals can dynamically shape synthesis parameters, spatial audio behaviours, and adaptive soundtrack structures. I’ll also detail my current technical pipeline: coding real-time EEG analysis in Max/MSP’s RNBO, exporting and compiling it as a C++ plugin for Unreal Engine 5, and leveraging Python-based machine-learning models trained on EEG emotion-recognition databases to create responsive sonic environments within VR.
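As a taste of the feature-extraction step, the sketch below shows how band powers and Hjorth parameters can be computed from a single-channel window of EEG samples in Python, using SciPy's Welch estimator. The band edges, window length, and function names are illustrative assumptions, not the exact values from my pipeline.

    import numpy as np
    from scipy.signal import welch
    from scipy.integrate import trapezoid

    BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_powers(x: np.ndarray, fs: float) -> dict:
        # Welch power spectral density, then integrate within each band.
        f, pxx = welch(x, fs=fs, nperseg=min(len(x), 256))
        return {name: trapezoid(pxx[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
                for name, (lo, hi) in BANDS.items()}

    def hjorth(x: np.ndarray):
        # Activity: variance of the signal.
        # Mobility: sqrt(var(x') / var(x)), a dominant-frequency estimate.
        # Complexity: mobility(x') / mobility(x), deviation from a pure sine.
        dx, ddx = np.diff(x), np.diff(x, n=2)
        activity = np.var(x)
        mobility = np.sqrt(np.var(dx) / activity)
        complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
        return activity, mobility, complexity

Per-window vectors of these features are the kind of input a standard classifier (a scikit-learn SVM, for instance) can be trained on against a labelled emotion-recognition corpus; the predicted state is then what drives the audio mappings.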
Finally, I’ll discuss design considerations unique to EEG-driven interaction: how this approach shapes immersion, user agency, and the listener’s relationship to adaptive systems. Whether you’re prototyping EEG-based audio for the first time, deepening adaptive music integration in games, or exploring novel audio-responsive interactions, this talk offers practical guidance on using brain signals as a novel musical interface.

Marta Rossi
Marta Rossi is an Italian composer, multimedia artist, and researcher based in Scotland, working at the intersection of immersive audio, interfaces between living beings and computers, and philosophical aesthetics. Her practice spans spatial sound design, live performance, and experimental installation, often exploring the boundary between human agency and machine systems. Drawing on a background in sonic arts, sound and acoustic engineering, and philosophy, she seeks to destabilise the subject–object hierarchy in the reception of the artwork, interrogating the recursive feedback between somatic response, signal processing, and spatial context.
She has released five studio albums with her A/V duo Silent Chaos and performed at festivals and venues across Italy and the UK, including Cryptic Nights and Sound Festival. Her installations employ real-time EEG, sensors, and generative systems, as in Human AutomatArt and Aletheia, a quadraphonic audiovisual installation built around self-made EEG devices. She is currently completing a PhD in immersive audio and brain–computer musical interfaces at Abertay University, developing new forms of bioresponsive and spatial music for VR.