
Level Up!

Procedural Game Music and Audio

00:00 - 00:00 | Friday 31st October 2025
Beginner | Intermediate | Advanced

What if games had access to (and control of) every note in the soundtrack?  What are the limits for gameplay-driven music and music-driven gameplay?

How do you craft a soundtrack that dynamically adapts to the game world?
This talk presents new techniques and technologies for crafting richer, more dynamic music and sound FX, for more immersive and interactive game soundtracks in both Unity and Unreal. It also marks the launch of game music development kits for Manhattan and Klang (C++) on the Unity Store and FAB.com, which will be provided free to ADC 25 delegates.


DETAIL

After a brief moan about the state of the art, the session will provide a practical introduction to technologies for procedural music and audio (see below), before showcasing and deconstructing new game music and audio mechanics, demonstrated through two example games developed as technology demos (playable at the nash.audio arcade throughout the conference):

Ars Arcus (2023; video on YouTube; demo on Steam)
As featured at ADC 23 and 24, this defend-the-castle bow-and-arrow VR game was developed in Unity as a test ground for different music mechanics, from simple part layering and stingers to significantly more sophisticated music generation and dynamic arrangement. These include live data sonification (scene-driven musical cues and arrangements) and direct integration between the game and soundtrack ("diegetic" music for games, affected by player interaction) - notably placing a virtual, and somewhat mortal, NPC orchestra at the mercy of the player, inside the game world. A new version, with new music mechanics, will debut at ADC 25.
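To give a concrete flavour of the "part layering and stingers" end of that spectrum, here is a minimal, hypothetical C++ sketch of a music director that maps game state to layer volumes and one-shot stingers. It does not use Manhattan's actual API; the GameState fields and the setLayerGain/triggerStinger calls are invented purely for illustration.

// Hypothetical sketch of gameplay-driven music layering and stingers.
// None of these types come from Manhattan, Unity, or Unreal; they only
// illustrate the mapping from game state to musical decisions.
#include <algorithm>
#include <string>
#include <iostream>

struct GameState {
    float threatLevel;    // 0.0 (calm) .. 1.0 (castle under heavy attack)
    int   enemiesAlive;
    bool  waveJustCleared;
};

class MusicDirector {
public:
    // Called once per game tick: fade layers in and out with the action.
    void update(const GameState& state) {
        setLayerGain("strings_pad",  1.0f);                                   // always present
        setLayerGain("percussion",   std::clamp(state.threatLevel, 0.0f, 1.0f));
        setLayerGain("brass_combat", state.enemiesAlive > 0 ? 1.0f : 0.0f);

        if (state.waveJustCleared)
            triggerStinger("victory_fanfare");                                // one-shot phrase
    }

private:
    void setLayerGain(const std::string& layer, float gain) {
        std::cout << "layer " << layer << " -> gain " << gain << "\n";        // stand-in for the engine call
    }
    void triggerStinger(const std::string& name) {
        std::cout << "stinger: " << name << "\n";
    }
};

In the real game this mapping is authored and played back through Manhattan rather than hand-written fades, but the underlying idea, game state in, musical decisions out, is the same.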

Future Sound of Bristol (2025; video - playable at ADC 25)
Grander in scale, in every respect, this open-world game features a 1:1 recreation of Britain* - every road, building, and tree, accurate to the metre - developed in Unreal 5. The player starts on foot with the country in ruin and must explore, connect, rewild, and rebuild the land. The entirely procedural soundtrack is driven by the environment: the tone changes from day to night, town to country, and as the player restores harmony to the land, so too does the music - haunting, dissonant soundscapes are transformed into tranquil, tonal symphonies. Gameplay, too, is led by the music, which provides cues (dynamic leitmotifs) players must use to find resources, vehicle parts, and other items not always visible to the eye, as well as subtler musical devices that foreshadow impending events, such as nightfall or storms. For SFX, the game uses procedural audio (in C++, using Klang) to generate sounds live for dynamic elements such as rain, wind, engines, and other interactive game elements (a simplified sketch of this idea, in plain C++, follows the footnote below). The video not only provides a brief preview of the game world, but also demonstrates the procedural soundtrack - let the video loop and keep listening...

(* limited to Bristol and the surrounding 10,000km² for the ADC)
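As a flavour of the procedural SFX approach mentioned above (deliberately not Klang's own syntax, which the talk will cover), the following plain C++ sketch generates a continuous wind-like sound by low-pass filtering white noise, with brightness and level driven by a gameplay wind-strength parameter; the class and parameter names are assumptions for illustration only.

// Plain C++ sketch of procedural wind: filtered noise whose brightness and
// level follow a gameplay "wind strength" parameter. Illustrative only; the
// real demos use Klang's signal-flow syntax rather than this hand-rolled DSP.
#include <cstdlib>
#include <cmath>

class ProceduralWind {
public:
    explicit ProceduralWind(float sampleRate) : fs(sampleRate) {}

    // windStrength in [0,1], supplied by the game (e.g. its weather system).
    void setWindStrength(float windStrength) {
        float cutoffHz = 100.0f + 1900.0f * windStrength;                 // stronger wind = brighter
        coeff = 1.0f - std::exp(-2.0f * 3.14159265f * cutoffHz / fs);
        gain  = 0.2f + 0.8f * windStrength;
    }

    // Generate one audio sample (mono).
    float process() {
        float noise = 2.0f * (std::rand() / (float)RAND_MAX) - 1.0f;      // white noise
        lp += coeff * (noise - lp);                                       // one-pole low-pass
        return gain * lp;
    }

private:
    float fs, lp = 0.0f, coeff = 0.1f, gain = 0.2f;
};

A rain or engine sound follows the same pattern: a small DSP model with a handful of parameters that the game can drive continuously, rather than crossfading between pre-recorded loops.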

Discussion will provide a behind-the-screens look at how the games and their soundtracks were developed, edited, and integrated using Manhattan, Klang, Unity and Unreal, with emphasis on the musical processes and creative workflows.

Both games are being developed as platforms to explore new directions in game audio and music, and our hope is to provide them as a resource that other game developers, composers, and sound designers can explore, play with, and develop for.


TECHNOLOGIES

Manhattan (https://nash.audio/manhattan) is a hybrid music editor and procedural music engine that combines pattern-based sequencing with powerful procedural generation features. It is designed to integrate traditional music composition and arrangement practices with non-linear, dynamic, reactive, and interactive elements, from simple random values to entirely generative pieces – supporting all musical styles, in applications from production to sonic art to video games. Running as a standalone editor and VST/AU plugin, with native support for Unity/Unreal (including Live Editor integration), Manhattan includes an extensive collection of instruments, tutorials, and examples, designed to provide an accessible introduction to music programming.
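As an illustration of what even "simple random values" can add musically (sketched here in ordinary C++ rather than Manhattan's own notation), the following generates a 16-step pattern whose notes are drawn at random from a minor pentatonic scale, so every pass through the pattern differs while staying in key.

// Hypothetical sketch of a minimal generative pattern: each step picks a
// random degree of a minor pentatonic scale. Not Manhattan notation; just
// the idea of injecting controlled randomness into a fixed-length pattern.
#include <array>
#include <random>
#include <iostream>

int main() {
    const std::array<int, 5> scale = { 0, 3, 5, 7, 10 };   // minor pentatonic (semitones)
    const int root = 57;                                    // MIDI A3
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<int> degree(0, static_cast<int>(scale.size()) - 1);
    std::uniform_int_distribution<int> octave(0, 1);

    for (int step = 0; step < 16; ++step) {                 // one 16-step pattern
        int note = root + scale[degree(rng)] + 12 * octave(rng);
        std::cout << "step " << step << ": MIDI note " << note << "\n";
    }
}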

Klang (https://nash.audio/klang) is a modern C++ dialect designed for more expressive audio processing. A single-header C++17 include, Klang brings specialist audio syntax and semantics, notably including signal flow expressions, plus a library of objects, templates, and DSP reference implementations, to any C++ project on any platform (JUCE, embedded systems, Web Audio, Unreal), supporting the development of clear, concise, high-performance, and portable audio code. Klang can be used with any C++ IDE or toolchain, but is also supported in Klang Studio, a VST/AU plugin-based C++ IDE built specifically for developing synthesisers and effects inside the DAW, supporting live editing, compiling, hot swapping, debugging, and audio analysis inside the running synth or effect plugin itself.
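To give a sense of what a signal-flow expression looks like in spirit (illustrated with ordinary C++ operator overloading rather than Klang's actual classes, which are not reproduced here), the idea is that processing code reads left to right along the signal path instead of as nested function calls:

// Illustration of the signal-flow idea in plain C++: overloading >> so a
// sample flows left-to-right through processors. Klang's real syntax and
// class library differ; this only demonstrates the style of expression.
#include <cmath>
#include <iostream>

struct Gain {
    float amount;
    float operator()(float in) const { return in * amount; }
};

struct OnePoleLP {
    float coeff, state = 0.0f;
    float operator()(float in) { state += coeff * (in - state); return state; }
};

// A sample "flows into" a processor: signal >> processor yields the output.
template <typename Processor>
float operator>>(float signal, Processor& p) { return p(signal); }

int main() {
    Gain drive { 0.5f };
    OnePoleLP filter { 0.2f };

    for (int n = 0; n < 4; ++n) {
        float osc = std::sin(2.0f * 3.14159265f * 440.0f * n / 48000.0f);  // test tone
        float out = osc >> drive >> filter;                                // left-to-right signal path
        std::cout << out << "\n";
    }
}

Klang's real syntax and object library go well beyond this toy example, but the readability goal it hints at, clear and concise audio code that mirrors the signal path, is the same.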


ABOUT US

nash.audio (https://nash.audio) is a Bristol-based non-profit organisation, founded in 2022 by Chris Nash, established to support learning and creativity in digital music. It specialises in the development of new technologies for audio and music programming, such as the Manhattan, Klang, and rapIDE projects.

Chris Nash

Director / Founder

nash.audio

Chris Nash is a software developer, composer, educator, and researcher in things that go beep in the night. Following a PhD on music software design at Cambridge, he has worked on technology and music projects across academia and industry, including for the BBC, Steinberg/Yamaha, and multiple start-ups. He also independently develops and maintains several software projects, specialising in computer music and making music programming more accessible, including Manhattan (a hybrid DAW/programming language), Klang (a C++ dialect for audio), rapIDE (a plug-in based C++ IDE), and reViSiT (an award-winning plug-in based sound tracker).

He is currently Senior Lecturer in Software Development for Audio, Sound, and Music at UWE Bristol, and recently founded nash.audio, a non-profit organisation supporting creativity and learning in music technology. Working with London-based Point Blank Music School, Dr Nash is the architect of the forthcoming MuSE (Music Systems Engineering) course, developed in collaboration with industry to be the world's first professional audio developer degree programme.
