GPU-Based Audio Processing Platform with AI Audio Effects
Are GPUs ready for real-time processing in live sound engineering?
Parallelising real-time audio effects requires complex task management and synchronisation. GPUs are optimised for parallel processing while retaining flexible scheduling comparable to a CPU's, so a GPU combines the strengths of both DSPs and CPUs in a single device. In addition, recent trends advocate AI audio processing algorithms, which work best on GPU architectures.
This thesis presents an implementation of an embedded GPU-based audio processing framework on an Nvidia Jetson hardware platform. It can combine neural network inference and other audio effects into signal graphs that are processed within periods as small as 32 frames (0.667 ms).
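For context, the deadline implied by each period length follows directly from frames divided by sample rate. The short sketch below assumes a 48 kHz sample rate, which is not stated explicitly here but is consistent with the quoted figures (32 frames ≈ 0.667 ms, 128 frames ≈ 2.67 ms):

```cpp
#include <cstdio>

int main() {
    // Real-time deadline for one audio period: frames / sample rate.
    // 48 kHz is an assumption consistent with the figures quoted above.
    const double sampleRate = 48000.0;
    const int frameCounts[] = {32, 64, 128};
    for (int frames : frameCounts) {
        std::printf("%3d frames -> %.3f ms deadline\n",
                    frames, 1000.0 * frames / sampleRate);
    }
    return 0;
}
```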
The signal graph does not restrict the number or combination of parallel and serial audio effects, as long as the real-time limit is met. The framework has therefore been tested with large numbers of parallel channels, as found in a mixing console, and with the complex routing options available in high-end audio effect processors such as the Neural DSP Quad Cortex.
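The thesis' internal data structures are not detailed here, but such a signal graph is naturally modelled as a directed acyclic graph of effect nodes. The following is a hypothetical sketch of that idea, not the framework's actual API; `EffectNode` and `passthrough` are illustrative names:

```cpp
#include <vector>

// Hypothetical effect node (illustration only, not the framework's API).
// A serial chain is a path through the graph; parallel effects are
// siblings reading the same upstream output, merged again downstream.
struct EffectNode {
    void (*process)(const float* in, float* out, int frames);
    std::vector<EffectNode*> upstream;  // effects whose output feeds this one
};

// Trivial stand-in for a real DSP or neural-network effect.
void passthrough(const float* in, float* out, int frames) {
    for (int i = 0; i < frames; ++i) out[i] = in[i];
}

int main() {
    // Example wiring: input -> (eq || comp) -> mix.
    EffectNode input{passthrough, {}};
    EffectNode eq{passthrough, {&input}};
    EffectNode comp{passthrough, {&input}};     // parallel to eq
    EffectNode mix{passthrough, {&eq, &comp}};  // merges both branches
    (void)mix;
    return 0;
}
```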
Launching GPU work with the CUDA graph API yields better stability and performance than a 2017 study observed with the CUDA stream API. Processing a signal graph that fully utilises the Jetson's resources by mimicking a 64-channel mixing console on a 128-frame (2.67 ms) period achieves a success rate above 99%. However, occasional stalling on the GPU can produce worst-case execution times of up to 20 ms, regardless of the loaded audio effects. As a result, the framework cannot yet be classified as real-time capable.
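To illustrate the launch mechanism in question: with the CUDA graph API, an effect chain can be recorded once via stream capture and then replayed each audio period with a single launch call, avoiding per-kernel launch overhead on the real-time path. The sketch below shows the general technique under those assumptions; it is not the thesis' actual processing code, and `gainKernel` is a made-up stand-in effect:

```cpp
#include <cuda_runtime.h>

// Made-up stand-in effect: applies a gain to one audio buffer.
__global__ void gainKernel(float* buf, float gain, int frames) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < frames) buf[i] *= gain;
}

int main() {
    const int frames = 128;  // one audio period (2.67 ms at 48 kHz)
    float* buf;
    cudaMalloc(&buf, frames * sizeof(float));

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Record the effect chain once into a graph via stream capture.
    cudaGraph_t graph;
    cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
    gainKernel<<<1, frames, 0, stream>>>(buf, 0.5f, frames);  // effect 1
    gainKernel<<<1, frames, 0, stream>>>(buf, 2.0f, frames);  // effect 2, serial
    cudaStreamEndCapture(stream, &graph);

    // Instantiate once (CUDA 10/11 signature; CUDA 12 takes a flags argument).
    cudaGraphExec_t exec;
    cudaGraphInstantiate(&exec, graph, nullptr, nullptr, 0);

    // Replay the whole chain with a single launch call per audio period.
    for (int period = 0; period < 1000; ++period) {
        cudaGraphLaunch(exec, stream);
        cudaStreamSynchronize(stream);
    }

    cudaGraphExecDestroy(exec);
    cudaGraphDestroy(graph);
    cudaStreamDestroy(stream);
    cudaFree(buf);
    return 0;
}
```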
Further study of the CUDA scheduler, together with improvements to the operating system and audio driver, may achieve real-time capability in the future.
Simon Schneider
Research Assistant
ZHAW Institute of Embedded Systems
Simon is a musician based in Winterthur. After completing his Bachelor's degree in IT, he realised that he could combine his passion for music and software engineering in audio development. As a result, he went on to complete a Master's degree in Electrical Engineering to learn about signal processing and hardware programming. During this time, he worked part-time at the ZHAW Institute of Embedded Systems, where he developed an audio effects processing platform on an Nvidia Jetson GPU as part of his Master's thesis.
His bandmates will say that playing the guitar is just an excuse to justify all the money he spends on music-related gear. While this is not entirely untrue, Simon is the type of programmer who spends days automating or streamlining a repetitive process that takes minutes to complete manually.
As a result, automating guitar effect changes or building a wireless in-ear monitoring rack to shorten live-show setup and teardown times simply requires a lot of gear.
Furthermore, Simon has a flair for modular and customisable software architecture. Since audio gear often lacks both of these qualities, he endeavours to make them a primary focus in all his audio development projects.