GPU Based Audio Processing Platform with AI Audio Effects – Simon Schneider – ADCxGather 2024

https://audio.dev/ -- @audiodevcon​
---

GPU Based Audio Processing Platform with AI Audio Effects - Are GPUs ready for real-time processing in live sound engineering? - Simon Schneider - ADCxGather 2024
---

Parallelising real-time audio effects requires complex task management and synchronisation. GPUs are optimised for parallel processing while still retaining flexible scheduling comparable to a CPU's. As a result, they combine the strengths of both DSPs and CPUs in a single device. In addition, recent trends advocate using AI audio processing algorithms, which work best on GPU architectures.

This thesis presents an implementation of an embedded GPU-based audio processing framework on an Nvidia Jetson hardware platform. It can combine neural network inference and other audio effects into signal graphs that process within periods as small as 32 frames (0.667ms).

The signal graph does not restrict the number and combination of parallel and serial audio effects as long as the real-time limit is met. Therefore, the framework has been tested on large numbers of parallel channels, as found in a mixing console, and complex routing options available in high-end audio effect processors, such as the Neural DSP Quad Cortex.

Launching GPU work using the CUDA graph API produces better stability and performance than was observed using the CUDA stream API in a 2017 study. Processing a signal graph that fully utilises the Jetson's resources by mimicking a 64-channel mixing console on a 128-frame (2.67ms) period succeeds more than 99% of the time. However, occasional stalling on the GPU can produce worst-case execution times of up to 20ms, regardless of the loaded audio effects. As a result, the framework cannot yet be classified as real-time capable.
Further study of the CUDA scheduler and improvements to the operating system and audio driver may be able to achieve real-time capability in the future.
---

Simon Schneider

Simon is a musician based in Winterthur. After completing his Bachelor's Degree in IT, he realized that he could combine his passion for music and software engineering in audio development. As a result, he went on to complete a Master's Degree in Electrical Engineering to learn about signal processing and hardware programming. During this time he worked part-time at the ZHAW Institute of Embedded Systems, where he developed an audio effects processing platform on an Nvidia Jetson GPU as part of his Master's Thesis.

His bandmates will say that playing the guitar is just an excuse to justify all the money he spends on music-related gear. While this is not entirely untrue, Simon is the type of programmer who spends days automating or streamlining a repetitive process that takes minutes to complete manually.
As a result, automating guitar effect changes or building a wireless in-ear monitoring rack for shorter live show setup and teardown times simply requires a lot of gear.

Furthermore, Simon has a flair for modular and customizable software architecture. Since audio gear often lacks both of these qualities, he endeavours to make them a primary focus in all his audio development projects.
---

ADC is an annual event celebrating all audio development technologies, from music applications and game audio to audio processing and embedded systems. ADC’s mission is to help attendees acquire and develop new audio development skills, and build a network that will support their audio developer career.
Annual ADC Conference - https://audio.dev/
https://www.linkedin.com/company/audiodevcon

https://facebook.com/audiodevcon
https://instagram.com/audiodevcon
https://www.reddit.com/r/audiodevcon/
https://mastodon.social/@audiodevcon
---

Streamed & Edited by Digital Medium Ltd: https://online.digital-medium.co.uk
---

Organized and produced by JUCE: https://juce.com/
---

Special thanks to the ADCxGather Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#audioprocessing #gpu #ai #adc #audiodev #dsp #audio #conferenceaudio #audioproduction #audioprogramming #musictech #soundtech #audiotech #audiotechnology

Fast Audio Thread Synchronization for GPU Data – Evan Mezeske – ADC23

Join Us For ADC24 - Bristol - 11-13 November 2024
More Info: https://audio.dev/
@audiodevcon​

Fast Audio Thread Synchronization for GPU Data - Evan Mezeske - ADC 2023

While building my GPU-based physics simulation instrument Anukari (https://www.youtube.com/watch?v=nUO6iMcbao4), I had to solve a number of significant challenges, and I'll explain my solutions in this talk. The talk is not about Anukari per se; rather it's about some of the interesting solutions I developed as part of building it.

One challenge had to do with synchronizing Anukari's data model from the GUI thread to the audio thread. Anukari models arbitrary networks of masses and springs, and can simulate close to a thousand masses and many thousands of springs. This data model is thus rather large, and it was nontrivial to provide model updates from the GUI to the audio thread. I designed a reliable wait-free approach that works without mutexes or memory allocation, supports transactions, and requires minimal data transfer between threads. I will discuss a few technologies that I used together, including wait-free SPSC queues, the difference between wait-free and lock-free algorithms, reasons for avoiding mutexes and memory allocations, and custom data structures for avoiding memory allocations.

Another challenge was running GPU physics simulations at audio sample rates (48kHz). Memory bandwidth was a major issue, as were kernel execution latency and cross-GPU-thread synchronization. I will discuss the OpenCL language and its limitations, the approaches I used to deal with OpenCL kernel execution latency, on-GPU thread synchronization, and memory optimizations.

And, of course, I will show how all of this ties together into a reliable system for synchronizing the GUI and audio threads with no waits, despite a large data model and compute-intensive physics simulation.

Prerequisite(s): Familiarity with C++ programming and with thread synchronization primitives like mutexes.
_

Evan Mezeske

Evan Mezeske is a software engineer and amateur musician based in Arizona, USA. He spent the last 10 years working as a senior engineering leader on large-scale distributed systems at Google before defecting in early 2023 to found his music software company, Anukari Music. Anukari's upcoming eponymous flagship product will allow users to construct arbitrary 3D models of springs and masses and to manipulate them in real time to produce music, sound effects, and audio filters (https://www.youtube.com/@anukarimusic). Primarily a jazz guitarist, Evan also plays keyboards, produces electronic music, and enjoys learning obscure musical instruments (such as the two-chamber Peruvian ocarina). Anyone interested in learning more about Anukari is encouraged to join Anukari's Discord server! https://discord.gg/ywwYfhGbDp
_

Streamed & Edited by Digital Medium Ltd: https://online.digital-medium.co.uk
_

Organized and produced by JUCE: https://juce.com/
_

Special thanks to the ADC23 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#adc #cppprogramming #dsp #audio

Accelerated Audio Computing: From Problem to Solution – Alexander Talashov & Alexander Prokopchuk

https://audio.dev/ -- @audiodevcon​

Accelerated Audio Computing: From Problem to Solution - Alexander Talashov & Alexander Prokopchuk - ADC 2023

Last year, we shared our vision of accelerated audio computing: a technology overview, first product reveals, beta testing, fields of application and verticals, and some promises about the emerging trend.

In this talk, we'd like to cover our year of incredible progress:

- Technology traction. Support for new platforms, new features, and an LTS release;
- Use cases and scenarios. From plugins running locally to fully GPU-powered products in the cloud or on embedded devices;
- SDK vision. One extensible platform that covers it all: code things CUDA-style once and get it working anywhere you want;
- SDK release. From the challenges of making public releases to a technical proposal for the plugin industry standards (VST, AU, AAX, CLAP, etc.).

Throughout this session, we encourage you to engage with us. We want to hear your thoughts, your ideas, and your vision for what our SDK can achieve. During the Q&A session, please share your insights on the functionalities you might seek in our SDK and the use cases you envision for it. Your input is invaluable as we shape the future of accelerated audio computing, powered by GPUs!

Let's go on this exciting journey together!
_

Alexander Talashov
_

Alexander Prokopchuk
_

Streamed & Edited by Digital Medium Ltd: https://online.digital-medium.co.uk
_

Organized and produced by JUCE: https://juce.com/
_

Special thanks to the ADC23 Team:

Sophie Carus
Derek Heimlich
Andrew Kirk
Bobby Lombardi
Tom Poole
Ralph Richbourg
Jim Roper
Jonathan Roper
Prashant Mishra

#adc #audiodev #gpu #audio
