Academic research papers contain exciting new ideas but typically no ready-to-use source code. Papers are not written for the layperson and are often filled with jargon and inscrutable equations. It's up to the reader to decipher the math and wrestle through dense theory to transform these ideas into C++ code. This can be a daunting task, especially for developers who do not have an academic or mathematics background. Patents, in particular expired ones, are also a great source of ideas and are even harder to untangle! Fortunately, with a little effort you can learn to read these perplexing documents and extract their secrets. This talk aims to bridge the gap between theory and practice and will show useful techniques for quickly making sense of scientific publications and patents, so that you can finally start turning them into code.
Conference Year: 2024
Docker for the Audio Developer
Most of you have probably heard of Docker, had to use it to deploy some service or application, or are using it regularly in your workflow.
The aim of this talk is to provide everyone with useful takeaways, regardless of your current level of knowledge.
We will look at the problems that containerisation aims to solve, how Docker addresses them, and go through some basic use cases.
As we are all curious, we will not miss the opportunity to peek under the hood in order to understand the Linux kernel features that make Docker possible.
Finally, we will be looking at two real use cases:
- Using Docker as part of a CI system, to easily manage a build environment, and touching on the "Docker in Docker" problem
- Using Docker to host third-party audio applications on a Linux-powered real-time audio processing box, allowing audio routing between them.
Unpacking the Role of Audio Programmers in the Games Industry
In recent years, there has been a sharp rise in the number of audio programming roles in the games industry, along with many misconceptions about the role. This talk will delve into the responsibilities of audio programmers in the games industry and explore the variety of tasks they perform, which can differ significantly between companies. It will address common misconceptions about the role, outline the essential skills and knowledge required, and offer recommendations for platforms and resources to learn these skills.
Attendees will gain a comprehensive understanding of the day-to-day work of audio programmers in the games industry, as well as the various specialisations within the field.
JS Engines for Audio Developers
In this talk, we will explore the potential of JavaScript engines for audio developers, where C++ is traditionally the language of choice. JavaScript can complement C++ effectively for various purposes, including UI development, scripting, and leveraging an ecosystem of libraries and tools. Whilst you'll be bringing your own possible use cases and reasons for integrating JavaScript into your codebase, the intention of this talk is to expose you to a number of the available engines, with the goal of allowing you to make an informed decision on the direction you'd like to go in. We'll be discussing the integration process with C++, performance considerations, and helpful libraries. In addition, we'll cover some exciting developments in this space to keep an eye on for the future.
Challenges in Real-Time Physical Modelling for Sound Synthesis
Although the field of physical modelling for sound synthesis goes back to the early 60s, real-time implementation of complex physical models has only recently become possible. During my research in the real-time physical modelling field over the last couple of years, I learned a few lessons I would like to share in this talk. Furthermore, drawing from my experience gained during the development of Atoms, a physical modelling synthesiser I developed for BABY Audio, this talk will address the main challenges of physical modelling in real-time audio plugins (or at least, those that I encountered). The presentation will include some tips and tricks for developing physical modelling plugins using JUCE, as well as what to expect from end users when releasing a physical modelling plugin.
Performance Implications of Frequency Domain Crossfading for Time-Varying FIR Filtering
A common strategy to avoid audible artifacts when exchanging impulse responses used for convolution with audio signals at runtime is to use a simple crossfade. This often requires more than double the number of operations: the input signal is filtered with both impulse responses separately (often in the frequency domain), and the resulting signals are blended in the time domain. If the crossfade were applied in the frequency domain instead, one IFFT operation could be eliminated, at the cost of two direct convolutions between the two signal spectra and the spectra of the blending functions. Most of the time, however, transforming each filtered signal separately back into the time domain before blending is still significantly faster than applying the blending function through convolution in the frequency domain. With a carefully chosen blending function, though, direct convolution of the spectra could be faster than the time-domain approach. We look at why and how this technique might work and the associated performance implications.
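The equivalence underlying the abstract can be sketched numerically. The NumPy snippet below (illustrative only, not the speaker's implementation; the "filtered" signals are placeholder random data) shows that a pointwise crossfade in the time domain equals blending the spectra via circular convolution with the DFTs of the fade ramps, scaled by 1/N:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x1 = rng.standard_normal(N)     # stand-in for the signal filtered with IR A
x2 = rng.standard_normal(N)     # stand-in for the signal filtered with IR B
w = np.linspace(0.0, 1.0, N)    # linear crossfade ramp

# Usual approach: IFFT both filtered signals, then blend in the time domain.
y_time = (1.0 - w) * x1 + w * x2

def circ_conv(a, b):
    """Circular convolution of two length-N sequences via the FFT."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))

# Frequency-domain approach: pointwise multiplication in time corresponds
# to circular convolution of spectra (with a 1/N scale factor), so the
# crossfade can be applied before the single remaining IFFT.
X1, X2 = np.fft.fft(x1), np.fft.fft(x2)
W_out, W_in = np.fft.fft(1.0 - w), np.fft.fft(w)
Y = (circ_conv(W_out, X1) + circ_conv(W_in, X2)) / N

y_freq = np.real(np.fft.ifft(Y))
print(np.allclose(y_time, y_freq))  # → True
```

The catch, as the abstract notes, is that a generic ramp has a dense spectrum, so each circular convolution costs O(N²); the technique only pays off when the blending function's spectrum is sparse enough that the direct convolutions beat the eliminated IFFT.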
High Performance WebView User Interfaces
No one likes unresponsive user interfaces, least of all musicians. WebViews are an increasingly common choice for building UIs because of their increased developer productivity, ability to meet the demands of graphic designers, and enormous ecosystem of UI libraries. But when done wrong, they can easily become slow and resource-hungry, which is bad for both users and developers.
This talk covers patterns and best practices for creating user interfaces using both the DOM and 2D Canvas API (which is more suitable for custom interactions and graphically demanding tasks). These techniques will be discussed via a real-world case study: a DAW user interface (zoomable waveforms and all) at 60 FPS.
Aimed at developers new to the web, the demonstrations use vanilla TypeScript (no frameworks!) to teach the techniques from first principles, and to provide a type system that will make C++ developers feel at home.
Squeeze JUCE and Bottle It Into Embedded Devices and More
Running JUCE applications on embedded Linux devices is not a new thing. The topic has been treated in previous ADCs and, most notably, at ADC 2016 Felipe Tonello introduced a repository for Yocto/OpenEmbedded to make the task easier.
However, several things have changed in the meantime, both in JUCE and in the embedded Linux world. This talk will present a new integration between CMake-based JUCE projects and Yocto/OpenEmbedded, targeting modern embedded Linux platforms using the Raspberry Pi as an example.
The focus will be on the entire embedded development workflow, showing how to build target filesystems that can be run on the devices, cross-compilation SDKs, and automated testing tools suitable for modern CI/CD integration.
Building a plugin assembly line
Having an efficient plugin production pipeline is crucial for delivering high-quality releases while lowering the cost and time to market.
What tools and strategies are needed to streamline the production of a series of plugins?
We'll present our rapid development framework, sharing our journey towards building a more efficient production workflow.
Sample Rate Conversion
Sample Rate Conversion (SRC) is a key component of digital audio processing that lets you change how many samples per second are used to represent a stream. It is fundamental when you need to make audio from one system work with another that uses a different sample rate. Getting SRC right is crucial in many audio applications, particularly in environments where multiple audio devices coexist, each potentially functioning with its own clock frequency.
The importance of SRC stems from three main factors: 1) preventing pitch distortion, since running an audio stream at an incorrect rate alters pitch or the relative relationships between pitches; 2) maintaining synchronisation, i.e., ensuring different devices remain in step with one another; and 3) compensating for clock drift by accounting for slight frequency variations between devices nominally operating at the same frequency.
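To make the idea concrete, here is a minimal sketch of sample rate conversion by linear interpolation (a deliberately naive method; production converters use band-limited windowed-sinc interpolation to control aliasing and imaging, and this snippet is not from the talk):

```python
import numpy as np

def resample_linear(x, ratio):
    """Resample signal x by ratio = output_rate / input_rate,
    using simple linear interpolation between input samples."""
    n_out = round(len(x) * ratio)
    # Positions of the output samples on the input sample grid.
    pos = np.arange(n_out) / ratio
    return np.interp(pos, np.arange(len(x)), x)

sr_in, sr_out = 44100, 48000
t = np.arange(sr_in) / sr_in
x = np.sin(2 * np.pi * 440.0 * t)        # one second of a 440 Hz tone
y = resample_linear(x, sr_out / sr_in)   # same second, now at 48 kHz
print(len(y))  # → 48000
```

Note that the converted signal still represents one second of a 440 Hz tone: the sample count changes, not the pitch. Playing the original 44.1 kHz samples back at 48 kHz without conversion would instead raise the pitch by the ratio 48000/44100, which is exactly the distortion described in factor 1 above.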
This presentation will begin with an introduction to the fundamental principles of SRC.
The goal for this talk is to give attendees a comprehensive understanding of SRC's importance in audio processing.